I am running an LP optimization using CVXPY and testing a number of solvers. Previously, when running the same mathematical formulation of my problem, I received consistent results regardless of which solver was used (Gurobi, CBC, or ECOS). Since mid-2021, the ECOS results have remained consistent, but the other solvers now show discrepancies.
For example, I am solving a power distribution problem where the results provide the dispatch of power generators throughout an operating horizon.
Previous Result (and with current ECOS)
G1 = [0,0,0,140,140,140,140,140,140,140,140,140,140,0,0,0,0,0,0,0,0,0,0,0,0]
Current Result
G1 = [0,0,0,140,140,140,140,0,140,140,140,140,140,0,0,0,0,0,0,0,0,0,0,0,0]
When I add a penalty to my objective function to avoid this behaviour, solve times increase substantially (about 5x).
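For illustration, here is a minimal CVXPY sketch (made-up data, not my actual model) of the kind of ramping penalty I mean; with equal generator costs the LP has multiple optimal dispatches, which is presumably why different solvers can return different but equally optimal schedules:

```python
import cvxpy as cp
import numpy as np

T = 25
demand = np.concatenate([np.zeros(3), np.full(10, 140.0), np.zeros(12)])  # made-up demand
g1 = cp.Variable(T, nonneg=True)
g2 = cp.Variable(T, nonneg=True)

cost = 10 * cp.sum(g1) + 10 * cp.sum(g2)          # equal costs -> degenerate optimum
penalty = 0.1 * (cp.norm1(cp.diff(g1)) + cp.norm1(cp.diff(g2)))  # discourages on/off switching

constraints = [g1 + g2 == demand, g1 <= 140, g2 <= 140]
prob = cp.Problem(cp.Minimize(cost + penalty), constraints)
prob.solve(solver=cp.ECOS)                        # compare with cp.GUROBI / cp.CBC
print(np.round(g1.value))
```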
My question is whether anyone has seen similar behaviour and found a solution.
Related
Is there any way to force the 'hybr' method of scipy.optimize's root to keep working even after it decides that convergence is too slow? In my problem, the solver nearly reaches the desired precision, but right before it does, the algorithm terminates because of slow convergence... Is it possible to make 'hybr' more 'self-confident'?
I use the root-finding algorithm root from the scipy.optimize module to solve a system of two algebraic, non-linear equations. Since the equations have to be solved many times for various parameter values, it is important to find a numerical method that is as stable as possible for this problem.
I have compared the performance of all the methods provided by scipy.optimize module. To visualize their performance I have used the following procedure:
The algebraic equations were rearranged so that they have zero on the R.H.S.
Then, at each step made by the algorithm, the sum of the squared left-hand sides of all the equations was computed and printed.
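For illustration, here is a small sketch of that monitoring procedure on a hypothetical two-equation system (not my actual equations), printing the sum of squared residuals on every function evaluation:

```python
import numpy as np
from scipy.optimize import root

def equations(x):
    # both equations rearranged so that the right-hand side is zero
    f = np.array([x[0]**2 + x[1] - 2.0,
                  np.exp(x[0]) - 3.0 * x[1]])
    print("sum of squared residuals:", np.sum(f**2))
    return f

sol = root(equations, x0=[1.0, 1.0], method="hybr")
print(sol.x, sol.success, sol.message)
```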
In my case, the most efficient method is the default "hybr". Other built-in methods either do not converge at all or are significantly slower. Unfortunately, in some cases the desired method gives up too quickly. Lowering the precision and/or providing additional options to the functions did not help.
Gurobi and CPLEX are solvers that have been very popular in recent years. CPLEX is easier for academics in terms of the license and is also said to offer very high performance. Gurobi, on the other hand, is claimed to be the fastest solver in recent years, with continuous improvements; however, it is said that its performance decreases as the number of constraints increases.
In terms of speed and performance, which solver is generally recommended for large-scale problems with a quadratic objective function and not too many constraints?
Will their use within Python affect their performance?
Math programming is inherently hard and there will likely always be instances where one solver is faster than another. Often, problems are solved quickly just because some heuristic was "lucky".
Also, the size of a problem alone is not a reliable measure for its difficulty. There are tiny instances that are still unsolved while we can solve instances with millions of constraints in a very short amount of time.
When you're looking for the best performance, you should analyze the solver's behavior by inspecting the log file and then try to adjust parameters accordingly. If you have the opportunity to test out different solvers you should just go for it to have even more options available. You should be careful about recommendations for either of the established, state-of-the-art solvers - especially without hands-on computational experiments.
You also need to consider the difficulty of the modeling environment/language and how much time you might need to finish the modeling part.
To answer your question concerning Gurobi's Python interface: this is a very performant and popular tool for all kinds of applications and is most likely not going to impact the overall solving time. In the majority of cases, the actual solving time is still the dominant factor while the model construction time is negligible.
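For context, a minimal gurobipy sketch (illustrative only, with made-up data) of the workflow described above: build a small QP, keep the solver log visible, and tune a parameter or two based on what the log shows:

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("tiny_qp")
x = m.addVars(3, lb=0.0, ub=10.0, name="x")
m.setObjective(gp.quicksum(x[i] * x[i] for i in range(3)) - 4 * x[0], GRB.MINIMIZE)
m.addConstr(gp.quicksum(x[i] for i in range(3)) >= 5.0, name="cover")

m.Params.OutputFlag = 1    # keep the log so the solver's behaviour can be inspected
m.Params.TimeLimit = 60    # example of a parameter one might tune after reading the log
m.optimize()
print([x[i].X for i in range(3)])
```

Model construction with the Python API is typically a small fraction of the total time; the heavy lifting happens inside optimize().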
As mattmilten already said, if you compare the performance of the major commercial solvers on a range of problems, you will find instances where one is clearly better than the others. However, that will depend on many details that might seem irrelevant. We did a side-by-side comparison on our own collection of problem instances (saved as MPS files) that were all generated from the same C++ code on different sub-problems of a large optimisation problem. So they were essentially just different sets of data in the same model, and we still found big variations across the solvers. It really does depend on the details of your specific problem.
I'm trying to solve an order minimization problem with Python, in which I distribute M orders over N workers. Every worker has a basic energy level X_i, gathered in the vector X. Also, every order has a specific energy consumption E_j, gathered in E. With that being said, I'm trying to solve the following problem
where Y is some optimal energy level and the norm is the 2-norm, under the constraint that each column adds up to exactly one, since an order must be done and can only be done by one worker. I looked at scipy.optimize, but as far as I can tell it doesn't support this sort of optimization.
Does anyone know of any tools in Python for this sort of discrete optimization problem?
The answer depends on the norm. If you want the 2-norm, this is a MIQP (Mixed Integer Quadratic Programming) problem. It is convex, so there are quite a number of solvers around (e.g. Cplex, Gurobi, Xpress -- these are commercial solvers). It can also be handled by an MINLP solver such as BonMin (open source). Some modeling tools that can help are Pyomo and CVXPY.
If you want the 1-norm, this can be formulated as a linear MIP (Mixed Integer Programming) model. There are quite a few MIP solvers such as Cplex, Gurobi, Xpress (commercial) and CBC, GLPK (open source). Some modeling tools are Pyomo, CVXPY, and PuLP.
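As a rough sketch (with made-up data, and assuming the objective is the 2-norm of X + A·E minus the target level Y, where A is the Boolean assignment matrix), the MIQP could be written in CVXPY like this:

```python
import cvxpy as cp
import numpy as np

N, M = 4, 6                        # workers, orders (made up)
X = np.random.rand(N)              # basic energy level per worker
E = np.random.rand(M)              # energy consumption per order
Y = (X.sum() + E.sum()) / N        # some target energy level

A = cp.Variable((N, M), boolean=True)          # A[i, j] = 1 if worker i does order j
objective = cp.Minimize(cp.norm(X + A @ E - Y, 2))
constraints = [cp.sum(A, axis=0) == 1]         # each order done by exactly one worker

prob = cp.Problem(objective, constraints)
prob.solve()   # requires a mixed-integer-capable solver (e.g. Gurobi, CPLEX, or SCIP)
print(np.round(A.value))
```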
Consider, for the sake of simplicity, the following equation (the Burgers equation):
Let's solve it using scipy (in my case scipy.integrate.ode.set_integrator("zvode", ..).integrate(T)) with a variable time-step solver.
The issue is the following: if we use the naïve implementation in Fourier space
then the viscosity term nu * d2x(u[t]) can cause an overshoot if the time step is too big. This can lead to a fair amount of noise in the solutions, or even to (fake) diverging solutions (even with stiff solvers, on slightly more complex versions of this equation).
One way to regularize this is to evaluate the viscosity term at step t+dt, and the update step becomes
This solution works well when programmed explicitly. How can I use scipy's variable-step ODE solver to implement it? To my surprise, I haven't found any documentation on this fairly elementary but thorny issue...
You actually can't; or, at the other extreme, odeint and ode->zvode already do something like that for any given problem.
For the first option, you would need to supply the two parts of the equation separately. Obviously, that is not part of the solver interface. Look at DDE and SDE solvers, where such a partition of the equation is actually required.
For the second, odeint and ode->zvode use implicit multi-step methods, which means that the value of u(t+dt), and the right-hand side evaluated there, already enter the computation and the underlying local approximation.
You could still try to hack your original approach into the solver by providing a Jacobian function that only contains the second derivative term, but quite probably you will not achieve an improvement.
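For concreteness, here is a hedged sketch of that Jacobian idea (made-up grid and viscosity, assuming the form u_t + u*u_x = nu*u_xx solved in Fourier space): the Jacobian passed to zvode contains only the diffusion term diag(-nu*k^2):

```python
import numpy as np
from scipy.integrate import ode

N, L, nu = 64, 2 * np.pi, 0.02
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi

def f(t, uhat):
    u = np.real(np.fft.ifft(uhat))
    return -1j * k * np.fft.fft(0.5 * u * u) - nu * k**2 * uhat

def jac(t, uhat):
    # only the linear viscous part, as suggested above
    return np.diag(-nu * k**2).astype(complex)

uhat0 = np.fft.fft(np.sin(np.linspace(0, L, N, endpoint=False)))
solver = ode(f, jac).set_integrator("zvode", method="bdf", with_jacobian=True)
solver.set_initial_value(uhat0, 0.0)
uhat_T = solver.integrate(0.5)
```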
You could operator-partition the ODE and solve the linear part separately by introducing
vhat(k,t) = exp(nu*k^2*t)*uhat(k,t)
so that
d/dt vhat(k,t) = -i*k*exp(nu*k^2*t)*conv(uhat(.,t),uhat(.,t))(k)
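A hedged sketch of this integrating-factor form with the same zvode setup (again assuming u_t + u*u_x = nu*u_xx and made-up parameters): the solver integrates vhat, so no explicit viscous term appears on the right-hand side:

```python
import numpy as np
from scipy.integrate import ode

N, L, nu = 64, 2 * np.pi, 0.02
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
uhat0 = np.fft.fft(np.sin(np.linspace(0, L, N, endpoint=False)))

def rhs(t, vhat):
    uhat = np.exp(-nu * k**2 * t) * vhat           # map back from vhat to uhat
    u = np.real(np.fft.ifft(uhat))
    nl_hat = 1j * k * np.fft.fft(0.5 * u * u)      # pseudo-spectral u*u_x
    return -np.exp(nu * k**2 * t) * nl_hat         # viscous term eliminated

solver = ode(rhs).set_integrator("zvode")
solver.set_initial_value(uhat0, 0.0)
T = 0.5
uhat_T = np.exp(-nu * k**2 * T) * solver.integrate(T)
u_T = np.real(np.fft.ifft(uhat_T))
```

The usual caveat applies: the factor exp(nu*k^2*t) grows, so this plain integrating-factor form is only comfortable over moderate time horizons or with periodic rescaling.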
I am having trouble solving an optimisation problem in Python involving ~20,000 decision variables. The problem is non-linear and I wish to apply both bounds and constraints. In addition, the gradient with respect to each of the decision variables can be calculated.
The bounds are simply that each decision variable must lie in the interval [0, 1], and there is a monotonic constraint placed upon the variables, i.e. each decision variable must be greater than the previous one.
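For reference, this is roughly how the bounds and the monotonicity constraint look as scipy objects (the objective itself is omitted, and the difference matrix is kept sparse because of the problem size):

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint
from scipy.sparse import diags

n = 20_000
bounds = Bounds(np.zeros(n), np.ones(n))     # each variable in [0, 1]

# D @ x >= 0 encodes x[i+1] - x[i] >= 0, i.e. the monotonic ordering
D = diags([-np.ones(n - 1), np.ones(n - 1)], offsets=[0, 1], shape=(n - 1, n))
monotone = LinearConstraint(D, lb=0.0, ub=np.inf)
```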
I initially intended to use the L-BFGS-B method provided by the scipy.optimize package; however, I found out that, while it supports bounds, it does not support constraints.
I then tried the SLSQP method, which does support both constraints and bounds. However, because it requires more memory than L-BFGS-B and I have a large number of decision variables, I ran into memory errors fairly quickly.
The paper this problem comes from used the fmincon solver in Matlab to optimise the function, which, to my knowledge, supports both bounds and constraints and is more memory efficient than the SLSQP method provided by scipy. I do not have access to Matlab, however.
Does anyone know of an alternative I could use to solve this problem?
Any help would be much appreciated.