How do I pass tolerances and other parameters through CVXPY when using the CPLEX solver?
from cvxpy import Problem, Minimize
from cvxpy.settings import CPLEX
costs = ...
constraints = ...
prob = Problem(Minimize(costs), constraints)
prob.solve(solver=CPLEX, ...)
I see a page of CPLEX parameters, though it is unclear which ones apply to my quadratic problem. Also, the CVXPY documentation lists pass-through options for other solvers but not for CPLEX.
This will change in the future (see this pull request), but with cvxpy 1.0.6, you can do the following (NOTE: this is undocumented behavior; see below for more):
prob.solve(solver=CPLEX, advance=0)
The advance=0 will turn "off" the advanced start switch parameter. So, if the parameter name is parameters.advance in the CPLEX Python API, you would pass in the part after parameters. (i.e., advance) and the value as a keyword argument. Any extra keyword arguments that are passed to the solve method are interpreted this way. For debugging, you should probably set verbose=True (one of the standard keyword arguments to solve) to turn on the engine log; the parameter settings will be displayed at the top of the log.
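For example, a parameter keyword and verbose=True can be combined on the problem built above:
# 'advance' maps to parameters.advance; verbose=True prints the engine log,
# whose header lists the parameter settings
prob.solve(solver=CPLEX, verbose=True, advance=0)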
This behavior was left undocumented for a good reason: it doesn't allow you to set parameters like the one for data consistency checking and modeling assistance. That parameter's name in the CPLEX Python API is parameters.read.datacheck, but read.datacheck cannot be used as a keyword argument in Python (it would result in a syntax error).
As a workaround, consider using the ILOG_CPLEX_PARAMETER_FILE environment variable, which is documented here.
EDIT: the workaround above is no longer necessary with cvxpy 1.0.8. That is, you should be able to set all of the parameters now regardless of where they are in the parameter hierarchy. You need to use the optional cplex_params argument, though. It's nice to combine this with verbose=True so that you can see the parameter settings in the engine log. For example:
prob.solve(solver=cvxpy.CPLEX,
           verbose=True,
           cplex_params={"mip.tolerances.absmipgap": 1e-07,
                         "benders.strategy": 3})
I have a non-linear minimization problem that is apparently non-convex. I use the Pyomo framework for an energy system operation optimization model, where a once-configured optimization model needs to be evaluated for sequential hours: I create the optimization problem at the beginning, defining the variables, constraints and objective function for the specific system, and then I try to solve this set-up over the "simulation" time frame (e.g. for every hour in a given year), changing only the energy demand parameter and minimizing operation costs. I have noticed that for some random hours an optimum cannot be found. In most of these failed cases I get a "max iteration number reached" result, sometimes "restoration failed".
To overcome this problem I would like to use the Pyomo "multistart" method (pyo.SolverFactory('multistart').solve(model)), which by default uses the IPOPT solver. I had been using IPOPT previously as well, but with this syntax:
pyo.SolverFactory('ipopt', executable=...ipopt.exe)
In this new case with multistart, though, I cannot define the executable for the IPOPT solver. Could you please help me solve this problem? (Or suggest alternatives to multistart for overcoming the starting-point issue of the non-convex minimization.)
So far I have tried:
pyo.SolverFactory('multistart', executable=...ipopt.exe).solve(model)
pyo.SolverFactory('multistart').solve(model, solver='ipopt', executable=...ipopt.exe)
Thanks a lot!
There should be an argument to pass in a dictionary of keyword arguments to the solver. See solver_args (https://pyomo.readthedocs.io/en/latest/contributed_packages/multistart.html#using-multistart-solver)
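A sketch of that mechanism, assuming model is the already-built Pyomo model; whether 'executable' is accepted inside solver_args depends on how multistart forwards the dictionary to IPOPT, so treat this as something to try rather than a guarantee:
import pyomo.environ as pyo

# 'solver' selects the local NLP solver, 'solver_args' is the dictionary of
# keyword arguments that multistart passes on to it
result = pyo.SolverFactory('multistart').solve(
    model,
    solver='ipopt',
    solver_args={'executable': '...ipopt.exe'},  # placeholder path from the question
)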
I am trying to register a Python function and its gradient as a TensorFlow operation.
I found many useful examples, e.g.:
Write Custom Python-Based Gradient Function for an Operation? (without C++ Implementation)
https://programtalk.com/python-examples/tensorflow.python.framework.function.Defun/
Nonetheless I would like to register attributes in the operation and use these attributes in the gradient definition by calling op.get_attr('attr_name').
Is this possible without going down to a C++ implementation?
Could you give me an example?
Unfortunately I don't believe it is possible to add attributes without using a C++ implementation of the operation. One feature that may help, though, is that you can define 'private' attributes by prepending an underscore to the attribute name. I'm not sure if this is well documented or what the long-term guarantees are, but you can try setting '_my_attr_name' and you should be able to retrieve it later.
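A minimal sketch of that trick, assuming TF 1.x graph mode; it relies on the private Operation._set_attr helper, so there are no API guarantees, and '_my_attr_name' is just a placeholder:
import tensorflow as tf
from tensorflow.core.framework import attr_value_pb2

x = tf.constant([1.0, 2.0])
y = tf.identity(x)

# attach a 'private' (underscore-prefixed) attribute to the op after creation;
# _set_attr is internal TF machinery, so treat this as unsupported behaviour
y.op._set_attr("_my_attr_name", attr_value_pb2.AttrValue(f=3.0))

# later, e.g. inside a registered gradient function, read it back the same way
print(y.op.get_attr("_my_attr_name"))  # 3.0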
How to pass MIP gap parameter to Gurobi with PULP?
I tried:
prob.solve(GUROBI_CMD(epgap = 0.9))
No luck
I used this wiki for all my failed attempts
Looking at the code, I would assume that you have to give the arguments as defined in Gurobi's docs (these are then passed when calling Gurobi's CLI), in a form compatible with PuLP's function signature.
prob.solve(GUROBI_CMD(options=['MIPGap=0.9']))
But I would probably recommend using the Python interface if you have gurobipy working (read Gurobi's docs). This would look like:
prob.solve(GUROBI(epgap = 0.9))
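A self-contained sketch of both variants on a toy MIP (the model and numbers are made up; note that whether GUROBI_CMD expects 'Name=value' strings or (name, value) tuples in options depends on the PuLP version):
from pulp import LpProblem, LpMaximize, LpVariable, LpInteger, GUROBI_CMD

prob = LpProblem("toy_mip", LpMaximize)
x = LpVariable("x", lowBound=0, upBound=10, cat=LpInteger)
y = LpVariable("y", lowBound=0, upBound=10, cat=LpInteger)
prob += 3 * x + 2 * y        # objective
prob += 2 * x + y <= 15      # constraint

# command-line solver: the options end up on the gurobi_cl call
prob.solve(GUROBI_CMD(options=["MIPGap=0.9"]))

# or, with gurobipy installed, the direct Python interface:
# from pulp import GUROBI
# prob.solve(GUROBI(epgap=0.9))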
I want to run parameter studies in different Modelica building libraries (Buildings, IDEAS) with Python, for example changing the infiltration rate.
I tried simulateModel and simulateExtendedModel(..., "zone.n50", [value]).
My questions: Why is it not possible to translate the model and then change the parameter? I get: "Warning: Setting zone.n50 has no effect in model. After translation you can only set literal start-values and non-evaluated parameters."
It is also not possible to run simulateExtendedModel. When I go to the command line in Dymola and query zone.n50, I get the actual value (the one I have defined in Python), but in the result file (and the plotted variable) it is always the standard n50 value. So my question: how can I change values before running (and translating?) the simulation?
The value for the parameter is also not visible in the variable browser.
Kind regards
It might be a structural parameter; these are evaluated as well. It should work if you explicitly set Evaluate=false for the parameter that you want to study.
Is it not visible in the variable browser or is it just greyed out and constant? If it is not visible at all you should check if it is protected.
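If the parameter is left non-evaluated as described above, a parameter study from Python could look roughly like this (a sketch using the Dymola-Python interface; the model name and the 1.5 value are placeholders):
from dymola.dymola_interface import DymolaInterface

dymola = DymolaInterface()
# zone.n50 should carry annotation(Evaluate=false) in the Modelica model,
# otherwise it is fixed at translation and this assignment has no effect
result = dymola.simulateExtendedModel(
    "MyBuildingModel",              # placeholder model name
    startTime=0.0,
    stopTime=3600.0,
    initialNames=["zone.n50"],
    initialValues=[1.5],            # placeholder infiltration rate
    finalNames=["zone.n50"],
)
# result[0] is the success flag, result[1] holds the values of finalNames
dymola.close()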
Some parameters cannot be changed after compilation, even with Evaluate=false. This is the case for parameters that influence the structure of the model, for example parameters that influence a discretization scheme and therefore the number of equations.
Changing such parameters requires recompiling the model. You can still do this in a parametric study, though; I think you can use ModelicaRes to achieve this (http://kdavies4.github.io/ModelicaRes/modelicares.exps.html)
I am using scipy.optimize.basinhopping to find the minima of a scalar function. I wonder whether it is possible to disable the local minimization part of scipy.optimize.basinhopping. As we can see from the output message below, minimization_failures and nit are nearly the same, indicating that the local minimization part may be useless for the global optimization process of basinhopping, which is why I would like to disable it, for the sake of efficiency.
You can avoid running the minimizer by using a custom minimizer that does nothing.
See the discussion on "Custom minimizers" in the documentation of minimize():
**Custom minimizers**
It may be useful to pass a custom minimization method, for example
when using a frontend to this method such as `scipy.optimize.basinhopping`
or a different library. You can simply pass a callable as the ``method``
parameter.
The callable is called as ``method(fun, x0, args, **kwargs, **options)``
where ``kwargs`` corresponds to any other parameters passed to `minimize`
(such as `callback`, `hess`, etc.), except the `options` dict, which has
its contents also passed as `method` parameters pair by pair. Also, if
`jac` has been passed as a bool type, `jac` and `fun` are mangled so that
`fun` returns just the function values and `jac` is converted to a function
returning the Jacobian. The method shall return an ``OptimizeResult``
object.
The provided `method` callable must be able to accept (and possibly ignore)
arbitrary parameters; the set of parameters accepted by `minimize` may
expand in future versions and then these parameters will be passed to
the method. You can find an example in the scipy.optimize tutorial.
Basically, you need to write a custom function that returns an OptimizeResult and pass it to basinhopping via the method part of minimizer_kwargs, for example
from scipy.optimize import OptimizeResult

def noop_min(fun, x0, args, **options):
    return OptimizeResult(x=x0, fun=fun(x0), success=True, nfev=1)

...

sol = basinhopping(..., minimizer_kwargs=dict(method=noop_min))
Note: I don't know how skipping local minimization affects the convergence properties of the basinhopping algorithm.
You can use minimizer_kwargs to pass minimize() the options you prefer for the local minimization step. See the dedicated part of the docs.
What you can set there depends on which solver you ask minimize for. You can try setting a larger tol to make the local minimization step terminate earlier.
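For example, a sketch with a made-up objective, where a loose tolerance and a small iteration cap keep each local step cheap:
import numpy as np
from scipy.optimize import basinhopping

def f(x):
    return np.cos(14.5 * x[0] - 0.3) + (x[0] + 0.2) * x[0]

# a loose tol and a small maxiter make each local minimization terminate early
minimizer_kwargs = {"method": "L-BFGS-B", "tol": 1e-2, "options": {"maxiter": 10}}
sol = basinhopping(f, x0=[1.0], niter=50, minimizer_kwargs=minimizer_kwargs)
print(sol.x, sol.fun)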
EDIT, in reply to the comment "What if I want to disable the local minimization part completely?"
The basinhopping algorithm, as described in the docs, works like this:
The algorithm is iterative with each cycle composed of the following features:

1. random perturbation of the coordinates
2. local minimization
3. accept or reject the new coordinates based on the minimized function value
If the above is accurate, there is no way to skip the local minimization step entirely, because its output is required by the algorithm to proceed further, i.e. to keep or discard the new coordinates. However, I am not an expert on this algorithm.