I am learning Pyomo abstract modeling from a book.
I have an example with an objective function (equation is here) that minimizes the cost of building warehouses at optimal locations to meet delivery demands.
The authors modeled the objective with this script (script is here).
In the script, "model.d" is a Param and "model.x" is a Var.
Why did the authors use Param for "model.d" and Var for "model.x"?
Please spare some time to help me understand this.
Not only in Pyomo, but in general in operations research and optimization: a parameter is a given value that you know prior to solving the problem. A variable, on the other hand, is a value that the solver determines in order to reach the best solution.
Suppose that in your problem model.d is the cost of constructing warehouse model.x. This means that for each potential warehouse x, constructing it costs d. This assumes that if you are building a warehouse, you know the capital cost of constructing it; since that cost is known before solving the problem, model.d is a parameter. model.x is a variable because you don't know yet whether to construct the warehouse or not. You want the model to tell you that, and therefore it is a variable.
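As a concrete illustration of the distinction, here is a minimal abstract-model sketch (the names d and x follow the question; the set N and the binary domain are assumptions for the example):

import pyomo.environ as pyo

model = pyo.AbstractModel()
model.N = pyo.Set()                              # candidate warehouse sites (illustrative)
model.d = pyo.Param(model.N)                     # construction cost: known data, supplied before solving
model.x = pyo.Var(model.N, domain=pyo.Binary)    # build decision: found by the solver

def obj_rule(m):
    # the total cost combines the known parameter d with the unknown variable x
    return sum(m.d[i] * m.x[i] for i in m.N)
model.obj = pyo.Objective(rule=obj_rule, sense=pyo.minimize)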
I have a dynamic control problem with about 35 independent variables, many intermediate variables, and a few equations, mainly to enforce a sensible mass balance by limiting certain dependent variables (representing stream flows) to be positive.
Initially the variables were declared using the m.Var() constructor, but I subsequently upgraded them to MV and CV variables to capitalize on the flexibility of tuning attributes such as COST, DCOST, etc. that these classes add.
I noticed that IPOPT (3.12) does not produce a solution (error reported: "EXIT: Converged to a point of local infeasibility. Problem may be infeasible.") when a set of variables is configured as MVs, yet when one is instantiated as a Var a successful solution is returned. I re-instantiated the variable as an MV and systematically removed constraints to try to pinpoint the constraining equation. I discovered that the set of initial conditions I provided for the independent variables constituted an infeasibility (it resulted in a value of -0.02 on a stream that has a positivity constraint on it). Although RTOL could probably be used to solve the problem in this case, I do not think that is the correct general approach. I have tried COLDSTART=2 but do not know how to interpret the presolve.txt file it generates.
Firstly, is there some standard functionality to assist with this situation, or should one make sure the initial guesses represent a feasible solution?
Secondly, why would the inability to produce a successful solution only manifest when the variable is declared as an MV as opposed to the less decorated Var?
The m.Var() creates an additional degree of freedom for the optimizer while m.Param() creates a quantity that is determined by the user. The m.Var() types can be upgraded to m.SV() as state variables or m.CV() as controlled variables. The m.Param() type is upgraded to m.FV() for fixed values or m.MV() for manipulated variables. If the STATUS parameter is turned on for those types then they also become degrees of freedom. More information on FV, MV, SV, and CV types is given in the APMonitor documentation and Gekko Documentation. The problem likely becomes feasible because of the additional degree of freedom. Try setting the m.MV() to an m.SV() to retain the degree of freedom from the m.Var() declaration.
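For illustration, a minimal sketch of the declarations discussed (the variable names, values, and bounds are hypothetical, not from the question):

from gekko import GEKKO

m = GEKKO(remote=False)
m.time = [0, 1, 2, 3]

f_var = m.Var(value=1.0, lb=0)   # plain variable: a free degree of freedom
f_sv = m.SV(value=1.0, lb=0)     # upgraded Var: still a degree of freedom, adds tuning attributes
f_mv = m.MV(value=1.0, lb=0)     # upgraded Param: fixed at its value by default
f_mv.STATUS = 1                  # only now does the optimizer move it (COST, DCOST, etc. apply)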
Initial solutions are often easiest to obtain from a steady-state simulation. There are additional details in this paper:
Here is a flowchart that I typically use:
There are additional details in the paper on how COLDSTART options work. If the solver reports a successful solution, then there should be no constraint violations.
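As a sketch of that initialization pattern, assuming m is the already-built Gekko model:

m.options.IMODE = 1      # steady-state solve first, as a consistent initial point
m.solve(disp=False)

m.options.IMODE = 6      # then switch to the dynamic optimization problem
m.options.COLDSTART = 2  # optional: decompose the initialization further
m.solve(disp=False)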
To get the final solution after optimization in pyscipopt, we can do
# x is a dict of x_ij variables created with model.addVar()
model.data = x
model.optimize()
# getVal accepts a single variable, so collect the values per variable
X = {key: model.getVal(var) for key, var in x.items()}
I would like to get the LP relaxation solutions at every node of the branch and bound tree. One method for doing this would be to use model.getVal(t_x_ij) for every (transformed) variable 'x_ij'. Is there a more efficient way of doing this than looping over all the transformed variables?
Please let me know if you need any further clarifications.
If you are solving a MIP, you would need to get the LP solution values during the solving process. You need to implement a callback that is executed whenever a new node LP is solved.
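A minimal sketch of such an event handler (the class name and the way values are stored are illustrative; Variable.getLPSol() reads the LP value of a variable at the current node):

from pyscipopt import Model, Eventhdlr, SCIP_EVENTTYPE

class NodeLPRecorder(Eventhdlr):
    """Stores the LP relaxation values of all variables at every solved node LP."""

    def __init__(self):
        self.node_lps = []

    def eventinit(self):
        self.model.catchEvent(SCIP_EVENTTYPE.LPSOLVED, self)

    def eventexit(self):
        self.model.dropEvent(SCIP_EVENTTYPE.LPSOLVED, self)

    def eventexec(self, event):
        # snapshot the node LP value of every (transformed) variable
        self.node_lps.append({v.name: v.getLPSol() for v in self.model.getVars()})

model = Model()
# ... build the MIP here ...
recorder = NodeLPRecorder()
model.includeEventhdlr(recorder, "NodeLPRecorder", "records node LP relaxation values")
model.optimize()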
You might want to check out TreeD, a project I created to inspect and visualize various LP-related information during the MIP solving process of PySCIPOpt.
I have a non-linear minimization problem that is apparently non-convex. I use the Pyomo framework for an energy system operation optimization model, where a once-configured optimization model needs to be evaluated for sequential hours. I create the optimization problem at the beginning, defining the variables, constraints and objective function for the specific system, and then I try to solve this set-up for the "simulation" time frame (e.g. for every hour in a given year), changing only the energy demand parameter and minimizing operation costs. I have noticed that for some random hours an optimum cannot be found. In most of these failed cases I get a "max iteration number reached" result, sometimes "restoration failed".
To overcome this problem I would like to use the Pyomo "multistart" method (pyo.SolverFactory('multistart').solve(model)), which by default uses the IPOPT solver. I had been using IPOPT previously as well, but with the syntax:
pyo.SolverFactory('ipopt', executable=...ipopt.exe)
In this new case with multistart, though, I cannot define the executable for the IPOPT solver. Could you please help me solve this problem? (...or suggest alternatives to multistart to overcome the starting-point issue of non-convex minimization)
So far I have tried:
pyo.SolverFactory('multistart', executable=...ipopt.exe).solve(model)
pyo.SolverFactory('multistart').solve(model, solver='ipopt', executable=...ipopt.exe)
Thanks a lot!
There should be an argument to pass in a dictionary of keyword arguments to the solver. See solver_args (https://pyomo.readthedocs.io/en/latest/contributed_packages/multistart.html#using-multistart-solver)
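For example (a sketch; the path to the IPOPT binary is hypothetical and should point to your installation):

import pyomo.environ as pyo

ipopt_path = r"C:\solvers\ipopt.exe"  # hypothetical location of the IPOPT executable
pyo.SolverFactory('multistart').solve(
    model,                                  # the already-built Pyomo model
    solver='ipopt',
    solver_args={'executable': ipopt_path},
)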
Is it possible to formulate a min-max optimization problem of the following form in Pyomo:
min_x max_m g_m(x)  s.t.  L
where the g_m are nonlinear functions (actually constraints of another model) and L is a set of linear constraints?
How would I create the expression for the objective function of the model?
The problem is that using max() on a list of constraint objects returns only the constraint possessing the maximum value at a given point.
I think yes, but unless you find a clever way to reformulate your model, it might not be very efficient.
You could solve every possibility of max(g_m(x)), i.e. one model per g_m, and then select the solution with the lowest objective function value.
I fear that the max operation is not something you can add to a minimization model, since it is not a mathematical operation but a solver operation: it lives at the problem level. Keep in mind that when solving a model, Pyomo accepts only one sense of optimization (min or max) as an argument, making it unable to understand a min-max sense. Even if it did, how could it know what to maximize and what to minimize? This is why I suggest you break your problem in two, unless you rework its formulation.
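For completeness, the reformulation route mentioned at the start is usually the standard epigraph trick: introduce an auxiliary variable t, minimize t, and require t >= g_m(x) for every m. A minimal Pyomo sketch (the g expressions here are illustrative placeholders, not your actual g_m):

import pyomo.environ as pyo

model = pyo.ConcreteModel()
model.x = pyo.Var()
model.t = pyo.Var()

# illustrative nonlinear g_m; substitute your own expressions
g = [model.x**2 - 1, (model.x - 2)**2]

model.M = pyo.RangeSet(0, len(g) - 1)
model.epi = pyo.Constraint(model.M, rule=lambda m, i: m.t >= g[i])
model.obj = pyo.Objective(expr=model.t, sense=pyo.minimize)
# the linear constraints L would be added as usual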
I am solving a minimization linear program using COIN-OR's CLP solver through PuLP in Python.
The variables included in the problem are a subset of the total number of possible variables, and sometimes my pricing heuristic will pick a subset of variables that results in an infeasible solution. After that I use shadow prices to price new variables in.
My question is: if the problem is infeasible, I still get values from calling prob.constraints[c].pi, but those values don't always seem to be "valid" or "good" per se.
By comparison, a solver like Gurobi won't even let me query the shadow prices after an infeasible solve.
Actually Stu, this might work! The "dummy var" in my case could be the source/sink node, whose flow constraints I can loosen, allowing infinite flow in/out but at a large cost. This makes the solution feasible, with a very high (bad) optimal cost; the pricing of the new variables should then work and show me which variables to add to the problem on the next iteration. I'll give it a try and report back. My only concern is that the big-M cost coefficient on the source/sink node may skew the pricing of the variables, making all of them look relatively attractive. This would be counterproductive, because adding most of the variables back into the problem would defeat the purpose of my column generation in the first place. I'll test it...
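For anyone trying the same approach, here is a minimal PuLP sketch of the idea (the model, names, and numbers are illustrative; the bundled CBC solver stands in for CLP):

import pulp

BIG_M = 1e4  # keep as small as practical so the penalty doesn't skew the duals

prob = pulp.LpProblem("pricing_master", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=0)
slack = pulp.LpVariable("flow_slack", lowBound=0)  # the high-cost "dummy" slack

prob += 3 * x + BIG_M * slack        # penalized objective
prob += x + slack >= 10, "demand"    # slack keeps this constraint satisfiable

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(prob.constraints["demand"].pi) # dual value is now well defined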