Is there any difference between a Pyomo Set and a Python set? - python

First-time Pyomo user here.
Is there any difference between declaring sets in an optimization model as
model.A = Set(initialize=list_values)
and using a built-in Python set like this one?
A = set(list_values)
Are there any advantages of one over the other?

First, you can access the model's variables, sets, and parameters inside constraint rules via the model object:
def some_constraint_rule(model):
    return model.some_set
You cannot do this with a plain Python set, because the model object passed into the constraint rule does not carry your Python set.
Second, you cannot index plain Python sets (as far as I know; you would need a dict for that), whereas with Pyomo Sets you can do much more, e.g.:
m.t = pyomo.Set(
    within=m.index,
    initialize=something,
    ordered=True,
    doc='some documentation for your set')
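To make the first point concrete, here is a minimal sketch; list_values and some_constraint_rule are placeholder names, and the bound of 10 is arbitrary:
import pyomo.environ as pyo

list_values = [1, 2, 3]  # placeholder data
model = pyo.ConcreteModel()
model.A = pyo.Set(initialize=list_values)                # Pyomo Set, attached to the model
model.x = pyo.Var(model.A, domain=pyo.NonNegativeReals)  # variables indexed by the Pyomo Set

# The rule receives the model, so model.A is available inside it
def some_constraint_rule(model):
    return sum(model.x[a] for a in model.A) <= 10
model.c = pyo.Constraint(rule=some_constraint_rule)

plain_set = set(list_values)  # a plain Python set is not attached to the model at all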


How do I dynamically add variables to list in Pyomo?

As part of a BuildAction rule that gets triggered on creation of a concrete model, I am dynamically creating additional "internal" decision variables (dependent on data supplied at construction time).
As well as creating these variables (which get used in constraint expressions), I know that I also need to add them to the model to avoid the "Variable 'XXX' is not part of the model being written out, but appears in an expression used on this model." error.
The VarList class seems designed for this (by analogy to the ConstraintList class which I am already successfully using for dynamically created constraints). However, I cannot find documentation for how to populate a VarList from pre-created variables. I can create a VarList and add variables to it, but this does not give me the control I need over how the variables are created...
import pyomo.environ as pyo
self.vl = pyo.VarList()
newVar = self.vl.add() # this does not give me control over the variable creation
# and I can't set all required properties of newVar, once created
It seems that I should be able to create a VarList by passing a dictionary of variables, but I cannot find documentation or examples that show how this works.
VarList works much like an IndexedVar in Pyomo. You need to understand a couple of things:
The variable index grows as you add variables, so you need to check the current length to avoid adding variables that you won't use, or using variables that have not been added yet.
The VarList.add() method adds variables of the same type as the VarList itself. For example, if the VarList was declared over the Integers or NonNegativeReals domain, every variable you add will be an Integer or NonNegativeReal variable, respectively.
Here's an example to show how it works:
import pyomo.environ as pyo
# Create the model
model = pyo.ConcreteModel()
# Add variables in a loop
model.x = pyo.VarList(domain=pyo.Integers)
for i in range(2):
    model.x.add()  # Add a new index to the variable x
# Adding constraints
# The indexed variable starts at 1, not at 0
model.myCons1 = pyo.Constraint(expr=2*model.x[1] + 0.5*model.x[2] <= 20)
model.myCons2 = pyo.Constraint(expr=2 + model.x[1] + 3*model.x[2] <= 25)
# Add an objective
model.Obj = pyo.Objective(expr=model.x[1] + model.x[2], sense=pyo.maximize)
# Solve using Gurobi
solver = pyo.SolverFactory('gurobi')
solver.solve(model, tee=True)
# Display the x variable results
model.x.display()
This leads to the following output (showing only the relevant part):
Optimal solution found (tolerance 1.00e-04)
Best objective 1.300000000000e+01, best bound 1.300000000000e+01, gap 0.0000%
x : Size=2, Index=x_index
Key : Lower : Value : Upper : Fixed : Stale : Domain
1 : None : 8.0 : None : False : False : Integers
2 : None : 5.0 : None : False : False : Integers
If you start using the kernel modeling layer, you can also use the pyomo.kernel.variable_list class, which works in a similar way (a sketch follows below). You can check it in the Pyomo documentation; the difference is that you can put variables of different types in the same list.
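A minimal sketch of that kernel-layer approach, assuming the pmo.variable/pmo.variable_list API as documented; the domains used here are only illustrative:
import pyomo.kernel as pmo

vl = pmo.variable_list()
vl.append(pmo.variable(domain=pmo.Binary))             # a binary variable
vl.append(pmo.variable(domain=pmo.NonNegativeReals))   # a continuous variable in the same list
vl[0].value = 1   # kernel variables are accessed by 0-based position in the list
print(len(vl))    # 2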
I don't fully understand what you are modeling, but you can always use an AbstractModel() and then populate it with external data (from a .dat file, a dict, etc.) using model.create_instance(data=data). That way, your model is always parameterized by well-defined sets; a small sketch follows.
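A minimal sketch of that pattern; the set and parameter names are made up for illustration:
import pyomo.environ as pyo

model = pyo.AbstractModel()
model.I = pyo.Set()
model.c = pyo.Param(model.I)
model.x = pyo.Var(model.I, domain=pyo.NonNegativeReals)
model.obj = pyo.Objective(rule=lambda m: sum(m.c[i] * m.x[i] for i in m.I))

# Data supplied at construction time; the same structure could come from a .dat file
data = {None: {'I': {None: [1, 2, 3]},
               'c': {1: 2.0, 2: 1.0, 3: 4.0}}}
instance = model.create_instance(data=data)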

Python: sharing a dictionary using the multiprocessing capability of scipy.optimize.differential_evolution

I am running an optimisation problem using the module scipy.optimize.differential_evolution. The code I wrote is quite complex and I will try to summarise the difficulties I have:
the objective function is calculated with an external numerical model (i.e. I am not optimising an analytical function). To do that I created a specific function that runs the model and another one to post process the results.
I am constraining my problem with some constraints. The constraints do not act on the actual parameters of the problem but on some dependent variables that can only be obtained at the end of the simulation of my external numerical model. Each constraint is defined in a separate function.
The problem with 2. is that the external model might be run twice for the same set of parameters: the first time to calculate the objective function and the second time to calculate the dependent variables to be assessed for the constraints. To avoid that and speed up my code, I created a global dictionary where I save the results of my dependent variables for each set of parameters (as a look-up table) every time the external model is called. This prevents the function that assesses the constraints from running the model again for the same set of parameters.
This works very well when I use a single CPU. However, it is my understanding that differential_evolution also allows multiprocessing by setting an appropriate value for the "workers" option (see here https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html#r108fc14fa019-1). My problem is that I have no idea how to update a global/shared variable when I enable the multiprocessing capability.
The webpage above states:
"If workers is an int the population is subdivided into workers sections and evaluated in parallel (uses multiprocessing.Pool) [...]"
So I deduced that I have to find a way to modify a shared variable when multiprocessing.Pool is used. In this regard I found these solutions:
Shared variable in python's multiprocessing
multiprocessing.Pool with a global variable
Why multiprocessing.Pool cannot change global variable?
Python sharing a dictionary between parallel processes
I think the last one is appropriate for my case. However, I am not sure how I have to set up my code and the workers option of the differential_evolution function.
Any help will be appreciated.
My code is something like:
def run_external_model(q):
    global dict_obj, dict_dep_var
    ....
    obj, dep_var = post_process_model(q)
    dict_dep_var[str(q)] = dep_var
    dict_obj[str(q)] = obj

def objective(q):
    global dict_obj
    if str(q) not in dict_obj:
        run_external_model(q)
    return dict_obj[str(q)]

def constraint(q):
    global dict_dep_var
    if str(q) not in dict_dep_var:
        run_external_model(q)
    return dict_dep_var[str(q)]

dict_obj = {}
dict_dep_var = {}
nlcs = scipy.optimize.NonlinearConstraint(constraint, 0., np.inf)
q0 = np.array([q1, ...., qn])
b = np.array([(0, 100.)] * len(q0))
solution = scipy.optimize.differential_evolution(objective, bounds=b, constraints=(nlcs,), seed=1)
The code above works with a single core. I am looking for a way to share the dictionaries dict_obj and dict_dep_var across the worker processes.
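For reference, one way the shared-dictionary approach from the last linked question might be combined with the workers option is a multiprocessing.Manager dict passed to the functions via functools.partial (the Manager proxies are picklable, so the pool workers can read and update the same dictionaries). This is only a sketch under the assumption that run_external_model is adapted to receive the dictionaries instead of using globals; it is not a tested solution:
import multiprocessing as mp
from functools import partial
import numpy as np
import scipy.optimize

# run_external_model(q, dict_obj, dict_dep_var) is assumed to be defined as in the
# question, adapted to write into the dictionaries it receives as arguments.

def objective(q, dict_obj, dict_dep_var):
    key = str(q)
    if key not in dict_obj:
        run_external_model(q, dict_obj, dict_dep_var)  # fills both shared dicts
    return dict_obj[key]

def constraint(q, dict_obj, dict_dep_var):
    key = str(q)
    if key not in dict_dep_var:
        run_external_model(q, dict_obj, dict_dep_var)
    return dict_dep_var[key]

if __name__ == '__main__':
    manager = mp.Manager()
    dict_obj = manager.dict()      # shared, process-safe dictionaries
    dict_dep_var = manager.dict()
    obj_fun = partial(objective, dict_obj=dict_obj, dict_dep_var=dict_dep_var)
    con_fun = partial(constraint, dict_obj=dict_obj, dict_dep_var=dict_dep_var)
    nlcs = scipy.optimize.NonlinearConstraint(con_fun, 0., np.inf)
    b = np.array([(0, 100.)] * 3)  # illustrative bounds
    solution = scipy.optimize.differential_evolution(
        obj_fun, bounds=b, constraints=(nlcs,), seed=1,
        workers=-1, updating='deferred')  # workers uses multiprocessing.Pool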

Issue with Gurobi - Adding usercuts with callback function

I am currently working on a MILP formulation that I want to solve using Gurobi with a branch-and-cut approach. My model is a variation of a classic Pickup and Delivery Problem with Time Windows (PDPTW), for which several classes of valid inequalities are defined. As the branch-and-bound solver runs, I want to add those inequalities (i.e., I want to add cuts), if certain conditions in the current node are met. My issue is as follows:
My variables are defined as dictionaries, which makes it easy to use them when formulating constraints because I can easily use their original indexing. An example of how I define variables is provided below:
tauOD = {}
# Start/end service times of the trucks
for i in range(0, Nt):
    tauOD[i, 0] = model.addVar(lb=0.0, ub=truckODTime[i][0],
                               vtype=GRB.CONTINUOUS, name='tauOD[%s,%s]' % (i, 0))
    tauOD[i, 1] = model.addVar(lb=0.0, ub=truckODTime[i][1],
                               vtype=GRB.CONTINUOUS, name='tauOD[%s,%s]' % (i, 1))
Once my model is defined in terms of variables, constraints, and cost function, in a classic branch-and-bound problem I would simply call model.optimize() to start the process. In this case, I am using model.optimize(mycallback), where mycallback is the callback function I defined to add cuts. My issue is that the callback function, for some reason, does not like model variables defined as dictionaries. The only workaround I found is as follows:
model._vars = model.getVars() #---> added this call right before the optimization starts
model.optimize(mycallback)
and then inside the callback I can retrieve the variable values by their position in that list, rather than by their original indices, as follows:
def mycallback(model, where):
    if where == GRB.Callback.MIPNODE:
        status = model.cbGet(GRB.Callback.MIPNODE_STATUS)
        # If the current node was solved to optimality, add cuts to strengthen
        # the linear relaxation
        if status == GRB.OPTIMAL:
            this_Sol = model.cbGetNodeRel(model._vars)  # relaxation values at the current node
            # Adding a cut, written in terms of the variable objects
            # (a dummy cut, just for illustration purposes)
            model.cbCut(lhs=model._vars[123] + model._vars[125], sense=GRB.LESS_EQUAL, rhs=1)
The cut above is just a dummy example to show that I can only add cuts by referring to the position the variables have in the solution vector, not to their original indexing. For example, I would like to be able to write a constraint inside my callback as
x[0,3,0] + x[0,5,0] <= 1
but the only thing I can do is to write
model._vars[123] + model._vars[125] <= 1 (assuming x[0,3,0] is the 124th variable of my solution vector and x[0,5,0] is the 126th). Although knowing the order of the variables is doable, because it depends on how I create them when setting up the model, it is a much more challenging (and error-prone) process than using the indices, as I do when defining the original constraints of my model (see below for an example):
###################
### CONSTRAINTS ###
###################
# For each truck, one active connection from the origin depot
for i in range(0, Nt):
    thisLHS = LinExpr()
    for j in range(0, sigma):
        thisLHS += x[0, j+1, i]
    thisLHS += x[0, 2*sigma+1, i]
    model.addConstr(lhs=thisLHS, sense=GRB.EQUAL, rhs=1, name='C1_' + str(i))
Did any of you experience a similar problem? A friend of mine told me that Gurobi, for some reason, does not like variables defined as dictionaries inside a callback function, but I do not know how to circumvent this.
Any help would be greatly appreciated.
Thanks!
Alessandro
You should store references to your variable dict and your index lists on the model object so they are available inside the callback.
Try this (here x is your dict of variables and I, J, K are your index lists):
model._I = I
model._J = J
model._K = K
model._x = x
You need these index lists so you can loop over each target variable x and check some condition, just as you would when writing a normal constraint for your model.
Then, inside your callback, you can iterate over the indices:
def mycallback(model, where):
    if where == GRB.Callback.MIPNODE:
        # Use cbGetNodeRel for node relaxations (cbGetSolution is only valid at MIPSOL)
        x = {key: model.cbGetNodeRel(var) for key, var in model._x.items()}
        for i in model._I:
            if sum(x[i, j, k] for j in model._J for k in model._K) > 1:
                Add_the_cut()  # build and add the violated cut here; see the sketch below
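For completeness, here is a hedged sketch of what Add_the_cut() could look like when folded into the callback, using gurobipy's quicksum and the cbCut form from the question; the separation condition shown is purely illustrative:
from gurobipy import GRB, quicksum

def mycallback(model, where):
    if where == GRB.Callback.MIPNODE and \
            model.cbGet(GRB.Callback.MIPNODE_STATUS) == GRB.OPTIMAL:
        # Relaxation values of the current node, keyed by the original (i, j, k) indices
        xval = {key: model.cbGetNodeRel(var) for key, var in model._x.items()}
        for i in model._I:
            if sum(xval[i, j, k] for j in model._J for k in model._K) > 1:
                # The cut is built from the variable objects, so the original indices
                # can be used directly instead of positional numbers
                model.cbCut(lhs=quicksum(model._x[i, j, k]
                                         for j in model._J for k in model._K),
                            sense=GRB.LESS_EQUAL, rhs=1)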

How to make statsmodels GLM.fit_constrained result picklable/store-and-reloadable

A GLS (and thus also OLS) regression with constraints on parameters can readily be run using the statsmodels GLM.fit_constrained() method, as in the code below (or here).
How can I make the GLMresults object resulting from such a statsmodels GLM.fit_constrained() regression picklable, so that the estimation result can be stored for re-use for prediction in a new session anytime later?
The GLMresults object obtained from fit_constrained() and containing the relevant estimation result has its .save() method that would normally readily pickle the object into a file.
This .save() works for the result of a standard (unconstrained) GLM regression, i.e. for GLM(...).fit(). However, it does not work for the result of fit_constrained(). Instead, it throws a pickling error, seemingly because the patsy DesignMatrixBuilder is not picklable, which links to the never-resolved issue here. This is the case at least for my Python 3.6.3 (running on Windows).
An example:
import statsmodels
import statsmodels.api as sm
import pandas as pd
# Define example data & constraints:
import numpy as np
df = pd.DataFrame(np.random.randint(0,100,size=(100, 5)), columns=list('ABCDF'))
y = df['A']
X = df[['B','C','D','F']]
constraints = ['B + C + D', 'C - F'] # Add two linear constraints on parameters: B+C+D = 0 & C-F = 0
statsmodels.genmod.families.links.identity()
OLS_from_GLM = sm.GLM(y, X)
# Unconstrained regression:
result_u = OLS_from_GLM.fit()
result_u.save('myfile_u.pickle') # This works
# Constrained regression - save() fails
result_c = OLS_from_GLM.fit_constrained(constraints)
result_c.save('myfile_c.pickle') # This fails with pickling error (tested in Python 3.6.3 on Windows): "NotImplementedError: Sorry, pickling not yet supported. See https://github.com/pydata/patsy/issues/26 if you want to help."
Is there a way to readily make the result from fit_constrained() picklable, i.e. storable?
Below I suggest a first workaround answer; it is trivial and has worked well for me so far. I do not know, however, whether it is truly advisable, whether its risks are large, or whether a preferable alternative solution exists.
I got this to work by simply removing (commenting out) the line
res._results.constraints = lc
in the function definition of fit_constrained() within statsmodels' active generalized_linear_model.py script (in my case in the virtualenv folder \env\Lib\site-packages\statsmodels\genmod\generalized_linear_model.py).
Idling this line seems to have created no problem for my work; I can now readily save and reload the pickled file and use it to make correct predictions based on the stored estimation; the imposed parameter constraints remain respected and predictions made using .predict() remain unchanged after reloading.
I wonder, though, whether there is any major risk attached to this procedure. I am not familiar with the inner workings of the statsmodels library, or with its glm.fit_constrained() method in particular, and I reckon it is inadvisable to change anything in a pre-existing module one does not understand. However, it is the only way I have found to conveniently impose various constraints on my GLM parameters while still being able to save the regression results and readily re-use them for prediction in a later session.
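An alternative that avoids editing the library source might be to strip the offending attribute from the fitted results object just before saving. This is only a sketch based on the attribute named above (res._results.constraints), with the same caveat about tampering with internals one does not fully understand:
import statsmodels.api as sm

# ... fit as in the question ...
result_c = OLS_from_GLM.fit_constrained(constraints)

# Drop the patsy-based constraints attribute from the underlying results object
# before pickling (equivalent in effect to commenting out the line in the source)
result_c._results.constraints = None
result_c.save('myfile_c.pickle')

# In a later session:
reloaded = sm.load('myfile_c.pickle')
print(reloaded.predict(X[:5]))  # predictions still use the constrained parameter estimates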

Gradient of a nonlinear pyomo constraint at a given point

I (repeatedly) need numeric gradient information of a nonlinear pyomo constraint con at a given point (i.e. the variables of the corresponding pyomo model are all set to a specific value). I have read this post and decided that the (slightly modified) lines
from pyomo.core.base.symbolic import differentiate
var_list = list(model.component_objects(Var, active=True))
grad_num = [value(partial) for partial in differentiate(g_nu.body, wrt_list=var_list)]
should serve my purpose.
However, the example below already fails, presumably due to the appearance of the exponential function:
from pyomo.environ import *
model = ConcreteModel()
model.x_1 = Var()
model.x_2 = Var()
model.constr = Constraint(expr = 2*(model.x_1)**4+exp(model.x_2)<=3)
model.x_1.set_value(1)
model.x_2.set_value(1)
varList = list(model.component_objects(Var, active=True))
grad = [value(partial) for partial in differentiate(model.constr.body, wrt_list=varList)]
DeveloperError: Internal Pyomo implementation error:
"sympy expression type 'exp' not found in the operator map for expression >exp(x1)"
Please report this to the Pyomo Developers.
So, my question is: can Pyomo generally differentiate expressions such as the exponential function, square roots, etc., and is my example just an unfortunate coincidence that can easily be fixed? I will be dealing with various models from MINLPLIB, so a tool for differentiating the expressions that appear is crucial.
This error existed through Pyomo 5.2 and was resolved in Pyomo 5.3. Upgrading to 5.3 fixes the problem, and your example works fine after adding from pyomo.core.base.symbolic import differentiate; see the sketch below.
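For reference, a self-contained version of the example with that import added, using the same symbolic differentiate API as in the question (at x_1 = x_2 = 1 the gradient of 2*x_1**4 + exp(x_2) is [8, e]):
from pyomo.environ import *
from pyomo.core.base.symbolic import differentiate

model = ConcreteModel()
model.x_1 = Var()
model.x_2 = Var()
model.constr = Constraint(expr=2*(model.x_1)**4 + exp(model.x_2) <= 3)

model.x_1.set_value(1)
model.x_2.set_value(1)

varList = list(model.component_objects(Var, active=True))
grad = [value(partial) for partial in differentiate(model.constr.body, wrt_list=varList)]
print(grad)  # expected: [8.0, 2.718...] at x_1 = x_2 = 1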
