Variable indexed by an indexed Set with Pyomo - python

I'm trying to figure out how to index a variable with an indexed Set:
For example:
model = AbstractModel()
model.J = Set()
model.O = Set(model.J)
I want to define a variable indexed over both Sets. Can someone help me? I tried the following:
model.eb = Param(model.J, model.O)
which gives
TypeError("Cannot index a component with an indexed set")
Has anyone any suggestions on how to define this variable properly?

Pyomo doesn't support indexing a component by an indexed Set like that (I'm actually unaware of use cases for indexed sets in Pyomo, although they seem to be a thing in GAMS). You could approach this as follows (using a ConcreteModel here, for illustration):
Define Sets for all unique values of jobs and operations (I assume you have some data structure which maps the operations to the jobs):
import pyomo.environ as po
import itertools
model = po.ConcreteModel()
map_J_O = {'J1': ['O11', 'O12'],
           'J2': ['O21']}
unique_J = map_J_O.keys()
model.J = po.Set(initialize=unique_J)
unique_O = set(itertools.chain.from_iterable(map_J_O.values()))
model.O = po.Set(initialize=unique_O)
Then you could define a combined Set which contains all valid combinations of J and O:
model.J_O = po.Set(within=model.J * model.O,
                   initialize=[(j, o) for j in map_J_O for o in map_J_O[j]])
model.J_O.display()
# Output:
# J_O : Dim=0, Dimen=2, Size=3, Domain=J_O_domain, Ordered=False, Bounds=None
#     [('J1', 'O11'), ('J1', 'O12'), ('J2', 'O21')]
Create the parameter using the combined Set:
model.eb = po.Param(model.J_O)
This last line will throw an error if the parameter is initialized with any invalid combination of J and O. Alternatively, you can also initialize the parameter for all combinations
po.Param(model.J * model.O)
and only provide values for the valid combinations, but this might bite you later. Also, model.J_O might be handy for variables and constraints as well, depending on your model formulation; see the sketch below.
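For instance, a minimal sketch of a variable declared over the combined Set, continuing the ConcreteModel above (the variable name x and its domain are illustrative):
model.x = po.Var(model.J_O, domain=po.NonNegativeReals)
# x exists only for the valid (j, o) pairs:
# model.x['J1', 'O11'], model.x['J1', 'O12'], model.x['J2', 'O21']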

Related

adding variables in the case of symmetric TSP

I have to implement a symmetrical TSP. When I go to add the variables:
x = m.addVars(Costs.keys(), vtype=GRB.BINARY, obj=Costs, name='x')
It gives me the error:
'list' object has no attribute 'keys'
I have the data saved in a list. Is there a way to add the variables, or do I necessarily have to save the data in a dictionary?
Seems like Costs is a list. Typically, use a dictionary of distances, indexed by the edge pairs i,j. See the Gurobi tsp.py example code for an illustration.
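A minimal sketch of that pattern, assuming hypothetical point coordinates (the points list and the Euclidean distances are illustrative, not from the question):
import itertools
import gurobipy as gp
from gurobipy import GRB

# hypothetical coordinates, for illustration only
points = [(0, 0), (0, 1), (1, 0), (1, 1)]
n = len(points)

# symmetric TSP: one cost per undirected edge (i, j) with i < j
Costs = {(i, j): ((points[i][0] - points[j][0]) ** 2 +
                  (points[i][1] - points[j][1]) ** 2) ** 0.5
         for i, j in itertools.combinations(range(n), 2)}

m = gp.Model()
# addVars takes the dict keys as the index set and the dict as objective coefficients
x = m.addVars(Costs.keys(), vtype=GRB.BINARY, obj=Costs, name='x')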

Pythonic Way of Storing Multiple Set of Lists (same names - different params) to be Called by Functions

I've built a few generalized functions that loop over a set of parameters to help with a price optimisation exercise. Since each product has a different set of cost inputs, the number of parameters for each can vary. The issue is that each product will have a different set of configs, so I need to store these somehow. For my quick demo I just added the suffixes _1, _2, _3, etc., but I'm now looking for a more structured way to build and maintain this.
import pandas as pd
#Configs params - Product 1
factors_1 = ['BaseAmount','Factor1','Factor2','Factor3','Factor4','Factor5']
operation_1 = [None,'x','x','x','x','+']
custom_functions_1 = [None,None,None,None,'custom_function_1(df, 0.15)',None] #To call custom function
rounding_1 = [None,False,True,True,False,True]
rounding_decimals_1 = [None,None,1,0,None,0]
operation_summary_1 = pd.DataFrame(list(zip(operation_1, custom_functions_1, rounding_1, rounding_decimals_1)),
                                   index=factors_1,
                                   columns=['operation', 'custom_functions', 'rounding', 'rounding_decimals'])
operation_summary_1
#Configs params - Product 2
factors_2 = ['BaseAmount','Factor1','Factor6','Factor5','Factor7']
operation_2 = [None,'x','x','+','+']
custom_functions_2 = [None,None,None,'custom_function_2(df, 0.15)',None] #To call custom function
rounding_2 = [None,False,True,True,True]
rounding_decimals_2 = [None,None,0,0,0]
operation_summary_2 = pd.DataFrame(list(zip(operation_2, custom_functions_2, rounding_2, rounding_decimals_2)),
                                   index=factors_2,
                                   columns=['operation', 'custom_functions', 'rounding', 'rounding_decimals'])
operation_summary_2
What I'm looking for is a recommendation on the best way to store these lists for hundreds of products, which I would want to load and then iterate over as lists. I was thinking classes could be one good way of storing these, but I don't have much experience with them.
I'm thinking of doing something like the following, but first I'm not sure how to get it to work, and more importantly I'm not sure it's good coding practice.
class product_1:
    def __init__(self):
        self.factors = ['BaseAmount', 'Factor1', 'Factor2', 'Factor3', 'Factor4', 'Factor5']

product_1.self.factors
You need to create a data holder, and the best fit here is a dataclass: https://docs.python.org/3/library/dataclasses.html
You create one class that describes your data container, then you instantiate that class as many times as you need; see the examples in the link above and the sketch below.
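A minimal sketch of that idea, assuming the field names from the question (ProductConfig and the configs dict are illustrative names):
from dataclasses import dataclass

@dataclass
class ProductConfig:
    factors: list
    operations: list
    custom_functions: list
    rounding: list
    rounding_decimals: list

product_1 = ProductConfig(
    factors=['BaseAmount', 'Factor1', 'Factor2', 'Factor3', 'Factor4', 'Factor5'],
    operations=[None, 'x', 'x', 'x', 'x', '+'],
    custom_functions=[None, None, None, None, 'custom_function_1(df, 0.15)', None],
    rounding=[None, False, True, True, False, True],
    rounding_decimals=[None, None, 1, 0, None, 0],
)

# hundreds of products can then live in a dict keyed by product id
configs = {'product_1': product_1}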

Extract gurobi multi-dimensional variable values and form a numpy array

I want to know, when I define multi-dimensional variables in Gurobi, how I can extract all the values of the solution and organize them into a NumPy array according to the original coordinates of the variables.
I have the following decision variables defined in Gurobi using the Python API:
for i in range(N):
    for t in range(M):
        Station_Size[i, t] = m.addVar(ub=Q, name='Station_Size_%s_%s' % (i, t))
        for j in range(N):
            Admission[i, j, t] = m.addVar(ub=Arrival_Rate[t, i, j], obj=-1, name='Admission_Rate_%s_%s_%s' % (i, j, t))
            Return[i, j, t] = m.addVar(name='Return_Rate_%s_%s_%s' % (i, j, t))
I have the problem solved and I have three dictionaries:
Station_Size, Admission and Return
I know that the solution can be accessed as:
Station_Size[i,t].X, Admission[i,j,t].X and Return[i,j,t].X
I want to create three NumPy arrays such that:
Array_Station_Size[i,t] = Station_Size[i,t].X
Array_Admission[i,j,t] = Admission[i,j,t].X
I can definitely do this by writing three loops and creating the NumPy arrays element by element. That's doable if the loops don't take a lot of time, but I just want to know if there is a better way to do this. Please comment if I did not make myself clear.
Assuming your model's name is m, do the following:
Array_Station_Size = m.getAttr('x', Station_Size)
which is a gurobipy.tupledict now.
see gurobi doc here
http://www.gurobi.com/documentation/8.1/quickstart_windows/py_results.html
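A minimal sketch of turning that tupledict into NumPy arrays, assuming the keys cover the full (N, M) and (N, N, M) index grids from the question:
import numpy as np

sol = m.getAttr('x', Station_Size)        # tupledict mapping (i, t) -> value
Array_Station_Size = np.array([[sol[i, t] for t in range(M)] for i in range(N)])

sol_adm = m.getAttr('x', Admission)       # tupledict mapping (i, j, t) -> value
Array_Admission = np.array([[[sol_adm[i, j, t] for t in range(M)]
                             for j in range(N)] for i in range(N)])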
I figured this problem out.
Do the following:
import numpy as np

Array_Station_Size = np.array([[Station_Size[i, t].X for t in range(M)]
                               for i in range(N)])

How to vectorize a json dictionary using R wrapped in python?

High-level description of what I want: I want to be able to receive a json response detailing the values of certain fields/features, say {a: 1, b: 2, c: 3}, as a flask (json) request. Then I want to convert the resulting python dict into an R dataframe with rpy2 (a single row of one), and feed it into a model in R which expects a set of inputs where each column is a factor. I usually use python for this sort of thing and serialize a vectorizer object from sklearn, but this particular analysis needs to be done in R.
So here is what I'm doing so far.
import os

import rpy2.robjects as robjects
from rpy2.robjects.packages import STAP

model = os.path.join('model', 'rsource_file.R')
with open(model, 'r') as f:
    string = f.read()
model = STAP(string, "model")
data_r = robjects.DataFrame(data)
data_factored = model.prepdata(data_r)
result = model.predict(data_factored)
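(As an aside, a minimal sketch of the dict-to-single-row-DataFrame step, assuming a flat dict of numeric scalars; the payload dict is illustrative, and each value is wrapped in a length-1 vector so R builds exactly one row:)
payload = {'a': 1, 'b': 2, 'c': 3}
data = robjects.r['data.frame'](
    **{k: robjects.FloatVector([v]) for k, v in payload.items()})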
The relevant R functions from rsource_file.R are:
prepdata = function(row){
  for(v in vars) if(typeof(row[,v])=="character") row[,v] = as.factor(row[,v], levs[0,v])
  modm2 = model.matrix(frm, data=tdz2, contrasts.arg=c1, xlev=levs)
}
where contrasts and levels have been pre-extracted from an existing dataset like so:
# vars = vector of columns of interest
load(data.Rd)
for(v in vars) if(typeof(data[,v])=="character") data[,v] = as.factor(data[,v])
frm = ~ weightedsum_of_things # function mapped, causes no issue
modm = model.matrix(frm, data=data)
levs = lapply(data, levels)
c1 = attributes(modm)$contrasts
Calling prepdata does not give me what I want, which is for the new dataframe (built from the json request, data_r) to be properly turned into a vector of factors with the same encoding by which the elements of the data.Rd dataset were transformed.
Thank you for your assistance, will upvote.
More detail: what my code is attempting to do is map the labels() method over the dataset to extract a list of lists of possible "levels" for a factor, and then, for matching values in the new input, call factor() with the new data row as well as the corresponding set of levels, levs[0,v].
This throws an error that you can't use factor if there isn't more than one level. I think this might have something to do with the labels/levels difference? I'm calling levs[,v] to get the element of the return value of lapply(data, levels) corresponding to the "title" v (a string). I extracted the levels from the dataset, but referencing them in the body of prepdata this way doesn't seem to work. Do I need to extract labels instead? If so, how can I do that?

py2neo - How can I use merge_one function along with multiple attributes for my node?

I have overcome the problem of duplicate nodes being created in my DB by using the merge_one function, which works like this:
t = graph.merge_one("User", "ID", "someID")
which creates a node with a unique ID. My problem is that I can't find a way to add multiple attributes/properties to my node along with the ID, which is added automatically (a date, for example).
I have managed to achieve this the old "duplicate" way, but it doesn't work now since merge_one can't accept more arguments! Any ideas???
Graph.merge_one only allows you to specify one key-value pair because it's meant to be used with a uniqueness constraint on a node label and property. Is there anything wrong with finding the node by its unique id with merge_one and then setting the properties?
t = graph.merge_one("User", "ID", "someID")
t['name'] = 'Nicole'
t['age'] = 23
t.push()
I know I am a bit late... but still useful I think
Using py2neo==2.0.7 and the docs (about Node.properties):
... and the latter is an instance of PropertySet which extends dict.
So the following worked for me:
m = graph.merge_one("Model", "mid", MID_SR)
m.properties.update({
    'vendor': "XX",
    'model': "XYZ",
    'software': "OS",
    'modelVersion': "",
    'hardware': "",
    'softwareVesion': "12.06"
})
graph.push(m)
This hacky function iterates through the labels and the property key/value pairs, gradually eliminating all nodes that don't match each criterion submitted. The final result is a list of all nodes (if any) that match all the supplied properties and labels.
def find_multiProp(graph, *labels, **properties):
    results = None
    genNodes = lambda l, k, v: graph.find(l, property_key=k, property_value=v)
    for l in labels:
        for k, v in properties.items():
            if results is None:
                results = [r for r in genNodes(l, k, v)]
                continue
            prevResults = results
            results = [n for n in genNodes(l, k, v) if n in prevResults]
    return results
The final result can be used to assess uniqueness and (if empty) create a new node, by combining the two functions together...
def merge_one_multiProp(graph, *labels, **properties):
    r = find_multiProp(graph, *labels, **properties)
    if not r:
        # remove tuple association
        node, = graph.create(Node(*labels, **properties))
    else:
        node = r[0]
    return node
example...
from py2neo import Node, Graph

graph = Graph()
properties = {'p1': 'v1', 'p2': 'v2'}
labels = ('label1', 'label2')

graph.create(Node(*labels, **properties))
for l in labels:
    graph.create(Node(l, **properties))
graph.create(Node(*labels, p1='v1'))

node = merge_one_multiProp(graph, *labels, **properties)
