I'm trying to implement a tabu search algorithm, but I keep getting the following error: NameError: name 'instance_dict' is not defined.
def Objfun(instance_dict, solution, show=False):
    jobs = instance_dict  # avoid shadowing the built-in name dict
    t = 0  # starting time
    objfun_value = 0
    for job in solution:
        C_i = t + jobs[job]["processing_time"]  # completion time of this job
        d_i = jobs[job]["due_date"]
        T_i = max(0, C_i - d_i)  # tardiness
        W_i = jobs[job]["weight"]
        objfun_value += W_i * T_i
        t = C_i
    if show:
        print("The Objective function value for {} solution schedule is: {}".format(solution, objfun_value))
    return objfun_value
solution_1 = [1,2,5,6,8,9,10,3,4,7]
solution_2 = [2,3,5,10,6,8,9,4,7,1]
Objfun(instance_dict, solution_1, show=True)
Objfun(instance_dict, solution_2, show=True)
The last two lines of your code, Objfun(instance_dict, solution_1, show=True) and the one after it, reference a variable, instance_dict, that is not defined anywhere. Hence the error: you are trying to use an undefined variable.
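For example, defining a small instance before the two calls makes the snippet run. The numbers below are purely hypothetical; the point is that every job number used in the solutions must be a key with processing_time, due_date and weight entries:

# Hypothetical data for illustration only - replace with your real instance
instance_dict = {
    job: {"processing_time": 2 * job, "due_date": 5 * job, "weight": 1}
    for job in range(1, 11)
}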
My goal is to write the following constraint with docplex.mp.model and the Python API:
P[t,j] >= s[t-j+1] - sum(s[k] for k = t-j+2, ..., t)    for t = 1, ..., N and j = 1, ..., t
My code:
from docplex.mp.model import Model
clsp = Model(name = 'capacitated lot sizing problem')
no_of_period = 8
period_list = [t for t in range(1, no_of_period+1)]
s_indicator = clsp.binary_var_dict(period_list, name = 's_indicator')
p_indicator = clsp.binary_var_matrix(period_list, period_list, name = 'p_indicator')
m_8 = clsp.add_constraints(p_indicator[t,j] >= s_indicator[t-j+1] - clsp.sum(s_indicator[t-j+2] for j in range(1,t+1))
for j in t for t in period_list )
Output: KeyError: 0
Any help would be appreciated.
Your code extract is not correct Python (for j in t). I had to modify it to make it run:
m_8 = clsp.add_constraints(
    p_indicator[t,j] >= s_indicator[t-j+1] - clsp.sum(s_indicator[t-j+2] for j in range(1,t+1))
    for j in period_list for t in period_list)
This gives me a KeyError: 9 exception.
To fix this, remember that binary_var_dict (and likewise binary_var_matrix) creates a Python dictionary whose keys come from the first argument, here period_list, ranging from 1 to 8.
To investigate, I wrote this small snippet to enumerate all the keys your code generates:
ke = 0
for t in period_list:
    for j in period_list:
        ix = t - j + 2
        if ix not in period_list:
            ke += 1
            print(f"** Key error[{ke}], t={t}, j={j}, t-j+2={t-j+2}")
This prints 22 key errors; for example, t=8, j=1 computes t-j+2 = 9, which is outside the dictionary key set.
To summarize: check your indices against the keys of the variable dictionaries in your model.
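For completeness, here is one way to write the constraint so that every index stays inside the key set (a sketch that follows the mathematical statement above, with j running from 1 to t and the sum over k from t-j+2 to t):

m_8 = clsp.add_constraints(
    p_indicator[t, j] >= s_indicator[t-j+1] - clsp.sum(s_indicator[k] for k in range(t-j+2, t+1))
    for t in period_list
    for j in range(1, t+1))

With j <= t, the index t-j+1 stays between 1 and t, and every k in the sum stays between 2 and t, so all keys exist (for j=1 the sum is empty).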
I am working on an optimization problem based on a kind of "historical memory", i.e. the decision variable depends on its "historical" values at the previous index or indices. Specifically, I get the following table of objectives:
https://i.imgur.com/vfzbQK0.png
The same is also true for the constraints, all depending on the same variable newTr for certain indices.
https://i.imgur.com/4CK3qWu.png
The problem is that when I try to solve the model I get the following error: ValueError: More than one active objective defined for input model; Cannot write legal LP file
Objectives: obj[1] obj[2]
The var newTr is defined as:
model.newTr = pyo.Var(month, bounds = (0, 5), domain=pyo.NonNegativeIntegers, initialize = 0)
The obj function is defined as:
def obj(model, i):
    if i == 1:
        return cost_att*(initial_att - model.resig[i]) + cost_tr*model.newTr[i]
    elif i == 2:
        return cost_att*(initial_att - model.resig[i]) + cost_tr*(model.newTr[i-1] + model.newTr[i])
    else:
        return cost_att*(initial_att - model.resig[i] + model.newTr[i-2]) + cost_tr*(model.newTr[i-1] + model.newTr[i])

model.obj = pyo.Objective(month, rule=obj)
How can I solve this?
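For what it's worth, pyo.Objective(month, rule=obj) declares one objective per element of month, and the LP writer accepts only one active objective. If the intent is to minimize the total cost over all months, a minimal sketch of a fix (an assumption about the intent, not necessarily the poster's) is to sum the per-month expressions into a single objective:

def total_obj(model):
    # sum the per-month cost expressions into one objective
    return sum(obj(model, i) for i in month)

model.obj = pyo.Objective(rule=total_obj, sense=pyo.minimize)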
I don't know why the number of breaches is not counted. If I do it this way:
def test(runs):
    runs = runs
    for i in range(0, runs):
        # some initialized parameters
        # initialize v[:,0] = v_0
        for i in range(0, int(timeSteps)-1, 1):
            # calculate v here using the Runge-Kutta method
            v = v_0 + v
        # check if a certain threshold has been reached
        if max(v[:] - v_0) > 50:
            print("breach")

test(10)
then I get the word "breach" five times as output. But if I do it like this:
def test(runs):
    runs = runs
    count = 0
    for i in range(0, runs):
        # some initialized parameters
        # initialize v[:,0] = v_0
        for i in range(0, int(timeSteps)-1, 1):
            # calculate v here using the Runge-Kutta method
            v = v_0 + v
        # check if a certain threshold has been reached
        if max(v[:] - v_0) > 50:
            count += 1
    return count

test(10)
then I get the initialized value count=0 and not count=5 as the return value. Why does this not work?
You are not storing the returned value in any variable.
Change:
test(10)
To:
returnedValue = test(10)
print(returnedValue)
You can also do this:
print(test(10))
In the second loop you reuse the variable i, which is already the loop variable of the outer loop just above. Try another name, like j:
def test(runs):
    runs = runs
    for i in range(0, runs):
        # some initialized parameters
        # initialize v[:,0] = v_0
        for j in range(0, int(timeSteps)-1, 1):
            # calculate v here using the Runge-Kutta method
            v = v_0 + v
        # check if a certain threshold has been reached
        if max(v[:] - v_0) > 50:
            print("breach")

test(10)
I am currently tasked in a Distributed Database class with creating an implementation of k-means with a map-reduce based approach (yes, I know there is a premade function for it, but the task is specifically to write your own). While I have figured out the approach itself, I am struggling to implement it with the appropriate use of the map and reduce functions.
def Find_dist(x, y):
    sum = 0
    vec1 = list(x)
    vec2 = list(y)
    for i in range(len(vec1)):
        sum = sum + (vec1[i] - vec2[i]) * (vec1[i] - vec2[i])
    return sum

def mapper(cent, datapoint):
    min = Find_dist(datapoint, cent[0])
    closest = cent[0]
    for i in range(1, len(cent)):
        curr = Find_dist(datapoint, cent[i])
        if curr < min:
            min = curr
            closest = cent[i]
    yield closest, datapoint

def combine(x):
    Key = x[0]
    Values = x[1]
    sum = [0] * len(Key)
    counter = 0
    for datapoint in Values:
        vec = list(datapoint[0])
        counter = counter + 1
        sum = sum + vec
    point = Row(vec)
    result = (counter, point)
    yield Key, result

def Reducer(x):
    Key = x[0]
    Values = x[1]
    sum = [0] * len(Key)
    counter = 0
    for datapoint in Values:
        vec = list(datapoint[0])
        counter = counter + 1
        sum = sum + vec
    avg = [0] * len(Key)
    for i in range(len(Key)):
        avg[i] = sum[i] / counter
    centroid = Row(avg)
    yield Key, centroid

def kmeans_fit(data, k, max_iter):
    centers = data.rdd.takeSample(False, k, seed=42)
    for i in range(max_iter):
        mapped = data.rdd.map(lambda x: mapper(centers, x))
        combined = mapped.reduceByKeyLocally(lambda x: combiner(x))
        reduced = combined.reduceByKey(lambda x: Reducer(x)).collect()
        flag = True
        for i in range(k):
            if reduced[i][1] != reduced[i][0]:
                for j in range(k):
                    centers[i] = reduced[i][1]
                    flag = False
                    break
        if flag:
            break
    return centers

data = spark.read.parquet("/mnt/ddscoursedatabricksstg/ddscoursedatabricksdata/random_data.parquet")
kmeans_fit(data, 5, 10)
data = spark.read.parquet("/mnt/ddscoursedatabricksstg/ddscoursedatabricksdata/random_data.parquet")
kmeans_fit(data,5,10)
My main issue is that I have difficulty using DataFrames together with the map, reduceByKeyLocally and reduceByKey functions.
Currently the run fails at the call reduceByKeyLocally(lambda x: combiner(x)) with "ValueError: not enough values to unpack (expected 2, got 1)". I really need to get this all working properly soon, so I would love assistance on this. Thank you in advance, I will be very grateful for any help!
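For reference, one way the assignment and update steps can be expressed with plain RDD operations is sketched below. This is not the poster's code: it assumes each row can be converted to a tuple of numbers, and it passes reduceByKey a binary function that merges two values at a time, which is what that API expects (rather than a whole (key, values) pair):

def closest(centers, point):
    # index of the nearest center by squared Euclidean distance
    return min(range(len(centers)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(centers[i], point)))

def kmeans_step(data, centers):
    # map: each point -> (index of closest center, (coordinates, count 1))
    assigned = data.rdd.map(lambda row: (closest(centers, tuple(row)), (tuple(row), 1)))
    # reduce: merge two partial results at a time, summing coordinates and counts
    summed = assigned.reduceByKey(
        lambda a, b: (tuple(x + y for x, y in zip(a[0], b[0])), a[1] + b[1]))
    # new center = coordinate sums divided by count
    return {i: tuple(c / n for c in s) for i, (s, n) in summed.collect()}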
I am using Gurobi 7 with Python 2.7 and want to implement the following linear optimization problem:
I have translated the above to Python and Gurobi using the following code:
from gurobipy import Model, GRB, quicksum

m = Model("storage")  # model object (assumed; not shown in the original extract)
# R, E, demand and SBP are assumed to be defined elsewhere

T = range(1,17520)

# Create variables - defined as dictionaries
p = {}  # power
s = {}  # SOC
b = {}  # buy
for t in T:
    p[t] = m.addVar(vtype=GRB.CONTINUOUS, lb=-R, ub=R, name="power_{}".format(t))
    s[t] = m.addVar(vtype=GRB.CONTINUOUS, lb=0, ub=E, name="SOC_{}".format(t))
    b[t] = m.addVar(vtype=GRB.CONTINUOUS, lb=0, name="Buy_{}".format(t))

# constraints
for t in T:
    m.addConstr(b[t] == demand[t] + p[t], name="balance_{}".format(t))
    if t == 0:
        m.addConstr(s[t] == p[t], name="charge_{}".format(t))
    else:
        m.addConstr(s[t] == s[t-1] + p[t], name="charge_{}".format(t))

# integrate variables and constraints
m.update()

# Objective function
obj = quicksum(b[t]*SBP[t] for t in T)
m.setObjective(obj, GRB.MINIMIZE)

# start optimization
m.optimize()
The error message I get (shown below) is probably due to the [t-1] index; however, I do not see why it is not accepted. Do I need to define these constraints in a different way?
I have not found any other examples of Gurobi optimization problems defined with this structure (each variable a function of the preceding variable, etc.), but this is a very typical structure for LP problems.
Any help you can provide is greatly appreciated.
OK, turns out I was confused by Python's zero-indexing: I defined the set T as a range starting at 1, and yet the charge constraint references a variable indexed at 0 (at t = 1 the else branch builds s[1] == s[0] + p[1], and s[0] was never created).
My problem was fixed by defining the set T as
T = range(0,17519)
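For reference, the zero-indexed constraint block then lines up as sketched below. One caveat: in Python, range(0, 17519) yields the periods 0..17518, so range(17520) would be needed to keep all 17520 periods:

T = range(17520)  # periods 0, 1, ..., 17519
for t in T:
    m.addConstr(b[t] == demand[t] + p[t], name="balance_{}".format(t))
    if t == 0:
        m.addConstr(s[t] == p[t], name="charge_{}".format(t))  # first period: no predecessor
    else:
        m.addConstr(s[t] == s[t-1] + p[t], name="charge_{}".format(t))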