DOcplexException: Expecting iterable Error - python

I am dealing with a vehicle routing problem with multiple working days. I want to add constraints so that if the working day is a multiple of 6, a worker must work only 240 minutes. Therefore, I put two loops over working days (k) and workers (w), and for each k and w I want to make sure the total travel and service time does not exceed a prespecified time (S). As you can see in my code, I used an if/else structure for the 240- and 480-minute daily times. However, I got a "DOcplexException: Expecting iterable" error and saw this:
DOcplexException: Expecting iterable, got: 480x_0_1_1_1+100x_0_2_1_1+480x_0_3_1_1+20x_0_4_1_1+300x_0_5_1_1+100x_0_6_1_1+200x_0_7_1_1+480x_0_8_1_1+80x_0_9_1_1+200x_0_10_1_1+120x_0_11_1_1+260x_0_12_1_1+280x_0_13_1_1+340x_0_14_1_1+400x_0_15_1_1+120x_0_16_1_1+80x_0_17_1_1+50x_0_18_1_1+20x_0_19_1_1+320x_0_20_1_1+180x_0_21_1_1+50x_0_22_1_1+100x_0_23_1_1+80x_0_24_1_1+140 ...
The constraints are the following:
for w in W:
    for k in D:
        if k % 6 == 0:
            mdl.add_constraints(mdl.sum((distanceList[i][j]*6/7000)*x[i, j, w, k] for i in N for j in N if j != i) + mdl.sum(serviceTime[j-1]*x[i, j, w, k] for i in V for j in N if j != i) <= S*0.5)
        else:
            mdl.add_constraints(mdl.sum((distanceList[i][j]*6/7000)*x[i, j, w, k] for i in N for j in N if j != i) + mdl.sum(serviceTime[j-1]*x[i, j, w, k] for i in V for j in N if j != i) <= S)
I really appreciate your help! Thanks.

Model.add_constraints() expects either a list or a comprehension (both work), in other words, something that can be iterated over. Based on the (non-executable) code snippet you posted, I have the impression the argument of add_constraints is a single constraint:
Model.add_constraints( mdl.sum(a[i,w,k]) + mdl.sum(b[i,w,k]) <= S)
which is not accepted.
You should transform the argument of add_constraints into a Python comprehension, something like:
Model.add_constraints(mdl.sum(a[i,w,k]) + mdl.sum(b[i,w,k]) <= S for w in W for k in K)
Thus, add_constraints receives a comprehension as expected. If parts of the constraints depend on the indices w, k, use auxiliary functions in the comprehension. Let me give an example with the rhs:
def my_rhs(w, k):  # rhs of the constraint as a function of w, k
    return w + k % 42  # silly example

Model.add_constraints(mdl.sum(a[i,w,k]) + mdl.sum(b[i,w,k]) <= my_rhs(w, k) for w in W for k in K)
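Applied to the original question, the if/else on k folds into a per-index rhs function. A minimal sketch, reusing the asker's names (W, D, S, distanceList, serviceTime, x and mdl are assumed from the question; daily_limit is a hypothetical helper introduced here):

```python
S = 480  # hypothetical full-day limit in minutes

def daily_limit(k):
    """Rhs as a function of the day index: half a day on every 6th day."""
    return S * 0.5 if k % 6 == 0 else S

# The two add_constraints calls then collapse into a single comprehension
# (docplex model assumed, not executable here):
# mdl.add_constraints(
#     mdl.sum((distanceList[i][j] * 6 / 7000) * x[i, j, w, k]
#             for i in N for j in N if j != i)
#     + mdl.sum(serviceTime[j - 1] * x[i, j, w, k]
#               for i in V for j in N if j != i)
#     <= daily_limit(k)
#     for w in W for k in D
# )

print(daily_limit(6), daily_limit(7))  # → 240.0 480
```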


Find the number in a given range so that the gcd of the number with any element of a given list will always be 1

Given a number M and a list A which contains N elements (A1, A2, ...),
find all the numbers k such that:
1 <= k <= M and gcd(Ai, k) is always equal to 1.
Here's my code. The only problem with it is that it uses nested loops, which slows the process when my inputs are big. How can I fix it so that it requires less time?
N, M = [int(v) for v in input().split()]
A = [int(v) for v in input().split()]
from math import gcd
cnt = 0
print(N)
for k in range(1, M+1):
    for i in range(N):
        if gcd(k, A[i]) == 1:
            cnt += 1
    if cnt == N:
        print(k)
    cnt = 0
Input example (the first line contains N and M, the second contains the list A1, A2, ...):
3 12
6 1 5
Here's a fast version that eliminates the nested loops:
N, M = [int(v) for v in input().split()]
A = [int(v) for v in input().split()]
from math import gcd
print(N)
l = 1
for v in A:
    l = l*v//gcd(l, v)
for k in range(1, M+1):
    if gcd(l, k) == 1:
        print(k)
It works by first taking the LCM, l, of the values in A. It then suffices to check if the GCD of k and l is 1, which means there are no common factors with any of the values in A.
Note: If you're using a newer version of Python than I am (3.9 or later), you can import lcm from math and replace l = l*v//gcd(l, v) with l = lcm(l, v).
Or, as Kelly Bundy pointed out, lcm accepts an arbitrary number of arguments, so the first loop can be replaced with l = lcm(*A) if you're using 3.9 or later.
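Putting both suggestions together, a self-contained sketch on the question's sample input (coprime_upto is a name chosen here, not from the question):

```python
from math import gcd, lcm  # math.lcm requires Python 3.9+

def coprime_upto(M, A):
    """All k in 1..M with gcd(k, a) == 1 for every a in A."""
    l = lcm(*A)  # one lcm over the whole list, as suggested above
    return [k for k in range(1, M + 1) if gcd(l, k) == 1]

print(coprime_upto(12, [6, 1, 5]))  # → [1, 7, 11]
```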
Just another approach, using sympy.ntheory's factorint and Python sets. Speed-wise it has no advantage on my machine over the math.lcm() or math.gcd() based solutions when applied to small lists and numbers, but it excels at very large sizes of randomized lists:
M = 12
lstA = (6, 1, 5)
from sympy.ntheory import factorint
lstAfactors = []
for a in lstA:
lstAfactors += factorint(a)
setA = set(lstAfactors)
for k in range(1, M+1):
if not (set(factorint(k)) & setA):
print(k)
The code above implements the idea described in Yatisi's answer and coded by Tom Karzes using math.gcd(), but uses sympy.ntheory's factorint() and set() instead of math.gcd().
In terms of speed the factorint() solution seems to be fastest on the below tested data:
# ======================================================================
from time import perf_counter as T
from math import gcd, lcm
from sympy import factorint
from random import choice
#M = 3000
#lstA = 100 * [6, 12, 18, 121, 256, 1024, 361, 2123, 39]
M = 8000
lstA = [choice(range(1, 8000)) for _ in range(8000)]
# ----------------------------------------------------------------------
from sympy.ntheory import factorint
lstResults = []
lstAfactors = []
sT = T()
for a in lstA:
    lstAfactors += factorint(a)
setA = set(lstAfactors)
for k in range(1, M+1):
    if not (set(factorint(k)) & setA):
        lstResults += [k]
print("factorint:", T()-sT)
#print(lstResults)
print("---")
# ----------------------------------------------------------------------
lstResults = []
sT = T()
#l = 1
#for a in lstA:
#    l = (l*a)//gcd(l, a)  # can be replaced by:
l = lcm(*lstA)  # least common multiple divisible by all lstA items
# ^-- which runs MAYBE a bit faster than the loop with gcd()
for k in range(1, M+1):
    if gcd(l, k) == 1:
        lstResults += [k]
print("lcm() :", T()-sT)
#print(lstResults)
print("---")
# ----------------------------------------------------------------------
lstResults = []
sT = T()
l = 1
for a in lstA:
    l = (l*a)//gcd(l, a)  # can be replaced by:
#l = lcm(*lstA)  # least common multiple divisible by all lstA items
# ^-- which runs MAYBE a bit faster than the loop with gcd()
for k in range(1, M+1):
    if gcd(l, k) == 1:
        lstResults += [k]
print("gcd() :", T()-sT)
#print(lstResults)
print("---")
# ----------------------------------------------------------------------
import numpy as np
A = np.array(lstA)
def find_gcd_np(M, A, to_gcd=1):
    vals = np.arange(1, M + 1)
    return vals[np.all(np.gcd(vals, np.array(A)[:, None]) == to_gcd, axis=0)]
sT = T()
lstResults = find_gcd_np(M, A, 1).tolist()
print("numpy :", T()-sT)
#print(lstResults)
print("---")
printing:
factorint: 0.09754624799825251
---
lcm() : 0.10102138598449528
---
gcd() : 0.10236155497841537
---
numpy : 6.923375226906501
---
The timing results change dramatically for the second data variant in the code provided above, printing:
factorint: 0.021642255946062505
---
lcm() : 0.0010238440008834004
---
gcd() : 0.0013772319070994854
---
numpy : 0.19953695288859308
---
where the factorint-based approach is 20x and the numpy-based approach 200x slower than the gcd/lcm-based one.
Run the timing test yourself online [1]. It won't run the case of large data, but it can at least demonstrate that the numpy approach is 100x slower than the gcd one:
factorint: 0.03271647123619914
---
lcm() : 0.003286922350525856
---
gcd() : 0.0029655308462679386
---
numpy : 0.41759901121258736
[1] https://ato.pxeger.com/run?1=3VXBitswED0W9BXDGoq16-zaSbtsAzmk0GN6KLmUkAbFkdcismQkZbc-9Et62Uv7Uf2ajiwnNt1Du7ClUIORpXmaefNmLH39Xjeu1Orh4dvBFaObHy--RDB7locURlfgRMUBQFS1Ng5qbopNrg_KcQPMwjKAKubKHnSb7xKQeRVstqnq5mQrWO60EcoFo2Fqh0NnzEstck6iBfqCGUzSNCWRtG6OkyxN4RxW1wlkY3xv_JglMH7tV9LxqwQm136ejSf4-WZNhk46H6suQoxhb3mcJTdpSikU2sAGhIKw7BdhTUgEo2d5BjJcKldybZrHaiDDD9wepLOe9GrdUg5mGxbscraMKfFkmSfrAVOCOQ6RF7PeZ8wosbxNHId4AAte9n3KKNziIqOtOxAFKO0g9pt6Z3sU6qV3NO9gEEIfWWPk1X5N6hZ8dto3PUsAaY_skpIoGPtN9AhHlc7oMyo-4DXULpK-kUj0i4ZRmwqaYnnO6NUV9m8sE2AUIsiZgi0Hw2vJcr6DbTMF4rHY3_G53-9RkjOL7aurSiuoMK6oJYeduBNWbPFr2wCTsg0HwvHKYsxPoxHclyIvwRyUhcX849t3yGorfFtY_3-5EoNjw4DUuoZ7gf-Yp_bb6nX89xQPAsj-oFo-F-oRT6nW3y5WqNXjdn9SpaL_rVSt139Wqu7YUgd_pOPxr2rijxdVXzJjWNOeMZTseAGFULsNkt2oOl4kME_AaT-fHZO_Y9Iet56kgQvIaGs23B2MalErj5EyxsFn75eSPuScrqYJvNeKr1sRQxjsic_CzlLat9Owyx6zy-il01JYF5-0C1k-UepwC3eX8fFS_gk
This is probably more a math question than a programming question; however, here comes my take. Depending on M and A, it might be better to:
Find the prime divisors of the Ai (have a look at this) and put them in a set.
Either remove (sieve) all multiples of these primes from list(range(1,M+1)), which you can do (more) efficiently by smart ordering, or find all primes smaller or equal to M (which could even be pre-computed) that are not divisors of any Ai and compute all multiples up to M.
Explanation: Since gcd(Ai,k)=1 if and only if Ai and k have no common divisors, they also have no prime divisors. Thus, we can first find all prime divisors of the Ai and then make sure our k don't have any of them as divisors, too.
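The idea above can be sketched in a few lines of plain Python (a rough sketch using trial-division factorization and a simple sieve; prime_factors and coprime_by_sieve are names chosen here):

```python
def prime_factors(n):
    """Set of prime divisors of n by trial division."""
    factors, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            factors.add(p)
            n //= p
        p += 1
    if n > 1:
        factors.add(n)  # whatever remains is prime
    return factors

def coprime_by_sieve(M, A):
    """Sieve out every multiple of a prime divisor of any Ai."""
    primes = set().union(*(prime_factors(a) for a in A))
    alive = [True] * (M + 1)
    for p in primes:
        for multiple in range(p, M + 1, p):
            alive[multiple] = False
    return [k for k in range(1, M + 1) if alive[k]]

print(coprime_by_sieve(12, [6, 1, 5]))  # → [1, 7, 11]
```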
Using numpy with vectorised operations is a good alternative when your input range M goes up to the hundreds and higher and A stays small (about the size of your current A):
import numpy as np
def find_gcd_np(M, A, to_gcd=1):
    vals = np.arange(1, M + 1)
    return vals[np.all(np.gcd(vals, np.array(A)[:, None]) == to_gcd, axis=0)]
Usage:
print(find_gcd_np(100, [6, 1, 5], 1))

Compare values in two lists and return the similar or nearest value from a third list

I would like to find matching values of j in v and return x. When a value in j is not equal to any value in v, I would like the code to detect the two values in v that the j value falls between: if j falls between v1 and v2, I would like the code to return max(x1,x2) - (((j-v1)/(v2-v1)) * (max(x1,x2)-min(x1,x2))).
v= [100,200,300,400,500,600,700,800,900,1000,1100]
x= [67,56,89,21,90,54,38,93,46,17,75]
j= [200,300,400,460,500,600,700,800,870,900,950]
for i in range(len(v)-1):
    if v[i] > j and V[i+1] < j:
        p = max(x[i],x[i+1])- ( ((j-v[i])/(v[i+1]-v[i]))*(max(x[i],x[i+1])-min(x[i],x[i+1])))
    elif v[i] ==j:
        b= x[i]
print(p,b)
"""
n = [x[i] for i, v_ele in enumerate(v) if v_ele in j]
p= [x[i] for i, v_ele in enumerate(v) if v_ele > j and v_ele
print(n)
"""
I would like my answer to return:
[56,89,21,48.6,90,54,38,93,60.1,46,31.5]
We can do this using the following two helper functions. Note that I think there may be a slight error for the fourth element in your expected output in the question - I get that value as 62.4 while you have 48.6.
Code:
def get_v_indx(j_ele, v):
    if j_ele in v:
        return v.index(j_ele)
    else:
        for i, ele in enumerate(v):
            if ele > j_ele:
                return i-1+(j_ele-v[i-1])/(ele-v[i-1])

def get_x_ele(i, x):
    try:
        return x[i]
    except TypeError:
        return x[int(i)] + (x[int(i)+1]-x[int(i)])*(i-int(i))
Usage:
>>> [get_x_ele(get_v_indx(j_ele, v), x) for j_ele in j]
[56, 89, 21, 62.4, 90, 54, 38, 93, 60.1, 46, 31.5]
Well, there are a couple of problems with your code:
First, after your if statement there is a colon missing, and one of your "v"s is capitalized. Computers are unforgiving - no typos! ;)
Then, in that same line, you try to compare an integer (v[i] and v[i+1]) with a complete list of integers (j). Instead you need to go through your j list and compare each element of that list. I introduced an index ij for going through j.
When you compare that j value to v, you got your >< signs mixed up. Since your v list is ordered from small to big, it is impossible for j to be smaller than element i and simultaneously bigger than element i+1. ;)
Lastly, I'm not sure why you would store your value in two different variables (p and b) instead of the same one. I did just that, called it a, and printed it alongside the j value it comes from.
v= [100,200,300,400,500,600,700,800,900,1000,1100]
x= [67,56,89,21,90,54,38,93,46,17,75]
j= [200,300,400,460,500,600,700,800,870,900,950]
for ij in range(len(j)):
    for i in range(len(v)-1):
        if v[i] < j[ij] and v[i+1] > j[ij]:
            a = max(x[i],x[i+1])- ( ((j[ij]-v[i])/(v[i+1]-v[i]))*(max(x[i],x[i+1])-min(x[i],x[i+1])))
        elif v[i] == j[ij]:
            a = x[i]
    print(j[ij], " leads to ", a)
Output:
200 leads to 56
300 leads to 89
400 leads to 21
460 leads to 48.6
500 leads to 90
600 leads to 54
700 leads to 38
800 leads to 93
870 leads to 60.1
900 leads to 46
950 leads to 31.5
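For completeness, the lookup-plus-interpolation step can also be packaged into one small helper that returns the list in one go (a sketch; `interp` is a name chosen here, not from the question):

```python
def interp(j_val, v, x):
    """Exact match returns x directly; otherwise apply the question's formula."""
    if j_val in v:
        return x[v.index(j_val)]
    for i in range(len(v) - 1):
        if v[i] < j_val < v[i + 1]:
            hi, lo = max(x[i], x[i+1]), min(x[i], x[i+1])
            return hi - ((j_val - v[i]) / (v[i+1] - v[i])) * (hi - lo)

v = [100,200,300,400,500,600,700,800,900,1000,1100]
x = [67,56,89,21,90,54,38,93,46,17,75]
j = [200,300,400,460,500,600,700,800,870,900,950]
print([interp(jv, v, x) for jv in j])
# approximately [56, 89, 21, 48.6, 90, 54, 38, 93, 60.1, 46, 31.5]
```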

VRP heterogeneous site-dependency

In my code, I managed to implement different vehicle types (I think) and to indicate the site-dependency. However, it seems that in the output of my optimization, vehicles can drive more than one route. I would like to implement that once a vehicle returns to the depot (node 0), a new vehicle is assigned to perform the next route. Could you help me with that? :)
I'm running on Python Jupyter notebook with the Docplex solver
from pandas import DataFrame
import numpy as np

all_units = [0,1,2,3,4,5,6,7,8,9]
ucp_raw_unit_data = {
    "customer": all_units,
    "loc_x": [40,45,45,42,42,42,40,40,38,38],
    "loc_y": [50,68,70,66,68,65,69,66,68,70],
    "demand": [0,10,30,10,10,10,20,20,20,10],
    "req_vehicle": [[0,1,2], [0], [0], [0], [0], [0], [0], [0], [0], [0]],
}
df_units = DataFrame(ucp_raw_unit_data, index=all_units)
# Display the 'df_units' Data Frame
df_units
Q = 50
N = list(df_units.customer[1:])
V = [0] + N
k = 15
# no. of vehicles
K = range(1, k+1)
# vehicle 1 = type 1, vehicle 6 = type 2 and vehicle 11 = type 0
vehicle_types = {1:[1], 2:[1], 3:[1], 4:[1], 5:[2], 6:[2], 7:[2], 8:[2],
                 9:[2], 10:[2], 11:[0], 12:[0], 13:[0], 14:[0], 15:[0]}
lf = 0.5
R = range(1, 11)
# Create arcs and costs
A = [(i,j,k,r) for i in V for j in V for k in K for r in R if i != j]
Y = [(k,r) for k in K for r in R]
c = {(i,j): np.hypot(df_units.loc_x[i]-df_units.loc_x[j],
                     df_units.loc_y[i]-df_units.loc_y[j]) for i,j,k,r in A}
from docplex.mp.model import Model
import docplex
mdl = Model('SDCVRP')
# decision variables
x = mdl.binary_var_dict(A, name='x')
u = mdl.continuous_var_dict(df_units.customer, ub=Q, name='u')
y = mdl.binary_var_dict(Y, name='y')
# objective function
mdl.minimize(mdl.sum(c[i,j]*x[i,j,k,r] for i,j,k,r in A))
# constraint 1 -- each node only visited once
mdl.add_constraints(mdl.sum(x[i,j,k,r] for k in K for r in R for j in V
                            if j != i and vehicle_types[k][0] in df_units.req_vehicle[j]) == 1
                    for i in N)
# constraint 2 -- each node only exited once
mdl.add_constraints(mdl.sum(x[i,j,k,r] for k in K for r in R for i in V
                            if i != j and vehicle_types[k][0] in df_units.req_vehicle[j]) == 1
                    for j in N)
# constraint 3 -- vehicle type constraint (site-dependency)
mdl.add_constraints(mdl.sum(x[i,j,k,r] for k in K for r in R for i in V
                            if i != j and vehicle_types[k][0] not in df_units.req_vehicle[j]) == 0
                    for j in N)
# corrected constraint 4 -- flow constraint
mdl.add_constraints((mdl.sum(x[i,j,k,r] for j in V if j != i) -
                     mdl.sum(x[j,i,k,r] for j in V if i != j)) == 0
                    for i in N for k in K for r in R)
# constraint 5 -- cumulative load of visited nodes
mdl.add_indicator_constraints([mdl.indicator_constraint(x[i,j,k,r], u[i] + df_units.demand[j] == u[j])
                               for i,j,k,r in A if i != 0 and j != 0])
# constraint 6 -- one vehicle to one route
mdl.add_constraints(mdl.sum(y[k,r] for r in R) <= 1 for k in K)
mdl.add_indicator_constraints([mdl.indicator_constraint(x[i,j,k,r], y[k,r] == 1)
                               for i,j,k,r in A if i != 0 and j != 0])
# constraint 7 -- cumulative load must be equal to or higher than the demand in this node
mdl.add_constraints(u[i] >= df_units.demand[i] for i in N)
# constraint 8 -- minimum load factor
mdl.add_indicator_constraints([mdl.indicator_constraint(x[j,0,k,r], u[j] >= lf*Q)
                               for j in N for k in K for r in R if j != 0])
mdl.parameters.timelimit = 15
solution = mdl.solve(log_output=True)
print(solution)
I expect every route to be performed by a different vehicle; however, the same vehicles perform multiple routes. Also, the cumulative load is currently calculated over visited nodes; I would like to have it per vehicle on a route so that the last constraint (minimum load factor) can be enforced.
I understand K indices are for vehicles and R are for routes. I ran your code and got the following assignments:
y_11_9=1
y_12_4=1
y_13_7=1
y_14_10=1
y_15_10=1
which seem to show many vehicles share the same route.
This is not forbidden by the sum(y[k,r] for r in R) <= 1 constraint,
as it only forbids one vehicle from working several routes.
Do you want to limit the number of vehicles assigned to one route to 1, as the symmetrical constraint to constraint #6?
If I got it wrong, please send the solution you get and the constraint you want to add.
If I add the symmetrical constraint, that is, limit the assignment of vehicles to routes to 1 (no two vehicles on the same route), by:
mdl.add_constraints(mdl.sum(y[k, r] for r in R) <= 1 for k in K)
mdl.add_constraints(mdl.sum(y[k, r] for k in K) <= 1 for r in R)
I get a solution with the same cost, and only three vehicle-route assignments:
y_11_3=1
y_12_7=1
y_15_9=1
Still, I guess the best solution would be to add some cost factor of using a vehicle, and introducing this into the final objective. This might also reduce the symmetries in the problem.
Philippe.

Gurobi Error Divisor must be a constant when making a more complex objective function

When I use a fairly straightforward cost function for my optimization objective, Gurobi gives back an answer, but when I complicate things with math.log() functions, or even with i**2 instead of i*i, it produces an error similar to one of the following:
GurobiError: Divisor must be a constant
TypeError: a float is required
TypeError: unsupported operand type(s) for ** or pow(): 'Var' and 'int'
I tried to reformulate math.log((m-i)/i) to math.log(m-i) - math.log(i); this produces the "float is required" error. Changing i*i to i**2 produces the "unsupported operand" error.
Now my question is: is it just impossible to use a more complex function within Gurobi, or am I making a mistake elsewhere?
Here is a snippet of my model:
from gurobipy import *
import pandas as pd
import numpy as np
import time
import math

start_time = time.time()

# example NL (i, 20, 0.08, -6.7, 301)
def cost(i, j, k, l, m):
    cost = (j - l)*i + k*i*i - l*(m - i) * (math.log((m - i) / i))
    return cost

def utility(i, j, k, l):
    utility = j + k*i + l*i*i
    return utility

"""
def cost(i, j, k, l):
    cost = j + k*i + .5*l*i*i
    return cost
"""

# assign files to use as input and as output
outputfile = 'model1nodeoutput.csv'
inputfile = 'marketclearinginput.xlsx'
# define dataframes
dfdemand = pd.read_excel(inputfile, sheetname="demand", encoding='utf8')
dfproducer = pd.read_excel(inputfile, sheetname="producer", encoding='utf8')
m = Model("1NodeMultiPeriod")
dofprod = [m.addVar(lb=3.0, ub=300, name=h) for h in dfproducer['name']]
dofdem = [m.addVar(lb=3.0, ub=300, name=h) for h in dfdemand['name']]
# Integrate new variables
m.update()
# Set objective
m.setObjective(quicksum([utility(i, j, k, l) for i, j, k, l
                         in zip(dofdem, dfdemand['c'], dfdemand['a'], dfdemand['b'])]) -
               quicksum([cost(i, j, k, l, m) for i, j, k, l, m
                         in zip(dofprod, dfproducer['c'], dfproducer['a'], dfproducer['b'], dfproducer['Pmax'])]),
               GRB.MAXIMIZE)
# Set constraints for producers
for i, j, k in zip(dofprod, dfproducer['Pmin'], dfproducer['Pmax']):
    m.addConstr(i >= j)
    m.addConstr(i <= k)
# Set constraints for demand
for i, j, k in zip(dofdem, dfdemand['Pmin'], dfdemand['Pmax']):
    m.addConstr(i >= j)
    m.addConstr(i <= k)
# Build the timestamp list; pd or np unique both possible, pd faster and preserves order
# Timestamps skip the first 3 symbols (example: L1T2034 becomes 2034)
timestamps = pd.unique([i.varName[3:] for i in dofprod])
# Set constraint produced >= demanded (this should be the last constraint added for shadow variables)
for h in timestamps:
    m.addConstr(quicksum([i for i in dofprod if i.varName.endswith(h)]) >=
                quicksum([i for i in dofdem if i.varName.endswith(h)]))
m.optimize()
Your problem might have to do with the Gurobi quicksum() function. Perhaps try sum().

Dynamic programming solution to maximizing an expression by placing parentheses

I'm trying to implement an algorithm from the Algorithmic Toolbox course on Coursera that takes an arithmetic expression such as 5+8*4-2 and computes its largest possible value. However, I don't really understand the choice of indices in the last part of the shown algorithm; my implementation fails to compute values using the ones initialized in the two tables (which are used to store the maximized and minimized values of subexpressions).
The evalt function just takes the operator character and applies the corresponding operation to its two operands:
def evalt(a, b, op):
    if op == '+':
        return a + b
    #and so on
MinMax computes the minimum and maximum values of subexpressions:
def MinMax(i, j, op, m, M):
    mmin = 10000
    mmax = -10000
    for k in range(i, j-1):
        a = evalt(M[i][k], M[k+1][j], op[k])
        b = evalt(M[i][k], m[k+1][j], op[k])
        c = evalt(m[i][k], M[k+1][j], op[k])
        d = evalt(m[i][k], m[k+1][j], op[k])
        mmin = min(mmin, a, b, c, d)
        mmax = max(mmax, a, b, c, d)
    return (mmin, mmax)
And this is the body of the main function:
def get_maximum_value(dataset):
    op = dataset[1:len(dataset):2]
    d = dataset[0:len(dataset)+1:2]
    n = len(d)
    # initializing matrices/tables
    m = [[0 for i in range(n)] for j in range(n)]  # minimized values
    M = [[0 for i in range(n)] for j in range(n)]  # maximized values
    for i in range(n):
        m[i][i] = int(d[i])  # so that the tables will look like
        M[i][i] = int(d[i])  # [[i, 0, 0...], [0, i, 0...], [0, 0, i,...]]
    for s in range(n):  # here's where I get confused
        for i in range(n-s):
            j = i + s
            m[i][j], M[i][j] = MinMax(i, j, op, m, M)
    return M[0][n-1]
Sorry to bother, here's what had to be improved:
for s in range(1, n):
in the main function, and
for k in range(i, j):
in MinMax function. Now it works.
The following change should work:
for s in range(1, n):
    for i in range(0, n-s):
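Putting the two fixes together, a complete corrected version can be sketched as follows (evalt is filled in here for +, - and *, which the question elides; min_max and get_maximum_value follow the structure above):

```python
def evalt(a, b, op):
    """Apply the operator character to two operands."""
    if op == '+':
        return a + b
    if op == '-':
        return a - b
    return a * b  # '*'

def min_max(i, j, op, m, M):
    """Min and max value of the subexpression spanning digits i..j."""
    mmin, mmax = float('inf'), float('-inf')
    for k in range(i, j):  # corrected split range: every operator between i and j
        for val in (evalt(M[i][k], M[k+1][j], op[k]),
                    evalt(M[i][k], m[k+1][j], op[k]),
                    evalt(m[i][k], M[k+1][j], op[k]),
                    evalt(m[i][k], m[k+1][j], op[k])):
            mmin = min(mmin, val)
            mmax = max(mmax, val)
    return mmin, mmax

def get_maximum_value(dataset):
    op = dataset[1::2]   # operators
    d = dataset[0::2]    # digits
    n = len(d)
    m = [[0]*n for _ in range(n)]  # minimized values
    M = [[0]*n for _ in range(n)]  # maximized values
    for i in range(n):
        m[i][i] = M[i][i] = int(d[i])
    for s in range(1, n):  # corrected: s = subexpression length - 1, starting at 1
        for i in range(n - s):
            j = i + s
            m[i][j], M[i][j] = min_max(i, j, op, m, M)
    return M[0][n-1]

print(get_maximum_value('5-8+7*4-8+9'))  # → 200
```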
