I have the following code (Python 3) for adding constraints to PuLP (v2.3). It needs to add up to 400000 constraints (100^2 S, 4 A).
def linearProgram(self, error=1e-12):
    lp_problem = p.LpProblem('best-Vpi', p.LpMinimize)
    # create problem variables
    V = p.LpVariable.dicts("V", range(self.S), cat="Continuous")
    # objective function
    for i in range(self.S):
        self.v.append(V[i])
    lp_problem += p.lpSum(self.v)
    # constraints
    for s in range(self.S):
        for a in range(self.A):
            pv = p.LpAffineExpression([(V[x], self.T[s][a][x]) for x in range(self.S)])
            constraint = p.lpSum([self.PR[s][a], self.gamma*pv])
            lp_problem += V[s] >= constraint
    status = lp_problem.solve(p.PULP_CBC_CMD(msg=0))  # solve
I can't seem to optimise it further.
I even tried multiprocessing, but it gave a lot of errors:
def __addconstraints(self, S0, S1, lp_problem):
    for s in range(S0, S1):
        for a in range(self.A):
            pv = p.lpDot(self.T[s][a], self.v)
            lp_problem += self.v[s] >= p.lpSum([self.PR[s][a], self.gamma*pv])
..................
# in linearProgram
if self.S % 4:
    s0, s1 = 0, self.S//3
else:
    s0, s1 = 0, self.S//4
incr = s1
processes = []
for x in range(4):
    proc = multiprocessing.Process(target=self.__addconstraints, args=(s0, s1, lp_problem))
    processes.append(proc)
    proc.start()
    s0 = s1
    s1 = min(s1+incr, self.S)
for proc in processes:
    proc.join()
# hard code for episodic? no need (due to initialization of mdp)
if self.mdptype == "episodic":
    for state in self.end:
        lp_problem += V[state] == 0
I am new to both pulp and multiprocessing, so I don't really have an idea what I'm doing :p
Any kind of help is appreciated.
In your code, you first build a p.LpAffineExpression, then you apply a p.lpSum, and finally you do a third operation on the result, V[s] >= constraint. The last two operations may increase the time because the expression is copied at each step.
From my experience, the fastest times I've gotten are doing the following:
# _vars_tup is a list of (key, value) pairs where each key is a variable and each value is a coefficient.
# it's like initializing a dictionary.
# CONSTANT is a python number (not a pulp variable)
model += p.LpAffineExpression(_vars_tup, constant=CONSTANT) >= 0
The idea is to reduce the number of times you do operations with p.LpAffineExpression objects, because a copy is done at each operation. So, build the list of variables and coefficients (_vars_tup) for ALL the variables present in the constraint and then at the last step create the p.LpAffineExpression and compare it with a constant.
An equivalent way would be (although I haven't tried it):
const = p.LpConstraint(e=p.LpAffineExpression(_vars_tup, constant=_constant), sense = p.LpConstraintGE, rhs = -CONSTANT)
model.addConstraint(const)
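Applied to the constraints in the question, that idea might look like the following sketch. This is my own illustration, not the asker's code: S, A, gamma, T and PR are small dummy stand-ins for self.S, self.A, self.gamma, self.T and self.PR, and the whole Bellman inequality is built as a single LpAffineExpression so it is only constructed once per (s, a) pair.

import pulp as p

# dummy MDP data standing in for the asker's attributes
S, A, gamma = 3, 2, 0.9
T = [[[1.0 / S] * S for _ in range(A)] for _ in range(S)]   # transition probabilities
PR = [[1.0] * A for _ in range(S)]                          # expected rewards

lp_problem = p.LpProblem("best-Vpi", p.LpMinimize)
V = p.LpVariable.dicts("V", range(S), cat="Continuous")
lp_problem += p.lpSum(V[i] for i in range(S))

for s in range(S):
    for a in range(A):
        # V[s] - gamma * sum_x T[s][a][x] * V[x] >= PR[s][a], built in one expression
        coeffs = {x: -gamma * T[s][a][x] for x in range(S)}
        coeffs[s] = coeffs.get(s, 0.0) + 1.0
        expr = p.LpAffineExpression([(V[x], c) for x, c in coeffs.items()],
                                    constant=-PR[s][a])
        lp_problem += expr >= 0

lp_problem.solve(p.PULP_CBC_CMD(msg=0))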
My goal is to write the following model using docplex.mp.model in Python, where p_tj is a binary variable taking values in {0, 1}:
sum_{j=1}^{t} p_tj = 1,   for t = 1, ..., 8
Here is the code I wrote:
N = 8
period_list = [t for t in range(1, no_of_period+1)]
j = period_list
p = Model.binary_var_dict(period_list, name='p')
for t in period_list:
    for j in range(1, t+1):
        Model.add_constraints(Model.sum(p[t,j]) == 1)
but I got an error. Could anyone help me with this problem please?
Your code has numerous issues.
First, you need to create one instance of docplex.mp.model.Model to add constraints to: all your calls to Model.<fn> should be rewritten as mdl.<fn> as they are instance methods.
Second, the variable dict you create has periods as keys, that is, 1, 2, ..., P,
so querying p[t,j] is sure to crash with a KeyError. If you need a square matrix of variables for each couple of periods, use Model.binary_var_matrix.
Third, Model.add_constraints (with a final s) expects an iterable, but you are passing one constraint; this is also sure to crash.
Lastly, using ranges starting at 1 is neither the simplest nor the safest choice with Docplex.
Here is some code, freely derived from your sample, which I guess is close to what you need:
pr = range(1, no_of_period+1)
from docplex.mp.model import Model
m = Model()
p = m.binary_var_matrix(pr, pr, name = 'p')
m.add_constraints( (m.sum(p[t,j] for j in pr) == 1) for t in pr)
print(m.lp_string)
and the output is:
Minimize
obj:
Subject To
c1: p_1_1 + p_1_2 + p_1_3 = 1
c2: p_2_1 + p_2_2 + p_2_3 = 1
c3: p_3_1 + p_3_2 + p_3_3 = 1
[..snip..]
There are a number of jobs to be assigned to a number of resources, each with a score (performance indicator) and a cost. The resource assignment problem (RAP) objective is to maximize the assignment scores while respecting the budget. Constraints: each resource can handle at most one job, and each job, if it is filled, should be done by exactly one resource. Also, there is a limited budget to spend.
I have tackled the problem in two ways: CVXPY using the Gurobi solver, and the gurobipy package. My challenge is that I can't program it in a memory-efficient way with cvxpy. There are hundreds of constraint list comprehensions! How can I improve the efficiency of my code in cvxpy? For example, is there a better way to define dictionary variables in cvxpy similar to gurobi?
ms is a dictionary of the form {('firstName lastName', 'job'): score_value}
cst is a dictionary of the form {('firstName lastName', 'job'): cost_value}
job is the set of jobs
res is the set of resources {'firstName lastName'}
G (or g in the gurobi implementation) is a dictionary with jobs as keys and 0/1 values indicating whether that job is left unfilled due to the budget limit (0 if filled and 1 if not); a toy instance is sketched below
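For concreteness, a hypothetical toy instance of these structures (names and numbers made up, not from the original post) could look like:

ms = {('Alice Smith', 'job1'): 0.9, ('Alice Smith', 'job2'): 0.4,
      ('Bob Jones', 'job1'): 0.7, ('Bob Jones', 'job2'): 0.8}
cst = {('Alice Smith', 'job1'): 100, ('Alice Smith', 'job2'): 120,
       ('Bob Jones', 'job1'): 90, ('Bob Jones', 'job2'): 150}
job = {'job1', 'job2'}
res = {'Alice Smith', 'Bob Jones'}
budget = 200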
thanks
github link including codes and memory profiling comparison
gurobi implementation:
m = gp.Model("RAP")
assign = m.addVars(ms.keys(), vtype=GRB.BINARY, name="assign")
g = m.addVars(job, name="gap")
m.addConstrs((assign.sum("*", j) + g[j] == 1 for j in job), name="demand")
m.addConstrs((assign.sum(r, "*") <= 1 for r in res), name="supply")
m.addConstr(assign.prod(cst) <= budget, name="Budget")
job_gap_penalty = 101  # penalty of not filling a job
m.setObjective(assign.prod(ms) - job_gap_penalty*g.sum(), GRB.MAXIMIZE)
m.optimize()
cvxpy implementation:
X = {}
for a in ms.keys():
    X[a] = cp.Variable(boolean=True, name="assign")
G = {}
for g in job:
    G[g] = cp.Variable(boolean=True, name="gap")
constraints = []
for j in job:
    X_r = 0
    for r in res:
        X_r += X[r, j]
    constraints += [
        X_r + G[j] == 1
    ]
for r in res:
    X_j = 0
    for j in job:
        X_j += X[r, j]
    constraints += [
        X_j <= 1
    ]
constraints += [
    np.array(list(cst.values())) @ np.array(list(X.values())) <= budget,
]
obj = cp.Maximize(np.array(list(ms.values())) @ np.array(list(X.values()))
                  - job_gap_penalty * cp.sum(list(G.values())))
prob = cp.Problem(obj, constraints)
prob.solve(solver=cp.GUROBI, verbose=False)
Here is the memory profiling comparison:
memory profiling for cvxpy
memory profiling for gurobi
Previously, I tried to solve it by defining dictionary variables similar to gurobi, but since that is not available in cvxpy, the code was not efficient when scaling up. I have now solved it through matrix variables and then converting back to dictionary variables, which is super fast!
assign_scores = np.array(list(ms.values())).reshape(len(res), len(job))
assign_cost = np.array(list(cst.values())).reshape(len(res), len(job))
# make a bool matrix variable with the shape of number of resources and jobs
x = cp.Variable(shape=(len(res), len(job)), boolean=True, name="assign")
# make a bool vector variable with the shape of number of jobs
g = cp.Variable(shape=(len(job), ), boolean=True, name="gap")
constraints = []
# each job can be assigned to at most one resource or remains unfilled due to budget cap
constraints += [cp.sum(x[:, j]) + g[j] == 1 for j in range(len(job))]
# each resource can be assigned to at most one job
constraints += [cp.sum(x[r, :]) <= 1 for r in range(len(res))]
# budget cap
constraints += [cp.sum(cp.multiply(assign_cost, x)) <= budget]
# penalty if a job is not filled
job_gap_penalty = 101
# objective is to maximize the performance score
obj = cp.Maximize(cp.sum(cp.multiply(assign_scores, x) - job_gap_penalty * cp.sum(g)))
prob = cp.Problem(obj, constraints)
prob.solve(solver=cp.GUROBI, verbose=True)
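One detail the post does not show is how to get dictionary-style results back out of the matrix variable. Here is a hedged sketch (my addition, assuming ms.keys() iterates resources-major, i.e. in the same order used for the reshape above):

# map the solved matrix x back to a dictionary keyed by (resource, job),
# analogous to the gurobi `assign` variables
assign = {}
for idx, (r_name, j_name) in enumerate(ms.keys()):
    r_idx, j_idx = divmod(idx, len(job))
    assign[(r_name, j_name)] = int(round(x.value[r_idx, j_idx]))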
I have a piece of code that computes partitions of a set of (potentially duplicated) integers, but I am interested in the set of possible partitions and their multiplicities.
You can, for example, run the following code:
import numpy as np
from collections import Counter
import pandas as pd

def _B(i):
    # for a given multiindex i, we defined _B(i) as the set of integers containing i_j times the number j:
    if len(i) != 1:
        B = []
        for j in range(len(i)):
            B.extend(i[j]*[j])
    else:
        B = i*[0]
    return B

def _partition(collection):
    # from here: https://stackoverflow.com/a/62532969/8425270
    if len(collection) == 1:
        yield (collection,)
        return
    first = collection[0]
    for smaller in _partition(collection[1:]):
        # insert `first` in each of the subpartition's subsets
        for n, subset in enumerate(smaller):
            yield smaller[:n] + ((first,) + subset,) + smaller[n + 1:]
        # put `first` in its own subset
        yield ((first,),) + smaller

def to_list(tpl):
    # the final hierarchy is
    return list(list(i) if isinstance(i, tuple) else i for i in tpl)

def _Pi(inst_B):
    # inst_B must be a tuple
    if type(inst_B) != tuple:
        inst_B = tuple(inst_B)
    pp = [tuple(sorted(p)) for p in _partition(inst_B)]
    c = Counter(pp)
    Pi = c.keys()
    N = list()
    for pi in Pi:
        N.append(c[pi])
    Pi = [to_list(pi) for pi in Pi]
    return Pi, N

if __name__ == "__main__":
    import cProfile
    pr = cProfile.Profile()
    pr.enable()
    sh = (3, 3, 3)
    rez = list()
    rez_sorted = list()
    rez_ref = list()
    for idx in np.ndindex(sh):
        if sum(idx) > 0:
            print(idx)
            Pi, N = _Pi(_B(idx))
            print(pd.DataFrame({'Pi': Pi, 'N': N * np.array([np.math.factorial(len(pi) - 1) for pi in Pi])}))
    pr.disable()
    # after your program ends
    pr.print_stats(sort="tottime")
This code computes, for several example tuples of integers (generated by np.ndindex), the partitions and counts I need. Everything happens in the _partition and _Pi functions; this is where you should look.
If you look closely at how these two functions work, you'll see that they compute every potential partition and THEN count how many times each appeared; a small worked example is shown below. For small problems this is fine, but as the size of the problem increases, this starts to take a lot of time. Try setting sh = (5, 5, 5) and you'll see what I mean.
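For concreteness, here is a tiny illustration (my addition, not part of the original post) of what _Pi returns for the multiset (0, 0, 1), which has 5 set-partitions but only 4 distinct ones once the two 0s are treated as indistinguishable:

Pi, N = _Pi((0, 0, 1))
print(Pi)  # [[[0, 0, 1]], [[0], [0, 1]], [[0, 0], [1]], [[0], [0], [1]]]
print(N)   # [1, 2, 1, 1] -- the partition [[0], [0, 1]] arises twice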
So the problem is the following: is there a way to compute the partitions and their numbers of occurrences directly instead?
Edit: I cross-posted on MathOverflow there, and they propose a solution in this article, in Corollary 2.10 (page 10 of the pdf). The problem could be solved by implementing the sets p(v, r) from this corollary. I was hoping, as in the univariate case, that those sets would have a nice recursive expression, but I could not find one yet.
More Edit: This problem is equivalent to finding all (multiset-)partitions of a multiset. If the solution for finding (set-)partitions of a set is given by partial Bell polynomials, here we need a multivariate version of these polynomials.
I am looking for an optimization algorithm that takes a text file encoded with 0s, 1s, and -1s:
1's denoting target cells that require Wi-Fi coverage
0's denoting cells that are walls
-1's denoting cells that are void (they do not require Wi-Fi coverage)
Example of text file:
I have created a solution function along with other helper functions, but I can't seem to get the optimal positions for the routers to ensure proper coverage. There is another file that does the printing; I am struggling with finding the optimal locations. I basically need to change the get_random_position function to get the optimal one, but I am unsure how to do that. Each router covers a square area of at most (2S+1)^2, and the available router types are:
Type 1: S=5; Cost=180
Type 2: S=9; Cost=360
Type 3: S=15; Cost=480
This is the kind of output I am getting:
My code is as follows:
import numpy as np
import time
from random import randint

def is_taken(taken, i, j):
    for coords in taken:
        if coords[0] == i and coords[1] == j:
            return True
    return False

def get_random_position(floor, taken, nrows, ncols):
    i = randint(0, nrows-1)
    j = randint(0, ncols-1)
    while floor[i][j] == 0 or floor[i][j] == -1 or is_taken(taken, i, j):
        i = randint(0, nrows-1)
        j = randint(0, ncols-1)
    return (i, j)

def solution(floor):
    start_time = time.time()
    router_types = [1, 2, 3]
    nrows, ncols = floor.shape
    ratio = 0.1
    router_scale = int(nrows*ncols*0.0001)
    if router_scale == 0:
        router_scale = 1
    row_ratio = int(nrows*ratio)
    col_ratio = int(ncols*ratio)
    print('Row : ', nrows, ', Col: ', ncols, ', Router scale :', router_scale)
    global_best = [0, ([], [], [])]
    taken = []
    while True:
        found_better = False
        best = [global_best[0], (list(global_best[1][0]), list(global_best[1][1]), list(global_best[1][2]))]
        for times in range(0, row_ratio + col_ratio):
            if time.time() - start_time > 27.0:
                print('Time ran out! Using what I got : ', time.time() - start_time)
                return global_best[1]
            fit = []
            for rtype in router_types:
                interim = (list(global_best[1][0]), list(global_best[1][1]), list(global_best[1][2]))
                for i in range(0, router_scale):
                    pos = get_random_position(floor, taken, nrows, ncols)
                    interim[0].append(pos[0])
                    interim[1].append(pos[1])
                    interim[2].append(rtype)
                fit.append((fitness(floor, interim), interim))
            highest_fitness = fit[0]
            for index in range(1, len(fit)):
                if fit[index][0] > highest_fitness[0]:
                    highest_fitness = fit[index]
            if highest_fitness[0] > best[0]:
                best[0] = highest_fitness[0]
                best[1] = (highest_fitness[1][0], highest_fitness[1][1], highest_fitness[1][2])
                found_better = True
                global_best = best
                taken.append((best[1][0][-1], best[1][1][-1]))
                break
        if found_better == False:
            break
    print('Best:')
    print(global_best)
    end_time = time.time()
    run_time = end_time - start_time
    print("Run Time:", run_time)
    return global_best[1]

def available_cells(floor):
    available = 0
    for i in range(0, len(floor)):
        for j in range(0, len(floor[i])):
            if floor[i][j] != 0:
                available += 1
    return available

def fitness(building, args):
    render = np.array(building, dtype=int, copy=True)
    cov_factor = 220
    cost_factor = 22
    router_types = {  # type: [coverage, cost]
        1: {'size': 5, 'cost': 180},
        2: {'size': 9, 'cost': 360},
        3: {'size': 15, 'cost': 480},
    }
    routers_used = args[-1]
    for r, c, t in zip(*args):
        size = router_types[t]['size']
        nrows, ncols = render.shape
        rows = range(max(0, r-size), min(nrows, r+size+1))
        cols = range(max(0, c-size), min(ncols, c+size+1))
        walls = []
        for ri in rows:
            for ci in cols:
                if building[ri, ci] == 0:
                    walls.append((ri, ci))
        def blocked(ri, ci):
            for w in walls:
                if min(r, ri) <= w[0] and max(r, ri) >= w[0]:
                    if min(c, ci) <= w[1] and max(c, ci) >= w[1]:
                        return True
            return False
        for ri in rows:
            for ci in cols:
                if blocked(ri, ci):
                    continue
                if render[ri, ci] == 2:
                    render[ri, ci] = 4
                if render[ri, ci] == 1:
                    render[ri, ci] = 2
        render[r, c] = 5
    return (
        cov_factor * np.sum(render > 1) -
        cost_factor * np.sum([router_types[x]['cost'] for x in routers_used])
    )
Here's a suggestion on how to solve the problem; however, I don't claim this is the best approach, and it's certainly not the only one.
Main idea
Your problem can be modelled as a weighted minimum set cover problem.
Good news, this is a well-known optimization problem:
It is easy to find algorithm descriptions for approximate solutions.
A quick search on the web shows many implementations of approximation algorithms in Python.
Bad news, this is an NP-hard optimization problem:
If you need an exact solution, algorithms will only work for "small" problems in a reasonable amount of time (in your case, the size of the problem is the number of "1" cells).
Approximate (a.k.a. greedy) algorithms are a trade-off between computation requirements and the risk of delivering far-from-optimal solutions in certain cases.
Note that the following part does not prove that your problem is NP-hard. The general minimum set cover problem is NP-hard. In your case the subsets have several properties that might help to design a better algorithm. I have no idea how though.
Translating into a set cover problem
Let's define some sets:
U: the set of "1" cells (requiring Wi-Fi).
P(U): the power set of U (the set of subsets of U).
P: the set of cells on which you can place a router (not sure whether P = U in your original post).
T: the set of router types (3 values in your case).
R+: the set of positive real numbers (used to describe prices).
Let's define a function (pseudo Python):
# Domain of definition : T,P --> R+,P(U)
# This function takes a router type and a position, and returns
# a tuple containing:
# - the price of a router of the given type.
# - the subset of U containing all the position covered by a router
# of the given type placed at the given position.
def weighted_subset(routerType, position):
pass # TODO: implementation
Now we define a last set as the image of the function we've just described: S = weighted_subset(T, P). Each element of this set is a subset of U, weighted by a price in R+.
With all this formalism, finding the router types & positions that:
give coverage to all the desirable locations
minimize the cost
is equivalent to finding a sub-collection of S:
whose union of subsets is equal to U
which minimises the sum of the associated weights
This is the weighted minimum set cover problem.
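To make the suggestion a bit more concrete, here is a minimal sketch of the classic greedy approximation for weighted set cover, phrased in the formalism above. It is an illustration, not a drop-in solution for the original code: weighted_sets is a hypothetical dictionary mapping a (router type, position) key to a (price, covered cells) pair, i.e. an explicit representation of S, and universe is the set U of "1" cells.

def greedy_weighted_set_cover(universe, weighted_sets):
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # pick the set with the best price per newly covered cell
        best_key, best_ratio = None, float("inf")
        for key, (price, covered) in weighted_sets.items():
            newly_covered = len(covered & uncovered)
            if newly_covered == 0:
                continue
            ratio = price / newly_covered
            if ratio < best_ratio:
                best_key, best_ratio = key, ratio
        if best_key is None:
            break  # remaining cells cannot be covered by any available set
        chosen.append(best_key)
        uncovered -= weighted_sets[best_key][1]
    return chosen

# tiny hypothetical instance: two cheap routers end up covering the four cells
universe = {(0, 0), (0, 1), (1, 0), (1, 1)}
weighted_sets = {
    ("type1", (0, 0)): (180, {(0, 0), (0, 1)}),
    ("type1", (1, 0)): (180, {(1, 0), (1, 1)}),
    ("type2", (0, 0)): (360, {(0, 0), (0, 1), (1, 0), (1, 1)}),
}
print(greedy_weighted_set_cover(universe, weighted_sets))

The greedy rule (cheapest price per newly covered cell) gives the standard logarithmic approximation guarantee for set cover; it does not always find the optimal cover.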
I would like to query the value of an exponentially weighted moving average at particular points. An inefficient way to do this is as follows. l is the list of times of events and queries has the times at which I want the value of this average.
a = 0.01
l = [3,7,10,20,200]
y = [0]*1000
for item in l:
    y[int(item)] = 1
s = [0]*1000
for i in xrange(1,1000):
    s[i] = a*y[i-1]+(1-a)*s[i-1]
queries = [23,68,103]
for q in queries:
    print s[q]
Outputs:
0.0355271185019
0.0226018371526
0.0158992102478
In practice l will be very large and the range of values in l will also be huge. How can you find the values at the times in queries more efficiently, and especially without computing the potentially huge lists y and s explicitly? I need it to be in pure Python so I can use PyPy.
Is it possible to solve the problem in time proportional to len(l) and not max(l) (assuming len(queries) < len(l))?
Here is my code for doing this:
def ewma(l, queries, a=0.01):
    def decay(t0, x, t1, a):
        from math import pow
        return pow((1-a), (t1-t0))*x

    assert l == sorted(l)
    assert queries == sorted(queries)

    samples = []
    try:
        t0, x0 = (0.0, 0.0)
        it = iter(queries)
        q = it.next()-1.0
        for t1 in l:
            # new value is decayed previous value, plus a
            x1 = decay(t0, x0, t1, a) + a
            # take care of all queries between t0 and t1
            while q < t1:
                samples.append(decay(t0, x0, q, a))
                q = it.next()-1.0
            # take care of all queries equal to t1
            while q == t1:
                samples.append(x1)
                q = it.next()-1.0
            # update t0, x0
            t0, x0 = t1, x1
        # take care of any remaining queries
        while True:
            samples.append(decay(t0, x0, q, a))
            q = it.next()-1.0
    except StopIteration:
        return samples
I've also uploaded a fuller version of this code with unit tests and some comments to pastebin: http://pastebin.com/shhaz710
EDIT: Note that this does the same thing as what Chris Pak suggests in his answer, which he must have posted as I was typing this. I haven't gone through the details of his code, but I think mine is a bit more general. This code supports non-integer values in l and queries. It also works for any kind of iterable, not just lists, since I don't do any indexing.
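For reference (my addition, not part of the original answer), a quick check against the numbers from the question could look like this; given how the query times are offset inside ewma, the result should match the brute-force output above up to floating-point rounding:

l = [3, 7, 10, 20, 200]
queries = [23, 68, 103]
print ewma(l, queries)
# expected to be close to [0.0355271185019, 0.0226018371526, 0.0158992102478]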
I think you could do it in log(len(l)) time, if l is sorted. The basic idea is that the non-recursive form of the EMA is a*s_i + (1-a)^1 * s_(i-1) + (1-a)^2 * s_(i-2) + ...
This means that for query k, you find the greatest number in l less than k and, up to an estimation limit, use the following, where v is the index in l and l[v] is the event time:
(1-a)^(k - l[v]) * (value at time l[v]) + ...
Then you spend lg(len(l)) time in the search, plus a constant multiple for the depth of your estimation. I'll provide a code sample in a little bit (after work) if you want it; I just wanted to get my idea out there while I was thinking about it.
Here's the code.
v is the dictionary of values at a given time; replace it with 1 if it's just a 1 every time...
import math
from bisect import bisect_right

a = .01
limit = 1000
l = [1,5,14,29...]

def find_nearest_lt(l, time):
    # index of the rightmost entry of l not greater than `time`
    i = bisect_right(l, time)
    if i:
        return i-1
    raise ValueError

def find_ema(l, time):
    i = find_nearest_lt(l, time)
    if l[i] == time:
        result = a * v[l[i]]
        i -= 1
    else:
        result = 0
    # walk backwards while the contribution is still within the estimation limit
    # (the i >= 0 check stops the walk at the front of the list)
    while i >= 0 and (time - l[i]) < limit:
        result += math.pow(1-a, time-l[i]) * v[l[i]]
        i -= 1
    return result
If I'm thinking correctly, the find-nearest step is lg(n), and the while loop is <= 1000 iterations, guaranteed, so it's technically a constant (though a kind of large one). find_nearest was stolen from the page on bisect - http://docs.python.org/2/library/bisect.html
It appears that y is a binary value -- either 0 or 1 -- depending on the values of l. Why not use y = set(int(item) for item in l)? That's the most efficient way to store and look up a list of numbers.
Your code will cause an error the first time through this loop:
s = [0]*1000
for i in xrange(1000):
    s[i] = a*y[i-1]+(1-a)*s[i-1]
because i-1 is -1 when i=0 (first pass of loop) and both y[-1] and s[-1] are the last element of the list, not the previous. Maybe you want xrange(1,1000)?
How about this code:
a = 0.01
l = [3.0,7.0,10.0,20.0,200.0]
y = set(int(item) for item in l)
queries = [23,68,103]
ewma = []
x = 1 if (0 in y) else 0
for i in xrange(1, queries[-1]):
    x = (1-a)*x
    if i in y:
        x += a
    if i == queries[0]:
        ewma.append(x)
        queries.pop(0)
When it's done, ewma should have the moving averages for each query point.
Edited to include SchighSchagh's improvements.