I am using the package PuLP to solve several linear programming problems. Note: I am a beginner in LP.
In order to express one of the constraints for my LP problem as a linear expression, I used a logarithmic transformation. In detail, my constraint has the following form:
log(variable 1) + log(variable 2) <= log(1.2) + log(variable 3)
Is it possible to express this in PuLP?
I tried the following as a test:
model += np.log(pulp.lpSum([x[i] * label_values[i] * female[i] for i in students])) <= np.log(1.2)
and received the following error:
TypeError: loop of ufunc does not support argument 0 of type LpAffineExpression which has no callable log method
It looks like numpy's log function isn't interfacing well with pulp.lpSum.
Any idea on whether or not this is possible?
Thanks!
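One note on the test above: since log is monotonically increasing, log(lpSum(...)) <= log(1.2) holds exactly when lpSum(...) <= 1.2 (for a positive sum), so that particular constraint can be stated linearly with no log at all. A minimal sketch, reusing the names from the question:

# equivalent linear form of np.log(lpSum(...)) <= np.log(1.2),
# valid because log is monotonic
model += pulp.lpSum([x[i] * label_values[i] * female[i] for i in students]) <= 1.2

The general constraint log(v1) + log(v2) <= log(1.2) + log(v3), however, exponentiates to v1*v2 <= 1.2*v3, a product of variables, which is not linear; it only stays linear if the decision variables are the logs u_i = log(v_i) themselves, i.e. u1 + u2 <= log(1.2) + u3.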
I am working on some paper replication, but I am having trouble with it.
According to the log, it says RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation. However, the line the error refers to is just a simple property setter inside the class:
@pdfvec.setter
def pdfvec(self, value):
    self.param.pdfvec[self.key] = value  # where the error message is referring to
Aren't in-place operations something like += or *=, etc.? I don't see why the error appears on this line.
I am really confused by this message and would be glad if anyone knows a possible reason it can happen.
For additional information, this is the part where the setter function was called:
def _update_params(params, pdfvecs):
    idx = 0
    for param in params:
        totdim = param.stats.numel()
        shape = param.stats.shape
        param.pdfvec = pdfvecs[idx: idx + totdim].reshape(shape)  # where the setter function is called
        idx += totdim
I know this may still not be enough information to solve the problem, but if you know any possible reason the error message appeared, I would be really glad to hear it.
An in-place operation means the assignment you've done modifies the underlying storage of your Tensor, whose requires_grad is set to True according to your error message.
Here, param.pdfvec[self.key] is a view of param.pdfvec, a leaf Tensor whose values will be updated during back-propagation. Assigning a value to that view writes into the leaf in place, which would interfere with autograd, so this action is prohibited by default. You can still do it by directly modifying the underlying storage (e.g., with .data).
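A minimal sketch of the failure and the .data workaround, using a made-up leaf tensor in place of param.pdfvec:

import torch

param = torch.zeros(4, requires_grad=True)  # a leaf tensor that requires grad
# param[0] = 1.0  # RuntimeError: a view of a leaf Variable that requires grad ...
param.data[0] = 1.0    # OK: writes to the underlying storage, bypassing autograd
with torch.no_grad():  # alternatively, suppress autograd tracking for the write
    param[1] = 2.0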
I am currently working on a MILP formulation that I want to solve using Gurobi with a branch-and-cut approach. My model is a variation of a classic Pickup and Delivery Problem with Time Windows (PDPTW), for which several classes of valid inequalities are defined. As the branch-and-bound solver runs, I want to add those inequalities (i.e., I want to add cuts), if certain conditions in the current node are met. My issue is as follows:
My variables are defined as dictionaries, which makes it easy to use them when formulating constraints because I can easily use their original indexing. An example of how I define variables is provided below:
tauOD = {}
# Start and end service time of trucks
for i in range(0, Nt):
    tauOD[i, 0] = model.addVar(lb=0.0, ub=truckODTime[i][0],
                               vtype=GRB.CONTINUOUS, name='tauOD[%s,%s]' % (i, 0))
    tauOD[i, 1] = model.addVar(lb=0.0, ub=truckODTime[i][1],
                               vtype=GRB.CONTINUOUS, name='tauOD[%s,%s]' % (i, 1))
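(For reference, gurobipy's addVars builds the same keyed container in one call; a sketch assuming the same data, with the bounds set per variable afterwards:)

# equivalent construction: addVars returns a dict-like tupledict keyed by (i, j)
tauOD = model.addVars(Nt, 2, lb=0.0, vtype=GRB.CONTINUOUS, name='tauOD')
for i in range(Nt):
    tauOD[i, 0].ub = truckODTime[i][0]
    tauOD[i, 1].ub = truckODTime[i][1]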
Once my model is defined in terms of variables, constraints, and cost function, in a classic branch-and-bound setting I would simply call model.optimize() to start the process. In this case, I am using model.optimize(my_callback), where my_callback is the callback function I defined to add cuts. My issue is that the callback function, for some reason, does not like model variables defined as dictionaries. The only workaround I found is as follows:
model._vars = model.getVars()  # added this call right before the optimization starts
model.optimize(mycallback)
and then inside the callback I can retrieve variables by their position in the solution vector, not by their original indices:
def mycallback(model, where):
    if where == GRB.Callback.MIPNODE:
        status = model.cbGet(GRB.Callback.MIPNODE_STATUS)
        # If the current node was solved to optimality, add cuts to
        # strengthen the linear relaxation
        if status == GRB.OPTIMAL:
            this_Sol = model.cbGetNodeRel(model._vars)  # variable values in the current relaxation
            # Dummy cut, just for illustration purposes: the cut itself is
            # built from the Var objects, accessed by position
            model.cbCut(lhs=model._vars[123] + model._vars[125],
                        sense=GRB.LESS_EQUAL, rhs=1)
The cut above is just a dummy example, to show that I can only add cuts using the order in which variables are sequenced in my solution, not their original indexing. As an example, I would like to be able to write a constraint inside my callback as
x[0,3,0] + x[0,5,0] <= 1
but the only thing I can do is to write
model._vars[123] + model._vars[125] <= 1
(assuming x[0,3,0] is the 124th variable of my solution vector and x[0,5,0] is the 126th). Although knowing the order of the variables is doable, since it depends on the order in which I create them when setting up the model, it is a much more challenging (and error-prone) process than using the indices directly, as I do when defining the original constraints of my model (see below for an example):
###################
### CONSTRAINTS ###
###################
# For each truck, one active connection from the origin depot
for i in range(0, Nt):
    thisLHS = LinExpr()
    for j in range(0, sigma):
        thisLHS += x[0, j + 1, i]
    thisLHS += x[0, 2 * sigma + 1, i]
    model.addConstr(lhs=thisLHS, sense=GRB.EQUAL, rhs=1, name='C1_' + str(i))
Did any of you experience a similar problem? A friend of mine told me that Gurobi, for some reason, does not like variables defined as dictionaries inside a callback function, but I do not know how to work around this.
Any help would be greatly appreciated.
Thanks!
Alessandro
You should store references to the variable dictionaries on the model object. To keep the original variable indices available inside the callback, also store the index lists.
Try this:
model._I = model.I
model._J = model.J
model._K = model.K
model._x = model.x
You need these index lists so you can loop over each target variable x and check some condition, just as you would when writing a normal constraint for your model.
Then inside your callback you can make the index iterations:
def mycallback(model, where):
    if where == GRB.Callback.MIPNODE:
        # use cbGetNodeRel here: cbGetSolution is only valid in MIPSOL callbacks
        x = model.cbGetNodeRel(model._x)
        for i in model._I:
            if sum(x[i, j, k] for j in model._J for k in model._K) > 1:
                Add_the_cut()  # placeholder: add the violated cut, e.g. via model.cbCut(...)
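With the dictionary stored on the model this way, the dummy cut from the question can be written with the original indices rather than positions. A sketch (the violation tolerance is an arbitrary choice):

def mycallback(model, where):
    if (where == GRB.Callback.MIPNODE
            and model.cbGet(GRB.Callback.MIPNODE_STATUS) == GRB.OPTIMAL):
        xval = model.cbGetNodeRel(model._x)           # values keyed by (i, j, k)
        if xval[0, 3, 0] + xval[0, 5, 0] > 1 + 1e-6:  # cut is violated at this node
            model.cbCut(model._x[0, 3, 0] + model._x[0, 5, 0] <= 1)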
I was studying the AdaDelta optimization algorithm so I tried to implement it in Python, but there is something wrong with my code, since I get the following error:
AttributeError: 'numpy.ndarray' object has no attribute 'sqrt'
I did not find anything about what causes that error. According to the message, it is raised by this line of code:
rms_grad = np.sqrt(self.e_grad + epsilon)
This line is similar to this equation:
RMS[g]_t = √(E[g^2]_t + ϵ)
I got the core equations of the algorithm in this article: http://ruder.io/optimizing-gradient-descent/index.html#adadelta
Just one more detail: I'm initializing the E[g^2]_t matrix like this:
self.e_grad = (1 - mu)*np.square(nabla)
where nabla is the gradient, similar to this equation:
E[g^2]_t = γ·E[g^2]_{t−1} + (1−γ)·g_t^2
(the first term is equal to zero in the first iteration, just like the line of code above)
So I want to know whether I'm initializing the E matrix the wrong way or doing the square root inappropriately. I tried the pow() function instead, but it doesn't work either. If anyone could help me with this I would be very grateful; I've been trying for weeks.
Additional details requested by andersource:
Here is the entire source code on GitHub: https://github.com/pedrovbeltran/neural-networks-and-deep-learning/blob/experimental/modified-networks/network2_with_adadelta.py
I think the problem is that self.e_grad_w is an object ndarray of shape (2,) that contains two further 2-d ndarrays, instead of directly containing numeric data. It seems to be initialized this way in e_grad_initializer, where nabla_w has the same structure. I didn't track where that structure originates, but I believe once you fix it the problem will be resolved.
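A minimal reproduction of that hypothesis, assuming the object-array structure described above: np.sqrt on an object-dtype array falls back to calling a .sqrt() method on each element, which ndarrays don't have, so applying it per layer avoids the error:

import numpy as np

# object array holding two differently shaped per-layer arrays
e_grad = np.array([np.zeros((3, 2)), np.zeros((2, 1))], dtype=object)
# np.sqrt(e_grad + 1e-8)  # AttributeError: 'numpy.ndarray' object has no attribute 'sqrt'
rms_grad = [np.sqrt(g + 1e-8) for g in e_grad]  # per-layer sqrt works fine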
I (repeatedly) need numeric gradient information of a nonlinear pyomo constraint con at a given point (i.e. the variables of the corresponding pyomo model are all set to a specific value). I have read this post and decided that the (slightly modified) lines
from pyomo.core.base.symbolic import differentiate
var_list = list(model.component_objects(Var, active=True))
grad_num = [value(partial) for partial in differentiate(g_nu.body, wrt_list=var_list)]
should serve my purpose.
However, the example below already fails, presumably due to the appearance of the exponential function:
from pyomo.environ import *
model = ConcreteModel()
model.x_1 = Var()
model.x_2 = Var()
model.constr = Constraint(expr=2 * model.x_1**4 + exp(model.x_2) <= 3)
model.x_1.set_value(1)
model.x_2.set_value(1)
varList = list(model.component_objects(Var, active=True))
grad = [value(partial) for partial in differentiate(model.constr.body, wrt_list=varList)]
DeveloperError: Internal Pyomo implementation error:
"sympy expression type 'exp' not found in the operator map for expression >exp(x1)"
Please report this to the Pyomo Developers.
So, my question is: can Pyomo generally differentiate expressions like the exponential function, square root, etc., and is my example just an unfortunate coincidence that can easily be fixed? I will be dealing with various models from the MINLPLIB, and some tool for differentiating the expressions that appear is crucial.
This error existed through Pyomo 5.2 and was resolved in Pyomo 5.3. Upgrading to 5.3 fixes the problem, and your example works fine (after adding from pyomo.core.base.symbolic import differentiate).
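As a quick sanity check after upgrading, the gradient of the constraint body 2*x_1**4 + exp(x_2) at x_1 = x_2 = 1 can be verified by hand:

# d/dx_1 [2*x_1**4 + exp(x_2)] = 8*x_1**3 = 8.0 at x_1 = 1
# d/dx_2 [2*x_1**4 + exp(x_2)] = exp(x_2) ≈ 2.71828 at x_2 = 1
print(grad)  # expected (in varList order): [8.0, 2.718281828...]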