Using logarithms with PuLP? - python

I am using the package PuLP to solve several linear programming problems. Note: I am a beginner in LP.
In order to express one of the constraints for my LP problem as a linear expression, I used a logarithmic transformation. In detail, my constraint has the following form:
log(variable 1) + log(variable 2) <= log(1.2) + log(variable 3)
Is it possible to write this into PuLP?
I tried the following as a test:
model += np.log(pulp.lpSum([x[i] * label_values[i] * female[i] for i in students])) <= np.log(1.2)
and received the following error:
TypeError: loop of ufunc does not support argument 0 of type LpAffineExpression which has no callable log method
It looks like the numpy log method isn't interfacing well with pulp.lpSum.
Any idea on whether or not this is possible?
Thanks!
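A note for anyone hitting the same wall: PuLP only accepts affine (linear) expressions, so applying np.log to an LpAffineExpression can never work. Below is a minimal sketch of two common workarounds; the variable names are hypothetical, and the second assumes the rest of the model can be rewritten in terms of the logs of the variables.

import numpy as np
import pulp

model = pulp.LpProblem("example", pulp.LpMinimize)

# 1) When log appears on both sides, monotonicity lets you drop it:
#    log(expr) <= log(1.2) is equivalent to expr <= 1.2 (for positive expr).
# model += pulp.lpSum(x[i] * label_values[i] * female[i] for i in students) <= 1.2

# 2) For log(x1) + log(x2) <= log(1.2) + log(x3), make the logs themselves the
#    decision variables (y_i stands in for log(x_i)); the constraint becomes
#    linear, and each x_i = exp(y_i) is recovered after solving.
y1, y2, y3 = (pulp.LpVariable(f"log_x{i}") for i in (1, 2, 3))
model += y1 + y2 <= np.log(1.2) + y3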

Related

AttributeError: 'gurobipy.QuadExpr' object has no attribute 'getVar'

I am trying to solve a MILP problem in Python with the Gurobi solver. When I solve the model with the constraint below, I get this error: "AttributeError: 'gurobipy.QuadExpr' object has no attribute 'getVar'".
Could you help me fix this error? Thank you in advance!
mdl.addConstrs((t[i,k] * X[i,j,k] - te1[i] <= 5) >> (z1[i,k] == 1) for i,j,k in arcos if i != 0 and i != 23)
where: t[i,k] is a continuous variable;
X[i,j,k], z1[i,k] are binary variables;
te1[i] is a parameter.
The Gurobi documentation demonstrates how to add indicator constraints: Model.addGenConstrIndicator()
And the documentation for Model.addConstrs() also has an example for adding multiple indicator constraints in one call:
model.addConstrs((x[i] == 1) >> (y[i] + z[i] <= 5) for i in range(5))
In general, you need to define a binary variable that serves as the indicator. Your constraints seem to be the other way around, with a condition being fulfilled resulting in whether an indicator variable is set.
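For illustration, a sketch of the shape Gurobi's >> syntax requires, reusing the question's names: the tested side must be a single binary variable compared to a constant, and the implied side must be linear, so the bilinear product t[i,k] * X[i,j,k] would itself have to be linearized first (it is dropped here purely to show the structure):

# Indicator on the left, linear constraint on the right.
mdl.addConstrs(
    ((z1[i, k] == 1) >> (t[i, k] - te1[i] <= 5)
     for i, j, k in arcos if i != 0 and i != 23),
    name="indicator",
)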
This is indeed not a very informative error message. Developers are notorious for not paying much attention to formulating good, meaningful error messages. They should. A better error message could have prevented this post.
Now to the underlying issue. An indicator constraint has the following structure:
binary variable = 0 ==> linear constraint
or
binary variable = 1 ==> linear constraint
You need to reformulate things to fit into this scheme. Or use a big-M formulation.
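For illustration, a big-M sketch of the direction the question actually wants ("if the expression is <= 5, force z1 = 1"), written as the contrapositive "z1 = 0 ==> expression >= 5 + eps". The values of M and eps are assumptions to be tightened for your data, and Gurobi will treat the bilinear term t[i,k] * X[i,j,k] as a quadratic constraint (which may itself need linearizing):

M, eps = 1e4, 1e-6  # assumed big-M bound and strict-inequality tolerance

# z1[i,k] = 0 forces t[i,k]*X[i,j,k] - te1[i] >= 5 + eps; equivalently,
# whenever the expression drops to <= 5, z1[i,k] must be 1.
mdl.addConstrs(
    (t[i, k] * X[i, j, k] - te1[i] >= 5 + eps - M * z1[i, k]
     for i, j, k in arcos if i != 0 and i != 23),
    name="bigM",
)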

Recurrent problem using scipy.optimize.fmin

I am encountering some problems when translating the following code from MATLAB to Python:
Matlab code snippet:
x=M_test %M_test is a 1x3 array that holds the adjustment points for the function
y=R_test %R_test is also a 1x3 array
>> M_test=[0.513,7.521,13.781]
>> R_test=[2.39,3.77,6.86]
expo3 = @(b,x) b(1).*(exp(-(b(2)./x).^b(3)));
NRCF_expo3 = @(b) norm(y-expo3(b,x));
B0_expo3=[fcm28;1;1];
B_expo3=fminsearch(NRCF_expo3,B0_expo3);
Data_raw.fcm_expo3=(expo3(B_expo3,Data_raw.M));
The translated (python) code:
expo3=lambda x,M_test: x[0]*(1-exp(-1*(x[1]/M_test)**x[2]))
NRCF_expo3=lambda R_test,x,M_test: np.linalg.norm(R_test-expo3,ax=1)
B_expo3=scipy.optimize.fmin(func=NRCF_expo3,x0=[fcm28,1,1],args=(x,))
For clarity: the objective function 'expo3' should pass through the adjustment points defined by M_test.
'NRCF_expo3' is the function to be minimised, which is basically the error between R_test and the fitted exponential function.
When I run the code, I obtain the following error message:
B_expo3=scipy.optimize.fmin(func=NRCF_expo3,x0=[fcm28,1,1],args=(x,))
NameError: name 'x' is not defined
There are a lot of similar questions that I have perused. If I delete 'args' from the optimization call, as the answers to "numpy/scipy analog of matlab's fminsearch" seem to indicate it is not necessary, I obtain the error:
line 327, in function_wrapper
return function(*(wrapper_args+args))
TypeError: <lambda>() missing 2 required positional arguments: 'x' and 'M_test'
There are a lot of other modifications that I have tried, following examples like Using scipy to minimize a function that also takes non variational parameters or those found in Open source examples, but nothing works for me.
I expect this is probably quite obvious, but I am very new to Python and I feel like I am looking for a needle in a haystack. What am I not seeing?
Any help would be really appreciated. I can also provide more code, if that is necessary.
I think you shouldn't use lambdas in your code; instead, define a single target function with your three parameters (see PEP 8). There is a lot of missing information in your post, but from what I can infer, you want something like this:
from scipy.optimize import fmin
import numpy as np

# Define parameters
M_TEST = np.array([0.513, 7.521, 13.781])
R_TEST = np.array([2.39, 3.77, 6.86])
X0 = np.array([10, 1, 1])  # whatever your variable fcm28 is

# fmin passes the current parameter vector as the first positional
# argument and the entries of `args` after it.
def nrcf_exp3(x, m_test, r_test):
    expo3 = x[0] * (1 - np.exp(-(x[1] / m_test) ** x[2]))
    return np.linalg.norm(r_test - expo3)

fmin(nrcf_exp3, X0, args=(M_TEST, R_TEST))

AdaDelta optimization algorithm using Python

I was studying the AdaDelta optimization algorithm so I tried to implement it in Python, but there is something wrong with my code, since I get the following error:
AttributeError: 'numpy.ndarray' object has no attribute 'sqrt'
I could not find anything about what is causing that error. According to the message, it's because of this line of code:
rms_grad = np.sqrt(self.e_grad + epsilon)
This line is similar to this equation:
RMS[g]_t = √(E[g²]_t + ε)
I got the core equations of the algorithm from this article: http://ruder.io/optimizing-gradient-descent/index.html#adadelta
Just one more detail: I'm initializing the E[g²]_t matrix like this:
self.e_grad = (1 - mu)*np.square(nabla)
Where nabla is the gradient. Similar to this equation:
E[g²]_t = γ·E[g²]_{t−1} + (1−γ)·g²_t
(the first term is equal to zero in the first iteration, just like the line of code above)
So I want to know whether I'm initializing the E matrix the wrong way or doing the square root inappropriately. I tried using the pow() function, but it doesn't work either. If anyone could help me with this I would be very grateful; I've been trying for weeks.
Additional details requested by andersource:
Here is the entire source code on GitHub: https://github.com/pedrovbeltran/neural-networks-and-deep-learning/blob/experimental/modified-networks/network2_with_adadelta.py
I think the problem is that self.e_grad_w is an object ndarray of shape (2,) whose elements are themselves 2-d ndarrays, rather than a single numeric array. It seems to be initialized that way in e_grad_initializer, where nabla_w has the same structure. I didn't track where this comes from all the way back, but I believe the error will go away once you fix that.
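To make the failure mode concrete, here is a minimal reproduction (not the poster's actual code): NumPy ufuncs applied to an object-dtype array fall back to calling a same-named method on each element, and plain ndarrays have no sqrt method.

import numpy as np

# Ragged object array, analogous to e_grad_w holding one matrix per layer:
e_grad_w = np.array([np.zeros((3, 2)), np.zeros((2, 4))], dtype=object)

epsilon = 1e-8
try:
    rms_grad = np.sqrt(e_grad_w + epsilon)  # fails on object dtype
except (AttributeError, TypeError) as err:  # exact exception depends on the NumPy version
    print(err)

# Working alternative: apply the operation layer by layer.
rms_grad = [np.sqrt(layer + epsilon) for layer in e_grad_w]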

Gradient of a nonlinear pyomo constraint at a given point

I (repeatedly) need numeric gradient information for a nonlinear pyomo constraint con at a given point (i.e. the variables of the corresponding pyomo model are all set to specific values). I have read this post and decided that the (slightly modified) lines
from pyomo.core.base.symbolic import differentiate
var_list = list(model.component_objects(Var, active=True))
grad_num = [value(partial) for partial in differentiate(con.body, wrt_list=var_list)]
should serve my purpose.
However, the example below already fails, presumably due to the appearance of the exponential function:
from pyomo.environ import *
model = ConcreteModel()
model.x_1 = Var()
model.x_2 = Var()
model.constr = Constraint(expr = 2*(model.x_1)**4+exp(model.x_2)<=3)
model.x_1.set_value(1)
model.x_2.set_value(1)
varList = list(model.component_objects(Var, active=True))
grad = [value(partial) for partial in differentiate(model.constr.body, wrt_list=varList)]
DeveloperError: Internal Pyomo implementation error:
"sympy expression type 'exp' not found in the operator map for expression exp(x1)"
Please report this to the Pyomo Developers.
So, my question is: can Pyomo differentiate expressions involving the exponential function, square roots, etc. in general, and is my example just an unfortunate edge case that can easily be fixed? I will be dealing with various models from the MINLPLIB, and a tool for differentiating the expressions that appear there is crucial.
This error existed through Pyomo 5.2 and was resolved in Pyomo 5.3. Upgrading to 5.3 fixes the problem, and your example works fine (after adding from pyomo.core.base.symbolic import differentiate).
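For reference, the question's example with that import added runs as-is under Pyomo 5.3; the printed values are the partial derivatives 8·x_1³ and exp(x_2) evaluated at (1, 1):

from pyomo.environ import *
from pyomo.core.base.symbolic import differentiate

model = ConcreteModel()
model.x_1 = Var()
model.x_2 = Var()
model.constr = Constraint(expr=2 * model.x_1**4 + exp(model.x_2) <= 3)
model.x_1.set_value(1)
model.x_2.set_value(1)

var_list = list(model.component_objects(Var, active=True))
grad = [value(partial) for partial in differentiate(model.constr.body, wrt_list=var_list)]
print(grad)  # [8.0, 2.718281828...]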

Pyomo: How to use the final data point in an abstract model's objective?

I have a Pyomo model which has the form:
from pyomo.environ import *
from pyomo.dae import *
m = AbstractModel()
m.t = ContinuousSet(bounds=(0,120))
m.T = Param(default=120)
m.S = Var(m.t, bounds=(0,None))
m.Sdot = DerivativeVar(m.S)
m.obj = Objective(expr=m.S[m.T], sense=maximize)
Note that the objective m.obj relies on the parameter m.T. Attempting to run this gives the error:
TypeError: unhashable type: 'SimpleParam'
Using a value, such as expr=m.S[120] gives the error:
ValueError: Error retrieving component S[120]: The component has not been constructed.
In both cases, my goal is the same: to optimize for the largest possible value of S at the horizon.
How can I create an abstract model which expresses this?
You are hitting on two somewhat separate issues:
TypeError: unhashable type: 'SimpleParam'
Is due to a bug in Pyomo 4.3 where you cannot directly use simple Params as indexes into other components. That said, the fix for this particular problem will not fix your example model.
The trick to fixing your Objective declaration is to encapsulate the Objective expression within a rule:
def obj_rule(m):
    return m.S[120]
    # or better yet:
    # return m.S[m.T]
    # or
    # return m.S[m.t.last()]
m.obj = Objective(rule=obj_rule, sense=maximize)
The problem is that when you are writing an Abstract model, each component is only being declared, but not defined. So, the Var S is declared to exist, but has not been defined (it is an empty shell with no members). This causes a problem because Python (not Pyomo) attempts to resolve the m.S[120] to a specific variable immediately before calling the Objective constructor. The use of rules (functions) in Abstract models allows you to defer the resolution of the expression until Pyomo is actually constructing the model instance. Pyomo constructs the instance components in the same order that you declared them on the Abstract model, so when it fires the obj_rule, the previous components (S, T, and t) are all constructed and S has valid members at the known points of the ContinuousSet (in this case, the bounds).
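Putting it together, a minimal sketch of the corrected abstract model (the rule defers the indexing of S until the instance is constructed):

from pyomo.environ import *
from pyomo.dae import *

m = AbstractModel()
m.t = ContinuousSet(bounds=(0, 120))
m.T = Param(default=120)
m.S = Var(m.t, bounds=(0, None))
m.Sdot = DerivativeVar(m.S)

def obj_rule(m):
    # Resolved at construction time, when m.t has its members.
    return m.S[m.t.last()]
m.obj = Objective(rule=obj_rule, sense=maximize)

instance = m.create_instance()  # obj_rule fires here, not at declaration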
