Pyomo - Objective function as average value of param - python

I am trying to use Pyomo for an LP problem and I would like the objective function to be the mean value of a particular parameter in my dataframe (which I'll call obj_param).
I had previously set this up like so:
model = ConcreteModel()
model.decision_var = Var(list(idx for idx in self.df.index), domain=NonNegativeReals)
model.obj = Objective(
    expr= -1 *  # because I want to maximize not minimize
    sum(model.decision_var[idx] * df.loc[idx, 'obj_param'] for idx in df.index)
)
The decision_var here is a column of counts (like "acres of this crop" in the classic farmer problem) and the obj_param is the value of this "crop", so my objective (as written) multiplies the acres of the crop by its value to maximize the total value.
This makes sense in the farmer problem, but what I'm actually trying to do in my case is to maximize the mean value of each acre. (Forgive the farmer metaphor, it becomes a bit strained here.)
To do this, I change my objective as follows:
model.obj = Objective(
    expr= -1 *  # because I want to maximize not minimize
    sum(model.decision_var[idx] * df.loc[idx, 'obj_param'] for idx in df.index) /
    sum(model.decision_var[idx] for idx in df.index)
)
Conceptually this looks right to me, but now when I run it I get RuntimeError: Cannot write legal LP file. Objective 'obj' has nonlinear terms that are not quadratic.
I can vaguely understand what this error is saying, but I don't totally see how this equation is non-linear. Either way, more generally I'm asking: is it possible in pyomo to define the objective as an average in the way that I'm trying to do?
Thanks for any help!

Related

Imposing monotonicity with scipy.optimize.minimize

I am trying to minimize a function of a vector of length 20, but I want to constrain the solution to be monotonic, i.e.
x[1] <= x[2] <= ... <= x[20]
I have tried to implement this in the following way using "constraints" for this routine:
cons = tuple([{'type':'ineq', 'fun': lambda x: x[i]- x[i-1]} for i in range(1, len(node_vals))])
res = sp.optimize.minimize(localisation, b, args=(d), constraints = cons) #optimize
However, the results I get are not monotonic, even when the initial guess b is; it seems that the optimizer is completely ignoring the constraints. What could be going wrong? I have also tried changing the constraint to x[i]**3 - x[i+1]**3 to make it "smoother", but it didn't help at all. My objective function, localisation, is the integral of the solution to an eigenvalue problem whose parameters are defined beforehand:
def localisation(node_vals, domain):  # calculate localisation for solutions with piecewise linear grading
    f = piecewise(node_vals, domain)  # create piecewise linear function using given values at nodes
    # plt.plot(domain, f(domain))
    M = diff_matrix(f(domain))  # differentiation matrix created from piecewise linear function
    m = np.concatenate(([0], get_solutions(M)[1][:, 0], [0]))
    integral = num_int(domain, m)
    return integral
You didn't post a minimal reproducible example that we can run. However, did you try to specify which optimization algorithm SciPy should use? Something like this:
res = sp.optimize.minimize(localisation, b, args=(d), constraints=cons, method='SLSQP')
I'm having a very similar problem but with additional upper and lower bounds on the monotonicity property. I'm tackling the problem like this (maybe it helps you):
Use the trust-region constrained algorithm provided by SciPy ('trust-constr'). It gives us a way of dealing with linear constraints in matrix form:
lb <= A.dot(x) <= ub
where lb and ub are the lower and upper bounds of the constraints and A is the matrix representing the linear constraints.
Every row of matrix A is a linear term that defines one constraint.
If, for example, x[0] <= x[1], this can be rewritten as x[0] - x[1] <= 0, which as a row of A looks like [1, -1, ...], provided the upper-bound vector has a 0 at that position (the reverse direction also works; either way, having at least one of the two bounds makes this easy).
Setting up enough of these inequalities, and collecting them into a single matrix, gives you the full constraint set.
Hope this helps a bit; it did the job for my problem.
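Below is a minimal sketch of this matrix-based approach. The toy quadratic objective and the problem size are placeholders (not the asker's localisation function); each row of A encodes x[i+1] - x[i] >= 0, i.e. a non-decreasing solution:
import numpy as np
from scipy.optimize import minimize, LinearConstraint

def objective(x):
    # placeholder objective: pull x toward a straight line
    return np.sum((x - np.linspace(0.0, 1.0, x.size)) ** 2)

n = 20
A = np.zeros((n - 1, n))
for i in range(n - 1):
    A[i, i] = -1.0      # -x[i]
    A[i, i + 1] = 1.0   # +x[i+1]

monotone = LinearConstraint(A, lb=0.0, ub=np.inf)  # 0 <= A @ x <= inf

x0 = np.random.rand(n)
res = minimize(objective, x0, method='trust-constr', constraints=[monotone])
print(np.all(np.diff(res.x) >= -1e-9))  # True: the solution is monotonic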

How to find the non-zero minimum value in the objective function of a mathematical model

It is a mathematical model. x is a vector of 0-1 decision variables, and c is a vector of non-negative coefficients. The tricky problem I'm facing now is how to find the non-zero minimum value in a series of c*x terms; the objective function is, in effect, the minimum of c[i]*x[i] over all i.
Now, I want to find the minimum element in this expression, i.e. the smallest coefficient c selected by the decision variables x. However, when an element is not selected, c*x for that element equals 0, and since c > 0 the objective then collapses to 0. That's not what I want.
How can I modify this objective function? Adding variables and nonlinearity are allowed. I hope to solve this problem with Gurobi. How should the modified function be handled for Gurobi? (Linearization? Gurobi's built-in functions?)
Thanks!
You could introduce an auxiliary continuous variable Zi for every binary decision variable Xi.
Then add constraints for all Zi:
Zi = Xi * Ci + (1-Xi) * BIG_NUMBER
Due to this constraint, Zi is either Ci or BIG_NUMBER.
You can then take the minimum of all Zi as your objective.
Have a look at this article about conditional statements and indicator constraints.
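A minimal gurobipy sketch of this reformulation, with made-up coefficients: BIG_NUMBER (BIG_M below) must exceed every Ci, and the at-least-one-selection constraint is an extra assumption added here so the minimum is taken over a non-empty set. In a real model the Xi would be driven by your other constraints rather than chosen freely:
import gurobipy as gp
from gurobipy import GRB

c = [4.0, 7.0, 2.0, 9.0]  # placeholder coefficients
BIG_M = 1e4               # larger than any c[i]
n = len(c)

m = gp.Model("nonzero_min")
x = m.addVars(n, vtype=GRB.BINARY, name="x")
z = m.addVars(n, lb=0.0, name="z")
z_min = m.addVar(lb=0.0, name="z_min")

# z[i] equals c[i] when x[i] = 1, and BIG_M when x[i] = 0
m.addConstrs(z[i] == x[i] * c[i] + (1 - x[i]) * BIG_M for i in range(n))

# z_min = min_i z[i] via Gurobi's general MIN constraint
m.addGenConstrMin(z_min, [z[i] for i in range(n)])

m.addConstr(x.sum() >= 1)  # assumption: at least one element is selected

m.setObjective(z_min, GRB.MINIMIZE)
m.optimize()
print(z_min.X)  # the smallest selected c[i]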
How about this:
def find_non_zero_minimum(cs, xs):
    options = [c * x for c, x in zip(cs, xs)]
    options.sort()
    return next(filter(None, options))  # return the first (smallest) element after filtering out zeros
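For example, with hypothetical solved values of x (note that this computes the value in plain Python after the solve, not inside the Gurobi model):
print(find_non_zero_minimum([4, 7, 2, 9], [1, 0, 1, 0]))  # prints 2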

Scipy optimize to target

I am trying to optimize a function to get it as close to zero as possible.
The function is:
def goal_seek_func(x: float) -> float:
    lcos_list_temp = [energy_output[i] * x for i in range(life)]
    npv_lcos_temp = npv(cost_capital, lcos_list_temp)
    total = sum([cost_energy_capacity,
                 cost_power_conversion,
                 balance_of_plant,
                 cost_construction_commissioning,
                 npv_o_m,
                 npv_eol,
                 npv_cost_charging,
                 npv_lcos_temp,
                 ])
    return total
All the variables are calculated earlier in the code. It is a linear equation: as x gets smaller, so does total.
I am trying to find the value of x where total is as close to 0 as possible.
I have tried to use:
scipy.optimize.minimize_scalar(goal_seek_func)
but this clearly minimizes the equation to -inf. I have read the docs, but cannot see where to define a target output of the function. Where can I define this, or is there a better method?
I am trying to find the value of x where total is as close to 0 as possible.
Then you want to solve the equation goal_seek_func(x) = 0 instead of minimizing goal_seek_func(x). See here for an explanation of why these two things are not the same. That being said, you can easily solve the equation by minimizing some vector norm of your objective function:
res = scipy.optimize.minimize_scalar(lambda x: goal_seek_func(x)**2)
If the objective value res.fun is zero, res.x solves your equation. Otherwise, res.x is at least the best possible value.
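As a self-contained illustration with a made-up linear stand-in for goal_seek_func (the real constants are not known here):
import scipy.optimize

def goal_seek_func(x: float) -> float:
    return 3.5 * x + 10.0  # placeholder: linear in x

res = scipy.optimize.minimize_scalar(lambda x: goal_seek_func(x) ** 2)
print(res.x)                  # about -2.857, i.e. -10/3.5
print(goal_seek_func(res.x))  # about 0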

How to define complex objective functions in or-tools?

I would like to know how to define a complex objective function using or-tools (if it is possible).
The basic example below shows how to set up a basic linear problem with OR-Tools in Python:
solver = pywraplp.Solver('lp_pricing_problem', pywraplp.Solver.GLOP_LINEAR_PROGRAMMING)
# Define variables with a range from 0 to 1000.
x = solver.NumVar(0, 1000, 'Variable_x')
y = solver.NumVar(0, 1000, 'Variable_y')
# Define some constraints.
solver.Add(x >= 17)
solver.Add(x <= 147)
solver.Add(y >= 61)
solver.Add(y <= 93)
# Minimize 0.5*x + 2*y
objective = solver.Objective()
objective.SetCoefficient(x, 0.5)
objective.SetCoefficient(y, 2)
objective.SetMinimization()
status = solver.Solve()
# Print the solution
if status == solver.OPTIMAL:
    print("x: {}, y: {}".format(x.solution_value(), y.solution_value()))  # x: 17.0, y: 61.0
In this very basic example the objective function is Minimize(0.5*x + 2*y).
What would be the syntax to obtain, for example, the least squares Minimize(x^2 + y^2) or the absolute value of a variable Minimize(abs(x) + y)?
Is it possible to define a sub-function and call it into the objective function? Or should I proceed another way?
Many thanks in advance,
Romain
You've tagged this question with linear-programming, so you already have the ingredients to figure out the answer here.
If you check out this page, you'll see that OR-Tools solves linear programs, as well as a few other families of optimization problems.
So the first objective function you mention, Minimize(0.5*x + 2*y) is solvable because it is linear.
The second objective you mention, Minimize(x^2 + y^2), cannot be solved with OR-Tools because it is nonlinear: the squared terms make it quadratic. To solve it you need something that can do quadratic programming, second-order cone programming, or quadratically constrained quadratic programming; all of these include linear programming as a special case. The tool I recommend for these sorts of problems is cvxpy, which offers a powerful and elegant interface. (Alternatively, you can approximate the quadratic objective with a piecewise-linear one, at the cost of many more constraints.)
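For instance, a cvxpy sketch of the quadratic objective, reusing the bounds from the OR-Tools example above:
import cvxpy as cp

x = cp.Variable()
y = cp.Variable()
constraints = [x >= 17, x <= 147, y >= 61, y <= 93]
prob = cp.Problem(cp.Minimize(cp.square(x) + cp.square(y)), constraints)
prob.solve()
print(x.value, y.value)  # roughly 17 and 61, the closest feasible point to the origin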
The last objective you mention, Minimize(c*abs(x) + y), can be solved as a linear program even though abs(x) itself is nonlinear. To do so, introduce two new variables t1, t2 >= 0 with x = t1 - t2, and rewrite the objective as min(c*(t1 + t2) + y); at the optimum, t1 + t2 equals abs(x). This works as long as c is positive and you are minimizing (or c is negative and you are maximizing). A longer explanation is here.
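A sketch of that rewrite in the same pywraplp API (the bounds and the value of c are illustrative only):
from ortools.linear_solver import pywraplp

solver = pywraplp.Solver('abs_example', pywraplp.Solver.GLOP_LINEAR_PROGRAMMING)
inf = solver.infinity()
c = 2.0  # must be positive for this trick when minimizing

x = solver.NumVar(-inf, inf, 'x')
y = solver.NumVar(0, 1000, 'y')
t1 = solver.NumVar(0, inf, 't1')  # positive part of x
t2 = solver.NumVar(0, inf, 't2')  # negative part of x

solver.Add(x == t1 - t2)  # split x into signed parts
solver.Add(x >= -5)       # toy constraints so the optimum has x < 0
solver.Add(x <= -3)
solver.Add(y >= 61)

# t1 + t2 stands in for abs(x); minimization makes it tight
solver.Minimize(c * (t1 + t2) + y)
status = solver.Solve()
if status == solver.OPTIMAL:
    print(x.solution_value(), t1.solution_value() + t2.solution_value(), y.solution_value())
    # expected: x = -3.0, |x| = 3.0, y = 61.0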
There are many such transformations you can perform and one of the skills of a mathematical programmer/operations researcher is to have many of them memorized.

How to find the maximum of a prob in PuLP

I am trying to solve a linear problem in PuLP that minimizes a cost function. The cost function depends on its own maximum value: I have a daily cost, and I am trying to minimize the monthly cost, which is the sum of the daily costs plus the maximum daily cost in the month. I don't think I'm capturing the maximum value of the function in the final solution, and I'm not sure how to go about troubleshooting this issue. The basic outline of the code is below:
# Initialize the problem to be solved
prob = LpProblem("monthly_cost", LpMinimize)
# The number of time steps
# price is a pre-existing array of variable prices
tmax = len(price)
# Time range
time = list(range(tmax))
# Price reduction at every time step
d = LpVariable.dict("d", (time), 0, 5)
# Price increase at every time step
c = LpVariable.dict("c", (time), 0, 5)
# Define revenues = price increase - price reduction + initial price
revenue = ([(c[t] - d[t] + price[t]) for t in time])
# Find maximum revenue
max_revenue = max(revenue)
# Initialize the problem
prob += sum([revenue[t]*0.0245 for t in time]) + max_revenue
# Solve the problem
prob.solve()
The variable max_revenue always equals c_0 - d_0 + price[0] even though price[0] is not the maximum of price and c_0 and d_0 both equal 0. Does anyone know how to ensure the dynamic maximum is being inserted into the problem? Thanks!
I don't think you can do the following in PuLP or any other standard LP solvers:
max_revenue = max(revenue)
This is because determining the maximum requires knowing the values of the revenue expressions, which are only available after the solver has run; so you can't extract a standard LP model this way. A true max() term in the objective is in fact non-smooth.
In such situations, you can easily reformulate the problem as follows:
max_revenue >= c[t] - d[t] + price[t]   for every t in time
This works because max_revenue is now a decision variable constrained to be at least as large as every daily revenue; since the objective minimizes it, the solver pushes it down until it equals the actual maximum. This in turn lets you extract a standard LP model from the equations. Hence, the original problem formulation gets extended with additional inequality constraints (the other constraints and the rest of the objective stay the same as before). So it could look something like this (word of caution: I have not tested this):
# Define variable
max_revenue = LpVariable("Max Revenue", 0)
# Define other variables, revenues, etc.
# Add the inequality constraints
for item in revenue:
    prob += max_revenue >= item
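For reference, here is a small self-contained version of the same idea with placeholder prices (not the asker's data), showing the max variable and the objective together:
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

price = [3.0, 5.0, 4.0]          # placeholder daily prices
time = list(range(len(price)))

prob = LpProblem("monthly_cost", LpMinimize)
d = LpVariable.dicts("d", time, 0, 5)   # price reduction
c = LpVariable.dicts("c", time, 0, 5)   # price increase
max_revenue = LpVariable("max_revenue", 0)

revenue = [c[t] - d[t] + price[t] for t in time]

# max_revenue must sit above every daily revenue; minimization pushes it down to the true max
for r in revenue:
    prob += max_revenue >= r

prob += lpSum(r * 0.0245 for r in revenue) + max_revenue

prob.solve()
print(value(max_revenue))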
I would also suggest that you have a look at scipy.optimize.linprog. PuLP writes the model to an intermediate file and then calls an installed solver to solve it. In scipy.optimize.linprog, on the other hand, it's all done in Python and should be faster. However, if your problem cannot be solved with the simplex algorithm, or you require other professional solvers (e.g. CPLEX, Gurobi, etc.), then PuLP is a good choice.
Also, see the discussion on Data Fitting (page 19) in Introduction to Linear Optimisation by Bertsimas.
Hope this helps. Cheers.
