I am trying to build this kind of problem without adding the equality constraints explicitly. Avoiding equality constraints lets me write the problem in standard matrix form and apply other algorithms to it, which would be a simplification.
import gurobipy as gp

def main_building():
    m = gp.Model("building")
    m.reset()  # Reset the problem, keep options.

    PelHP = [m.addVar(name='PelHP_%d' % t) for t in range(T)]              # Electric power of the heat pump
    PthFC = [m.addVar(name='PthFC_%d' % t) for t in range(T)]              # Thermal power given by the fan coil
    T_ST_BMS = [m.addVar(name='T_ST_BMS_%d' % t) for t in range(T)]        # Storage temperature
    T_build_BMS = [m.addVar(name='T_build_BMS_%d' % t) for t in range(T)]  # Building temperature

    # Known initial temperatures (in Kelvin) replace the first variables
    T_ST_BMS[0] = T_ST0 + 273.15
    T_build_BMS[0] = T_BMS0 + 273.15

    # Substitute the storage dynamics directly instead of adding equality constraints
    for t in range(T - 1):
        T_ST_BMS[t+1] = ((COP_HP*PelHP[t] - PthFC[t]/eta_FC)/C_ST)*dt + T_ST_BMS[t]

    # Storage temperature bounds
    for t in range(T - 1):
        m.addConstr(T_ST_BMS[t+1] <= 273.15 + 60)
        m.addConstr(-(273.15 + 30) >= -T_ST_BMS[t+1])

    Objective_b = 0
    for t in range(T):
        m.addConstr(PelHP[t] <= PelHP_max)
        m.addConstr(PthFC[t] <= eta_FC*m_air*cp_air*(T_ST_BMS[t] - T_build_BMS[t]))
        m.addConstr(0 >= -PelHP[t])
        m.addConstr(0 >= -PthFC[t])
        Objective_b = Objective_b + dt*(C_buy*PelHP[t] + (T_build_BMS[t] - T_obj)**2)

    # Set objective:
    m.setObjective(Objective_b, gp.GRB.MINIMIZE)
    m.optimize()
    return m
Gurobi is able to solve it with T = 10 time steps, but when I increase T, Gurobi hangs. Can anyone help me? The point is that the expression for T_ST_BMS grows longer and longer with t.
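For reference, here is a minimal sketch of the standard formulation I am trying to avoid, where the storage temperature stays a decision variable and the dynamics are added as explicit equality constraints (parameter names and temperature bounds follow the code above; the building temperature and the objective are omitted):

def main_building_with_equalities():
    # Sketch only: same dynamics, but written with one equality constraint per
    # time step, so every T_ST_BMS[t] is a single Gurobi variable instead of a
    # linear expression that keeps growing with t.
    m = gp.Model("building_eq")
    PelHP = m.addVars(T, lb=0.0, ub=PelHP_max, name="PelHP")
    PthFC = m.addVars(T, lb=0.0, name="PthFC")
    T_ST_BMS = m.addVars(T, lb=273.15 + 30, ub=273.15 + 60, name="T_ST_BMS")

    m.addConstr(T_ST_BMS[0] == T_ST0 + 273.15)
    m.addConstrs((T_ST_BMS[t+1] == T_ST_BMS[t]
                  + ((COP_HP*PelHP[t] - PthFC[t]/eta_FC)/C_ST)*dt
                  for t in range(T - 1)), name="storage_dynamics")
    return m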
I'm looking to write code that lets me set risk budget constraints on individual positions in a portfolio, i.e. each position should contribute a set amount of risk to the portfolio. I want to do it specifically in CVXPY, as I have noticed that SciPy sometimes breaks the constraints.
I have the code below, and I was wondering if you could give me some direction, as I have encountered the CVXPY "The objective is not DCP" error and I'm not sure how to correct it.
Please also let me know if I should rephrase my question to make it clearer.
import cvxpy as cp
import numpy as np

# covmat is a (31, 31) numpy array, a covariance matrix calculated from monthly returns
risk_budget = np.repeat(1/31, 31).reshape(-1, 1)

def risk_budget_objective(risk_budget, covmat):
    n = covmat.shape[0]
    # equal-weight fallback
    equal_wts = np.repeat(1 / n, n)
    # weights as a column vector
    wts = cp.Variable((n, 1))
    constraints = [cp.sum(wts) == 1.0]  # weights sum to one
    port_variance = cp.square(cp.quad_form(wts, covmat))  # portfolio variance, not volatility
    mrc = covmat @ wts * 12  # marginal risk contribution (vector), annualised
    risk_contrib = cp.multiply(mrc, wts) / port_variance  # risk contribution of each position
    mean_square_diff = cp.sum(cp.square(risk_contrib - risk_budget))  # squared differences, summed
    prob = cp.Problem(cp.Minimize(mean_square_diff), constraints)  # minimise the squared difference
    prob.solve(solver=cp.SCS)
    if prob.status not in ["infeasible", "unbounded"]:
        solution = wts.value
        return solution
    else:
        print('Problem not feasible... resorting to equal weight...')
        return equal_wts
Looking into the objects, it seems that the problem lies in the following code:
risk_contrib = cp.multiply(mrc, wts) / port_variance
as its curvature is "UNKNOWN" rather than convex.
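One quick way to confirm where DCP fails is to inspect the curvature of each sub-expression directly. This is a minimal sketch using CVXPY's curvature attribute and is_dcp() method; the covariance matrix here is just a random positive semidefinite placeholder:

import cvxpy as cp
import numpy as np

# Placeholder covariance matrix, only used to inspect curvature
n = 31
A = np.random.randn(n, n)
covmat = A @ A.T
risk_budget = np.repeat(1/n, n).reshape(-1, 1)

wts = cp.Variable((n, 1))
port_variance = cp.square(cp.quad_form(wts, covmat))
mrc = covmat @ wts * 12
risk_contrib = cp.multiply(mrc, wts) / port_variance

print(cp.multiply(mrc, wts).curvature)  # product of two expressions in wts
print(risk_contrib.curvature)           # "UNKNOWN", as observed in the question
print(risk_contrib.is_dcp())            # False -> "The objective is not DCP"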
So I am currently trying to optimize the energy costs of a household. The optimization is based on a cost-factor function which I am trying to minimize.
from pyomo.environ import *

model = ConcreteModel()
model.t = RangeSet(0, 8759)

def costs(model):
    return sum(model.cost_factor[t] * model.elec_grid[t] for t in model.t)
model.costs = Objective(rule = costs, sense = minimize)
PV overproduction can occur, and I try to handle it with these components:
model.elec_consumption = Param(model.t, initialize = df['Consumption'])
model.pv = Param(model.t, initialize = df['PV'])
model.excess_pv = Var(model.t, within = NonNegativeReals, initialize = 0)
model.demand = Var(model.t, initialize = 0, within = NonNegativeReals)

def pv_overproduction(model, t):
    return model.excess_pv[t] >= model.pv[t] - model.demand[t]
model.pv_overproduction = Constraint(model.t, rule = pv_overproduction)

def lastdeckung(model, t):
    return (model.pv[t] - model.excess_pv[t]) + model.elec_grid[t] == model.demand[t]
model.lastdeckung = Constraint(model.t, rule = lastdeckung)
The problem is that when the cost factor is negative, the optimizer pushes model.excess_pv very high so that it can crank up the model.elec_grid variable in an effort to minimize the cost.
That is obviously not the intention, but so far I wasn't able to find a better way to calculate the excess PV. An easy fix would technically be to use a cost factor that is always positive, but sadly that's not an option.
I'd appreciate it if someone had an idea of how to fix this.
The basic goal is to maximize the usage of the PV electricity in order to reduce costs. At some points there is too much PV in the system, so for the optimization to still work I need to get rid of the excess.
def demand_rule(model, t):
    return model.demand[t] == model.elec_consumption[t]
model.demand_rule = Constraint(model.t, rule = demand_rule)
This is the demand. Technically there are more constraints, but they are irrelevant for this problem. The main issue is that this constraint doesn't work as intended because the cost factor is sometimes negative:
model.excess_pv[t] >= model.pv[t] - model.demand[t]
model.excess_pv as well as model.demand are variables, whereas model.pv is a parameter.
As far as I got in my search, I need to change my overproduction constraint so that it uses the value of pv - excess_pv if that value is > 0, and is zero if the value is <= 0.
I think the easiest way to do this is probably to just penalize excess production more heavily than the most negative cost factor.
Why can't you...
# use max(..., 0) to prevent odd behavior if there is no negative cost,
# which would otherwise lead to a negative penalty...
excess_penalty = max(-min(cost_factor) + epsilon, 0)  # cost_factor: the raw data behind model.cost_factor

# build the objective from components, so we can inspect the true cost (w/o penalty) later...
cost = sum(model.cost_factor[t] * model.elec_grid[t] for t in model.t)
overproduction_penalty = sum(excess_penalty * model.excess_pv[t] for t in model.t)
model.obj = Objective(expr= cost + overproduction_penalty, sense = minimize)
and later if you want the cost independently, you can just check the value of cost, which is a legal pyomo expression.
value(cost)
I think you could also add the expression as a model component, if that is important...
model.cost = ...
model.overproduction_penalty = ...
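For instance, a minimal sketch of that idea using Pyomo's Expression component (model.cost_factor, model.elec_grid, model.excess_pv and excess_penalty are assumed to be defined as in the snippets above):

# Store the cost and the penalty as named components, build the objective
# from them, and read each part back after solving.
model.cost = Expression(expr=sum(model.cost_factor[t] * model.elec_grid[t] for t in model.t))
model.overproduction_penalty = Expression(expr=sum(excess_penalty * model.excess_pv[t] for t in model.t))
model.obj = Objective(expr=model.cost + model.overproduction_penalty, sense=minimize)

# after solving:
# print(value(model.cost))                    # true cost, without the penalty
# print(value(model.overproduction_penalty))  # contribution of the penalty term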
So the idea of a piecewise function is definitely an option for the problem mentioned in this post, but it is quite a fancy and complicated solution. The idea of penalties is much easier, and it also exposed a few more flaws in my code. Due to the negative cost factor the optimizer tries to maximize grid input, which is not wrong, but when some variables are not capped the optimizer uses electricity with no efficiency whatsoever. So the easiest way, as mentioned earlier, is to penalize the grid import from the beginning, so that there are no negative cost factors during the optimization.
I'm facing a problem while trying to implement the coupled differential equation below (also known as single-mode coupling equation) in Python 3.8.3. As for the solver, I am using Scipy's function scipy.integrate.solve_bvp, whose documentation can be read here. I want to solve the equations in the complex domain, for different values of the propagation axis (z) and different values of beta (beta_analysis).
The problem is that it is extremely slow (not manageable) compared with an equivalent implementation in Matlab using the functions bvp4c, bvpinit and bvpset. Evaluating the first few iterations of both executions, they return the same result, except for the resulting mesh which is a lot greater in the case of Scipy. The mesh sometimes even saturates to the maximum value.
The equation to be solved is implemented in the coupling_equation function below, along with the boundary conditions function.
import h5py
import numpy as np
from scipy import integrate

def coupling_equation(z_mesh, a):
    ka_z = k  # Global
    z_a = z   # Global
    a_p = np.empty_like(a).astype(complex)
    for idx, z_i in enumerate(z_mesh):
        beta_zf_i = np.interp(z_i, z_a, beta_zf)  # Get beta at the desired point of the mesh
        ka_z_i = np.interp(z_i, z_a, ka_z)        # Get ka at the desired point of the mesh
        coupling_matrix = np.empty((2, 2), complex)
        coupling_matrix[0] = [-1j * beta_zf_i, ka_z_i]
        coupling_matrix[1] = [ka_z_i, 1j * beta_zf_i]
        a_p[:, idx] = np.matmul(coupling_matrix, a[:, idx])  # Apply the coupling matrix
    return a_p

def boundary_conditions(a_a, a_b):
    return np.hstack(((a_a[0] - 1), a_b[1]))
Moreover, I couldn't find a way to pass k, z and beta_zf as arguments of the function coupling_equation, given that the fun argument of the solve_bvp function must be a callable with the parameters (x, y). My approach is to define some global variables, but I would appreciate any help on this too if there is a better solution.
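On that last point, one common workaround (a sketch, reusing the names from the code above) is to give coupling_equation explicit parameters and bind them with functools.partial or a lambda, so that solve_bvp still receives a callable of the form fun(x, y):

from functools import partial

def coupling_equation_params(z_mesh, a, k, z, beta_zf):
    # Same computation as coupling_equation, but k, z and beta_zf are arguments
    a_p = np.empty_like(a).astype(complex)
    for idx, z_i in enumerate(z_mesh):
        beta_zf_i = np.interp(z_i, z, beta_zf)
        ka_z_i = np.interp(z_i, z, k)
        coupling_matrix = np.array([[-1j * beta_zf_i, ka_z_i],
                                    [ka_z_i, 1j * beta_zf_i]])
        a_p[:, idx] = coupling_matrix @ a[:, idx]
    return a_p

# Bind the extra data; solve_bvp only ever sees a two-argument callable
fun = partial(coupling_equation_params, k=k, z=z, beta_zf=beta_zf)
# equivalently: fun = lambda x, y: coupling_equation_params(x, y, k, z, beta_zf)

a = integrate.solve_bvp(fun=fun, bc=boundary_conditions, x=mesh, y=a_init,
                        max_nodes=max_mesh, verbose=1)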
The analysis function which I am trying to code is:
def analysis(k, z, beta_analysis, max_mesh):
    s11_analysis = np.empty_like(beta_analysis, dtype=complex)
    s21_analysis = np.empty_like(beta_analysis, dtype=complex)

    initial_mesh = np.linspace(z[0], z[-1], 10)  # Initial mesh of 10 samples along L
    mesh = initial_mesh

    # a_init must be complex in order to solve the problem in a complex domain
    a_init = np.vstack((np.ones(np.size(initial_mesh)).astype(complex),
                        np.zeros(np.size(initial_mesh)).astype(complex)))

    for idx, beta in enumerate(beta_analysis):
        print(f"Iteration {idx}: beta_analysis = {beta}")

        global beta_zf
        beta_zf = beta * np.ones(len(z))  # Global variable so as to use it in coupling_equation(x, y)

        a = integrate.solve_bvp(fun=coupling_equation,
                                bc=boundary_conditions,
                                x=mesh,
                                y=a_init,
                                max_nodes=max_mesh,
                                verbose=1)
        # mesh = a.x    # Mesh for the next iteration
        # a_init = a.y  # Initial guess for the next iteration, corresponding to the current solution

        s11_analysis[idx] = a.y[1][0]
        s21_analysis[idx] = a.y[0][-1]

    return s11_analysis, s21_analysis
I suspect that the problem has something to do with the initial guess that is being passed to the different iterations (see commented lines inside the loop in the analysis function). I try to set the solution of an iteration as the initial guess for the following (which must reduce the time needed for the solver), but it is even slower, which I don't understand. Maybe I missed something, because it is my first time trying to solve differential equations.
The parameters used for the execution are the following:
f2 = h5py.File(r'path/to/file', 'r')
k = np.array(f2['k']).squeeze()
z = np.array(f2['z']).squeeze()
f2.close()

analysis_points = 501
max_mesh = 1e6

beta_0 = 3e2
beta_low = 0       # Lower value of the frequency for the analysis
beta_up = beta_0   # Upper value of the frequency for the analysis
beta_analysis = np.linspace(beta_low, beta_up, analysis_points)

s11_analysis, s21_analysis = analysis(k, z, beta_analysis, max_mesh)
Any ideas on how to improve the performance of these functions? Thank you all in advance, and sorry if the question is not well-formulated, I accept any suggestions about this.
Edit: Added some information about performance and sizing of the problem.
In practice, I can't find a relation that determines the number of times coupling_equation is called; it must be a matter of the internal operation of the solver. I checked the number of calls in one iteration by printing a line, and it was called on 133 occasions (this was one of the fastest). This must be multiplied by the number of iterations over beta. For the iteration analysed, the solver returned this:
Solved in 11 iterations, number of nodes 529.
Maximum relative residual: 9.99e-04
Maximum boundary residual: 0.00e+00
The shapes of a and z_mesh are correlated, since z_mesh is a vector whose length corresponds with the size of the mesh, recalculated by the solver each time it calls coupling_equation. Given that a contains the amplitudes of the progressive and regressive waves at each point of z_mesh, the shape of a is (2, len(z_mesh)).
In terms of computation times, I only managed to complete 19 iterations in about 2 hours with Python. In this case, the initial iterations were faster, but they start to take more time as their mesh grows, until the point where the mesh saturates to the maximum allowed value. I think this is because of the value of the input coupling coefficients at that point, because it also happens when no loop over beta_analysis is executed (just the solve_bvp call for the intermediate value of beta). Matlab, instead, managed to return a solution for the entire problem in approximately 6 minutes. If I pass the result of the last iteration as the initial guess (commented lines in the analysis function), the mesh overflows even faster and it is impossible to get more than a couple of iterations.
Based on semi-random inputs, we can see that max_mesh is sometimes reached. This means that coupling_equation can be called with a quite big z_mesh and a arrays. The problem is that coupling_equation contains a slow pure-Python loop iterating on each column of the arrays. You can speed the computation up a lot using Numpy vectorization. Here is an implementation:
def coupling_equation_fast(z_mesh, a):
    ka_z = k  # Global
    z_a = z   # Global
    a_p = np.empty(a.shape, dtype=np.complex128)

    beta_zf_i = np.interp(z_mesh, z_a, beta_zf)  # Get beta at the desired points of the mesh
    ka_z_i = np.interp(z_mesh, z_a, ka_z)        # Get ka at the desired points of the mesh

    # Fast manual matrix multiplication
    a_p[0] = (-1j * beta_zf_i) * a[0] + ka_z_i * a[1]
    a_p[1] = ka_z_i * a[0] + (1j * beta_zf_i) * a[1]
    return a_p
This code provides a similar output with semi-random inputs compared to the original implementation but is roughly 20 times faster on my machine.
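A quick way to sanity-check the rewrite is a comparison on semi-random inputs (a sketch; the globals k, z and beta_zf are assumed to be set as in the question):

# Compare the loop version and the vectorized version on semi-random complex inputs
rng = np.random.default_rng(0)
m = 200                               # arbitrary mesh size for the check
z_mesh = np.linspace(z[0], z[-1], m)
a_test = rng.standard_normal((2, m)) + 1j * rng.standard_normal((2, m))

assert np.allclose(coupling_equation(z_mesh, a_test),
                   coupling_equation_fast(z_mesh, a_test))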
Furthermore, I do not know if max_mesh happens to be big with your inputs too and even if this is normal/intended. It may make sense to decrease the value of max_mesh in order to reduce the execution time even more.
The question is what it says on the tin. I'm trying to numerically solve a boundary value problem, and my friend is asking me whether the solver would work for these conditions. The page for the solver only gives conditions of the form bc(y(a), y(b), p) = 0, but our problem has y(0) = some constant value and y'(b) = 0, giving a Neumann condition at b. Would you need to rewrite the function as a first-order reduction, like in the shooting method?
I suppose that you are solving some second order ODE
y''(t) = F(t,y(t), y'(t))
For the first order system use the state vector u = [y, v] = [y, y']. Then the minimum to call the BVP solver is
def ode(t, u):
    y, v = u
    return [v, F(t, y, v)]

def bc(u0, ub):
    return [u0[0] - y0, ub[1]]

y0 = someconstantvalue
t_init = [0, b]
u_init = [[y0, y0], [0, 0]]

res = solve_bvp(ode, bc, t_init, u_init)
Always check res.success, or at least print out res.message. You might need a better initial guess; for non-linear problems, different initial guesses can give different solutions. res.sol contains the solution as an interpolating function; for generating plots it is better to use this function than the sparse collection of internal steps in res.x and res.y.
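For illustration, here is a self-contained sketch of this recipe with a made-up right-hand side F(t, y, y') = -y and b = 1 (both are just assumptions to make the example runnable):

import numpy as np
from scipy.integrate import solve_bvp

y0 = 1.0   # assumed constant value for y(0)
b = 1.0    # assumed right end of the interval

def F(t, y, v):
    # Hypothetical right-hand side, only for demonstration: y'' = -y
    return -y

def ode(t, u):
    y, v = u
    return np.vstack([v, F(t, y, v)])

def bc(u0, ub):
    # y(0) = y0 and y'(b) = 0
    return np.array([u0[0] - y0, ub[1]])

t_init = np.linspace(0, b, 5)
u_init = np.zeros((2, t_init.size))
u_init[0] = y0                      # crude initial guess: y = y0, y' = 0

res = solve_bvp(ode, bc, t_init, u_init)
print(res.message)                  # always check res.success / res.message

t_plot = np.linspace(0, b, 101)
y_plot = res.sol(t_plot)[0]         # dense evaluation via the interpolating solution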
I am trying to allocate customers Ci to financial advisers Pj. Each customer has a policy value xi. I'm assuming that the number of customers (n) allocated to each adviser is the same, and that the same customer cannot be assigned to multiple advisers. Therefore each adviser will have an allocation of policy values like so:
P1 = [x1, x2, x3], P2 = [x4, x5, x6], P3 = [x7, x8, x9]
I am trying to find the optimal allocation to minimise dispersion in fund value between the advisers. I am defining dispersion as the difference between the adviser with the highest fund value (z_max) and the lowest fund value (z_min).
The formulation uses binary variables y_ij, where y_ij = 1 if we allocate customer Ci to adviser Pj and 0 otherwise; the full model is written out after the constraint descriptions below.
The first constraint says that z_max has to be greater than or equal to each adviser's total allocated policy value; since the objective function encourages smaller values of z_max, this means that z_max will equal the largest total. Similarly, the second constraint sets z_min equal to the smallest total. The third constraint says that each customer must be assigned to exactly one adviser. The fourth says that each adviser must have exactly n customers assigned to them.
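Written out, my reconstruction of that formulation is:

\begin{aligned}
\min \quad & z_{\max} - z_{\min} \\
\text{s.t.} \quad & \textstyle\sum_{i} x_i\, y_{ij} \le z_{\max} && \forall j \\
& \textstyle\sum_{i} x_i\, y_{ij} \ge z_{\min} && \forall j \\
& \textstyle\sum_{j} y_{ij} = 1 && \forall i \\
& \textstyle\sum_{i} y_{ij} = n && \forall j \\
& y_{ij} \in \{0, 1\} && \forall i, j
\end{aligned}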
I have a working solution using the optimization package: PuLP that finds the optimal allocation.
import random
import pulp
import time

# DATA
n = 5   # number of customers for each financial adviser
c = 25  # number of customers
p = 5   # number of financial advisers
policy_values = random.sample(range(1, 1000000), c)  # randomly generated policy values

# INDEXES
set_I = range(c)
set_J = range(p)
set_N = range(n)
x = {i: policy_values[i] for i in set_I}  # customer policy values
y = {(i, j): random.randint(0, 1) for i in set_I for j in set_J}  # allocation dummies

# DECISION VARIABLES
model = pulp.LpProblem("Allocation_Model", pulp.LpMinimize)
y_sum = {}
y_vars = pulp.LpVariable.dicts('y_vars', ((i, j) for i in set_I for j in set_J),
                               lowBound=0, upBound=1, cat=pulp.LpInteger)
z_max = pulp.LpVariable("Max_Policy_Value")
z_min = pulp.LpVariable("Min_Policy_Value")

for j in set_J:
    y_sum[j] = pulp.lpSum([y_vars[i, j] * x[i] for i in set_I])

# OBJECTIVE FUNCTION
model += z_max - z_min

# CONSTRAINTS
for j in set_J:
    model += pulp.lpSum([y_vars[i, j] for i in set_I]) == n
    model += y_sum[j] <= z_max
    model += y_sum[j] >= z_min

for i in set_I:
    model += pulp.lpSum([y_vars[i, j] for j in set_J]) == 1

# SOLVE MODEL
start = time.perf_counter()
model.solve()
print('Optimised model status: ' + str(pulp.LpStatus[model.status]))
print('Time elapsed: ' + str(time.perf_counter() - start))
Note that I have implemented constraints 1 and 2 slightly differently: I introduce an auxiliary expression y_sum[j] for each adviser's total policy value, to avoid duplicating an expression with a large number of nonzero elements.
The problem
The issue is that for larger values of n,p and c the model takes far too long to optimise. Is it possible to make any changes to how I've implemented the objective function/constraints to make the solution faster?
Try using a commercial solver like Gurobi with PuLP. You should get a substantial decrease in solve time.
Also check your computer's memory; if any solver runs out of memory and starts paging to disk, the solve time will be very long.
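For example, a minimal sketch of swapping the solver in PuLP (assuming Gurobi is installed and licensed; the msg/timeLimit options on the bundled CBC solver are available in recent PuLP versions):

# Option 1: hand the same model to Gurobi via pulp's command-line wrapper
model.solve(pulp.GUROBI_CMD())

# Option 2: keep the bundled CBC solver, but show its log and cap the runtime
model.solve(pulp.PULP_CBC_CMD(msg=True, timeLimit=300))

print(pulp.LpStatus[model.status])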
You should monitor the time needed for each part of the program (model declaration and solving).
If the solving is too long, you can use a different solver as suggested above (here is some guidance on how to do it: https://coin-or.github.io/pulp/guides/how_to_configure_solvers.html).
If the model declaration is too long, you may have to optimise your code (for example, try to use the PuLP functions such as pulp.lpSum rather than Python's sum). You can also find some tricks here https://groups.google.com/g/pulp-or-discuss/c/p1N2fkVtYyM and here https://github.com/IBMDecisionOptimization/docplex-examples/blob/master/examples/mp/jupyter/efficient.ipynb