Adding lower and upper bounds in an optimization under constraints - python

I have a portfolio of 10 assets and I want to minimize the tracking error under constraints.
One of the constraints is that 'Fixed Income' must have a weight between 0.2 and 0.3. The only way I found to write that is as two inequality constraints, but I'm sure there is a way to express the lower and upper bound with a single function. I could use some help.
Here is the code I have so far:
import numpy as np
from scipy.optimize import minimize

# Define the objective function
def objective(w):
    tracking_error = 0.5 * np.sum(((w - target_weights) / target_weights)**2)
    return tracking_error

# Define the constraints
# The sum of the weights must be equal to 1
def constraints1(w):
    return np.sum(w) - 1

# Equities must be greater than or equal to 0.2
def constraints2(w):
    level1_weights = w[df['Level 1'] == 'Equities']
    return np.sum(level1_weights) - 0.2

# Fixed Income must be smaller than or equal to 0.3 and greater than or equal to 0.2
def constraints3(w):
    level1_weights = w[df['Level 1'] == 'Fixed Income']
    return 0.3 - np.sum(level1_weights)

def constraints4(w):
    level1_weights = w[df['Level 1'] == 'Fixed Income']
    return np.sum(level1_weights) - 0.2
# Real Estate must be greater than or equal to 0.05
def constraints5(w):
    level2_weights = w[df['Level 2'] == 'Real Estate']
    return np.sum(level2_weights) - 0.05
# Solve the optimization problem
w0 = np.ones(10) / 10
bnds = [(0, 1) for i in range(df.shape[0])]
cons = [{'type': 'eq', 'fun': constraints1},
        {'type': 'ineq', 'fun': constraints2},
        {'type': 'ineq', 'fun': constraints3},
        {'type': 'ineq', 'fun': constraints4},
        {'type': 'ineq', 'fun': constraints5}]
result = minimize(objective, w0, bounds=bnds, method='SLSQP', constraints=cons)
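One way to write the Fixed Income bounds as a single constraint, sketched under the assumption that df, target_weights and the functions above are defined as shown: SLSQP treats every component of an array returned by a constraint function as a separate "must be non-negative" condition, so both bounds can live in one 'ineq' dictionary. The name fixed_income_bounds is introduced here only for illustration.

def fixed_income_bounds(w):
    total = np.sum(w[df['Level 1'] == 'Fixed Income'])
    return np.array([total - 0.2,    # lower bound: total >= 0.2
                     0.3 - total])   # upper bound: total <= 0.3

cons = [{'type': 'eq',   'fun': constraints1},
        {'type': 'ineq', 'fun': constraints2},
        {'type': 'ineq', 'fun': fixed_income_bounds},
        {'type': 'ineq', 'fun': constraints5}]
result = minimize(objective, w0, bounds=bnds, method='SLSQP', constraints=cons)

Recent SciPy versions also accept a scipy.optimize.LinearConstraint with both lb and ub (at least with the trust-constr method), which is another way to encode a two-sided bound in one object.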

Related

Optimization in SciPy for n > 1000

I am doing a constrained optimization using SciPy's optimize tool on 100K observations, but my results converge to the initial values for n > 1000. That is, if I perform the optimization on, say, 500 observations, I get a good local minimum; however, as n approaches 1000, the optimized values are the same as the initial values.
Here are my objective function and constraints:
Objective function:
def objective(x):
    return np.sum(np.array([-TU_array[i]*(NR_fn(x[i]) - LS_fn(x[i], pd_array[i])) for i in range(0, len(x))]))

# where
def NR_fn(x):
    R = (1.99525947856706E-13*x**3 - 0.000000299925*x**2 + 0.137682384749*x + 3648.729074580060)
    return R

def LS_fn(x, pd):
    L = x*pd
    return L
and the constraints are:
def constraint1(x):
    return np.sum(x) - df["creditlimit"].sum()

def constraint3(x):
    return np.sum(np.array([LS_fn(x[i], pd_array[i]) for i in range(0, len(x))])) - 525000

b = (df["creditlimit"].min(), df["creditlimit"].max())
bnds = (b,)*len(df)
con1 = {'type': 'eq', 'fun': constraint1}
# con2 = {'type': 'eq', 'fun': constraint2}
con3 = {'type': 'eq', 'fun': constraint3}
cons = [con1, con3]
x0 = np.array(df["creditlimit"])
solution = minimize(objective, x0, method='SLSQP', bounds=bnds, constraints=cons, options={'maxiter': 200, 'disp': True})
x = solution.x
Any help on how to go about this? Can I use some other tools that would help minimize my function?
I tried changing the constraints and the functional form, expecting unique values for my observations, but I still got the initial values back.
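One thing that may contribute to the stall for large n: the objective above rebuilds a Python list over every observation on each call, and SLSQP then has to estimate its gradient by finite differences, which is both slow and noisy at this scale. Below is a sketch of a vectorized objective with an analytic gradient, assuming the same TU_array, pd_array, x0, bnds and cons as above (objective_vec and objective_grad are names introduced only for this sketch):

def objective_vec(x):
    # same quantity as objective(x) above, computed with array operations
    NR = (1.99525947856706E-13*x**3 - 0.000000299925*x**2
          + 0.137682384749*x + 3648.729074580060)
    return -np.sum(TU_array * (NR - x*pd_array))

def objective_grad(x):
    # elementwise derivative: -TU_array * (NR'(x) - pd_array)
    dNR = 3*1.99525947856706E-13*x**2 - 2*0.000000299925*x + 0.137682384749
    return -TU_array * (dNR - pd_array)

solution = minimize(objective_vec, x0, jac=objective_grad, method='SLSQP',
                    bounds=bnds, constraints=cons,
                    options={'maxiter': 200, 'disp': True})

The two constraints can be vectorized in the same way; it is also worth checking that x0 actually satisfies both equality constraints before blaming the solver.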

Problems minimizing a function with 2 constraints - Python

I'm writing a program to minimize a function of several parameters subject to constraints and bounds. Just in case you want to run the program, the function is given by:
def Fnret(mins):
    Bj, Lj, a, b = mins.reshape(4, N)
    S1 = 0; S2 = 0
    Binf = np.zeros(N); Linf = np.zeros(N)
    for i in range(N):
        sbi = (Bi/2); sli = (Li/2)
        for j in range(i+1):
            sbi -= Bj[j]
            sli -= Lj[j]
        Binf[i] = sbi
        Linf[i] = sli
    for i in range(N):
        S1 += (C*(1-np.sin(a[i]))+T*np.sin(a[i])) * ((2*Bj[i]*Binf[i]+Bj[i]**2)/(np.tan(b[i])*np.cos(a[i]))) + \
              (C*(1-np.sin(b[i]))+T*np.sin(b[i])) * ((2*Bj[i]*Linf[i]+Lj[i]*Bj[i])/(np.sin(b[i])))
    S2 += (gamma*Bj[0]/(6*np.tan(b[0])))*((Bi/2)*(Li/2) + 4*(Binf[0]+Bj[0])*(Linf[0]+Lj[0]) + Binf[0]*Linf[0])
    j = 1
    while j < N:
        S2 += (gamma*Bj[j]/(6*np.tan(b[j])))*(Binf[j-1]*Linf[j-1] + 4*(Binf[j]+Bj[j])*(Linf[j]+Lj[j]) + Binf[j]*Linf[j])
        j += 1
    F = 2*(S1+S2)
    return F
where Bj, Lj, a, and b are the minimization variables, each an N-sized array, with N being an input of the program. I double-checked the function and it is working correctly. My constraints are given by:
def Brhs(mins):  # Constraint over Bj
    return np.sum(mins.reshape(4, N)[0]) - Bi

def Lrhs(mins):  # Constraint over Lj
    return np.sum(mins.reshape(4, N)[1]) - Li

cons = [{'type': 'eq', 'fun': lambda Bj: 1.0*Brhs(Bj)},
        {'type': 'eq', 'fun': lambda Lj: 1.0*Lrhs(Lj)}]
In such a way that the sum of all components of Bj must be equal to Bi (same thing with Lj). The bounds of the variables are given by:
bounds = [(0,None)]*2*N + [(0,np.pi/2)]*2*N
For the problem to be reproducible, it's important to use the following inputs:
# Inputs:
gamma = 17.
C = 85.
T = C
Li = 1.
Bi = 0.5
N = 3
For the minimization, I'm using the cyipopt library (which has an interface similar to scipy.optimize). The minimization is then given by:
from cyipopt import minimize_ipopt
x0 = np.ones(4*N) # Initial guess
res = minimize_ipopt(Fnret, x0, constraints=cons, bounds=bounds)
The problem is that the result does not obey the conditions I imposed via the constraints (i.e. the sum of the Bj or Lj values differs from Bi or Li in the results). But, for instance, if I only use one of the two constraints (over Lj or Bj), it works fine for that variable. Maybe I'm missing something when using two constraints and I can't find the error; it seems that it doesn't work with both constraints together. Any help would be truly appreciated. Thank you in advance!
P.S.: In addition, I would like the function result F to be positive as well. How can I impose this condition? Thanks!
Not a complete answer, just some hints in arbitrary order:
Your initial point x0 is not feasible because it contradicts both of your constraints. This can easily be observed by evaluating your constraints at x0. Under the hood, Ipopt typically detects this and tries to find a feasible initial point. However, it's highly recommended to provide a feasible initial point whenever possible.
Your variable bounds are not well-defined. Evaluating your objective at your bounds yields multiple divisions by zero. For example, the denominator np.tan(b[i]) is zero if and only if b[i] = 0, so 0 isn't a valid value for all of your b[i]s. Proceeding similarly for the other terms, you should obtain 0 < b[i] < pi/2 and 0 ≤ a[i] < pi/2. Here, you can model the strict inequalities by 0 + eps ≤ b[i] ≤ pi/2 - eps and 0 ≤ a[i] ≤ pi/2 - eps, where eps is a sufficiently small positive number.
If you really want to impose that the objective function is always positive, you can simply add the inequality constraint Fnret(x) >= 0, i.e. {'type': 'ineq', 'fun': Fnret}.
In Code:
# bounds
eps = 1e-8
bounds = [(0, None)]*2*N + [(0, np.pi/2 - eps)]*N + [(0 + eps, np.pi/2 - eps)]*N

# (feasible) initial guess
x0 = eps*np.ones(4*N)
x0[[0, N]] = [Bi - (N-1)*eps, Li - (N-1)*eps]

# constraints
cons = [{'type': 'eq', 'fun': Brhs},
        {'type': 'eq', 'fun': Lrhs},
        {'type': 'ineq', 'fun': Fnret}]

res = minimize_ipopt(Fnret, x0, constraints=cons, bounds=bounds, options={'disp': 5})
Last but not least, this still doesn't converge to a stationary point, so chances are that there's indeed no local minimum. From here, you can try experimenting with other (feasible!) initial points and double-check the math of your problem. It's also worth providing the exact gradient and constraint Jacobians.
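On that last point, cyipopt's minimize_ipopt mimics the scipy.optimize.minimize interface, so if your version accepts an optional 'jac' entry in the dictionary constraints (as SciPy's SLSQP does), the Jacobians of the two equality constraints are easy to write down exactly. A sketch, assuming the same N, Brhs and Lrhs as above (Brhs_jac and Lrhs_jac are names introduced only for this example; the objective gradient would still have to be derived separately or automated with an autodiff tool):

def Brhs_jac(mins):
    g = np.zeros(4*N)
    g[:N] = 1.0        # derivative of np.sum(Bj) - Bi w.r.t. the Bj block
    return g

def Lrhs_jac(mins):
    g = np.zeros(4*N)
    g[N:2*N] = 1.0     # derivative of np.sum(Lj) - Li w.r.t. the Lj block
    return g

cons = [{'type': 'eq', 'fun': Brhs, 'jac': Brhs_jac},
        {'type': 'eq', 'fun': Lrhs, 'jac': Lrhs_jac},
        {'type': 'ineq', 'fun': Fnret}]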
So, based on joni's suggestions, I could find a stationary point respecting the constraints by adopting the trust-constr method of scipy.optimize.minimize. My code now runs as follows:
import numpy as np
from scipy.optimize import minimize

# Inputs:
gamma = 17
C = 85.
T = C
Li = 2.
Bi = 1.
N = 3  # for instance

# Constraints:
def Brhs(mins):
    return np.sum(mins.reshape(4, N)[0]) - Bi/2

def Lrhs(mins):
    return np.sum(mins.reshape(4, N)[1]) - Li/2

# Function to minimize:
def Fnret(mins):
    Bj, Lj, a, b = mins.reshape(4, N)
    S1 = 0; S2 = 0
    Binf = np.zeros(N); Linf = np.zeros(N)
    for i in range(N):
        sbi = (Bi/2); sli = (Li/2)
        for j in range(i+1):
            sbi -= Bj[j]
            sli -= Lj[j]
        Binf[i] = sbi
        Linf[i] = sli
    for i in range(N):
        S1 += (C*(1-np.sin(a[i]))+T*np.sin(a[i])) * ((2*Bj[i]*Binf[i]+Bj[i]**2)/(np.tan(b[i])*np.cos(a[i]))) + \
              (C*(1-np.sin(b[i]))+T*np.sin(b[i])) * ((2*Bj[i]*Linf[i]+Lj[i]*Bj[i])/(np.sin(b[i])))
    S2 += (gamma*Bj[0]/(6*np.tan(b[0])))*((Bi/2)*(Li/2) + 4*(Binf[0]+Bj[0])*(Linf[0]+Lj[0]) + Binf[0]*Linf[0])
    j = 1
    while j < N:
        S2 += (gamma*Bj[j]/(6*np.tan(b[j])))*(Binf[j-1]*Linf[j-1] + 4*(Binf[j]+Bj[j])*(Linf[j]+Lj[j]) + Binf[j]*Linf[j])
        j += 1
    F = 2*(S1+S2)
    return F

eps = 1e-3
bounds = [(0, None)]*2*N + [(0 + eps, np.pi/2 - eps)]*2*N  # Bounds
cons = ({'type': 'ineq', 'fun': Fnret},
        {'type': 'eq', 'fun': Lrhs},
        {'type': 'eq', 'fun': Brhs})
x0 = np.ones(4*N)  # Initial guess
res = minimize(Fnret, x0, method='trust-constr', bounds=bounds, constraints=cons, tol=1e-6)
F = res.fun
Bj = (res.x).reshape(4, N)[0]
Lj = (res.x).reshape(4, N)[1]
ai = (res.x).reshape(4, N)[2]
bi = (res.x).reshape(4, N)[3]
This is essentially the same code, just changing the minimization technique. From np.sum(Bj) and np.sum(Lj) it is easy to see that the results now agree with the imposed constraints, which was not the case previously.
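A quick way to verify this on the returned solution, as a small sketch using the variables already defined above (note the constraints now compare against Bi/2 and Li/2):

print(np.sum(Bj), Bi/2)  # these should agree to within the solver tolerance
print(np.sum(Lj), Li/2)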

scipy.optimize.minimize fails with constraints for ordered solution vector

I am trying to solve a linear system with minimize (method SLSQP), with a set of constraints: the sum of the solution vector components has to be 1 (or at least very close to it, to minimize the error), and a second constraint enforces the vector components to be ordered, with x_0 being the largest and x_n having the smallest magnitude. Further, I set bounds, as each vector component has to lie between 0 and 1.
This is my code so far:
from scipy.optimize import minimize
import numpy as np
from scipy.sparse import rand
from scipy.optimize import lsq_linear
# Linear equation Ax = b
def fmin(x, A, b):
    y = np.dot(A, x) - b
    return np.dot(y, y)

# Test data
b = np.array([172, 8, 7.4, 24, 21, 0.8, 0.1])
A_t = np.array(
    [[188, 18.4, 16.5, 3.4, 2.1, 1.77, 0.075],
     [405, 0, 0, 99.8, 99.8, 0, 0.0054],
     [90.5, 0.4, 0.009, 19.7, 15.6, 1.06, 0.012],
     [322, 0, 0, 79, 79, 0.3, 0.3],
     [0, 0, 0, 0, 0, 0, 0],
     [362, 0.25, 0.009, 89.2, 0, 0.43, 0.019],
     [37, 1.4, 0.2, 7.3, 1, 4.5, 0.1],
     [26, 0.29, 0.038, 6.1, 2.4, 0.4, 0.053]])
A = A_t.T
#=========================
m = np.shape(A)[0]
n = np.shape(A)[1]
x0 = np.full(n, 0.5)
args = (A, b)
bnds = (((1*10e-100, 1),)*n)  # all x_i must be between 0 and 1
cons = [{'type': 'eq', 'fun': lambda x: 1.0 - np.sum(x)}]  # sum(x_i) = 1
# consider the ordering of the vector components as constraints
for i in range(n-1):
    cons = cons + [{'type': 'ineq', 'fun': lambda x: x[i] - x[i+1]}]
res = minimize(fmin, x0, args, method='SLSQP',
               bounds=bnds, constraints=cons, tol=1e-2, options={'disp': False})
print("\n res\n", res)
print("Sum of coefficients {}".format(np.sum(res.x)))
print("Difference vector:\n{}".format(np.dot(A, res.x) - b))
Unfortunately the algorithm fails with
res
fun: 317820.09898084006
jac: array([205597.34765625, 481389.625 , 105853.7265625 , 382592.76953125,
0. , 416196.953125 , 42268.78125 , 30196.81640625])
message: 'Positive directional derivative for linesearch'
nfev: 10
nit: 5
njev: 1
status: 8
success: False
x: array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5])
Sum of coefficients 4.0
Difference vector:
[5.4325e+02 2.3700e+00 9.7800e-01 1.2825e+02 7.8950e+01 3.4300e+00
1.8220e-01]
I would be very grateful if someone could help me to sort this out. In fact for the test data in this example I know that the right solution should be 0.58 for x_0 and 0.12 for x_2.
Many thanks in advance!
To clarify the discussion in the comments: Your objective function is nonlinear in the optimization variable x. Thus, this is a nonlinear optimization problem. The reason for the error message is quite simple: Your initial guess x0 lies outside the feasible region. You can easily verify that it doesn't fulfill your first constraint 1.0 - np.sum(x) = 0.
However, note also that you need to capture the loop variable i when creating your lambda constraint expressions in a loop. Otherwise, you add the constraint lambda x: x[n-2] - x[n-1] seven times. See, e.g. here for more details.
Creating the constraints correctly by
for i in range(n-1):
    cons = cons + [{'type': 'ineq', 'fun': lambda x, i=i: x[i] - x[i+1]}]
and providing the feasible initial guess x0 = np.ones(n)/n yields
fun: 114.90196737679031
jac: array([3550.86690235, 7911.74978828, 1787.36224174, 6290.006423 ,
0. , 7580.9283762 , 757.60069752, 528.41590595])
message: 'Optimization terminated successfully'
nfev: 66
nit: 6
njev: 6
status: 0
success: True
x: array([0.38655391, 0.08763516, 0.08763516, 0.08763516, 0.08763516,
0.08763515, 0.08763516, 0.08763516])
However, for larger problems it's highly recommended to rewrite your second constraint:
D = np.eye(n) - np.eye(n, k=1)
D[-1, -1] = 0.0
cons = [{'type': 'eq', 'fun': lambda x: 1.0 - np.sum(x)}]
cons += [{'type': 'ineq', 'fun': lambda x: D @ x}]
Now the solver only needs to evaluate 2 constraints instead of n+1 constraints in each iteration. You can further speed up the solver by providing gradients and jacobians, see this answer for more details.
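To illustrate that last point on this particular problem, here is a sketch of an analytic gradient for the fmin objective above (the name fmin_grad is introduced only for this example): since fmin computes ||Ax - b||^2, its gradient with respect to x is 2*A.T @ (A @ x - b), and it receives the same extra arguments as the objective.

def fmin_grad(x, A, b):
    # gradient of ||Ax - b||**2 with respect to x
    return 2.0 * A.T @ (np.dot(A, x) - b)

res = minimize(fmin, x0, args, jac=fmin_grad, method='SLSQP',
               bounds=bnds, constraints=cons, tol=1e-2)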

Numpy Minimize COBYLA Constraints

I am using scipy.optimize.minimize with the COBYLA method, but I seem to be unable to write the constraints properly, because when I check the values of the objective function, they do not respect those constraints.
Basically, the objective function accepts an array as an argument, which must satisfy two constraints:
Each value in the array must be greater than 0
The sum of the values must be less than 1
So far I wrote it this way:
constraints = [{'type': 'ineq', 'fun': lambda x: 1 - sum(x)},
               {'type': 'ineq', 'fun': lambda x: x[0]},
               {'type': 'ineq', 'fun': lambda x: x[1]}]
However, sometimes I get values greater than 1...
Here is an example:
from __future__ import division
from math import pow, exp
import numpy as np
from scipy.optimize import minimize

nbStudy = 3
nbCYP = 2
raucObserved = [3.98, 2.0, 0.12]
IXmat = np.matrix([[-0.98, 0], [-0.3, -0.98], [7.7, 4.2]])
NBITER = 50
estimatedCR = []
raucPred = []
varR = [0.0085, 0.0048, 0.0110]
sdR = [0.0922, 0.0692, 0.1051]

cnstrts = [{'type': 'ineq', 'fun': lambda x: 1 - sum(x)},
           {'type': 'ineq', 'fun': lambda x: x}]

def fun(CR):
    dum = []
    for i in range(nbStudy):
        crix = 0
        for j in range(nbCYP):
            crix += CR[j] * IXmat[i, j]
        raucPredicted = 1 / (1 + crix)
        dum.append(pow((np.log(raucPredicted) - np.log(raucObservedBiased[i])), 2) / varR[i])
    output = np.sum(dum)
    return output

for iter in range(NBITER):
    raucObservedBiased = []
    for k in range(nbStudy):
        raucObservedBiased.append(raucObserved[k] * exp(sdR[k] * np.random.normal()))
    initialCR = np.matrix([[(1 / nbCYP) * np.random.uniform()], [(1 / nbCYP) * np.random.uniform()]])
    output = minimize(fun, initialCR, method='COBYLA', constraints=cnstrts)
    estimatedCR.append(output.x)
Apparently this was a version problem; the issue has since been fixed. I was using Python 2.7 and SciPy 0.13...
You are not checking that the solver converged (output.success == True), and in your case it does not converge. If there is no convergence, nothing is guaranteed about the constraints.
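A minimal sketch of such a check inside the loop above, keeping only the results that COBYLA reports as successful:

output = minimize(fun, initialCR, method='COBYLA', constraints=cnstrts)
if output.success:
    estimatedCR.append(output.x)
else:
    # if the solver did not converge, neither the constraints nor the
    # objective value at output.x can be trusted
    print('iteration', iter, 'did not converge:', output.message)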

Migrating from PuLP to Scipy

I'm using PuLP to solve some minimization problems with constraints and upper and lower bounds. It is very easy and clean.
But I need to use only the SciPy and NumPy modules.
I was reading:
http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html
Constrained minimization of multivariate scalar functions
But I'm a bit lost... could some good soul post a small example like this PuLP one in SciPy?
Thanks in advance.
MM
from pulp import *
'''
Minimize 1.800A + 0.433B + 0.180C
Constraint 1A + 1B + 1C = 100
Constraint 0.480A + 0.080B + 0.020C >= 24
Constraint 0.744A + 0.800B + 0.142C >= 76
Constraint 1C <= 2
'''
...
Consider the following:
import numpy as np
import scipy.optimize as opt

# Some variables
cost = np.array([1.800, 0.433, 0.180])
p = np.array([0.480, 0.080, 0.020])
e = np.array([0.744, 0.800, 0.142])

# Our function
fun = lambda x: np.sum(x*cost)

# Our conditions
cond = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 100},
        {'type': 'ineq', 'fun': lambda x: np.sum(p*x) - 24},
        {'type': 'ineq', 'fun': lambda x: np.sum(e*x) - 76},
        {'type': 'ineq', 'fun': lambda x: -1*x[2] + 2})

bnds = ((0, 100), (0, 100), (0, 100))
guess = [20, 30, 50]
opt.minimize(fun, guess, method='SLSQP', bounds=bnds, constraints=cond)
It should be noted that eq constraints must evaluate to zero, while ineq constraints are considered satisfied when their function value is non-negative.
We obtain:
status: 0
success: True
njev: 4
nfev: 21
fun: 97.884100000000345
x: array([ 40.3, 57.7, 2. ])
message: 'Optimization terminated successfully.'
jac: array([ 1.80000019, 0.43300056, 0.18000031, 0. ])
nit: 4
Double check the equalities:
output = np.array([ 40.3, 57.7, 2. ])
np.sum(output) == 100
True
round(np.sum(p*output),8) >= 24
True
round(np.sum(e*output),8) >= 76
True
The rounding is needed because of double-precision floating-point errors:
np.sum(p*output)
23.999999999999996
