Maximizing Convexity of a Bond Portfolio - python

I'm new to the community and finally, here I am for my first question!
I'm stuck on a maximization problem (the full problem statement, "Problem Set", was attached as an image).
My variables are set in this way:
import numpy as np
from scipy.optimize import minimize
n_bond = 4
# Vector of Present Value for each Bond
pv_vec = [ 115.34755980259271, 100.95922248766202,
93.87539028309668, 93.63295880149812
]
# Vector of Duration for each Bond
dur_vec = [4.880937192589091, 5.1050167320620625, 3.713273768775609, 1.0]
# Vector of Convexity for each Bond
conv_vec = [31.66708597860361, 33.73613393551434, 18.098806559076504, 2.0]
# Initial guess
alpha = np.random.random(n_bond)
alpha = alpha / np.sum(alpha)
W0 = 1000 # Initial wealth
K = 3.5 # Target Duration
def obj(self):
    return sum(conv_vec[i]*(pv_vec[i]*alpha[i])/W0 for i in range(n_bond))*(-1)
def cons1(self):
    return sum(pv_vec[i]*alpha[i] for i in range(n_bond)) - W0
def cons2(self):
    return sum(dur_vec[i]*(pv_vec[i]*alpha[i])/W0 for i in range(n_bond)) - K
cons = ({'type': 'eq', 'fun': cons1},
        {'type': 'eq', 'fun': cons2})
non_neg = []
for i in range(n_bond):
    non_neg.append((0, None))
non_neg = tuple(non_neg)
solution = minimize(fun=obj,
                    x0=alpha,
                    method='SLSQP',
                    bounds=non_neg,
                    constraints=cons,
                    options={'disp': True, 'ftol': 1e-20, 'maxiter': 1000})
And here is the error message from the minimization:
Singular matrix C in LSQ subproblem (Exit mode 6)
Current function value: -2.5052545159234914
Iterations: 1
Function evaluations: 6
Gradient evaluations: 1
fun: -2.5052545159234914
jac: array([0.00000, 0.00000, 0.00000, 0.00000])
message: 'Singular matrix C in LSQ subproblem'
nfev: 6
nit: 1
njev: 1
status: 6
success: False
x: array([0.27557, 0.38912, 0.07314, 0.26217])
I really need your wisdom!
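A note on the likely cause (a sketch, not a verified fix): the all-zero jac above suggests that obj, cons1 and cons2 ignore the vector SLSQP passes in and always read the global alpha, so the solver sees a constant objective and constant constraints, which makes the LSQ subproblem singular. Rewritten to use the argument, and reusing the variables defined above, they might look like this:
def obj(x):
    # negative portfolio convexity (minimize the negative to maximize)
    return -sum(conv_vec[i]*pv_vec[i]*x[i] for i in range(n_bond)) / W0
def cons1(x):
    # budget: market value of the holdings equals the initial wealth
    return sum(pv_vec[i]*x[i] for i in range(n_bond)) - W0
def cons2(x):
    # portfolio duration equals the target K
    return sum(dur_vec[i]*pv_vec[i]*x[i] for i in range(n_bond)) / W0 - K
The rest of the setup can stay as it is; a starting point scaled so the budget constraint roughly holds, e.g. alpha * W0 / np.dot(pv_vec, alpha), may also help SLSQP.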

Related

scipy.optimize.minimize fails with constraints for ordered solution vector

I am trying to solve a linear system with minimize (method='SLSQP'), subject to a set of constraints: the sum of the solution vector's components has to be 1 (or at least very close to it, to minimize the error), and a second constraint enforces the vector components to be ordered, with x_0 being the largest and x_n having the smallest magnitude. Further, I set bounds, as each vector component has to lie between 0 and 1.
This is my code so far:
from scipy.optimize import minimize
import numpy as np
from scipy.sparse import rand
from scipy.optimize import lsq_linear
#Linear equation Ax = b
def fmin(x, A, b):
    y = np.dot(A, x) - b
    return np.dot(y, y)
#Test data
b = np.array([172,8,7.4,24,21,0.8,0.1])
A_t = np.array(
[[188,18.4,16.5,3.4,2.1,1.77,0.075],
[405,0,0,99.8,99.8,0,0.0054],
[90.5,0.4,0.009,19.7,15.6,1.06,0.012],
[322,0,0,79,79,0.3,0.3],
[0,0,0,0,0,0,0],
[362,0.25,0.009,89.2,0,0.43,0.019],
[37,1.4,0.2,7.3,1,4.5,0.1],
[26,0.29,0.038,6.1,2.4,0.4,0.053]])
A = A_t.T
#=========================
m = np.shape(A)[0]
n = np.shape(A)[1]
x0 = np.full(n, 0.5)
args = (A,b)
bnds = (( (1*10e-100, 1), )*n) #all x_i must be between 0 and 1
cons = [{'type': 'eq', 'fun': lambda x: 1.0-np.sum(x) }] #sum(x_i) = 1
#consider order of vectors as constraints
for i in range(n-1):
    cons = cons + [{'type': 'ineq', 'fun': lambda x: x[i] - x[i+1]}]
res = minimize(fmin, x0, args, method='SLSQP',
bounds=bnds,constraints=cons,tol=1e-2,options={'disp': False})
print ("\n res\n", res)
print("Sum of coefficients {}".format(np.sum(res.x)))
print("Difference vector:\n{}".format(np.dot(A,res.x) - b))
Unfortunately the algorithm fails with
res
fun: 317820.09898084006
jac: array([205597.34765625, 481389.625 , 105853.7265625 , 382592.76953125,
0. , 416196.953125 , 42268.78125 , 30196.81640625])
message: 'Positive directional derivative for linesearch'
nfev: 10
nit: 5
njev: 1
status: 8
success: False
x: array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5])
Sum of coefficients 4.0
Difference vector:
[5.4325e+02 2.3700e+00 9.7800e-01 1.2825e+02 7.8950e+01 3.4300e+00
1.8220e-01]
I would be very grateful if someone could help me to sort this out. In fact for the test data in this example I know that the right solution should be 0.58 for x_0 and 0.12 for x_2.
Many thanks in advance!
To clarify the discussion in the comments: Your objective function is nonlinear in the optimization variable x. Thus, this is a nonlinear optimization problem. The reason for the error message is quite simple: Your initial guess x0 lies outside the feasible region. You can easily verify that it doesn't fulfill your first constraint 1.0 - np.sum(x) = 0.
However, note also that you need to capture the loop variable i when creating your lambda constraint expressions in a loop. Otherwise, you add the constraint lambda x: x[n-2] - x[n-1] seven times. See, e.g. here for more details.
Creating the constraints correctly by
for i in range(n-1):
    cons = cons + [{'type': 'ineq', 'fun': lambda x, i=i: x[i] - x[i+1]}]
and providing the feasible initial guess x0 = np.ones(n)/n yields
fun: 114.90196737679031
jac: array([3550.86690235, 7911.74978828, 1787.36224174, 6290.006423 ,
0. , 7580.9283762 , 757.60069752, 528.41590595])
message: 'Optimization terminated successfully'
nfev: 66
nit: 6
njev: 6
status: 0
success: True
x: array([0.38655391, 0.08763516, 0.08763516, 0.08763516, 0.08763516,
0.08763515, 0.08763516, 0.08763516])
However, for larger problems it's highly recommended to rewrite your second constraint in vectorized form:
D = np.eye(n) - np.eye(n, k=1)
D[-1, -1] = 0.0
cons = [{'type': 'eq', 'fun': lambda x: 1.0-np.sum(x) }]
cons += [{'type': 'ineq', 'fun': lambda x: D @ x}]
Now the solver only needs to evaluate 2 constraints instead of n+1 constraints in each iteration. You can further speed up the solver by providing gradients and jacobians, see this answer for more details.
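For illustration only, here is a sketch of what user-supplied derivatives could look like for this problem (the gradient formula follows from fmin being ||Ax - b||^2; the constraint Jacobians are constant):
def fmin_grad(x, A, b):
    # gradient of ||Ax - b||^2 with respect to x is 2 * A^T (Ax - b)
    return 2.0 * A.T @ (A @ x - b)
cons = [{'type': 'eq',   'fun': lambda x: 1.0 - np.sum(x), 'jac': lambda x: -np.ones(len(x))},
        {'type': 'ineq', 'fun': lambda x: D @ x,           'jac': lambda x: D}]
res = minimize(fmin, np.ones(n)/n, args=(A, b), jac=fmin_grad,
               method='SLSQP', bounds=bnds, constraints=cons, tol=1e-2)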

How to define a constraint greater than but not equal to zero in Scipy SLSQP

I'm trying to maximize a function of the form x1/x2. I don't want x2 to go to zero, so I'm defining my constraint as x2 > 0. But the Scipy SLSQP method does not take that into consideration and says 'Inequality constraints incompatible'. The Scipy documentation states all inequality constraints are non-negative; how do I allow only values strictly greater than zero?
Edit: here is the function I have to minimize
def f(x):
    T = x[0]
    mdot = x[1]
    Pin = 1.5  # Input power in watts
    f = -(T**2/(2*mdot*Pin))
    return f
my constraint for x2>0:
def constraint1(x):  ## mdot > 0
    return x[1]
cons1 = {'type':'ineq', 'fun': constraint1}
res = minimize(f,[0.5,0.025],method = 'SLSQP',constraints = cons1,callback=callbackF)
The callback function is only there to get x values out at each iteration.
The results:
Iter X1 X2 f(X)
1 0.520000 0.000000 19484373451061.960938
2 0.520000 0.000000 19484373451061.960938
Maximum thrust value is 0.5200000003096648
Ideal mass flow rate value is 6.938893903907228e-18
fun: -19484373451061.96
jac: array([-7.49398988e+13, 1.30757417e+21])
message: 'Inequality constraints incompatible'
nfev: 6
nit: 2
njev: 2
status: 4
success: False
x: array([5.2000000e-01, 6.9388939e-18])
As you can see, mdot (x2) goes to zero immediately, and I don't know how to fix it.
If we replace the constraint by a bound, we can see:
from scipy.optimize import minimize
def f(x):
    T = x[0]
    mdot = x[1]
    Pin = 1.5  # Input power in watts
    f = -(T**2/(2*mdot*Pin))
    return f
res = minimize(f,[0.5,0.025],method = 'l-bfgs-b',bounds=[(0,10000),(0.0001,15)])
res
fun: -333333333333.3333
hess_inv: <2x2 LbfgsInvHessProduct with dtype=float64>
jac: array([-6.66687012e+07, 3.33300003e+15])
message: b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
nfev: 9
nit: 2
status: 0
success: True
x: array([1.e+04, 1.e-04])
This corresponds with what one would expect (this problem can be trivially solved by hand).
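If SLSQP is needed (for example because other constraints are present), the same idea carries over: bound mdot away from zero instead of using a strict inequality. A minimal sketch, where the 1e-4 floor is an arbitrary choice:
res = minimize(f, [0.5, 0.025], method='SLSQP',
               bounds=[(0, 10000), (1e-4, 15)])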

I'm trying to solve a maximization problem with constraints and bounds with scipy.minimize

I'm trying to solve a constrained maximization problem with bounds using scipy minimize SLSQP. Why am I getting the message 'Singular matrix C in LSQ subproblem', and how do I resolve it? When I remove the constraint and minimize the objective function it works fine, but when I try to maximize it, it shows 'Positive directional derivative for linesearch'. Code follows below:
#Import libs
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize, Bounds
#Initial activity level
x0 = np.array([60, 40])
#Revenue function
def revenue(X):
    dollarperTRx = 300
    coeff_x1 = 0.234
    coeff_x2 = 0.127
    predwo = 1.245
    nhcp = 400
    r = dollarperTRx * nhcp * predwo * (pow(1 + (1 + X[0]), coeff_x1)) * (pow((1 + X[1]), coeff_x2))
    return r
#Objective function
def objective(X, sign = -1.0):
    return sign * revenue(X)
#Spend
cost_per_promo = np.array([400, 600])
def spend(X):
    return np.dot(cost_per_promo, x0.transpose())
#Initial Spend
s0 = spend(x0)
#Spend constraint
def spend_constraint(X):
    return spend(X) - s0
#Getting the constraints into a dictionary
cons = ({'type':'eq', 'fun': spend_constraint})
#Bounds
bounds1 = (30, 90)
bounds2 = (20, 60)
#Optimize
minimize(objective, x0, method='SLSQP', constraints = cons, bounds = (bounds1, bounds2))
Your spend function does not depend on your design vector, so your constraint does not depend on it either. This makes the problem singular. I changed x0 to the current design vector X inside spend in your example; that way it converges. You have to verify whether that is what you meant to do with the constraint, but with x0 it always gives 0.
#Import libs
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize, Bounds
#Initial activity level
x0 = np.array([60, 40])
#Revenue function
def revenue(X):
    dollarperTRx = 300
    coeff_x1 = 0.234
    coeff_x2 = 0.127
    predwo = 1.245
    nhcp = 400
    r = dollarperTRx * nhcp * predwo * (pow(1 + (1 + X[0]), coeff_x1)) * (pow((1 + X[1]), coeff_x2))
    return r
revenue0 = revenue(x0)
#Objective function
def objective(X):
    return -revenue(X) / revenue0
#Spend
cost_per_promo = np.array([400., 600.])
def spend(X):
    return np.dot(cost_per_promo, X.transpose())
#Initial Spend
s0 = spend(x0)
#Spend constraint
def spend_constraint(X):
    return spend(X) - s0
#Getting the constraints into a dictionary
cons = ({'type':'eq', 'fun': spend_constraint})
#Bounds
bounds1 = (30., 90.)
bounds2 = (20., 60.)
#Optimize
res = minimize(objective, x0, method='SLSQP',
constraints = cons,
bounds = (bounds1, bounds2))
print(res)
Results in:
fun: -1.0157910949030706
jac: array([-0.00297113, -0.00444862])
message: 'Optimization terminated successfully.'
nfev: 36
nit: 9
njev: 9
status: 0
success: True
x: array([78.0015639, 27.9989574])
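As a quick sanity check (a sketch using the functions defined above and the reported solution), the equality constraint holds to within rounding at the optimum:
x_opt = np.array([78.0015639, 27.9989574])
print(spend_constraint(x_opt))  # ~0, since 400*78.0016 + 600*27.9990 is about 48000 = s0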

SciPy's minimize is not iterating at all

I am trying to minimize a function that basically looks like this (the formula was posted as an image; from the code below it is the portfolio variance w * covM * w.T).
In reality it has two independent variables, but since x1 + x2 = 1, they're not REALLY independent.
Now here's the objective function:
def calculatePVar(w, covM):
    w = np.matrix(w)
    return (w*covM*w.T)[0,0]
where w is a list of the weights of each asset and covM is the covariance matrix that is returned by .cov() of pandas.
Here's where the optimization function is called:
w0 = []
for sec in portList:
    w0.append(1/len(portList))
bnds = tuple((0,1) for x in w0)
cons = ({'type': 'eq', 'fun': lambda x: np.sum(x)-1.0})
res= minimize(calculatePVar, w0, args=nCov, method='SLSQP',constraints=cons, bounds=bnds)
weights = res.x
Now there is a clear minimum to the function, but minimize will just spit out the initial values as the result, and it does say "Optimization terminated successfully". Any suggestions?
(The optimization results were posted as images.)
Your code had some confusing variables, so I cleared those out and simplified a few lines; now the minimization works correctly. However, whether the results are correct and whether they make sense is for you to judge:
import numpy as np
from scipy.optimize import minimize
def f(w, cov_matrix):
    return (np.matrix(w) * cov_matrix * np.matrix(w).T)[0,0]
cov_matrix = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
p = [1, 2, 3]
w0 = [(1/len(p)) for e in p]
bnds = tuple((0,1) for e in w0)
cons = ({'type': 'eq', 'fun': lambda w: np.sum(w)-1.0})
res = minimize(f, w0,
args = cov_matrix,
method = 'SLSQP',
constraints = cons,
bounds = bnds)
weights = res.x
print(res)
print(weights)
Update:
Based on your comments, it seems to me that maybe your function has multiple minima and that's why scipy.optimize.minimize gets trapped there. I suggest scipy.optimize.basinhopping as an alternative; it uses a random step to hop over most of the minima of your function, and it will still be fast. Here is the code:
import numpy as np
from scipy.optimize import basinhopping
class MyBounds(object):
    def __init__(self, xmax=[1,1], xmin=[0,0]):
        self.xmax = np.array(xmax)
        self.xmin = np.array(xmin)
    def __call__(self, **kwargs):
        x = kwargs["x_new"]
        tmax = bool(np.all(x <= self.xmax))
        tmin = bool(np.all(x >= self.xmin))
        return tmax and tmin

def f(w):
    global cov_matrix
    return (np.matrix(w) * cov_matrix * np.matrix(w).T)[0,0]
cov_matrix = np.array([[0.000244181, 0.000198035],
[0.000198035, 0.000545958]])
p = ['ABEV3', 'BBDC4']
w0 = [(1/len(p)) for e in p]
bnds = tuple((0,1) for e in w0)
cons = ({'type': 'eq', 'fun': lambda w: np.sum(w)-1.0})
bnds = MyBounds()
minimizer_kwargs = {"method":"SLSQP", "constraints": cons}
res = basinhopping(f, w0,
accept_test = bnds)
weights = res.x
print(res)
print("weights: ", weights)
Output:
fun: 2.3907094432990195e-09
lowest_optimization_result: fun: 2.3907094432990195e-09
hess_inv: array([[ 2699.43934183, -1184.79396719],
[-1184.79396719, 1210.50404805]])
jac: array([1.34548553e-06, 2.00122166e-06])
message: 'Optimization terminated successfully.'
nfev: 60
nit: 6
njev: 15
status: 0
success: True
x: array([0.00179748, 0.00118076])
message: ['requested number of basinhopping iterations completed successfully']
minimization_failures: 0
nfev: 6104
nit: 100
njev: 1526
x: array([0.00179748, 0.00118076])
weights: [0.00179748 0.00118076]
I had a similar problem and the issue turned out to be that the function and the constraint were outputting numpy arrays with a single element. Changing the output of those two functions to be floats solved the problem.
A very simple solution to a perplexing problem.
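A minimal sketch of the kind of change that answer describes, applied to the objective and the sum constraint from the question (the explicit float casts are the only difference):
def calculatePVar(w, covM):
    w = np.matrix(w)
    return float((w * covM * w.T)[0, 0])  # return a plain Python float, not a 1-element array
cons = ({'type': 'eq', 'fun': lambda x: float(np.sum(x) - 1.0)},)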

Migrating from PuLP to Scipy

I'm using PuLP to solve some minimization problems with constraints and upper and lower bounds. It is very easy and clean. But I need to use only the Scipy and Numpy modules.
I was reading:
http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html
(Constrained minimization of multivariate scalar functions)
But I'm a bit lost... could some good soul post a small example like this PuLP one in Scipy?
Thanks in advance.
MM
from pulp import *
'''
Minimize 1.800A + 0.433B + 0.180C
Constraint 1A + 1B + 1C = 100
Constraint 0.480A + 0.080B + 0.020C >= 24
Constraint 0.744A + 0.800B + 0.142C >= 76
Constraint 1C <= 2
'''
...
Consider the following:
import numpy as np
import scipy.optimize as opt
#Some variables
cost = np.array([1.800, 0.433, 0.180])
p = np.array([0.480, 0.080, 0.020])
e = np.array([0.744, 0.800, 0.142])
#Our function
fun = lambda x: np.sum(x*cost)
#Our conditions
cond = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 100},
{'type': 'ineq', 'fun': lambda x: np.sum(p*x) - 24},
{'type': 'ineq', 'fun': lambda x: np.sum(e*x) - 76},
{'type': 'ineq', 'fun': lambda x: -1*x[2] + 2})
bnds = ((0,100),(0,100),(0,100))
guess = [20,30,50]
opt.minimize(fun, guess, method='SLSQP', bounds=bnds, constraints = cond)
It should be noted that 'eq' constraints are satisfied when the function equals zero, while 'ineq' constraints are treated as satisfied when the function value is non-negative.
We obtain:
status: 0
success: True
njev: 4
nfev: 21
fun: 97.884100000000345
x: array([ 40.3, 57.7, 2. ])
message: 'Optimization terminated successfully.'
jac: array([ 1.80000019, 0.43300056, 0.18000031, 0. ])
nit: 4
Double check the equalities:
output = np.array([ 40.3, 57.7, 2. ])
np.sum(output) == 100
True
round(np.sum(p*output),8) >= 24
True
round(np.sum(e*output),8) >= 76
True
The rounding is needed because of double-precision floating-point errors:
np.sum(p*output)
23.999999999999996
