I am trying to minimize a function that basically looks like this:
In reality it has two independent variables, but since x1 + x2 = 1, they're not REALLY independent.
Now here's the objective function:
def calculatePVar(w, covM):
    w = np.matrix(w)
    return (w * covM * w.T)[0,0]
where w is a list of the weights of each asset and covM is the covariance matrix returned by pandas' .cov().
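For context, here is a minimal sketch (with hypothetical returns data, not from my actual portfolio) of how such a covariance matrix can be obtained from pandas:

import numpy as np
import pandas as pd

# hypothetical daily returns for two assets
returns = pd.DataFrame({'A': [0.010, -0.020, 0.005, 0.012],
                        'B': [0.015, 0.000, -0.010, 0.007]})
nCov = returns.cov().values  # .cov() returns a DataFrame; .values gives a NumPy array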
Here's where the optimization function is called:
w0 = []
for sec in portList:
    w0.append(1/len(portList))

bnds = tuple((0,1) for x in w0)
cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1.0})
res = minimize(calculatePVar, w0, args=nCov, method='SLSQP', constraints=cons, bounds=bnds)
weights = res.x
Now, there is a clear minimum to the function, but minimize just spits out the initial values as the result, and it does say "Optimization terminated successfully". Any suggestions?
Optimization results:
P.S. Images posted as links because I don't meet the reputation requirements!
Your code just had some confusing variable names, so I cleaned those up and simplified a few lines; the minimization now works correctly. However, whether the results are correct and whether they make sense is for you to judge:
import numpy as np
from scipy.optimize import minimize
def f(w, cov_matrix):
    return (np.matrix(w) * cov_matrix * np.matrix(w).T)[0,0]
cov_matrix = np.array([[1, 2, 3],
                       [4, 5, 6],
                       [7, 8, 9]])
p = [1, 2, 3]
w0 = [(1/len(p)) for e in p]
bnds = tuple((0,1) for e in w0)
cons = ({'type': 'eq', 'fun': lambda w: np.sum(w)-1.0})
res = minimize(f, w0,
               args=cov_matrix,
               method='SLSQP',
               constraints=cons,
               bounds=bnds)
weights = res.x
print(res)
print(weights)
Update:
Based on your comments, it seems that your function may have multiple local minima and that is why scipy.optimize.minimize gets trapped in one of them. I suggest scipy.optimize.basinhopping as an alternative: it uses random steps to hop over most of the minima of your function and is still fast. Here is the code:
import numpy as np
from scipy.optimize import basinhopping
class MyBounds(object):
    def __init__(self, xmax=[1,1], xmin=[0,0]):
        self.xmax = np.array(xmax)
        self.xmin = np.array(xmin)

    def __call__(self, **kwargs):
        x = kwargs["x_new"]
        tmax = bool(np.all(x <= self.xmax))
        tmin = bool(np.all(x >= self.xmin))
        return tmax and tmin

def f(w):
    global cov_matrix
    return (np.matrix(w) * cov_matrix * np.matrix(w).T)[0,0]
cov_matrix = np.array([[0.000244181, 0.000198035],
                       [0.000198035, 0.000545958]])
p = ['ABEV3', 'BBDC4']
w0 = [(1/len(p)) for e in p]
bnds = tuple((0,1) for e in w0)
cons = ({'type': 'eq', 'fun': lambda w: np.sum(w)-1.0})
bnds = MyBounds()
minimizer_kwargs = {"method":"SLSQP", "constraints": cons}
res = basinhopping(f, w0,
                   accept_test=bnds)
weights = res.x
print(res)
print("weights: ", weights)
Output:
fun: 2.3907094432990195e-09
lowest_optimization_result: fun: 2.3907094432990195e-09
hess_inv: array([[ 2699.43934183, -1184.79396719],
[-1184.79396719, 1210.50404805]])
jac: array([1.34548553e-06, 2.00122166e-06])
message: 'Optimization terminated successfully.'
nfev: 60
nit: 6
njev: 15
status: 0
success: True
x: array([0.00179748, 0.00118076])
message: ['requested number of basinhopping iterations completed successfully']
minimization_failures: 0
nfev: 6104
nit: 100
njev: 1526
x: array([0.00179748, 0.00118076])
weights: [0.00179748 0.00118076]
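Note that minimizer_kwargs above is defined but never passed to basinhopping, so the SLSQP constraint is not enforced during the local minimizations (which is why the resulting weights do not sum to 1). A minimal sketch of forwarding it, reusing the variables defined above:

res = basinhopping(f, w0,
                   minimizer_kwargs=minimizer_kwargs,
                   accept_test=bnds)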
I had a similar problem, and the issue turned out to be that the objective function and the constraint were each returning a NumPy array with a single element. Changing both to return plain floats solved the problem.
A very simple solution to a perplexing problem.
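For reference, a minimal sketch of that fix applied to the objective and constraint from the first question (variable names assumed from that post):

import numpy as np

def calculatePVar(w, covM):
    w = np.asarray(w, dtype=float)
    return float(w @ covM @ w)  # return a plain float, not a 1x1 matrix/array

cons = ({'type': 'eq', 'fun': lambda x: float(np.sum(x) - 1.0)},)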
Related
I am trying to solve a linear system with minimize (method='SLSQP'), subject to a set of constraints: the sum of the solution vector's components has to be 1 (or at least very close to it, to minimize the error), and a second constraint enforces the components to be ordered, with x_0 being the largest and x_n having the smallest magnitude. Further, I set bounds, as each vector component has to lie between 0 and 1.
This is my code so far:
from scipy.optimize import minimize
import numpy as np
from scipy.sparse import rand
from scipy.optimize import lsq_linear
#Linear equation Ax = b
def fmin(x, A, b):
    y = np.dot(A, x) - b
    return np.dot(y, y)
#Test data
b = np.array([172,8,7.4,24,21,0.8,0.1])
A_t = np.array(
[[188,18.4,16.5,3.4,2.1,1.77,0.075],
[405,0,0,99.8,99.8,0,0.0054],
[90.5,0.4,0.009,19.7,15.6,1.06,0.012],
[322,0,0,79,79,0.3,0.3],
[0,0,0,0,0,0,0],
[362,0.25,0.009,89.2,0,0.43,0.019],
[37,1.4,0.2,7.3,1,4.5,0.1],
[26,0.29,0.038,6.1,2.4,0.4,0.053]])
A = A_t.T
#=========================
m = np.shape(A)[0]
n = np.shape(A)[1]
x0 = np.full(n, 0.5)
args = (A,b)
bnds = (( (1*10e-100, 1), )*n) #all x_i must be between 0 and 1
cons = [{'type': 'eq', 'fun': lambda x: 1.0-np.sum(x) }] #sum(x_i) = 1
#consider order of vectors as constraints
for i in range(n-1):
    cons = cons + [{'type': 'ineq', 'fun': lambda x: x[i] - x[i+1]}]
res = minimize(fmin, x0, args, method='SLSQP',
bounds=bnds,constraints=cons,tol=1e-2,options={'disp': False})
print ("\n res\n", res)
print("Sum of coefficients {}".format(np.sum(res.x)))
print("Difference vector:\n{}".format(np.dot(A,res.x) - b))
Unfortunately the algorithm fails with
res
fun: 317820.09898084006
jac: array([205597.34765625, 481389.625 , 105853.7265625 , 382592.76953125,
0. , 416196.953125 , 42268.78125 , 30196.81640625])
message: 'Positive directional derivative for linesearch'
nfev: 10
nit: 5
njev: 1
status: 8
success: False
x: array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5])
Sum of coefficients 4.0
Difference vector:
[5.4325e+02 2.3700e+00 9.7800e-01 1.2825e+02 7.8950e+01 3.4300e+00
1.8220e-01]
I would be very grateful if someone could help me to sort this out. In fact for the test data in this example I know that the right solution should be 0.58 for x_0 and 0.12 for x_2.
Many thanks in advance!
To clarify the discussion in the comments: Your objective function is nonlinear in the optimization variable x. Thus, this is a nonlinear optimization problem. The reason for the error message is quite simple: Your initial guess x0 lies outside the feasible region. You can easily verify that it doesn't fulfill your first constraint 1.0 - np.sum(x) = 0.
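For illustration, a quick check using the n = 8 from your test data:

import numpy as np

x0 = np.full(8, 0.5)       # the original initial guess
print(1.0 - np.sum(x0))    # -3.0, so the equality constraint is clearly violated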
However, note also that you need to capture the loop variable i when creating your lambda constraint expressions in a loop. Otherwise, you add the constraint lambda x: x[n-2] - x[n-1] seven times. See, e.g. here for more details.
Creating the constraints correctly by
for i in range(n-1):
    cons = cons + [{'type': 'ineq', 'fun': lambda x, i=i: x[i] - x[i+1]}]
and providing the feasible initial guess x0 = np.ones(n)/n yields
fun: 114.90196737679031
jac: array([3550.86690235, 7911.74978828, 1787.36224174, 6290.006423 ,
0. , 7580.9283762 , 757.60069752, 528.41590595])
message: 'Optimization terminated successfully'
nfev: 66
nit: 6
njev: 6
status: 0
success: True
x: array([0.38655391, 0.08763516, 0.08763516, 0.08763516, 0.08763516,
0.08763515, 0.08763516, 0.08763516])
However, for larger problems it's highly recommended to rewrite your second constraint:
D = np.eye(n) - np.eye(n, k=1)
D[-1, -1] = 0.0
cons = [{'type': 'eq', 'fun': lambda x: 1.0-np.sum(x) }]
cons += [{'type': 'ineq', 'fun': lambda x: D @ x}]
Now the solver only needs to evaluate 2 constraints instead of n+1 constraints in each iteration. You can further speed up the solver by providing gradients and jacobians, see this answer for more details.
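As a minimal sketch (reusing A, b, n, bnds and cons from the script above), the analytic gradient of the objective could be supplied like this; the constraint Jacobians can be passed the same way via the 'jac' key of each constraint dict:

def fmin_grad(x, A, b):
    # gradient of ||Ax - b||^2 with respect to x
    return 2.0 * A.T @ (np.dot(A, x) - b)

res = minimize(fmin, np.ones(n)/n, args=(A, b), jac=fmin_grad, method='SLSQP',
               bounds=bnds, constraints=cons, tol=1e-2)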
I'm new to the community and finally, here I am for my first question!
I'm stuck on a Maximization problem (you can see it below):
Problem Set
My variables are set in this way:
import numpy as np
from scipy.optimize import minimize
n_bond = 4
# Vector of Present Value for each Bond
pv_vec = [115.34755980259271, 100.95922248766202,
          93.87539028309668, 93.63295880149812]
# Vector of Duration for each Bond
dur_vec = [4.880937192589091, 5.1050167320620625, 3.713273768775609, 1.0]
# Vector of Convexity for each Bond
conv_vec = [31.66708597860361, 33.73613393551434, 18.098806559076504, 2.0]
# Initial guess
alpha = np.random.random(n_bond)
alpha = alpha / np.sum(alpha)
W0 = 1000 # Initial wealth
K = 3.5 # Target Duration
def obj(self):
    return sum(conv_vec[i]*(pv_vec[i]*alpha[i])/W0 for i in range(n_bond))*(-1)

def cons1(self):
    return sum(pv_vec[i]*alpha[i] for i in range(n_bond)) - W0

def cons2(self):
    return sum(dur_vec[i]*(pv_vec[i]*alpha[i])/W0 for i in range(n_bond)) - K
cons = ({'type': 'eq', 'fun': cons1},
        {'type': 'eq', 'fun': cons2})
non_neg = []
for i in range(n_bond):
    non_neg.append((0, None))
non_neg = tuple(non_neg)
solution = minimize(fun=obj,
                    x0=alpha,
                    method='SLSQP',
                    bounds=non_neg,
                    constraints=cons,
                    options={'disp': True, 'ftol': 1e-20, 'maxiter': 1000})
And now the error message of minimization:
Singular matrix C in LSQ subproblem (Exit mode 6)
Current function value: -2.5052545159234914
Iterations: 1
Function evaluations: 6
Gradient evaluations: 1
fun: -2.5052545159234914
jac: array([0.00000, 0.00000, 0.00000, 0.00000])
message: 'Singular matrix C in LSQ subproblem'
nfev: 6
nit: 1
njev: 1
status: 6
success: False
x: array([0.27557, 0.38912, 0.07314, 0.26217])
I really need your wisdom!
I'm trying to solve a constrained maximization problem with bounds using scipy minimize SLSQP. But why am I getting the message 'Singular matrix C in LSQ subproblem', and how can I resolve it? When I remove the constraint and try to minimize the objective function it works fine, but when I try to maximize it, it shows 'Positive directional derivative for linesearch'. Code follows below:
#Import libs
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize, Bounds
#Initial activity level
x0 = np.array([60, 40])
#Revenue function
def revenue(X):
    dollarperTRx = 300
    coeff_x1 = 0.234
    coeff_x2 = 0.127
    predwo = 1.245
    nhcp = 400
    r = dollarperTRx * nhcp * predwo * (pow(1 + (1 + X[0]), coeff_x1)) * (pow((1 + X[1]), coeff_x2))
    return r
#Objective function
def objective(X, sign = -1.0):
    return sign * revenue(X)
#Spend
cost_per_promo = np.array([400, 600])
def spend(X):
    return np.dot(cost_per_promo, x0.transpose())
#Initial Spend
s0 = spend(x0)
#Spend constraint
def spend_constraint(X):
    return spend(X) - s0
#Getting the constraints into a dictionary
cons = ({'type':'eq', 'fun': spend_constraint})
#Bounds
bounds1 = (30, 90)
bounds2 = (20, 60)
#Optimize
minimize(objective, x0, method='SLSQP', constraints = cons, bounds = (bounds1, bounds2))
Your spend function does not depend on your design vector, and therefore neither does your constraint. This makes the problem singular. In the example below I changed x0 to the current design vector X inside spend; that way it converges. You have to verify whether that is what you meant the constraint to do, but with x0 it always evaluates to 0.
#Import libs
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize, Bounds
#Initial activity level
x0 = np.array([60, 40])
#Revenue function
def revenue(X):
    dollarperTRx = 300
    coeff_x1 = 0.234
    coeff_x2 = 0.127
    predwo = 1.245
    nhcp = 400
    r = dollarperTRx * nhcp * predwo * (pow(1 + (1 + X[0]), coeff_x1)) * (pow((1 + X[1]), coeff_x2))
    return r
revenue0 = revenue(x0)
#Objective function
def objective(X):
    return -revenue(X) / revenue0
#Spend
cost_per_promo = np.array([400., 600.])
def spend(X):
    return np.dot(cost_per_promo, X.transpose())
#Initial Spend
s0 = spend(x0)
#Spend constraint
def spend_constraint(X):
    return spend(X) - s0
#Getting the constraints into a dictionary
cons = ({'type':'eq', 'fun': spend_constraint})
#Bounds
bounds1 = (30., 90.)
bounds2 = (20., 60.)
#Optimize
res = minimize(objective, x0, method='SLSQP',
               constraints=cons,
               bounds=(bounds1, bounds2))
print(res)
Results in:
fun: -1.0157910949030706
jac: array([-0.00297113, -0.00444862])
message: 'Optimization terminated successfully.'
nfev: 36
nit: 9
njev: 9
status: 0
success: True
x: array([78.0015639, 27.9989574])
I'm setting up a new linear optimization code in Python. Unfortunately, I don't get the same results with the Pulp, Scipy, and Gekko packages.
I tried to implement code with different packages for Linear Optimization in Python.
OPTIMIZATION WITH GEKKO
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt
m = GEKKO() # create GEKKO model
x = m.Var(value=0, lb=0, ub=400000) # define new variable, initial value=0
y = m.Var(value=0, lb=0, ub=200) # define new variable, initial value=1
z = m.Var(value=0, lb=0)
m.Equation(x+y+z==100)
m.Obj(1.2*x + y + z) # equations
m.solve(disp=False) # solve
print("Solution with The GEKKO package")
print(x.value, y.value , z.value)# # print solution
OPTIMIZATION WITH Scipy
import numpy as np
from scipy.optimize import minimize
def objective(m):
    x = m[0]
    y = m[1]
    z = m[2]
    return 1.2*x + y + z

def constraint1(m):
    return m[0] + m[1] + m[2] - 100

def constraint2(x):
    return x[2]
x0 = [0,0,0]
b1 = (0,400000)
b2 = (0,200)
b3= (0,None)
bnds = (b1,b2,b3)
con1 = {'type' : 'eq', 'fun' : constraint1}
con2 = {'type' : 'ineq', 'fun' : constraint2}
cons = [con1,con2]
sol = minimize(objective,x0,method='SLSQP', bounds=bnds , constraints=cons)
print("Solution with The SCIPY package")
print(sol)
OPTIMIZATION WITH PULP
from pulp import *
prob = LpProblem("Problem",LpMinimize)
x = LpVariable("X",0,400000,LpContinuous)
y = LpVariable("Y",0,200,LpContinuous)
z = LpVariable("Z",0,None,LpContinuous)
prob += 1.2*x + y + z
prob += (x + y + z == 100)
prob.solve()
print("Solution with The PULP package")
print("Status:", LpStatus[prob.status])
for v in prob.variables():
    print(v.name, "=", v.varValue)
I expect to get the same results, but unfortunately the actual outputs are different:
The solution with The GEKKO package
[0.0] [36.210291349] [63.789708661]
The solution with The SCIPY package
fun: 100.0000000000001
jac: array([1.19999981, 1. , 1. ])
message: 'Optimization terminated successfully.'
nfev: 35
nit: 7
njev: 7
status: 0
success: True
x: array([4.88498131e-13, 5.00000000e+01, 5.00000000e+01])
The Solution with The PULP package
X = 0.0
Y = 100.0
Z = 0.0
All results are correct / Every solver is correct!
Each solution reaches the minimum of its objective: 100.
Each solution respects the variable bounds.
Each solution satisfies the "simplex-like" constraint: sum(x) = 100.
Ignoring floating-point limitations, there are infinitely many optimal solutions to your problem.
Different solvers, using different solving approaches, can pick different ones of these solutions. Here, for example:
LP-algorithms like Simplex (Pulp)
NLP-algorithms like sequential least squares (scipy)
(keep in mind: there are also LP-solvers within scipy and more specialized solvers are usually better given some a-priori defined optimization problem -> LP vs. NLP)
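For completeness, a minimal sketch (not part of the original post) of the same LP solved with scipy.optimize.linprog; it also returns one of the optimal vertices with objective value 100:

from scipy.optimize import linprog

c = [1.2, 1, 1]                              # objective: 1.2*x + y + z
A_eq = [[1, 1, 1]]                           # x + y + z == 100
b_eq = [100]
bounds = [(0, 400000), (0, 200), (0, None)]  # same bounds as above

lp = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(lp.x, lp.fun)                          # one optimal vertex, objective 100.0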
I have a constrained optimization problem that I am trying to solve using scipy.optimize.minimize.
Basically, I am fitting one parameter, f, to a system of ODEs with the constraint:
f > 0 and
f*1000 < 500
I wrote an MWE below. In this simple case it is obvious that 0 < f < 0.5, but in my real problem the a priori upper bound is not obvious, hence the inequality constraint.
from __future__ import division
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize
def myeqs(y, t, beta, gamma, N):
    dy0 = -(beta/N)*y[0]*y[1]
    dy1 = (beta/N)*y[0]*y[1] - gamma*y[1]
    dy2 = gamma*y[1]
    return [dy0, dy1, dy2]
def runModel(f, extraParams):
    [beta, gamma, N, initCond, time] = extraParams
    S0 = (1-f)*initCond[0]
    I0 = initCond[1]
    R0 = f*initCond[0]
    tspan = np.linspace(time[0], time[1], int(time[1] - time[0]))
    sol = odeint(myeqs, [S0, I0, R0], tspan, args=(beta, gamma, N))
    return sol
y0 = [1000, 1, 0]
extraParams = [0.5, 0.25, 1000, y0, [0,150]]
def computeSimple(f, extraParams):
    sol = runModel(f, extraParams)
    return np.abs((sol[-1,2] - sol[0,2]))

def c1(f):
    return f*1000.0 - 500.0
cons = ({'type': 'ineq', 'fun': c1})
#cons = ({'type': 'ineq',
# 'fun': lambda x: x[0]*1000 - 500}, )
res = minimize(computeSimple, 0.2, args=(extraParams, ), method='SLSQP', bounds=((0,1.5), ), constraints=cons)
print(res.x)
print(c1(res.x))
If you run this, you will see that res.x is always the upper bound of bounds, regardless of the constraint...
Where is my mistake?
thanks in advance
You got your constraint backward. This constraint:
def c1(f):
    return f*1000.0 - 500.0
constrains f to be at least 0.5, instead of at most 0.5.
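A minimal sketch of the corrected constraint (SLSQP treats 'ineq' constraints as fun(x) >= 0):

def c1(f):
    # f*1000 <= 500 must be expressed as 500 - f*1000 >= 0
    return 500.0 - f * 1000.0

cons = ({'type': 'ineq', 'fun': c1})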