This is my current code in Python 3.6 to optimise a constrained 3-variable problem using the SLSQP method; I am fitting to an elliptical distribution.
import numpy as np
from scipy.optimize import minimize
def func(x):
    return (x[0]*((0.2292 + x[1])/2)) + ((0.9 - x[0])*((x[2] + x[1])/2)) - 0.16

def func_deriv(x):
    """ Derivative of objective function """
    dfdx0 = (0.2292 - x[2])/2
    dfdx1 = 0.89/2
    dfdx2 = (0.89 - x[0])/2
    return np.array([dfdx0, dfdx1, dfdx2])
cons = ({'type': 'ineq',
         'fun': lambda x: np.array([x[1] - x[2]]),
         'jac': lambda x: np.array([0.0, 1.0, -1.0])},
        {'type': 'ineq',
         'fun': lambda x: np.array([0.2292 - x[1]]),
         'jac': lambda x: np.array([0.0, -1.0, 0.0])},
        {'type': 'ineq',
         'fun': lambda x: np.array([0.89 - x[0]]),
         'jac': lambda x: np.array([-1.0, 0.0, 0.0])})
res = minimize(func, [0.45, 0.20, 0.10], jac=func_deriv, constraints=cons,
               method='SLSQP',
               options={'disp': True, 'maxiter': 100})
The error I keep getting is a RuntimeWarning: invalid value encountered in double_scalars, and I don't know why this is happening:
Iteration limit exceeded (Exit mode 9)
Current function value: nan
Iterations: 101
Function evaluations: 881
Gradient evaluations: 101
C:/Users/bob19/untitled9.py:13: RuntimeWarning: overflow encountered in
double_scalars
return ((x[0]*((0.2292 + x[1])/2)) + ((0.9-x[0])*((x[2] + x[1])/2))-0.16)
C:/Users/bob19/untitled9.py:13: RuntimeWarning: invalid value encountered
in double_scalars
return ((x[0]*((0.2292 + x[1])/2)) + ((0.9-x[0])*((x[2] + x[1])/2))-0.16)
Any help would be greatly appreciated.
I don't get the above error anymore; instead I get 'Inequality constraints incompatible':
runfile('C:/Users/Robin19/opimizer v2.py', wdir='C:/Users/Robin19')
Inequality constraints incompatible (Exit mode 4)
Current function value: -2.77024369372e+24
Iterations: 8
Function evaluations: 8
Gradient evaluations: 8
fun: -2.7702436937183577e+24
jac: array([ 1.49413376e+12, 4.45000000e-01, 9.27040055e+11,
0.00000000e+00])
message: 'Inequality constraints incompatible'
nfev: 8
nit: 8
njev: 8
status: 4
success: False
x: array([ -1.85408011e+12, -3.00597522e+12, -2.98826753e+12])
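(One observation on that output: the x entries around ±1e12 and the huge negative function value suggest the linear objective is unbounded below under these constraints, so the solver simply runs off towards infinity. A minimal sketch of one way to keep SLSQP in a finite region; the bound values here are assumptions for illustration, and the real ranges depend on the problem:

import numpy as np
from scipy.optimize import minimize

def func(x):
    return (x[0]*((0.2292 + x[1])/2)) + ((0.9 - x[0])*((x[2] + x[1])/2)) - 0.16

# Assumed bounds purely for illustration; pick ranges that fit the model.
bnds = [(0.0, 0.89), (0.0, 0.2292), (0.0, 0.2292)]

res = minimize(func, [0.45, 0.20, 0.10], method='SLSQP',
               bounds=bnds,
               constraints=({'type': 'ineq', 'fun': lambda x: x[1] - x[2]},),
               options={'disp': True, 'maxiter': 100})
print(res.x)

With every variable boxed in, the warnings and the divergence can no longer occur.)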
Related
I am trying to solve a linear system with minimize (method='SLSQP'), subject to a set of constraints: the sum of the solution vector's components has to be 1 (or at least very close to it, to minimize the error), and a second constraint enforces the components to be ordered, with x_0 being the largest and x_n having the smallest magnitude. Further, I set bounds, as each vector component has to lie between 0 and 1.
This is my code so far:
from scipy.optimize import minimize
import numpy as np
from scipy.sparse import rand
from scipy.optimize import lsq_linear
#Linear equation Ax = b
def fmin(x, A, b):
    y = np.dot(A, x) - b
    return np.dot(y, y)
#Test data
b = np.array([172,8,7.4,24,21,0.8,0.1])
A_t = np.array(
[[188,18.4,16.5,3.4,2.1,1.77,0.075],
[405,0,0,99.8,99.8,0,0.0054],
[90.5,0.4,0.009,19.7,15.6,1.06,0.012],
[322,0,0,79,79,0.3,0.3],
[0,0,0,0,0,0,0],
[362,0.25,0.009,89.2,0,0.43,0.019],
[37,1.4,0.2,7.3,1,4.5,0.1],
[26,0.29,0.038,6.1,2.4,0.4,0.053]])
A = A_t.T
#=========================
m = np.shape(A)[0]
n = np.shape(A)[1]
x0 = np.full(n, 0.5)
args = (A,b)
bnds = (( (1*10e-100, 1), )*n) #all x_i must be between 0 and 1
cons = [{'type': 'eq', 'fun': lambda x: 1.0-np.sum(x) }] #sum(x_i) = 1
#consider order of vectors as constraints
for i in range(n-1):
    cons = cons + [{'type': 'ineq', 'fun': lambda x: x[i] - x[i+1]}]
res = minimize(fmin, x0, args, method='SLSQP',
               bounds=bnds, constraints=cons, tol=1e-2, options={'disp': False})
print ("\n res\n", res)
print("Sum of coefficients {}".format(np.sum(res.x)))
print("Difference vector:\n{}".format(np.dot(A,res.x) - b))
Unfortunately the algorithm fails with
res
fun: 317820.09898084006
jac: array([205597.34765625, 481389.625 , 105853.7265625 , 382592.76953125,
0. , 416196.953125 , 42268.78125 , 30196.81640625])
message: 'Positive directional derivative for linesearch'
nfev: 10
nit: 5
njev: 1
status: 8
success: False
x: array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5])
Sum of coefficients 4.0
Difference vector:
[5.4325e+02 2.3700e+00 9.7800e-01 1.2825e+02 7.8950e+01 3.4300e+00
1.8220e-01]
I would be very grateful if someone could help me sort this out. In fact, for the test data in this example, I know that the right solution should be 0.58 for x_0 and 0.12 for x_2.
Many thanks in advance!
To clarify the discussion in the comments: Your objective function is nonlinear in the optimization variable x. Thus, this is a nonlinear optimization problem. The reason for the error message is quite simple: Your initial guess x0 lies outside the feasible region. You can easily verify that it doesn't fulfill your first constraint 1.0 - np.sum(x) = 0.
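A quick check of that claim with the x0 from the question (n is 8 here, since A has 8 columns):

import numpy as np
x0 = np.full(8, 0.5)
print(1.0 - np.sum(x0))  # -3.0, so the equality constraint is violated at x0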
However, note also that you need to capture the loop variable i when creating your lambda constraint expressions in a loop. Otherwise, you add the constraint lambda x: x[n-2] - x[n-1] seven times. See, e.g. here for more details.
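A two-line illustration of that pitfall, outside the optimization context:

fs = [lambda: i for i in range(3)]
print([f() for f in fs])  # [2, 2, 2] -- every lambda sees the final value of i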
Creating the constraints correctly by
for i in range(n-1):
    cons = cons + [{'type': 'ineq', 'fun': lambda x, i=i: x[i] - x[i+1]}]
and providing the feasible initial guess x0 = np.ones(n)/n yields
fun: 114.90196737679031
jac: array([3550.86690235, 7911.74978828, 1787.36224174, 6290.006423 ,
0. , 7580.9283762 , 757.60069752, 528.41590595])
message: 'Optimization terminated successfully'
nfev: 66
nit: 6
njev: 6
status: 0
success: True
x: array([0.38655391, 0.08763516, 0.08763516, 0.08763516, 0.08763516,
0.08763515, 0.08763516, 0.08763516])
However, for larger problems it's highly recommended to rewrite your second constraint in vectorized form:
D = np.eye(n) - np.eye(n, k=1)
D[-1, -1] = 0.0
cons = [{'type': 'eq', 'fun': lambda x: 1.0-np.sum(x) }]
cons += [{'type': 'ineq', 'fun': lambda x: D @ x}]
Now the solver only needs to evaluate 2 constraints instead of n+1 constraints in each iteration. You can further speed up the solver by providing gradients and jacobians, see this answer for more details.
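As a sketch of that last point (an assumption on my side, since the question doesn't show any jacobians): both constraints are linear, so their Jacobians are constant and can be supplied through the 'jac' key of each constraint dict:

import numpy as np

n = 8
D = np.eye(n) - np.eye(n, k=1)
D[-1, -1] = 0.0

# For linear constraints the Jacobian is a constant matrix/vector,
# so the solver no longer has to approximate it numerically.
cons = [{'type': 'eq',
         'fun': lambda x: 1.0 - np.sum(x),
         'jac': lambda x: -np.ones(n)},
        {'type': 'ineq',
         'fun': lambda x: D @ x,
         'jac': lambda x: D}]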
I'm trying to maximize a function of the form x1/x2. I don't want x2 to go to zero, so I'm defining my constraint as x2 > 0. But the SciPy SLSQP method does not take that into consideration and says 'Inequality constraints incompatible'. The SciPy documentation states all constraints are non-negative; how do I take only values strictly greater than zero?
Edit: here is the function I have to minimize:
def f(x):
    T = x[0]
    mdot = x[1]
    Pin = 1.5  # Input power in watts
    f = -(T**2/(2*mdot*Pin))
    return f
My constraint for x2 > 0:
def constraint1(x):  # mdot > 0
    return x[1]
cons1 = {'type': 'ineq', 'fun': constraint1}
res = minimize(f, [0.5, 0.025], method='SLSQP', constraints=cons1, callback=callbackF)
The callback function is only there to get the x values out at each iteration.
The results:
Iter X1 X2 f(X)
1 0.520000 0.000000 19484373451061.960938
2 0.520000 0.000000 19484373451061.960938
Maximum thrust value is 0.5200000003096648
Ideal mass flow rate value is 6.938893903907228e-18
fun: -19484373451061.96
jac: array([-7.49398988e+13, 1.30757417e+21])
message: 'Inequality constraints incompatible'
nfev: 6
nit: 2
njev: 2
status: 4
success: False
x: array([5.2000000e-01, 6.9388939e-18])
As you can see, mdot (x2) goes to zero immediately, and I don't know how to fix it.
If we replace the constraint by a bound, we can see:
from scipy.optimize import minimize
def f(x):
    T = x[0]
    mdot = x[1]
    Pin = 1.5  # Input power in watts
    f = -(T**2/(2*mdot*Pin))
    return f
res = minimize(f,[0.5,0.025],method = 'l-bfgs-b',bounds=[(0,10000),(0.0001,15)])
res
fun: -333333333333.3333
hess_inv: <2x2 LbfgsInvHessProduct with dtype=float64>
jac: array([-6.66687012e+07, 3.33300003e+15])
message: b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
nfev: 9
nit: 2
status: 0
success: True
x: array([1.e+04, 1.e-04])
This corresponds with what one would expect (this problem can be trivially solved by hand).
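A quick sanity check of that claim: f = -T**2/(2*mdot*Pin) decreases in T and increases in mdot, so the optimum sits at the upper T bound and the lower mdot bound:

T, mdot, Pin = 1.0e4, 1.0e-4, 1.5
print(-(T**2) / (2 * mdot * Pin))  # -333333333333.3333, matching res.fun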
I'm new to the community and finally, here I am for my first question!
I'm stuck on a maximization problem (you can see it below):
Problem Set
My variables are set in this way:
import numpy as np
from scipy.optimize import minimize
n_bond = 4
# Vector of Present Value for each Bond
pv_vec = [115.34755980259271, 100.95922248766202,
          93.87539028309668, 93.63295880149812]
# Vector of Duration for each Bond
dur_vec = [4.880937192589091, 5.1050167320620625, 3.713273768775609, 1.0]
# Vector of Convexity for each Bond
conv_vec = [31.66708597860361, 33.73613393551434, 18.098806559076504, 2.0]
# Initial guess
alpha = np.random.random(n_bond)
alpha = alpha / np.sum(alpha)
W0 = 1000 # Initial wealth
K = 3.5 # Target Duration
def obj(self):
    return sum(conv_vec[i]*(pv_vec[i]*alpha[i])/W0 for i in range(n_bond))*(-1)

def cons1(self):
    return sum(pv_vec[i]*alpha[i] for i in range(n_bond)) - W0

def cons2(self):
    return sum(dur_vec[i]*(pv_vec[i]*alpha[i])/W0 for i in range(n_bond)) - K
const = ({'type': 'eq', 'fun': cons1},
         {'type': 'eq', 'fun': cons2})

non_neg = []
for i in range(n_bond):
    non_neg.append((0, None))
non_neg = tuple(non_neg)
solution = minimize(fun=obj,
                    x0=alpha,
                    method='SLSQP',
                    bounds=non_neg,
                    constraints=const,
                    options={'disp': True, 'ftol': 1e-20, 'maxiter': 1000})
And now the error message from the minimization:
Singular matrix C in LSQ subproblem (Exit mode 6)
Current function value: -2.5052545159234914
Iterations: 1
Function evaluations: 6
Gradient evaluations: 1
fun: -2.5052545159234914
jac: array([0.00000, 0.00000, 0.00000, 0.00000])
message: 'Singular matrix C in LSQ subproblem'
nfev: 6
nit: 1
njev: 1
status: 6
success: False
x: array([0.27557, 0.38912, 0.07314, 0.26217])
I really need your wisdom!
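(A likely culprit, judging from the all-zero jac in that output: obj, cons1 and cons2 ignore their argument, here named self, and read the module-level alpha instead, so the solver sees a constant function. A minimal sketch of the fix, making each function depend on the optimization variable it is given:

def obj(alpha):
    # Negated so that minimize() maximizes the convexity-weighted sum.
    return -sum(conv_vec[i]*(pv_vec[i]*alpha[i])/W0 for i in range(n_bond))

def cons1(alpha):
    return sum(pv_vec[i]*alpha[i] for i in range(n_bond)) - W0

def cons2(alpha):
    return sum(dur_vec[i]*(pv_vec[i]*alpha[i])/W0 for i in range(n_bond)) - K
)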
I am trying to minimize a function of multiple variables using scipy.optimize.
The minimisation does not work properly. Here is my code:
def f(p):
    A = np.zeros(c)
    for i in range(c):
        A[i] = (f_i[i] - (p[0]*scipy.stats.norm.pdf(j[i], p[1], p[2])
                          + p[3]*scipy.stats.norm.pdf(j[i], p[4], p[5])
                          + p[6]*scipy.stats.norm.pdf(j[i], p[7], p[8])))**2
    return sum(A[0:c])

def my_cons(p):
    g = np.zeros(10)
    g[0] = p[0] - 0
    g[1] = p[1] - 0
    g[2] = p[2] - 0
    g[3] = p[3] - 0
    g[4] = p[4] - 0
    g[5] = p[5] - 0
    g[6] = p[6] - 0
    g[7] = p[7] - 0
    g[8] = p[8] - 0
    g[9] = p[0] + p[3] + p[6] - 1.
    return g
cons= ({'type': 'ineq', 'fun': lambda p: p[0]-0},
{'type': 'ineq', 'fun': lambda p: p[1]-0},
{'type': 'ineq', 'fun': lambda p: p[2]-0},
{'type': 'ineq', 'fun': lambda p: p[3]-0},
{'type': 'ineq', 'fun': lambda p: p[4]-0},
{'type': 'ineq', 'fun': lambda p: p[5]-0},
{'type': 'ineq', 'fun': lambda p: p[6]-0},
{'type': 'ineq', 'fun': lambda p: p[7]-0},
{'type': 'ineq', 'fun': lambda p: p[8]-0},
{'type': 'eq', 'fun': lambda p: p[0]+p[3]+p[6]-1})
x0 = np.array((0.1, 25., 6.1, 0.2, 35., 10., 0.1, 16., 10.))
res = optimize.minimize(f, x0, method='SLSQP', jac=None, bounds=None,
                        constraints=cons, tol=None,
                        options={'disp': True, 'eps': 1e-8, 'maxiter': 1000})
print(res)
Here, j and f_i are taken from a CSV file.
When I use the x0 mentioned above I obtain:
nfev: 23
nit: 2
njev: 2
status: 0
success: True
x: array([ 1.11173074e-19, 1.06811225e+01, 1.91022230e+00,
1.00000000e+00, 3.11982112e+01, 7.50048570e+01,
1.94288182e-20, 3.00000000e-01, 1.00000000e-01])
When I use another one, for instance:
x0 = np.array([1.3, 10.7, 1.8, 21.9, 31.2,75,0.6,0.3,0.1])
I obtain:
message: 'Optimization terminated successfully.'
nfev: 23
nit: 2
njev: 2
status: 0
success: True
x: array([ 1.11173074e-19, 1.06811225e+01, 1.91022230e+00,
1.00000000e+00, 3.11982112e+01, 7.50048570e+01,
1.94288182e-20, 3.00000000e-01, 1.00000000e-01])
How do I define a good initial guess?
Am I using the wrong method?
It looks like you are trying to fit a Gaussian mixture model. Instead of using the general-purpose routines from scipy.optimize, it might be better to take a look at scikit-learn's sklearn.mixture (see http://scikit-learn.org/stable/modules/mixture.html), which was written for exactly this purpose.
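A minimal sketch of that route, assuming the raw observations are available (GaussianMixture fits samples, not the binned curve j, f_i from the question; the data below is synthetic, purely for illustration):

import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic samples from a three-component mixture.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(25, 6, 500),
                          rng.normal(35, 10, 1000),
                          rng.normal(16, 10, 500)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3).fit(samples)
print(gmm.weights_, gmm.means_.ravel(), np.sqrt(gmm.covariances_.ravel()))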
Staying with routines from scipy.optimize, there is one alternative algorithm that allows for constraints, called COBYLA, that you might want to try. It only allows inequality constraints, but you can simply redefine your problem to eliminate the equality constraint: replace p[6] by 1 - p[0] - p[3] and add the inequality constraint 1 - p[0] - p[3] >= 0.
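A minimal sketch of that reformulation: q holds the original p with p[6] removed, and the third weight is recovered as 1 - q[0] - q[3]. The arrays j and f_i stand in for the CSV columns from the question (synthetic here so the snippet runs on its own):

import numpy as np
import scipy.stats
from scipy import optimize

# Stand-ins for the CSV data in the question.
j = np.linspace(0, 60, 121)
f_i = (0.3*scipy.stats.norm.pdf(j, 25, 6)
       + 0.5*scipy.stats.norm.pdf(j, 35, 10)
       + 0.2*scipy.stats.norm.pdf(j, 16, 10))

def f_reduced(q):
    # Third mixture weight is implied by the other two.
    w3 = 1.0 - q[0] - q[3]
    model = (q[0]*scipy.stats.norm.pdf(j, q[1], q[2])
             + q[3]*scipy.stats.norm.pdf(j, q[4], q[5])
             + w3*scipy.stats.norm.pdf(j, q[6], q[7]))
    return np.sum((f_i - model)**2)

# Only inequality constraints remain, which COBYLA supports.
cons = ([{'type': 'ineq', 'fun': lambda q, k=k: q[k]} for k in range(8)]
        + [{'type': 'ineq', 'fun': lambda q: 1.0 - q[0] - q[3]}])

q0 = np.array([0.1, 25., 6.1, 0.2, 35., 10., 16., 10.])
res = optimize.minimize(f_reduced, q0, method='COBYLA', constraints=cons,
                        options={'maxiter': 1000})
print(res.x)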
I'm using PuLP to solve some minimization problems with constraints and upper and lower bounds. It is very easy and clean. But I need to use only the SciPy and NumPy modules.
I was reading:
http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html
Constrained minimization of multivariate scalar functions
But I'm a bit lost... could some good soul post a small example like this PuLP one in SciPy?
Thanks in advance.
MM
from pulp import *
'''
Minimize 1.800A + 0.433B + 0.180C
Constraint 1A + 1B + 1C = 100
Constraint 0.480A + 0.080B + 0.020C >= 24
Constraint 0.744A + 0.800B + 0.142C >= 76
Constraint 1C <= 2
'''
...
Consider the following:
import numpy as np
import scipy.optimize as opt
#Some variables
cost = np.array([1.800, 0.433, 0.180])
p = np.array([0.480, 0.080, 0.020])
e = np.array([0.744, 0.800, 0.142])
#Our function
fun = lambda x: np.sum(x*cost)
#Our conditions
cond = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 100},
{'type': 'ineq', 'fun': lambda x: np.sum(p*x) - 24},
{'type': 'ineq', 'fun': lambda x: np.sum(e*x) - 76},
{'type': 'ineq', 'fun': lambda x: -1*x[2] + 2})
bnds = ((0,100),(0,100),(0,100))
guess = [20,30,50]
opt.minimize(fun, guess, method='SLSQP', bounds=bnds, constraints = cond)
It should be noted that 'eq' constraints should evaluate to zero at the solution, while 'ineq' constraints are satisfied whenever the function value is non-negative.
We obtain:
status: 0
success: True
njev: 4
nfev: 21
fun: 97.884100000000345
x: array([ 40.3, 57.7, 2. ])
message: 'Optimization terminated successfully.'
jac: array([ 1.80000019, 0.43300056, 0.18000031, 0. ])
nit: 4
Double-check the constraints:
output = np.array([ 40.3, 57.7, 2. ])
np.sum(output) == 100
True
round(np.sum(p*output),8) >= 24
True
round(np.sum(e*output),8) >= 76
True
The rounding is needed because of double-precision floating-point errors:
np.sum(p*output)
23.999999999999996