Scipy minimize constrained function - python

I am solving the following optimization problem: minimize log(x0^2 + 1) + x1^4 + x0*x2 subject to x0^3 - x1^2 - 1 >= 0, x0 >= 0 and x2 >= 0, with this Python code:
from scipy.optimize import minimize
import math

def f(x):
    return math.log(x[0]**2 + 1) + x[1]**4 + x[0]*x[2]

x0 = [0, 0, 0]
cons = ({'type': 'ineq', 'fun': lambda x: x[0]**3 - x[1]**2 - 1},
        {'type': 'ineq', 'fun': lambda x: x[0]},
        {'type': 'ineq', 'fun': lambda x: x[2]})

res = minimize(f, x0, constraints=cons)
print(res)
I am getting an error
message: 'Inequality constraints incompatible'
What can cause this error?

The issue seems to be with your initial guess. If I change your starting values to
x0 = [1.0, 1.0, 1.0]
then your code will execute fine (at least on my machine):
Python 3.5.1 (v3.5.1:37a07cee5969, Dec 6 2015, 01:54:25) [MSC v.1900 64 bit (AMD64)] on win32
     fun: 0.6931471805582502
     jac: array([ 1.,  0.,  1.,  0.])
 message: 'Optimization terminated successfully.'
    nfev: 51
     nit: 10
    njev: 10
  status: 0
 success: True
       x: array([  1.00000000e+00,  -1.39724765e-06,   1.07686548e-14])

Scipy's optimize module has lots of options. See the documentation or this tutorial. Since you didn't specify a method here, it will use Sequential Least SQuares Programming (SLSQP). Alternatively, you could use the Trust-Region Constrained Algorithm (trust-constr).
For this problem, I found trust-constr much more robust to starting values than SLSQP: it handled starting values from [-2, -2, -2] to [10, 10, 10], although negative initial values needed more iterations, as you'd expect. Starting values below -2 exceeded the maximum number of iterations, though I suspect they might still converge if you increased that limit. (Specifying negative starting values for x[0] and x[2] is somewhat silly given the constraints; I only did it to get a sense of how robust the method is to a range of starting values.)
The specifications for SLSQP and trust-constr are conceptually the same, but the syntax is a little different (in particular, note the use of NonlinearConstraint).
import math
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint, SR1

def f(x):
    return math.log(x[0]**2 + 1) + x[1]**4 + x[0]*x[2]

constr_func = lambda x: np.array([x[0]**3 - x[1]**2 - 1,
                                  x[0],
                                  x[2]])

x0 = [0., 0., 0.]
nonlin_con = NonlinearConstraint(constr_func, 0., np.inf)

res = minimize(f, x0, method='trust-constr',
               jac='2-point', hess=SR1(),
               constraints=nonlin_con)
Here are the results, edited for conciseness:
fun: 0.6931502233468916
message: '`gtol` termination condition is satisfied.'
x: array([1.00000063e+00, 8.21427026e-09, 2.40956900e-06])
Note that the function value and x values are essentially the same as in @CoryKramer's answer. The x arrays may look different at first glance, but both round to [1, 0, 0].
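A plausible explanation of the message itself (my reading, not something stated in the scipy docs): SLSQP works with linearised constraints, and at x0 = [0, 0, 0] the first constraint has value -1 with a zero gradient, so the linearised inequality cannot be satisfied by any step. At [1, 1, 1] the constraint is also violated, but its gradient is nonzero, so the solver can move back towards the feasible region. A quick check:

import numpy as np

def g1(x):
    # first inequality constraint and its gradient
    return x[0]**3 - x[1]**2 - 1

def g1_grad(x):
    return np.array([3*x[0]**2, -2*x[1], 0.0])

for start in ([0.0, 0.0, 0.0], [1.0, 1.0, 1.0]):
    print(start, g1(start), g1_grad(start))

# [0, 0, 0] -> value -1, gradient [0, 0, 0]: the linearised constraint
#              -1 + 0*d >= 0 has no solution for any step d, hence "incompatible"
# [1, 1, 1] -> value -1, gradient [3, -2, 0]: a step can restore feasibility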

Related

Not iterable Error with constraints in minimize from scipy.optimize

I just started studying optimization with Python and I am facing an issue.
I have a problem where I want to minimize my objective function (obj_fun) using minimize from scipy.optimize.
I will share an example:
import numpy as np

def analysis(A):
    N = []
    for i in A:
        N.append(i*3)
    return N

def cons(A):
    N = analysis(A)
    C = []
    for i in len(N):
        if N[i] < 2:
            C.append({'type': 'ineq', 'fun': lambda x: x[0]*N[i]})
        else:
            C.append({'type': 'ineq', 'fun': lambda x: x[0]-N[i]})
    return C

def obj_fun(A):
    """Objective function returns the weight of the structure"""
    w = 0.5*[1*A[0]+2*A[1]+3*A[2]]
    return w

# Initial values
A0 = np.array([0.001 for i in range(0, 3)])
N = analysis(A0)

## Optimization
bnds = [(1e-6, None) for i in range(len(A0))]
from scipy.optimize import minimize
sol = minimize(obj_fun, x0=A0, method='trust-constr', bounds=bnds,
               constraints=cons)
print(sol)
The whole error I get is:
runfile('C:/Users/Myc/Documents/Python Scripts/example stack.py', wdir='C:/Users/Myc/Documents/Python Scripts')
Traceback (most recent call last):
  File "C:\Users\Myc\Documents\Python Scripts\example stack.py", line 40, in <module>
    sol = minimize(obj_fun, x0=A0, method='trust-constr', bounds=bnds, constraints=cons)
  File "C:\Users\Myc\anaconda3\lib\site-packages\scipy\optimize\_minimize.py", line 605, in minimize
    constraints = standardize_constraints(constraints, x0, meth)
  File "C:\Users\Myc\anaconda3\lib\site-packages\scipy\optimize\_minimize.py", line 825, in standardize_constraints
    constraints = list(constraints)  # ensure it's a mutable sequence
TypeError: 'function' object is not iterable
I know the main problem is how I define the constraints. I could replace constraints=cons with constraints=C1 if I defined C1 = cons(A0) before the optimization.
However, that wouldn't help me, because I need the analysis function to be executed on every iteration of the optimization in order to update the parameters N used in the restrictions.
How can I define the constraints?
The original script:
def obj_fun(A):
    return 7*A[0] + 3*A[1] + 7*A[2]

def analysis(A):
    N = []
    for i in A:
        N.append(i*3)
    return N

def cons(A):
    n = analysis(A)
    C = []
    for i in range(len(A)):
        if n[i] < 4:
            C.append({'type': 'ineq', 'fun': lambda x: x[i]**2 / n[i]})
        else:
            C.append({'type': 'ineq', 'fun': lambda x: x[i] - n[i]})
    return C

A0 = [1, 2, 3]
C = cons(A0)
bnds = [(1e-6, None) for i in range(len(A0))]
from scipy.optimize import minimize
sol = minimize(obj_fun, x0=A0, method='trust-constr', bounds=bnds, constraints=C)
print(sol)
runs with:
/usr/local/lib/python3.8/dist-packages/scipy/optimize/_hessian_update_strategy.py:182: UserWarning: delta_grad == 0.0. Check if the approximated function is linear. If the function is linear better results can be obtained by defining the Hessian as zero instead of using quasi-Newton approximations.
warn('delta_grad == 0.0. Check if the approximated '
barrier_parameter: 0.00016000000000000007
barrier_tolerance: 0.00016000000000000007
cg_niter: 15
cg_stop_cond: 1
constr: [array([9.00009143]), array([4.57149698e-05]), array([4.57149698e-05]), array([2.38571416e-05, 5.43334162e-05, 9.00004571e+00])]
constr_nfev: [40, 40, 40, 0]
constr_nhev: [0, 0, 0, 0]
constr_njev: [0, 0, 0, 0]
constr_penalty: 1.0
constr_violation: 0.0
execution_time: 0.0873115062713623
fun: 63.00065000502843
grad: array([7. , 3. , 6.99999999])
jac: [array([[0. , 0. , 2.00001017]]), array([[0., 0., 1.]]), array([[0., 0., 1.]]), array([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])]
lagrangian_grad: array([1.77635684e-15, 1.55431223e-14, 5.67948534e-14])
message: '`gtol` termination condition is satisfied.'
method: 'tr_interior_point'
nfev: 40
nhev: 0
nit: 14
niter: 14
njev: 10
optimality: 5.679485337974424e-14
status: 1
success: True
tr_radius: 18734.614693588483
v: [array([-1.77775972e-05]), array([-3.49997333]), array([-3.49997333]), array([-7.00000000e+00, -3.00000000e+00, -1.77776895e-05])]
x: array([2.38571416e-05, 5.43334162e-05, 9.00004571e+00])
Here is what C looks like:
In [36]: C
Out[36]:
[{'type': 'ineq', 'fun': <function __main__.cons.<locals>.<lambda>(x)>},
{'type': 'ineq', 'fun': <function __main__.cons.<locals>.<lambda>(x)>},
{'type': 'ineq', 'fun': <function __main__.cons.<locals>.<lambda>(x)>}]
A0 is used to create the 3 constraint functions.
The analysis function just multiplies A by 3.
In [38]: analysis(A0)
Out[38]: [3, 6, 9]
In [39]: A0
Out[39]: [1, 2, 3]
In [40]: analysis(A0)
Out[40]: [3, 6, 9]
In [41]: np.array(A0)*3
Out[41]: array([3, 6, 9])
In the latest cons you dropped the range, and you pass cons itself rather than cons(A0). The constraints parameter is supposed to be a list of dicts, as shown in C above.
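If you really need analysis to be re-run on every iteration, one option is to move the call to analysis inside each constraint function, so that N is computed from the current iterate rather than from A0. A minimal sketch along those lines (using the toy analysis and objective from above; note that switching between the two expressions as n changes makes the constraints non-smooth, so this is only meant to illustrate the wiring):

from scipy.optimize import minimize

def analysis(A):
    # toy "analysis" from the question: just scales the design variables by 3
    return [a * 3 for a in A]

def obj_fun(A):
    return 7*A[0] + 3*A[1] + 7*A[2]

def make_constraints(n_vars):
    # Build one constraint per variable. Each constraint function calls
    # analysis(x) on the current iterate, so N is recomputed at every
    # evaluation instead of being frozen at A0. The default argument i=i
    # pins down the loop index (otherwise every function would see the last i).
    cons = []
    for i in range(n_vars):
        def g(x, i=i):
            n = analysis(x)
            if n[i] < 4:
                return x[i]**2 / n[i]
            return x[i] - n[i]
        cons.append({'type': 'ineq', 'fun': g})
    return cons

A0 = [1, 2, 3]
bnds = [(1e-6, None) for i in range(len(A0))]
sol = minimize(obj_fun, x0=A0, method='trust-constr',
               bounds=bnds, constraints=make_constraints(len(A0)))
print(sol.x)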

Scipy.optimize - constraints not giving expected output

I'm trying to replicate this simple optimisation problem in order to get started with scipy.optimize.
The problem is a classic product-mix problem, where the objective is to maximize profit given some production and ingredient constraints; in this case it is a coffee shop with three coffee types.
x0 -> Regular
x1 -> Latte
x2 -> Mocha
Constraints:
x0+x1+x2 <= 500
x1+x2 <= 350
x2 <= 125
Objective function
Maximize x0*1.5 + x1*2.00 + x2*2.25
Here is my code (please note that I run it in a notebook):
from scipy.optimize import minimize

# profit / cup
Reg = 1.25
Lat = 2.00
Moc = 2.25

# objective function to maximize
def objective(x):
    return (x[0]*Reg + x[1]*Lat + x[2]*Moc)

# constraints
def cons_total_production(x):
    return (sum(x)) - 500

def cons_choc(x):
    return (x[2]) - 125

def cons_milk(x):
    return (sum(x[1:])) - 350

cons1 = {'type': 'ineq', 'fun': cons_total_production}
cons2 = {'type': 'ineq', 'fun': cons_choc}
cons3 = {'type': 'ineq', 'fun': cons_milk}
cons = [cons1, cons2, cons3]

# boundaries
bnds = ((0, None), (0, None), (0, None))

# initial guess
x0 = [250, 150, 100]

# let scipy do its magic
sol = minimize(objective, x0, constraints=cons, bounds=bnds)
This produces the correct output. How is that even possible when I use scipy's minimize?
fun: 918.7499999999358
jac: array([ 1.25, 2. , 2.25])
message: 'Optimization terminated successfully.'
nfev: 50
nit: 10
njev: 10
status: 0
success: True
x: array([ 150., 225., 125.])
But when I try to add one more constraint, the output is wrong. If, for instance, I add a constraint stating that x0 must equal x1, I change the following and run the model again:
def cons_eq(x):
    return x[0] - x[1]

cons4 = {'type': 'eq', 'fun': cons_eq}
cons = [cons1, cons2, cons3, cons4]
But now my constraint x2 <= 125 is ignored:
fun: 937.4999999999934
jac: array([ 1.25, 2. , 2.25])
message: 'Optimization terminated successfully.'
nfev: 35
nit: 7
njev: 7
status: 0
success: True
x: array([ 150., 150., 200.])
Any suggestions? thx...
The problem lies in both your objective function and the constraints. Since you are using scipy's minimize function, you must minimize the negative of the objective in order to find its maximum (slightly tricky, but standard).
# objective function to maximize
def objective(x):
    return -1.0*(x[0]*Reg + x[1]*Lat + x[2]*Moc)
You have also written the inequality functions incorrectly. If you look at the scipy documentation, all inequalities are expected in the form g(x) >= 0. So, for example, if you want x2 <= 125, you must multiply by -1 (to flip the inequality) and then add 125, which gives g(x) = 125 - x2 >= 0. The same applies to the rest of your constraints:
# constraints, all rewritten as g(x) >= 0
def cons_total_production(x):
    return (-1.0*sum(x)) + 500

def cons_choc(x):
    return (-1.0*x[2]) + 125

def cons_milk(x):
    return (-1.0*sum(x[1:])) + 350
Which gives you the following outputs, first without and then with the extra equality constraint cons4:
fun: -918.7499999999989
jac: array([-1.25, -2. , -2.25])
message: 'Optimization terminated successfully.'
nfev: 35
nit: 7
njev: 7
status: 0
success: True
x: array([ 150., 225., 125.])
fun: -890.62499998809
jac: array([-1.25, -2. , -2.25])
message: 'Optimization terminated successfully.'
nfev: 10
nit: 2
njev: 2
status: 0
success: True
x: array([ 187.5, 187.5, 125. ])
Linear programming is so cool!
Hope it helps :)
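For completeness, here is a sketch of the full corrected script, reusing the bounds and starting point from the question (I take the per-cup profits from the question's code, i.e. Reg = 1.25 rather than the 1.5 in the problem statement):

from scipy.optimize import minimize

# profit / cup (values from the question's code)
Reg, Lat, Moc = 1.25, 2.00, 2.25

# negate the profit so that minimizing it maximizes profit
def objective(x):
    return -1.0*(x[0]*Reg + x[1]*Lat + x[2]*Moc)

# all inequalities written in scipy's g(x) >= 0 form
cons = [
    {'type': 'ineq', 'fun': lambda x: 500 - sum(x)},      # x0 + x1 + x2 <= 500
    {'type': 'ineq', 'fun': lambda x: 350 - sum(x[1:])},  # x1 + x2      <= 350
    {'type': 'ineq', 'fun': lambda x: 125 - x[2]},        # x2           <= 125
]

bnds = ((0, None),)*3
x0 = [250, 150, 100]

sol = minimize(objective, x0, constraints=cons, bounds=bnds)
print(sol.x, -sol.fun)   # expect roughly [150, 225, 125] and a profit of 918.75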

Scipy optimization with SLSQP disregards constraints

I would like to optimize the following function with scipy, adding the constraint x[0] - x[1] > 0. When I print this expression inside the objective function, it also takes negative values, yet the optimization terminates successfully. The final goal is something like minimizing sqrt(0.1*x[0]*x[1]), which fails with a math domain error.
import numpy as np
from scipy.optimize import minimize

def f(x):
    print(x[0] - x[1])
    # return sqrt(0.1*x[0]*x[1])
    return 0.1*x[0]*x[1]

def ineq_constraint(x):
    return x[0] - x[1]

con = {'type': 'ineq', 'fun': ineq_constraint}
x0 = [1, 1]
res = minimize(f, x0, method='SLSQP', constraints=con)
print(res)
And the output:
0.0
0.0
1.49011611938e-08
-1.49011611938e-08
0.0
0.0
1.49011611938e-08
-1.49011611938e-08
0.0
0.0
1.49011611938e-08
-1.49011611938e-08
4.65661176285e-10
4.65661176285e-10
1.53668223701e-08
-1.44355000176e-08
fun: 1.7509862319755833e-18
jac: array([ 3.95812066e-10, 4.42378184e-10, 0.00000000e+00])
message: 'Optimization terminated successfully.'
nfev: 16
nit: 4
njev: 4
status: 0
success: True
x: array([ 4.42378184e-09, 3.95812066e-09])
In general (we don't know your whole task), constraints are not enforced at every intermediate step, as you observed. Without changing the optimizer there is not much you can do, and even finding a more appropriate optimizer may not be easy.
For your case, it would work if your variables were nonnegative; whether that is something you can use in your real task, we don't know.
There are two approaches to enforcing nonnegativity:
inequalities
bounds
With bounds, explicit handling is used (as far as I know) and they will not be violated during the optimization.
Example:
import numpy as np
from scipy.optimize import minimize
from math import sqrt

def f(x):
    print(x)
    return sqrt(0.1*x[0]*x[1])

def ineq_constraint(x):
    return x[0] - x[1]

con = {'type': 'ineq', 'fun': ineq_constraint}
x0 = [1, 1]
res = minimize(f, x0, method='SLSQP', constraints=con,
               bounds=[(0, None) for i in range(len(x0))])
print(res)
Output:
[1. 1.]
[1. 1.]
[1.00000001 1. ]
[1. 1.00000001]
[0.84188612 0.84188612]
[0.84188612 0.84188612]
[0.84188613 0.84188612]
[0.84188612 0.84188613]
[0.05131671 0.05131669]
[0.05131671 0.05131669]
[0.05131672 0.05131669]
[0.05131671 0.0513167 ]
[0. 0.]
[0. 0.]
[1.49011612e-08 0.00000000e+00]
[0.00000000e+00 1.49011612e-08]
fun: 0.0
jac: array([0., 0.])
message: 'Optimization terminated successfully.'
nfev: 16
nit: 4
njev: 4
status: 0
success: True
x: array([0., 0.])
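For contrast, here is a sketch of the other approach mentioned above: nonnegativity via inequality constraints rather than bounds. Because SLSQP may still evaluate the objective at slightly infeasible points with this formulation, I clip the product before taking the square root (the clip is my addition, not part of the answer above):

import numpy as np
from scipy.optimize import minimize

def f(x):
    # clip tiny negative products so sqrt never sees a negative argument
    return np.sqrt(max(0.1*x[0]*x[1], 0.0))

cons = (
    {'type': 'ineq', 'fun': lambda x: x[0] - x[1]},   # x0 >= x1
    {'type': 'ineq', 'fun': lambda x: x[0]},          # x0 >= 0
    {'type': 'ineq', 'fun': lambda x: x[1]},          # x1 >= 0
)

x0 = [1, 1]
res = minimize(f, x0, method='SLSQP', constraints=cons)
print(res.x)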

Ineq and eq constraints with scipy.optimize.minimize()

I am attempting to understand the behavior of the constraints in scipy.optimize.minimize:
First, I create 4 assets and 100 scenarios of returns. Ordered by average return, best to worst, the assets are D > B > A > C.
# seed first
import numpy as np
import pandas as pd
from scipy import optimize

np.random.seed(1)
df_returns = pd.DataFrame(np.random.rand(100, 4) - 0.25, columns=list('ABCD'))
df_returns.head()
          A         B         C         D
0  0.167022  0.470324 -0.249886  0.052333
1 -0.103244 -0.157661 -0.063740  0.095561
2  0.146767  0.288817  0.169195  0.435220
3 -0.045548  0.628117 -0.222612  0.420468
4  0.167305  0.308690 -0.109613 -0.051899
and a set of weights
weights = pd.Series([0.25, 0.25, 0.25, 0.25], index=list('ABCD'))

      0
A  0.25
B  0.25
C  0.25
D  0.25
we create an objective function:
def returns_objective_function(weights, df_returns):
    result = -1. * (df_returns * weights).mean().sum()
    return result
and constraints and bounds
cons = ({'type': 'eq', 'fun': lambda weights: np.sum(weights) -1 })
bnds = ((0.01, .8), (0.01, .8), (0.01, .8), (0.01, .75))
Let's optimize
optimize.minimize(returns_objective_function, weights, (df_returns),
                  bounds=bnds, constraints=cons, method='SLSQP')
And we get success.
status: 0
success: True
njev: 8
nfev: 48
fun: -0.2885398923185326
x: array([ 0.01, 0.23, 0.01, 0.75])
message: 'Optimization terminated successfully.'
jac: array([-0.24384782, -0.2789166 , -0.21977262, -0.29300382, 0. ])
nit: 8
Now I wish to add constraints starting with a basic inequality:
The scipy.optimize.minimize documentation states:
Equality constraint means that the constraint function result is to be zero whereas inequality means that it is to be non-negative.
cons = (
    {'type': 'eq',   'fun': lambda weights: np.sum(weights) - 1},
    {'type': 'ineq', 'fun': lambda weights: np.sum(weights) + x}
)
Depending on x, I get unexpected behavior.
x = -100
Based on the bounds, the weights can sum to at most 3.15 and, because of the first equality constraint np.sum(weights) - 1, must sum to 1; as a result, np.sum(weights) + x would always be negative. I believe no solution should be found, yet scipy.optimize.minimize returns success.
With a simpler model I get the same behavior:
x = [1, 2]
optimize.minimize(
    lambda x: x[0]**2 + x[1]**2,
    x,
    constraints=(
        {'type': 'eq',   'fun': lambda x: x[0] + x[1] - 1},
        {'type': 'ineq', 'fun': lambda x: x[0] - 2}
    ),
    bounds=((0, None), (0, None)),
    method='SLSQP')
with results:
nfev: 8
fun: 2.77777777777712
nit: 6
jac: array([ 3.33333334e+00, 2.98023224e-08, 0.00000000e+00])
x: array([ 1.66666667e+00, 1.39888101e-14])
success: True
message: 'Optimization terminated successfully.'
status: 0
njev: 2
There should be some flag that this is an infeasible solution.
SLSQP is also available from R:
> slsqp(c(1,2),
+ function(x) {x[1]^2+x[2]^2},
+ heq=function(x){x[1]+x[2]-1},
+ hin=function(x){x[1]-2},
+ lower=c(0,0))
$par
[1] 1.666667e+00 4.773719e-11
$value
[1] 2.777778
$iter
[1] 105
$convergence
[1] -4
$message
[1] "NLOPT_ROUNDOFF_LIMITED: Roundoff errors led to a breakdown of the optimization algorithm. In this case, the returned minimum may still be useful. (e.g. this error occurs in NEWUOA if one tries to achieve a tolerance too close to machine precision.)"
At least we see some warning signals here.
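Until scipy itself flags this, one workaround is to evaluate the constraint functions at the returned point and treat the result as infeasible when they are violated. A minimal sketch, using the simpler model from above (the tolerance value is my choice):

from scipy.optimize import minimize

cons = (
    {'type': 'eq',   'fun': lambda x: x[0] + x[1] - 1},
    {'type': 'ineq', 'fun': lambda x: x[0] - 2},
)

res = minimize(lambda x: x[0]**2 + x[1]**2, [1, 2],
               constraints=cons, bounds=((0, None), (0, None)),
               method='SLSQP')

# check each constraint at the returned point
tol = 1e-8
feasible = all(
    abs(c['fun'](res.x)) <= tol if c['type'] == 'eq' else c['fun'](res.x) >= -tol
    for c in cons
)
print(res.success, feasible)   # success is True here even though feasible is False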

Best way to scale the matrix variables in SCIPY linear programming scheme

I have the following optimization scheme implemented with NNLS in scipy.
import numpy as np
from scipy.optimize import nnls
from scipy import stats

# Define problem
A = np.array([[60., 90., 120.],
              [30., 120., 90.]])
b = np.array([6700.5, 699.])

# Add ones to ensure the solution sums to 1
b = np.hstack([b, 1.0])
A = np.vstack([A, np.ones(3)])

x, rnorm = nnls(A, b)
print(x)
# the solution is
# [ 93.97933792  0.  0. ]
# we expect it to sum to 1 if it's not skewed
As you can see, the values in b are much larger than those in A.
My question is: what is the best or most reasonable way to scale A and b so that the solution is not skewed?
Note that both A and b are raw gene expression data without pre-processing.
If you want to include the equality constraint, you can't really use the nnls routine, since it doesn't cater for equalities. If you are limited to what's on offer in scipy, you can use this:
import numpy as np
from scipy.optimize import minimize

# Define problem
A = np.array([[60., 90., 120.],
              [30., 120., 90.]])
b = np.array([6700.5, 699.])

#-----------------------------
# I tried rescaling the data by adding these two lines,
# so that they're on the same scale,
# but why is the solution different?
#   x: array([ 1., 0., 0.])
# What's the correct way to go?
#-----------------------------
# A = A/np.linalg.norm(A, axis=0)
# b = b/np.linalg.norm(b)

def f(x):
    return np.linalg.norm(A.dot(x) - b)

cons = {'type': 'eq',
        'fun': lambda x: sum(x) - 1}

x0 = [1, 0, 0]  # initial guess
minimize(f, x0, method='SLSQP', bounds=((0, np.inf),)*3, constraints=cons)
Output:
status: 0
success: True
njev: 2
nfev: 10
fun: 6608.620222860367
x: array([ 0., 0., 1.])
message: 'Optimization terminated successfully.'
jac: array([ -62.50927734, -100.675354 , -127.78314209, 0. ])
nit: 2
This minimises the objective function directly while also imposing the equality constraint you're interested in.
If speed is important, you can add the jacobian and hessian information, or even better, use a proper QP solver, as supplied by cvxopt.
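As a sketch of the "add the jacobian" suggestion: for f(x) = ||Ax - b||, the gradient is A^T (Ax - b) / ||Ax - b|| whenever the residual is nonzero, and it can be passed to minimize via the jac argument (the zero-residual guard is my addition):

import numpy as np
from scipy.optimize import minimize

A = np.array([[60., 90., 120.],
              [30., 120., 90.]])
b = np.array([6700.5, 699.])

def f(x):
    return np.linalg.norm(A.dot(x) - b)

def grad_f(x):
    r = A.dot(x) - b
    nrm = np.linalg.norm(r)
    # guard against division by zero when the residual vanishes
    return A.T.dot(r) / nrm if nrm > 0 else np.zeros_like(x)

cons = {'type': 'eq', 'fun': lambda x: sum(x) - 1}
x0 = [1, 0, 0]
res = minimize(f, x0, jac=grad_f, method='SLSQP',
               bounds=((0, np.inf),)*3, constraints=cons)
print(res.x, res.fun)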
