Optimization in SciPy for n > 1000 - Python

I am doing a constrained optimization using SciPy's optimize module for 100K observations, but my results converge to the initial values for n > 1000. That is, if I perform the optimization on, say, 500 observations, I get a good local minimum, but as n approaches 1000 the optimized values come back identical to the initial values.
Here are my objective function and constraints:
Objective function:
def objective(x):
    return np.sum(np.array([-TU_array[i]*(NR_fn(x[i]) - LS_fn(x[i], pd_array[i])) for i in range(0, len(x))]))

# where
def NR_fn(x):
    R = (1.99525947856706E-13*x**3 - 0.000000299925*x**2 + 0.137682384749*x + 3648.729074580060)
    return R

def LS_fn(x, pd):
    L = x*pd
    return L
and constraints are :
def constraint1(x):
    return np.sum(x) - df["creditlimit"].sum()

def constraint3(x):
    return np.sum(np.array([LS_fn(x[i], pd_array[i]) for i in range(0, len(x))])) - 525000
b = (df["creditlimit"].min(),df["creditlimit"].max())
bnds = (b,)*len(df)
con1 = {'type': 'eq', 'fun': constraint1}
# con2 = {'type': 'eq', 'fun': constraint2}
con3 = {'type': 'eq', 'fun': constraint3}
cons = ([con1,con3])
x0 = np.array(df["creditlimit"])
solution = minimize(objective,x0,method='SLSQP', bounds=bnds,constraints = cons, options={'maxiter': 200, 'disp': True})
x = solution.x
Any help on how to go about this? Is there another tool I could use to minimize my function?
I tried changing the constraints and the functional form, expecting distinct optimized values for my observations, but I still got the initial values back.
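One direction that often helps SLSQP scale to thousands of variables is to vectorize the objective and pass an analytic gradient, so the solver does not have to build its Jacobian by finite differences over every observation. A minimal sketch only, with synthetic TU_array, pd_array, and starting values standing in for the real data and the cubic coefficients copied from NR_fn above:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2000                                   # illustrative problem size
TU_array = rng.uniform(0.5, 1.5, n)        # hypothetical stand-in data
pd_array = rng.uniform(0.01, 0.10, n)      # hypothetical stand-in data
x_init = rng.uniform(1000.0, 5000.0, n)    # hypothetical "credit limits"

# Cubic coefficients taken from NR_fn above
a3, a2, a1, a0 = 1.99525947856706e-13, -0.000000299925, 0.137682384749, 3648.729074580060

def objective(x):
    # Fully vectorized: no Python loop over observations
    nr = a3*x**3 + a2*x**2 + a1*x + a0
    return np.sum(-TU_array*(nr - x*pd_array))

def objective_grad(x):
    # Analytic gradient, one component per observation
    dnr = 3*a3*x**2 + 2*a2*x + a1
    return -TU_array*(dnr - pd_array)

cons = [{'type': 'eq',
         'fun': lambda x: np.sum(x) - x_init.sum(),
         'jac': lambda x: np.ones_like(x)}]
bnds = [(x_init.min(), x_init.max())]*n

solution = minimize(objective, x_init, jac=objective_grad, method='SLSQP',
                    bounds=bnds, constraints=cons,
                    options={'maxiter': 200, 'disp': True})

If the solver still stalls at the starting point, inspecting solution.status and solution.message shows whether SLSQP stopped on a line-search failure or simply hit maxiter.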

Related

Problems minimizing a function with 2 constraints - Python

I'm writing a program to minimize a function of several parameters subject to constraints and bounds. Just in case you want to run the program, the function is given by:
def Fnret(mins):
    Bj, Lj, a, b = mins.reshape(4, N)
    S1 = 0; S2 = 0
    Binf = np.zeros(N); Linf = np.zeros(N)
    for i in range(N):
        sbi = (Bi/2); sli = (Li/2)
        for j in range(i+1):
            sbi -= Bj[j]
            sli -= Lj[j]
        Binf[i] = sbi
        Linf[i] = sli
    for i in range(N):
        S1 += (C*(1-np.sin(a[i]))+T*np.sin(a[i])) * ((2*Bj[i]*Binf[i]+Bj[i]**2)/(np.tan(b[i])*np.cos(a[i]))) +\
              (C*(1-np.sin(b[i]))+T*np.sin(b[i])) * ((2*Bj[i]*Linf[i]+Lj[i]*Bj[i])/(np.sin(b[i])))
    S2 += (gamma*Bj[0]/(6*np.tan(b[0])))*((Bi/2)*(Li/2) + 4*(Binf[0]+Bj[0])*(Linf[0]+Lj[0]) + Binf[0]*Linf[0])
    j = 1
    while j < N:
        S2 += (gamma*Bj[j]/(6*np.tan(b[j])))*(Binf[j-1]*Linf[j-1] + 4*(Binf[j]+Bj[j])*(Linf[j]+Lj[j]) + Binf[j]*Linf[j])
        j += 1
    F = 2*(S1+S2)
    return F
where Bj, Lj, a, and b are the minimization variables, given as N-sized arrays with N being an input of the program. I double-checked the function and it is working correctly. My constraints are given by:
def Brhs(mins):  # Constraint over Bj
    return np.sum(mins.reshape(4,N)[0]) - Bi

def Lrhs(mins):  # Constraint over Lj
    return np.sum(mins.reshape(4,N)[1]) - Li

cons = [{'type': 'eq', 'fun': lambda Bj: 1.0*Brhs(Bj)},
        {'type': 'eq', 'fun': lambda Lj: 1.0*Lrhs(Lj)}]
In such a way that the sum of all components of Bj must be equal to Bi (same thing with Lj). The bounds of the variables are given by:
bounds = [(0,None)]*2*N + [(0,np.pi/2)]*2*N
For the problem to be reproducible, it's important to use the following inputs:
# Inputs:
gamma = 17.
C = 85.
T = C
Li = 1.
Bi = 0.5
N = 3
For the minimization, I'm using the cyipopt library (whose interface is just like scipy.optimize's). The minimization is then given by:
from cyipopt import minimize_ipopt
x0 = np.ones(4*N) # Initial guess
res = minimize_ipopt(Fnret, x0, constraints=cons, bounds=bounds)
The problem is that the result does not obey the constraints I imposed (i.e. the sum of the Bj or Lj values in the result differs from Bi or Li). But, for instance, if I keep only one of the two constraints (over Lj or Bj), it works fine for that variable. Maybe I'm missing something when using two constraints together and I can't find the error. Any help would be truly appreciated. Thank you in advance!
P.S.: In addition, I would like the function result F to be positive as well. How can I impose this condition? Thanks!
Not a complete answer, just some hints in arbitrary order:
Your initial point x0 is not feasible because it contradicts both of your constraints. This can easily be observed by evaluating your constraints at x0. Under the hood, Ipopt typically detects this and tries to find a feasible initial point. However, it's highly recommended to provide a feasible initial point whenever possible.
Your variable bounds are not well-defined. Evaluating your objective at your bounds yields multiple divisions by zero. For example, the denominator np.tan(b[i]) is zero if and only if b[i] = 0, so 0 isn't a valid value for all of your b[i]s. Proceeding similarly for the other terms, you should obtain 0 < b[i] < pi/2 and 0 ≤ a[i] < pi/2. Here, you can model the strict inequalities by 0 + eps ≤ b[i] ≤ pi/2 - eps and 0 ≤ a[i] ≤ pi/2 - eps, where eps is a sufficiently small positive number.
If you really want to impose that the objective function is always positive, you can simply add the inequality constraint Fnret(x) >= 0, i.e. {'type': 'ineq', 'fun': Fnret}.
In Code:
# bounds
eps = 1e-8
bounds = [(0, None)]*2*N + [(0, np.pi/2 - eps)]*N + [(0+eps, np.pi/2 - eps)]*N
# (feasible) initial guess
x0 = eps*np.ones(4*N)
x0[[0, N]] = [Bi-(N-1)*eps, Li-(N-1)*eps]
# constraints
cons = [{'type': 'eq', 'fun': Brhs},
        {'type': 'eq', 'fun': Lrhs},
        {'type': 'ineq', 'fun': Fnret}]
res = minimize_ipopt(Fnret, x0, constraints=cons, bounds=bounds, options={'disp': 5})
Last but not least, this still doesn't converge to a stationary point, so chances are that there's indeed no local minimum. From here, you can try experimenting with other (feasible!) initial points and double-check the math of your problem. It's also worth providing the exact gradient and constraint Jacobians.
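As a quick sanity check (my addition, assuming Brhs, Lrhs, Bi, Li, and N as defined above), the proposed start can be verified to be feasible before calling the solver:

import numpy as np

eps = 1e-8
x0 = eps*np.ones(4*N)
x0[[0, N]] = [Bi - (N-1)*eps, Li - (N-1)*eps]

# Both residuals should be (numerically) zero for a feasible initial point
print(Brhs(x0), Lrhs(x0))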
So, based on joni's suggestions, I could find a stationary point respecting the constraints by adopting the trust-constr method of scipy.optimize.minimize. My code now runs as follows:
import numpy as np
from scipy.optimize import minimize
# Inputs:
gamma = 17
C = 85.
T = C
Li = 2.
Bi = 1.
N = 3 # for instance
# Constraints:
def Brhs(mins):
    return np.sum(mins.reshape(4,N)[0]) - Bi/2

def Lrhs(mins):
    return np.sum(mins.reshape(4,N)[1]) - Li/2
# Function to minimize:
def Fnret(mins):
    Bj, Lj, a, b = mins.reshape(4, N)
    S1 = 0; S2 = 0
    Binf = np.zeros(N); Linf = np.zeros(N)
    for i in range(N):
        sbi = (Bi/2); sli = (Li/2)
        for j in range(i+1):
            sbi -= Bj[j]
            sli -= Lj[j]
        Binf[i] = sbi
        Linf[i] = sli
    for i in range(N):
        S1 += (C*(1-np.sin(a[i]))+T*np.sin(a[i])) * ((2*Bj[i]*Binf[i]+Bj[i]**2)/(np.tan(b[i])*np.cos(a[i]))) +\
              (C*(1-np.sin(b[i]))+T*np.sin(b[i])) * ((2*Bj[i]*Linf[i]+Lj[i]*Bj[i])/(np.sin(b[i])))
    S2 += (gamma*Bj[0]/(6*np.tan(b[0])))*((Bi/2)*(Li/2) + 4*(Binf[0]+Bj[0])*(Linf[0]+Lj[0]) + Binf[0]*Linf[0])
    j = 1
    while j < N:
        S2 += (gamma*Bj[j]/(6*np.tan(b[j])))*(Binf[j-1]*Linf[j-1] + 4*(Binf[j]+Bj[j])*(Linf[j]+Lj[j]) + Binf[j]*Linf[j])
        j += 1
    F = 2*(S1+S2)
    return F
eps = 1e-3
bounds = [(0,None)]*2*N + [(0+eps,np.pi/2-eps)]*2*N # Bounds
cons = ({'type': 'ineq', 'fun': Fnret},
        {'type': 'eq', 'fun': Lrhs},
        {'type': 'eq', 'fun': Brhs})
x0 = np.ones(4*N) # Initial guess
res = minimize(Fnret, x0, method='trust-constr', bounds = bounds, constraints=cons, tol=1e-6)
F = res.fun
Bj = (res.x).reshape(4,N)[0]
Lj = (res.x).reshape(4,N)[1]
ai = (res.x).reshape(4,N)[2]
bi = (res.x).reshape(4,N)[3]
This is essentially the same code, just with a different minimization technique. From np.sum(Bj) and np.sum(Lj) it is easy to see that the results agree with the constraints imposed, which was not the case previously.
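For reference (my addition), the constraint check mentioned above is just:

# In this version the constraints read sum(Bj) == Bi/2 and sum(Lj) == Li/2
print(np.isclose(np.sum(Bj), Bi/2), np.isclose(np.sum(Lj), Li/2))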

scipy.optimize.minimize fails with constraints for ordered solution vector

I am trying to solve a linear system with minimize (algorithm=SLSQP), with a set of constraints: the sum of the solution vector components has to be 1 (or at least very close to it, to minimize the error), and a second constraint enforces the vector components to be ordered, with x_0 being the largest and x_n having the smallest magnitude. Further, I set bounds, as each vector component has to lie between 0 and 1.
This is my code so far:
from scipy.optimize import minimize
import numpy as np
from scipy.sparse import rand
from scipy.optimize import lsq_linear
#Linear equation Ax = b
def fmin(x, A, b):
    y = np.dot(A, x) - b
    return np.dot(y, y)
#Test data
b = np.array([172,8,7.4,24,21,0.8,0.1])
A_t = np.array(
[[188,18.4,16.5,3.4,2.1,1.77,0.075],
[405,0,0,99.8,99.8,0,0.0054],
[90.5,0.4,0.009,19.7,15.6,1.06,0.012],
[322,0,0,79,79,0.3,0.3],
[0,0,0,0,0,0,0],
[362,0.25,0.009,89.2,0,0.43,0.019],
[37,1.4,0.2,7.3,1,4.5,0.1],
[26,0.29,0.038,6.1,2.4,0.4,0.053]])
A = A_t.T
#=========================
m = np.shape(A)[0]
n = np.shape(A)[1]
x0 = np.full(n, 0.5)
args = (A,b)
bnds = (( (1*10e-100, 1), )*n) #all x_i must be between 0 and 1
cons = [{'type': 'eq', 'fun': lambda x: 1.0-np.sum(x) }] #sum(x_i) = 1
#consider order of vectors as constraints
for i in range(n-1):
    cons = cons + [{'type': 'ineq', 'fun': lambda x: x[i] - x[i+1]}]
res = minimize(fmin, x0, args, method='SLSQP',
bounds=bnds,constraints=cons,tol=1e-2,options={'disp': False})
print ("\n res\n", res)
print("Sum of coefficients {}".format(np.sum(res.x)))
print("Difference vector:\n{}".format(np.dot(A,res.x) - b))
Unfortunately the algorithm fails with
res
fun: 317820.09898084006
jac: array([205597.34765625, 481389.625 , 105853.7265625 , 382592.76953125,
0. , 416196.953125 , 42268.78125 , 30196.81640625])
message: 'Positive directional derivative for linesearch'
nfev: 10
nit: 5
njev: 1
status: 8
success: False
x: array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5])
Sum of coefficients 4.0
Difference vector:
[5.4325e+02 2.3700e+00 9.7800e-01 1.2825e+02 7.8950e+01 3.4300e+00
1.8220e-01]
I would be very grateful if someone could help me to sort this out. In fact for the test data in this example I know that the right solution should be 0.58 for x_0 and 0.12 for x_2.
Many thanks in advance!
To clarify the discussion in the comments: Your objective function is nonlinear in the optimization variable x. Thus, this is a nonlinear optimization problem. The reason for the error message is quite simple: Your initial guess x0 lies outside the feasible region. You can easily verify that it doesn't fulfill your first constraint 1.0 - np.sum(x) = 0.
However, note also that you need to capture the loop variable i when creating your lambda constraint expressions in a loop. Otherwise, you add the constraint lambda x: x[n-2] - x[n-1] seven times. See, e.g. here for more details.
Creating the constraints correctly by
for i in range(n-1):
    cons = cons + [{'type': 'ineq', 'fun': lambda x, i=i: x[i] - x[i+1]}]
and providing the feasible initial guess x0 = np.ones(n)/n yields
fun: 114.90196737679031
jac: array([3550.86690235, 7911.74978828, 1787.36224174, 6290.006423 ,
0. , 7580.9283762 , 757.60069752, 528.41590595])
message: 'Optimization terminated successfully'
nfev: 66
nit: 6
njev: 6
status: 0
success: True
x: array([0.38655391, 0.08763516, 0.08763516, 0.08763516, 0.08763516,
0.08763515, 0.08763516, 0.08763516])
However, for larger problems it's highly recommended to rewrite your second constraint in vectorized form:
D = np.eye(n) - np.eye(n, k=1)
D[-1, -1] = 0.0
cons = [{'type': 'eq', 'fun': lambda x: 1.0 - np.sum(x)}]
cons += [{'type': 'ineq', 'fun': lambda x: D @ x}]
Now the solver only needs to evaluate 2 constraints instead of n+1 constraints in each iteration. You can further speed up the solver by providing gradients and Jacobians; see this answer for more details.
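For this particular problem the derivatives are cheap to write down (a sketch of my own, not from the linked answer): the objective ||Ax - b||^2 has gradient 2*A.T @ (A @ x - b), and both constraints are linear, so their Jacobians are constant:

def fmin_grad(x, A, b):
    # Gradient of ||Ax - b||^2 with respect to x
    return 2.0 * A.T @ (A @ x - b)

cons = [{'type': 'eq',   'fun': lambda x: 1.0 - np.sum(x), 'jac': lambda x: -np.ones(n)},
        {'type': 'ineq', 'fun': lambda x: D @ x,           'jac': lambda x: D}]

res = minimize(fmin, x0, args=(A, b), jac=fmin_grad, method='SLSQP',
               bounds=bnds, constraints=cons)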

Symbolic matrix and numpy usage error "TypeError: ufunc 'isfinite' not supported for the input types.."

I was trying to perform an optimization using scipy.optimize.minimize. I am looking to find all the variables Iz, Iy, J, kz, ky, Yc, Yg such that the error between the vector K_P_X and f is minimal, that is, the objective function K_P_X - f should be minimized. I think my mistake is related to the calculation involving numpy.linalg.norm(sol - f), where sol is assigned a symbolic vector (K_P_X). Due to the data-type conflict I am getting this error. If that's the case, Q1: can anyone please suggest a better way to represent the equality-constraint equation (i.e. constr1()) such that this error can be avoided? The full code is given below:
import scipy.optimize as optimize
from sympy import symbols,zeros,Matrix,Transpose
import numpy
# Symbolic K matrix
Zc,Yc,Zg,Yg=symbols("Zc Yc Zg Yg",real=True)
A,Iz,Iy,J,kz,ky,E,G,L=symbols("A Iz Iy J kz ky E G L",real=True,positive=True)
E=10400000;G=3909800;L=5
def phi_z():
    phi_z = (12*E*Iy)/(kz*A*G*L**2)
    return phi_z

def phi_y():
    phi_y = (12*E*Iz)/(ky*A*G*L**2)
    return phi_y
K_P=zeros(12,12)
K1=Matrix(([E*A/L,0,0],[0,(12*E*Iz)/((1+phi_y())*L**3),0],[0,0,(12*E*Iy)/((1+phi_z())*L**3)]))
K2=Matrix(([G*J/L,0,0],[0,E*Iy/L,0],[0,0,E*Iz/L]))
Q1=Matrix(([0,Zg,-Yg],[-Zc,0,L/2],[Yc,-L/2,0]))
Q1_T=Transpose(Q1)
Q2=Matrix(([0,Zg,-Yg],[-Zc,0,-L/2],[Yc,L/2,0]))
Q2_T=Transpose(Q2)
K11=K1; K12=K1*Q1; K13=-K1; K14=-K1*Q2; K22=Q1_T*K1*Q1+K2; K23=-Q1_T*K1; K24=-Q1_T*K1*Q2-K2; K33=K1; K34=K1*Q2; K44=Q2_T*K1*Q2+K2
K_P[0:3,0:3]=K11; K_P[0:3,3:6]=K12; K_P[0:3,6:9]=K13; K_P[0:3,9:12]=K14; K_P[3:6,3:6]=K22; K_P[3:6,6:9]=K23; K_P[3:6,9:12]=K24 ;K_P[6:9,6:9]=K33; K_P[6:9,9:12]=K34; K_P[9:12,9:12]=K44
## Converting the upper-triangular stiffness matrix to a symmetric stiffness matrix ##
for i in range(0, 12):
    for j in range(0, 12):
        K_P[j, i] = K_P[i, j]
K_P = K_P.subs({A: 7.55})
K_P = K_P.subs({Zc: 0})
K_P = K_P.subs({Zg: 0})
X= numpy.matrix([[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1]])
K_P_X=K_P*X
f= numpy.matrix([[-9346.76033789],[1595512.77906],[-1596283.83112],[274222.872543],[4234010.18889],[4255484.3549],[9346.76033789],[-1595512.77906],[1596283.83112],[-275173.513088],[3747408.91068],[3722085.0499]])
function=K_P_X-f
def Obj_func(variables):
    Iz, Iy, J, kz, ky, Yc, Yg = variables
    function = K_P_X - f  # The K_P_X matrix contains the variables Iz, Iy, J, kz, ky, Yc, Yg
    return function

def constr1(variables):
    sol = K_P_X  # Here the variables are in the symbolic vector K_P_X
    if numpy.allclose(sol, f):
        return 0.00  # Error is zero, so the required accuracy is reached; stop the optimization
    else:
        return numpy.linalg.norm(sol - f)
initial_guess=[10,10,10,0.1,0.1,0.001,0.001]
cons = ({'type':'eq', 'fun': constr1},{'type': 'ineq', 'fun': lambda variables: -variables[3]+1},{'type': 'ineq', 'fun': lambda variables: variables[3]-0.001},{'type': 'ineq', 'fun': lambda variables: -variables[4]+1},{'type': 'ineq', 'fun': lambda variables: variables[4]-0.001},{'type': 'ineq', 'fun': lambda variables: -variables[5]+0.5},{'type': 'ineq', 'fun': lambda variables: variables[5]-0},{'type': 'ineq', 'fun': lambda variables: -variables[6]+0.5},{'type': 'ineq', 'fun': lambda variables: variables[6]-0})
bnds = ((1, 60), (1, 60),(1, 60),(0.1, 1),(0.1, 1),(0.001, 0.5),(0.001, 0.5))
res=optimize.minimize(Obj_func,initial_guess, bounds=bnds,constraints=cons)
I'll list some of the things that are wrong here.
As hpaulj said, you can't directly pass SymPy objects to SciPy or NumPy. But you can lambdify and then use that in the minimization routine
Your minimization setup does not make sense. Minimizing a function with the constraint that that same function must be zero... this is not what constrained minimization means. Constraints are something different from the objective.
It's better to use least_squares here which is dedicated to minimizing the norm of the difference (some vector function - target vector).
With that in mind, here is your script reworked:
import scipy.optimize as optimize
from sympy import symbols, Matrix, lambdify
import numpy
Iz,Iy,J,kz,ky,Yc,Yg = symbols("Iz Iy J kz ky Yc Yg",real=True,positive=True)
K_P_X = Matrix([[37.7776503296448*Yg + 8.23411191827681],[-340.454138522391*Iz/(21.1513673253807*Iz/ky + 125)],[-9.4135635827062*Iy*Yc/(21.1513673253807*Iy/kz + 125) - 368.454956983948*Iy/(21.1513673253807*Iy/kz + 125)],[-9.4135635827062*Iy*Yc**2/(21.1513673253807*Iy/kz + 125) - 368.454956983948*Iy*Yc/(21.1513673253807*Iy/kz + 125) - 0.0589826136148473*J],[23.5339089567655*Iy*Yc/(21.1513673253807*Iy/kz + 125) + 2.62756822555969*Iy + 921.137392459871*Iy/(21.1513673253807*Iy/kz + 125)],[-5.00660515891599*Iz - 851.135346305977*Iz/(21.1513673253807*Iz/ky + 125) - 37.7776503296448*Yg**2 - 8.23411191827681*Yg],[-37.7776503296448*Yg - 8.23411191827681],[340.454138522391*Iz/(21.1513673253807*Iz/ky + 125)],[9.4135635827062*Iy*Yc/(21.1513673253807*Iy/kz + 125) + 368.454956983948*Iy/(21.1513673253807*Iy/kz + 125)],[9.4135635827062*Iy*Yc**2/(21.1513673253807*Iy/kz + 125) + 368.454956983948*Iy*Yc/(21.1513673253807*Iy/kz + 125) + 0.0589826136148473*J],[23.5339089567655*Iy*Yc/(21.1513673253807*Iy/kz + 125) - 2.62756822555969*Iy + 921.137392459871*Iy/(21.1513673253807*Iy/kz + 125)],[5.00660515891599*Iz - 851.135346305977*Iz/(21.1513673253807*Iz/ky + 125) + 37.7776503296448*Yg**2 + 8.23411191827681*Yg]])
f = Matrix([[-1],[-1],[-1],[-1.00059553353],[3.99999996539],[-5.99940443072],[1],[1],[1],[1],[1],[1]])
obj = lambdify([Iz,Iy,J,kz,ky,Yc,Yg], tuple(K_P_X - f))
initial_guess=[10,10,10,0.1,0.1,0.001,0.001]
bnds = ((1, 60), (1, 60),(1, 60),(0.1, 1),(0.1, 1),(0.001, 0.5),(0.001, 0.5))
lower = [a for (a, b) in bnds]
upper = [b for (a, b) in bnds]
res = optimize.least_squares(lambda x: obj(x[0], x[1], x[2], x[3], x[4], x[5], x[6]), initial_guess, bounds=(lower, upper))
print(res)
Changes:
Prior to lambdify, we should have a SymPy expression. So both K_P_X and f are SymPy matrices now.
Lambdified function takes 7 scalar arguments and returns a tuple of components of K_P_X - f
The bounds are separated into lower and upper, as the syntax of least_squares requires
We can't directly pass obj to least_squares, because it will receive one array parameter instead of 7 scalars. Hence the additional lambda step for unpacking the vector.
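As a small aside (my addition), the unpacking lambda can also be written with argument splatting, which keeps working if the number of symbols changes:

# Equivalent to the explicit 7-argument lambda above
res = optimize.least_squares(lambda x: obj(*x), initial_guess, bounds=(lower, upper))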
Believe it or not, minimization works. It returns res.x, the minimum point, as
[ 1.00000000e+00, 1.00000000e+00, 1.69406332e+01,
1.00000000e-01, 1.00000000e-01, 1.00000000e-03,
1.00000000e-03]
which looks suspiciously round at first, but this is only because the point hits against the bounds you placed (10, 1, 0.1 and so on). Only the third variable ended up with an inactive constraint.

why is scipy.minimize ignoring my constraints?

I have a constrained optimization problem that I am trying to solve using scipy.optimize.minimize.
Basically, I am fitting one parameter, f, to a system of ODEs with the constraint:
f > 0 and
f*1000 < 500
I wrote a MWE below. In this simple case, it is obvious that 0 < f < 0.5, but in my real problem the a-priori upper bound is not obvious, hence the inequality constraint.
from __future__ import division
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize
def myeqs(y, t, beta, gamma, N):
    dy0 = -(beta/N)*y[0]*y[1]
    dy1 = (beta/N)*y[0]*y[1] - gamma*y[1]
    dy2 = gamma*y[1]
    return [dy0, dy1, dy2]
def runModel(f, extraParams):
    [beta, gamma, N, initCond, time] = extraParams
    S0 = (1-f)*initCond[0]
    I0 = initCond[1]
    R0 = f*initCond[0]
    tspan = np.linspace(time[0], time[1], int(time[1] - time[0]))
    sol = odeint(myeqs, [S0, I0, R0], tspan, args=(beta, gamma, N))
    return sol
y0 = [1000, 1, 0]
extraParams = [0.5, 0.25, 1000, y0, [0,150]]
def computeSimple(f, extraParams):
    sol = runModel(f, extraParams)
    return np.abs((sol[-1,2] - sol[0,2]))

def c1(f):
    return f*1000.0 - 500.0
cons = ({'type': 'ineq', 'fun': c1})
#cons = ({'type': 'ineq',
# 'fun': lambda x: x[0]*1000 - 500}, )
res = minimize(computeSimple, 0.2, args=(extraParams, ), method='SLSQP', bounds=((0,1.5), ), constraints=cons)
print res.x
print c1(res.x)
If you run this, you will see that res.x always ends up at the upper bound of bounds, regardless of the constraints...
Where is my mistake?
thanks in advance
You got your constraint backward. This constraint:
def c1(f):
    return f*1000.0 - 500.0
is constraining f to be at least 0.5, instead of at most 0.5.
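A minimal fix along these lines (my sketch) flips the sign so that the 'ineq' convention c1(f) >= 0 encodes f*1000 <= 500:

def c1(f):
    # SLSQP treats 'ineq' constraints as c1(f) >= 0, so this enforces f*1000 <= 500
    return 500.0 - f*1000.0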

Numpy Minimize COBYLA Constraints

I am using scipy.optimize.minimize with the COBYLA method, but I seem to be unable to write the constraints properly, because when I check the values passed to the objective function, they do not respect those constraints.
Basically, the objective function accepts an array as argument, that must follow two constraints:
Each value in the array must be greater than 0
The sum of the values must be less than 1
So far I wrote it this way:
constraints = [{'type': 'ineq', 'fun': lambda x: 1 - sum(x)},
{'type': 'ineq', 'fun': lambda x: x[0]},
{'type': 'ineq', 'fun': lambda x: x[1]}]
However, sometimes I get values greater than 1...
Here is an example:
from __future__ import division
from math import pow, exp
import numpy as np
from scipy.optimize import minimize
nbStudy = 3
nbCYP = 2
raucObserved = [3.98, 2.0, 0.12]
IXmat = np.matrix([[-0.98, 0], [-0.3, -0.98], [7.7, 4.2]])
NBITER = 50
estimatedCR = []
raucPred = []
varR = [0.0085, 0.0048, 0.0110]
sdR = [0.0922, 0.0692, 0.1051]
cnstrts = [{'type': 'ineq', 'fun': lambda x: 1 - sum(x)},
{'type': 'ineq', 'fun': lambda x: x}]
def fun(CR):
    dum = []
    for i in range(nbStudy):
        crix = 0
        for j in range(nbCYP):
            crix += CR[j] * IXmat[i, j]
        raucPredicted = 1 / (1 + crix)
        dum.append(pow((np.log(raucPredicted) - np.log(raucObservedBiased[i])), 2) / varR[i])
    output = np.sum(dum)
    return output
for iter in range(NBITER):
    raucObservedBiased = []
    for k in range(nbStudy):
        raucObservedBiased.append(raucObserved[k] * exp(sdR[k] * np.random.normal()))
    initialCR = np.matrix([[(1 / nbCYP) * np.random.uniform()], [(1 / nbCYP) * np.random.uniform()]])
    output = minimize(fun, initialCR, method='COBYLA', constraints=cnstrts)
    estimatedCR.append(output.x)
Apparently it was a version problem; the issue has since been fixed. I was using Python 2.7 and SciPy 0.13.
You are not checking that the solver converged (output.success == True), and in your case it does not converge. If there is no convergence, nothing is guaranteed about the constraints.
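For example (my sketch, reusing the names from the question), the result can be filtered on convergence before being collected:

output = minimize(fun, initialCR, method='COBYLA', constraints=cnstrts)
if output.success:
    estimatedCR.append(output.x)
else:
    print("COBYLA did not converge:", output.message)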
