Created an objective function
Added constraints
The problem is that no matter what initial guess I use, the minimize function just keeps returning that number. For example, if I use 15 as the initial guess, the solver will not try any other number and says the answer is 15. I'm sure there is an issue with the code, but I am not sure where.
CODE BELOW:
from scipy.optimize import minimize
import numpy as np
import pandas as pd
#----------------------------------------------------
#-------- Create Function ------------
#----------------------------------------------------
def MovingAverage(Input, N, test=0):
    # Create data frame
    df = pd.DataFrame(Input, columns=['Revenue'])
    # Add columns
    df['CummSum'] = df['Revenue'].cumsum()
    df['Mavg'] = df['Revenue'].rolling(int(N)).mean()  # window must be an integer
    df['Error'] = df['Revenue'] - df['Mavg']
    df['MFE'] = (df['Error']).mean()
    df['MAD'] = np.fabs(df['Error']).mean()
    df['MSE'] = np.sqrt(np.square(df['Error']).mean())
    df['TS'] = np.sum(df['Error'])/df['MAD']
    print(N, df.MAD[0])
    if test == 0:
        return df.MAD[0]
    else:
        return df
#----------------------------------------------------
#-------- Input ------------
#----------------------------------------------------
data = [1,2,3,4,5,5,5,5,5,5,5,5,5,5,5]
#----------------------------------------------------
#-------- SOLVER ------------
#----------------------------------------------------
## Objective Function
fun = lambda x: MovingAverage(data, x[0])
## Constraints
cons = ({'type': 'ineq', 'fun': lambda x: x[0] - 2},          # N >= 2
        {'type': 'ineq', 'fun': lambda x: len(data) - x[0]})  # N <= len(data)
## Bounds (not sure what this is yet)
bnds = [(None, None)]
## Solver
res = minimize(fun, 15, method='SLSQP', bounds=bnds, constraints=cons)
## Uncomment to inspect the result object:
##print(res)
##print(res.status)
##print(res.success)
##print(res.njev)
##print(res.nfev)
##print(res.fun)
##print(res.x)
##print(res.message)
##print(res.jac)
##print(res.nit)
# print final results
result = MovingAverage(data, res.x[0], 1)
print(result)
List of possible values (N and the resulting MAD):

N    MAD
2    0.142857142857
3    0.25641025641
4    0.333333333333
5    0.363636363636
6    0.333333333333
7    0.31746031746
8    0.3125
9    0.31746031746
10   0.333333333333
11   0.363636363636
12   0.416666666667
13   0.487179487179
14   0.571428571429
15   0.666666666667
Your function is piecewise constant between integer input values, as a plot of it sampled in steps of 0.1 on the x axis makes clear (plot not reproduced here).
So the derivative is zero at almost all points, and that's why a gradient-based minimization method will return any given initial point as a local minimum.
To rescue the situation, you could use interpolation in the objective function to get intermediate function values for non-integer inputs. Combined with a gradient-based minimization, this might find some point around 8 as a local minimum when starting at 15. Alternatively, since N only takes integer values and the search space is tiny, you can simply evaluate every admissible N, as in the sketch below.
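A minimal sketch of that direct scan (assuming the MovingAverage function defined above):
# evaluate the MAD for every admissible window length and keep the best
candidates = range(2, len(data) + 1)
mads = {n: MovingAverage(data, n) for n in candidates}
best_N = min(mads, key=mads.get)
print(best_N, mads[best_N])   # expected from the table above: 8 0.3125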
Related
I have a piece of code that worked well when I optimized an advertising budget with 2 variables (channels), but when I added additional channels it stopped optimizing, with no error messages.
import numpy as np
import scipy.optimize as sco
# setup variables
media_budget = 100000 # total media budget
media_labels = ['launchvideoviews', 'conversion', 'traffic', 'videoviews', 'reach'] # channel names
media_coefs = [0.3524764781, 5.606903166, -0.1761937775, 5.678596017, 10.50445914] # model coefficients
media_drs = [-1.15, 2.09, 6.7, -0.201, 1.21] # diminishing returns
const = -243.1018144
# the function for our model
def model_function(x, media_coefs, media_drs, const):
    # transform variables and multiply them by coefficients to get contributions
    channel_1_contrib = media_coefs[0] * x[0]**media_drs[0]
    channel_2_contrib = media_coefs[1] * x[1]**media_drs[1]
    channel_3_contrib = media_coefs[2] * x[2]**media_drs[2]
    channel_4_contrib = media_coefs[3] * x[3]**media_drs[3]
    channel_5_contrib = media_coefs[4] * x[4]**media_drs[4]
    # sum contributions and add constant
    y = channel_1_contrib + channel_2_contrib + channel_3_contrib + channel_4_contrib + channel_5_contrib + const
    # return negative conversions for the minimize function to work
    return -y
# set up guesses, constraints and bounds
num_media_vars = len(media_labels)
guesses = num_media_vars*[media_budget/num_media_vars,] # starting guesses: divide budget evenly
args = (media_coefs, media_drs, const) # pass non-optimized values into model_function
con_1 = {'type': 'eq', 'fun': lambda x: np.sum(x) - media_budget} # so we can't go over budget
constraints = (con_1)
bound = (0, media_budget) # spend for a channel can't be negative or higher than budget
bounds = tuple(bound for x in range(5))
# run the SciPy Optimizer
solution = sco.minimize(model_function, x0=guesses, args=args, method='SLSQP', constraints=constraints, bounds=bounds)
# print out the solution
print(f"Spend: ${round(float(media_budget),2)}\n")
print(f"Optimized CPA: ${round(media_budget/(-1 * solution.fun),2)}")
print("Allocation:")
for i in range(len(media_labels)):
    print(f"-{media_labels[i]}: ${round(solution.x[i],2)} ({round(solution.x[i]/media_budget*100,2)}%)")
And the result is
Spend: $100000.0
Optimized CPA: $-0.0
Allocation:
-launchvideoviews: $20000.0 (20.0%)
-conversion: $20000.0 (20.0%)
-traffic: $20000.0 (20.0%)
-videoviews: $20000.0 (20.0%)
-reach: $20000.0 (20.0%)
Which is the same as the initial guesses argument.
Thank you very much!
Update: following @joni's comment, I passed the gradient function explicitly, but still no result. I don't know how to change the constraints to test @chthonicdaemon's comment yet.
import numpy as np
import scipy.optimize as sco
# setup variables
media_budget = 100000 # total media budget
media_labels = ['launchvideoviews', 'conversion', 'traffic', 'videoviews', 'reach'] # channel names
media_coefs = [0.3524764781, 5.606903166, -0.1761937775, 5.678596017, 10.50445914] # model coefficients
media_drs = [-1.15, 2.09, 6.7, -0.201, 1.21] # diminishing returns
const = -243.1018144
# the function for our model
def model_function(x, media_coefs, media_drs, const):
    # transform variables and multiply them by coefficients to get contributions
    channel_1_contrib = media_coefs[0] * x[0]**media_drs[0]
    channel_2_contrib = media_coefs[1] * x[1]**media_drs[1]
    channel_3_contrib = media_coefs[2] * x[2]**media_drs[2]
    channel_4_contrib = media_coefs[3] * x[3]**media_drs[3]
    channel_5_contrib = media_coefs[4] * x[4]**media_drs[4]
    # sum contributions and add constant (objective function)
    y = channel_1_contrib + channel_2_contrib + channel_3_contrib + channel_4_contrib + channel_5_contrib + const
    # return negative conversions for the minimize function to work
    return -y
# partial derivative of the objective function
def fun_der(x, media_coefs, media_drs, const):
    d_chan1 = 1
    d_chan2 = 1
    d_chan3 = 1
    d_chan4 = 1
    d_chan5 = 1
    return np.array([d_chan1, d_chan2, d_chan3, d_chan4, d_chan5])
# set up guesses, constraints and bounds
num_media_vars = len(media_labels)
guesses = num_media_vars*[media_budget/num_media_vars,] # starting guesses: divide budget evenly
args = (media_coefs, media_drs, const) # pass non-optimized values into model_function
con_1 = {'type': 'eq', 'fun': lambda x: np.sum(x) - media_budget} # so we can't go over budget
constraints = (con_1)
bound = (0, media_budget) # spend for a channel can't be negative or higher than budget
bounds = tuple(bound for x in range(5))
# run the SciPy Optimizer
solution = sco.minimize(model_function, x0=guesses, args=args, method='SLSQP', constraints=constraints, bounds=bounds, jac=fun_der)
# print out the solution
print(f"Spend: ${round(float(media_budget),2)}\n")
print(f"Optimized CPA: ${round(media_budget/(-1 * solution.fun),2)}")
print("Allocation:")
for i in range(len(media_labels)):
    print(f"-{media_labels[i]}: ${round(solution.x[i],2)} ({round(solution.x[i]/media_budget*100,2)}%)")
The reason you are not able to solve this exact problem turns out to be all about the specific coefficients you have. For the problem as specified, the optimum appears to be near allocations where some spends are zero. However, at spends near zero, due to the negative exponents in media_drs, the objective function rapidly becomes infinite. I believe this is what is causing the issues you are experiencing. I can get a solution with success = True by changing the 6.7 to 0.7 in the coefficients and setting a lower bound larger than 0 to stop the objective function from exploding. So this isn't so much a programming issue as a problem-formulation issue.
I cannot imagine you would actually see more payoff when you reduce the budget on a particular item, so all the negative powers in media_drs seem off to me.
I will also post here some improvements I made while debugging this issue. Notice that I'm using numpy arrays more to make some of the functions easier to read. Also notice how I have calculated a correct jacobian:
import numpy as np
import scipy.optimize as sco
# setup variables
media_budget = 100000 # total media budget
media_labels = ['launchvideoviews', 'conversion', 'traffic', 'videoviews', 'reach'] # channel names
media_coefs = np.array([0.3524764781, 5.606903166, -0.1761937775, 5.678596017, 10.50445914]) # model coefficients
media_drs = np.array([-1.15, 2.09, 1.7, -0.201, 1.21]) # diminishing returns
const = -243.1018144
# the function for our model
def model_function(x, media_coefs, media_drs, const):
    # transform variables and multiply them by coefficients to get contributions
    channel_contrib = media_coefs * x**media_drs
    # sum contributions and add constant
    y = channel_contrib.sum() + const
    # return negative conversions for the minimize function to work
    return -y

def model_function_jac(x, media_coefs, media_drs, const):
    dy_dx = media_coefs * media_drs * x**(media_drs-1)
    return -dy_dx
# set up guesses, constraints and bounds
num_media_vars = len(media_labels)
guesses = num_media_vars*[media_budget/num_media_vars,] # starting guesses: divide budget evenly
args = (media_coefs, media_drs, const) # pass non-optimized values into model_function
con_1 = {'type': 'ineq', 'fun': lambda x: media_budget - sum(x)} # so we can't go over budget
constraints = (con_1,)
bound = (10, media_budget) # spend for a channel can't be negative or higher than budget
bounds = tuple(bound for x in range(5))
# run the SciPy Optimizer
solution = sco.minimize(
    model_function, x0=guesses, args=args,
    method='SLSQP',
    jac=model_function_jac,
    constraints=constraints,
    bounds=bounds
)
# print out the solution
print(solution)
print(f"Spend: ${round(float(media_budget),2)}\n")
print(f"Optimized CPA: ${round(media_budget/(-1 * solution.fun),2)}")
print("Allocation:")
for i in range(len(media_labels)):
    print(f"-{media_labels[i]}: ${round(solution.x[i],2)} ({round(solution.x[i]/media_budget*100,2)}%)")
This solution at least "works" in the sense that it reports a successful solve and returns an answer different from the initial guess.
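As an aside, a quick way to sanity-check an analytic jacobian like this one against numerical differentiation is scipy.optimize.check_grad; a minimal sketch, reusing the variables defined above:
from scipy.optimize import check_grad

x_test = np.full(5, 20000.0)
# check_grad returns the norm of the difference between the analytic and the
# finite-difference gradient; it should be small relative to the gradient itself
err = check_grad(model_function, model_function_jac, x_test,
                 media_coefs, media_drs, const)
print(err)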
I am trying to use scipy.integrate.odeint to solve a second-order ordinary differential equation.
I can do that for a single value of the constant k that appears in the equation.
But I want to run the solution for many values of k.
To do so, I put the values I want in a list k, and in a loop I try to plug each value into the final solution as an extra argument.
However, I am getting an error
error: Extra arguments must be in a tuple
import numpy as np
from scipy.integrate import odeint
### Code with a single value of k. THAT WORKS FINE!!!! ###
k = 1 #attribute to be changed
t = [0.1,0.2,0.3] #Data
init = [45,0] #initial values

#Function to apply an integration
def f(init, t, args=(k,)):
    dOdt = init[1]
    dwdt = -np.cos(init[0]) + k*dOdt
    return [dOdt, dwdt]

#integrating function that returns a list of 2D numpy arrays
zCH = odeint(f,init,t)
################################################################
### Code that DOES NOT WORK! ###
k = [1,2,3] #attributes to be changed
t = [0.1,0.2,0.3] #Data
init = [45,0] #initial values

#Function to apply an integration
def f(init, t, args=(k,)):
    dOdt = init[1]
    dwdt = -np.cos(init[0]) + k*dOdt
    return [dOdt, dwdt]

solutions = []
for i in k:
    #integrating function that returns a list of 2D numpy arrays
    zCH = odeint(f,init,t,(k[i-1]))
    solutions.append(zCH)
It has to do with the way you are passing k into your function f(). odeint expects extra arguments as a tuple via its args parameter, and (k[i-1]) is just a parenthesized value, not a tuple (that would be (k[i-1],)). The cleanest fix is to make k an explicit parameter of f and pass a new value through args on each iteration:
k_list = [1,2,3] #attributes to be changed
t = [0.1,0.2,0.3] #Data
init = [45,0] #initial values

#Function to apply an integration; k is now an explicit parameter
def f(init, t, k):
    dOdt = init[1]
    dwdt = -np.cos(init[0]) + k*dOdt
    return [dOdt, dwdt]

solutions = []
for k in k_list:
    #odeint forwards the extra argument in the args tuple to f
    zCH = odeint(f, init, t, args=(k,))
    solutions.append(zCH)
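An equivalent alternative, if you prefer not to thread arguments through odeint, is to bind k beforehand with functools.partial (a minimal sketch, assuming the f(init, t, k) defined above):
from functools import partial

# partial(f, k=k) yields a function of (init, t) only, with k frozen in
solutions = [odeint(partial(f, k=k), init, t) for k in k_list]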
Sample Code: https://pastebin.com/L8qV1eWQ
Link to PDF/Image of the code: https://imgur.com/tiPcB8M
import numpy as np
from scipy.optimize import minimize
import math as m
import random
#Inputs:
## X = A 100x6 array of 600 organized variables.
def func(X):
    x = 0
    for i in range(0,100):
        for j in range(0,6):
            x = x + X[i][j]
    return -1.0*x
#Inputs:
## a = the A matrix constructed below
## d = the d array constructed below
#Outputs:
# res - result of optimization
def Optimize(a,c,d):
    ### Constraints ###
    cons = list()
    for i in range(0,100):
        cons.append(
            {'type' : 'ineq',
             'fun' : lambda X : np.array([ X[i][0]+X[i][1]+X[i][2]+X[i][3]+X[i][4]+X[i][5]-d ]),
             'jac' : lambda X : np.ones(6) })
    for j in range(0,6):
        cons.append(
            {'type' : 'ineq',
             'fun' : lambda X : np.array([ sum(X[i][j] for i in range(0,100)) - c ]),
             'jac' : lambda X : np.ones(100) })
    bnds = list()
    for i in range(0,100):
        for j in range(0,6):
            bnds.append((0,A[i][j]))
    res = minimize(func, np.zeros(shape=(100,6)), jac=np.ones(600),
                   bounds=bnds, constraints=cons, method='SLSQP',
                   options={'disp': True})
    return res
##Create sample matrix and arrays of values##
A = np.zeros(shape=(100,6))

def p1(A,temp):
    for i in range(0,temp.size):
        j = int(random.uniform(0, 6))
        A[i][j] = temp[i]
    return A

numbers = np.array([4*(m.ceil(0.05*m.e**(0.08*i))) for i in range(0,100)])
A = p1(A,numbers)

d = [[] for i in range(0,100)]
for i in range(0,100):
    d[i] = np.hstack(np.random.poisson(lam=(m.ceil(0.05*m.e**(0.08*i))), size=(1)))
Optimize(A,370,d)
I am working on a project to replicate the results of a 2017 paper.
Part of the paper involves solving a specific optimization problem called the max-flow problem.
I have constructed a simple example of sufficient scale to replicate the problem I am having. I am new to using the optimization algorithms in Python.
The problem seems to be a linear optimization problem with ordinary inequality constraints on the sum of each row and the sum of each column, plus box constraints (bounds) on each individual element.
So I use the constraints argument for the row-sum and column-sum constraints, and the bounds argument to bound each of the individual matrix elements.
The error I am getting seems to be coming from the jacobian.
Since my problem is given in the form of a matrix, I am assuming that each element of the matrix is its own variable and is linear; the matrix is 100 x 6, so there are 600 linear variables, and hence the gradient would be a vector of ones of length 600.
It seems the algorithm does not accept my jacobian and attempts to calculate its own, but then runs into a ValueError. Though I could be wrong.
I hope this is enough information to help me understand the fix.
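For reference, three things about how minimize treats this setup: it flattens x0 to a 1-D vector before calling the objective and constraints, a 'jac' entry must be a callable rather than a bare array, and lambdas created in a loop need the loop variable bound via a default argument (otherwise they all see its final value). A minimal sketch of the same problem with those three issues addressed, assuming A, d and c=370 as constructed above:
import numpy as np
from scipy.optimize import minimize

m, nvars = 100, 6  # matrix shape; minimize sees a flat vector of length 600

def func(x):
    X = x.reshape(m, nvars)
    return -X.sum()

def func_jac(x):
    # objective is minus the sum of all entries, so the gradient is -1 everywhere
    return -np.ones(m * nvars)

def make_row_con(i, d_i):
    g = np.zeros(m * nvars)
    g[i * nvars:(i + 1) * nvars] = 1.0  # ones on row i of the flattened matrix
    return {'type': 'ineq',
            'fun': lambda x, i=i, d_i=d_i: x.reshape(m, nvars)[i].sum() - d_i,
            'jac': lambda x, g=g: g}

def make_col_con(j, c):
    g = np.zeros(m * nvars)
    g[j::nvars] = 1.0  # ones on column j of the flattened matrix
    return {'type': 'ineq',
            'fun': lambda x, j=j, c=c: x.reshape(m, nvars)[:, j].sum() - c,
            'jac': lambda x, g=g: g}

cons = ([make_row_con(i, float(d[i])) for i in range(m)] +
        [make_col_con(j, 370) for j in range(nvars)])
bnds = [(0, A[i][j]) for i in range(m) for j in range(nvars)]
res = minimize(func, np.zeros(m * nvars), jac=func_jac, bounds=bnds,
               constraints=cons, method='SLSQP', options={'disp': True})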
I have two sets of frequency data, one from experiment and one from a theoretical formula, and I want to fit them using the minimize function of scipy.
Here's my code snippet, where g is the coupling I want to find out and ind is the inductance for plotting on the x-axis.
from scipy.optimize import minimize
import numpy as np

def eigenfreq1_func(ind,w_q,w_r,g):
    return (w_q+w_r)+np.sqrt((w_q+w_r)**2.0-4*(w_q+w_r-g**2.0))/2

def eigenfreq2_func(ind,w_q,w_r,g):
    return (w_q+w_r)-np.sqrt((w_q+w_r)**2.0-4*(w_q+w_r-g**2))/2.0

def err_func(y1,y1_fit,y2,y2_fit):
    return np.sqrt((y1-y1_fit)**2+(y2-y2_fit)**2)

g_init = 80e6
res1 = eigenfreq1_func(ind,qubit_freq,readout_freq,g_init)
print(res1)
res2 = eigenfreq2_func(ind,qubit_freq,readout_freq,g_init)
print(res2)
fit = minimize(err_func,args=[qubit_freq,res1,readout_freq,res2])
But it's showing the following error :
"TypeError: minimize() takes at least 2 arguments (2 given)"
First, the indentation in your example is messed up; I hope you didn't try to run it like that.
Second, here is a baby example minimizing a chi2 with scipy.optimize.minimize (note you can minimize whatever you want: a likelihood, |chi|**2, anything, etc.):
import numpy as np
import scipy.optimize as opt
def functionyouwanttofit(x,y,z,t,u):
    return np.array([x+y+z+t+u, x+y+z+t-u, x+y+z-t-u, x+y-z-t-u]) # baby test here but put what you want

def calc_chi2(parameters):
    x,y,z,t,u = parameters
    data = np.array([100,250,300,500])
    chi2 = np.sum((data - functionyouwanttofit(x,y,z,t,u))**2)
    return chi2
# baby example for init, min & max values
x_init = 0
x_min = -1
x_max = 10
y_init = 1
y_min = -2
y_max = 9
z_init = 2
z_min = 0
z_max = 1000
t_init = 10
t_min = 1
t_max = 100
u_init = 10
u_min = 1
u_max = 100
parameters = [x_init,y_init,z_init,t_init,u_init]
bounds = [[x_min,x_max],[y_min,y_max],[z_min,z_max],[t_min,t_max],[u_min,u_max]]
result = opt.minimize(calc_chi2,parameters,bounds=bounds)
In your example you don't give initial values, and then there's the indentation... Were you expecting someone to do the job for you?
Third, note that the optimization routines proposed by scipy are not always adapted to your needs; you may prefer minimizers such as lmfit.
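Applied back to the original question, the immediate problem is that minimize was called without an initial guess: it requires at least the objective and x0, and the objective's first argument must be the parameter vector being optimized. A minimal sketch of the fix (exp_freq1 and exp_freq2 are hypothetical names for the experimental data arrays, which the question does not show):
def objective(params):
    g = params[0]
    y1_fit = eigenfreq1_func(ind, qubit_freq, readout_freq, g)
    y2_fit = eigenfreq2_func(ind, qubit_freq, readout_freq, g)
    # exp_freq1/exp_freq2: measured frequencies (hypothetical variable names)
    return np.sum(err_func(exp_freq1, y1_fit, exp_freq2, y2_fit))

fit = minimize(objective, x0=[g_init])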
I am trying to build an efficient frontier as in the Markowitz problem.
I have written the code below, but I get the error "ValueError: Objective function must return a scalar". I have tested 'fun' with some values; for example, typing this into the console:
W = np.ones([n])/n # start optimization with equal weights
cov_matrix = returns.cov()
fun = 0.5*np.dot(np.dot(W, cov_matrix), W) # variance of the portfolio
fun
The output is 0.00015337622774133828, which is a scalar.
I don't know what might be wrong. Any help is appreciated.
Code:
from scipy.optimize import minimize
import pandas as pd
import numpy as np
from openpyxl import load_workbook
wb = load_workbook('path/Assets_3.xlsx') # in this workbook there is data for returns.
# The next lines clean unnecessary first column and first row.
ws = wb.active
df = pd.DataFrame(ws.values)
df1 = df.drop(0,axis=1)
df1 = df1.drop(0)
df1 = df1.astype(float)
rf = 0.05
r_bar = 0.05
returns = df1.copy()
def efficient_frontier(rf, r_bar, returns):
    n = len(returns.transpose())
    W = np.ones([n])/n # start optimization with equal weights
    exp_ret = returns.mean()
    cov_matrix = returns.cov()
    fun = 0.5*np.dot(np.dot(W, cov_matrix), W) # variance of the portfolio
    cons = ({'type': 'eq', 'fun': lambda W: sum(W) - 1. },
            {'type': 'ineq', 'fun': lambda W: np.dot(exp_ret,W) - r_bar })
    bnds = [(0.,1.) for i in range(n)] # weights between 0..1.
    res = minimize(fun, W, (returns, cov_matrix, rf),
                   method='SLSQP', bounds = bnds, constraints = cons)
    return res
x= efficient_frontier(rf,r_bar,returns)
x
Some Data
1 2 3
1 0.060206 0.005781 0.001117
2 0.006463 -0.007390 0.001133
3 -0.003211 -0.015730 0.001167
4 0.044227 -0.006250 0.001225
5 -0.040571 -0.006910 0.001292
6 -0.007900 -0.006160 0.001208
7 0.068702 0.013836 0.001300
8 0.039286 0.009854 0.001350
9 0.012457 -0.007950 0.001358
10 -0.013758 0.001021 0.001283
11 -0.002616 -0.013600 0.001300
12 0.059004 -0.006090 0.001442
13 0.015566 0.002818 0.001308
14 -0.036454 0.001395 0.001283
15 0.058899 0.011072 0.001325
16 -0.043086 0.017070 0.001308
17 0.023156 -0.003350 0.001392
18 0.063705 0.000301 0.001417
19 0.017628 -0.001960 0.001508
20 -0.014567 -0.006990 0.001525
21 -0.007191 -0.013000 0.001425
22 -0.000815 0.014773 0.001450
23 0.046493 -0.001540 0.001542
24 0.051832 -0.008580 0.001742
25 -0.007151 0.001177 0.001633
26 -0.018196 -0.008680 0.001642
27 -0.013513 -0.008810 0.001675
28 -0.026493 -0.010510 0.001825
29 -0.003249 -0.014750 0.001800
30 0.001222 0.022258 0.001758
This code is a mess, and while I can show you something which runs, that does not mean anything.
You will see convergence to your starting point, whatever that means for your task! It's a strong indicator that something is still very wrong (it might be the underlying theory)!
Some additional remarks:
scipy's optimizers are built to work with numpy arrays, not pandas DataFrame or Series objects!
the only things in your original question which hinted at pandas usage were a var-name df and returns.cov(), which does not exist for numpy arrays!
rf is never used anywhere!
there are multiple things in optimize's args which are not used!
it does not feel like a problem one should use scipy's optimizers for! (but it's possible; we are paying for numerical differentiation here, for example)
cvxpy would probably be a much, much better approach (cleaner, faster, more accurate) if I interpret the problem correctly (I did not analyze it much); see the sketch after the output below
but the same rules apply: some python-knowledge is needed!
Code:
from scipy.optimize import minimize
import numpy as np
import pandas as pd
rf = 0.05
r_bar = 0.05
returns = pd.DataFrame(np.random.randn(30, 3), columns=list('ABC')) # PANDAS DF
cov_matrix = returns.cov().to_numpy() # use PANDAS one last time (as_matrix() was removed from pandas)
# but result = np.array!
returns = returns.to_numpy() # From now on: np-only!
def fun(x, returns, cov_matrix, rf):
    return 0.5*np.dot(np.dot(x, cov_matrix), x)

def efficient_frontier(rf, r_bar, returns):
    n = len(returns.transpose())
    W = np.ones([n])/n # start optimization with equal weights
    exp_ret = returns.mean()
    cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1. }, # let's use numpy here
            {'type': 'ineq', 'fun': lambda x: np.dot(exp_ret, x) - r_bar })
    bnds = [(0.,1.) for i in range(n)] # weights between 0..1.
    res = minimize(fun, W, (returns, cov_matrix, rf),
                   method='SLSQP', bounds = bnds, constraints = cons)
    return res
x= efficient_frontier(rf,r_bar,returns)
print(x)
Output:
A B C
A 0.813375 -0.001370 0.173901
B -0.001370 1.482756 0.380514
C 0.173901 0.380514 1.285936
fun: 0.2604530793556774
jac: array([ 0.32863522, 0.62063321, 0.61345008])
message: 'Optimization terminated successfully.'
nfev: 35
nit: 7
njev: 3
status: 0
success: True
x: array([ 0.33333333, 0.33333333, 0.33333333])
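As mentioned in the remarks above, this maps naturally onto cvxpy; a minimal sketch of the same minimum-variance problem (assuming cov_matrix and r_bar as defined above, and that exp_ret is the vector of per-asset mean returns):
import cvxpy as cp

n = cov_matrix.shape[0]
w = cp.Variable(n)
# minimize portfolio variance subject to budget, target-return and box constraints
objective = cp.Minimize(0.5 * cp.quad_form(w, cov_matrix))
constraints = [cp.sum(w) == 1,
               exp_ret @ w >= r_bar,
               w >= 0,
               w <= 1]
prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.value, w.value)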