Maximising a function - python

Experts, I'm trying to maximize a function my_obj with the Nelder-Mead algorithm to fit my data. For this I have taken help from scipy's optimize.fmin. I think I am very close to the solution but am missing something and getting an error like:

As explained in the scipy.optimize.minimize documentation, you should pass a single 1-D array (or a 1-D list, which is also accepted) to your objective function instead of multiple parameters:
#!/usr/bin/env python
import numpy as np
from scipy.optimize import minimize

d1 = np.array([ 5.0, 10.0, 15.0, 20.0, 25.0])
h = np.array([10000720600.0, 10011506200.0, 10057741200.0, 10178305100.0, 10415318500.0])
b = 2.0
cx = 2.0

# objective function
def obj_function(x):  # EDIT: input is a list (or 1-D array)
    m, n, r = x
    pw = 1/cx
    c = b*cx
    x1 = 1 + (d1/n)**c
    x2 = 1 + (d1/m)**c
    x3 = (x1/x2)**pw
    dcal = r*x3
    dobs = h
    deld = (np.log10(dcal) - np.log10(dobs))**2
    return np.sum(deld)

print(obj_function([5.0, 10.0, 15.0]))  # EDIT: input is a list
x0 = [5.0, 10.0, 15.0]
print(obj_function(x0))
res = minimize(obj_function, x0, method='nelder-mead')
print(res)
Output:
% python3 script.py
432.6485766651165
432.6485766651165
final_simplex: (array([[7.76285924e+00, 3.02470699e-04, 1.93396980e+01],
[7.76286507e+00, 3.02555020e-04, 1.93397231e+01],
[7.76285178e+00, 3.01100639e-04, 1.93397381e+01],
[7.76286445e+00, 3.01025402e-04, 1.93397169e+01]]), array([0.12196442, 0.12196914, 0.12197448, 0.12198028]))
fun: 0.12196441986340725
message: 'Optimization terminated successfully.'
nfev: 130
nit: 67
status: 0
success: True
x: array([7.76285924e+00, 3.02470699e-04, 1.93396980e+01])
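If you then want to work with the fitted parameters, a minimal follow-up sketch (using the res object from the run above) could look like this:
# unpack the optimized parameters from the result object
m_fit, n_fit, r_fit = res.x
print("m =", m_fit, " n =", n_fit, " r =", r_fit)
print("final misfit:", res.fun)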


Trying to maximize this simple non linear problem using #gekko but getting this error (#error: No solution found)

positions = ["AAPL", "NVDA", "MS", "CI", "HON"]
cov = df_ret.cov()
ret = df_ret.mean().values
weights = np.array(np.random.random(len(positions)))

def maximize(weights):
    std = np.sqrt(np.dot(np.dot(weights.T, cov), weights))
    p_ret = np.dot(ret.T, weights)
    sharpe = p_ret/std
    return sharpe

a = GEKKO()
w1 = a.Var(value=0.2, lb=0, ub=1)
w2 = a.Var(value=0.2, lb=0, ub=1)
w3 = a.Var(value=0.2, lb=0, ub=1)
w4 = a.Var(value=0.2, lb=0, ub=1)
w5 = a.Var(value=0.2, lb=0, ub=1)
a.Equation(w1+w2+w3+w4+w5 <= 1)
weight = np.array([w1, w2, w3, w4, w5])
a.Obj(-maximize(weight))
a.solve(disp=False)
I am trying to figure out why it gives "no solution found" as an error. df_ret is a DataFrame of returns for the stocks in positions (its contents are not shown here). I am trying to maximize the Sharpe ratio, and w1 to w5 are the weights, whose sum must be less than or equal to 1.
Here is a solution with gekko:
from gekko import GEKKO
import numpy as np
import pandas as pd

positions = ["AAPL", "NVDA", "MS", "CI", "HON"]
df_ret = pd.DataFrame(np.array([[.001729, .014603, .036558, .016772, .001983],
                                [-0.015906, .006396, .012796, -.002163, 0],
                                [-0.001849, -.019598, .014484, .036856, .019292],
                                [.006648, .002161, -.020352, -.007580, 0.022083],
                                [-.008821, -.014016, -.006512, -.015802, .012583]]))
cov = df_ret.cov().values
ret = df_ret.mean().values

def obj(weights):
    std = a.sqrt(np.dot(np.dot(weights.T, cov), weights))
    p_ret = np.dot(ret.T, weights)
    sharpe = p_ret/std
    return sharpe

a = GEKKO()
w = a.Array(a.Var, len(positions), value=0.2, lb=1e-5, ub=1)
a.Equation(a.sum(w) <= 1)
a.Maximize(obj(w))
a.solve(disp=False)
print(w)
A couple of things I've done to the problem: I used the Array function to create the weight variables w, and I switched to the gekko sqrt so that it does automatic differentiation for the objective function. I also added a lower bound of 1e-5 to avoid sqrt(0) and a divide by zero. The Obj() function minimizes, so I removed the negative sign and used the Maximize() function to make it more readable. It produces this solution for w:
[[1e-05] [0.15810629919] [0.19423029287] [1e-05] [0.6476428726]]
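As a hedged usage sketch (assuming the w array from the solved model above), the optimized weights can be read back as plain floats from each GEKKO variable's .value list:
# each GEKKO variable stores its solution in the .value list
w_opt = [wi.value[0] for wi in w]
print(w_opt)
print(sum(w_opt))   # should be <= 1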
Many are more familiar with scipy. Here is a benchmark problem where the same problem is solved with scipy.optimize.minimize and gekko. There is also a link to that same solution with MATLAB fmincon or gekko with MATLAB.
I am not familiar with GEKKO so I can't really help with that package, but in case someone doesn't answer how to do it using GEKKO, here's a potential solution with scipy.optimize.minimize:
from scipy.optimize import minimize
import numpy as np
import pandas as pd

def OF(weights, cov, ret, sign=1.0):
    std = np.sqrt(np.dot(np.dot(weights.T, cov), weights))
    p_ret = np.dot(ret.T, weights)
    sharpe = p_ret/std
    return sign*sharpe

if __name__ == '__main__':
    x0 = np.array([0.2, 0.2, 0.2, 0.2, 0.2])
    df_ret = pd.DataFrame(np.array([[.001729, .014603, .036558, .016772, .001983],
                                    [-0.015906, .006396, .012796, -.002163, 0],
                                    [-0.001849, -.019598, .014484, .036856, .019292],
                                    [.006648, .002161, -.020352, -.007580, 0.022083],
                                    [-.008821, -.014016, -.006512, -.015802, .012583]]))
    cov = df_ret.cov()
    ret = df_ret.mean().values
    minx0 = np.repeat(0, [len(x0)], axis=0)
    maxx0 = np.repeat(1, [len(x0)], axis=0)
    bounds = tuple(zip(minx0, maxx0))
    cons = {'type': 'ineq',
            'fun': lambda weights: 1 - sum(weights)}
    res_cons = minimize(OF, x0, (cov, ret, -1), bounds=bounds, constraints=cons, method='SLSQP')
    print(res_cons)
    print('Current value of objective function: ' + str(res_cons['fun']))
    print('Current value of controls:')
    print(res_cons['x'])
which outputs:
fun: -2.1048843911794486
jac: array([ 5.17067784e+00, -2.36839056e-04, -6.24716282e-04, 6.56819057e+00,
2.45392323e-04])
message: 'Optimization terminated successfully.'
nfev: 69
nit: 9
njev: 9
status: 0
success: True
x: array([5.47832097e-14, 1.52927443e-01, 1.87864415e-01, 5.32258098e-14,
6.26433468e-01])
Current value of objective function: -2.1048843911794486
Current value of controls:
[5.47832097e-14 1.52927443e-01 1.87864415e-01 5.32258098e-14
6.26433468e-01]
The sign parameter is added here because in order to maximize the objective function you just minimize OF*(-1). I set the default to 1 (minimize), but I pass -1 in args to change it.
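To illustrate the sign trick in isolation, here is a minimal hedged sketch (not from the original answer) that maximizes a simple concave function f(x) = -(x - 3)^2 by minimizing sign*f with sign = -1:
from scipy.optimize import minimize

def f(x, sign=1.0):
    return sign * -(x[0] - 3.0)**2   # maximum at x = 3

res = minimize(f, [0.0], args=(-1.0,))  # minimize -f(x), i.e. maximize f(x)
print(res.x)   # approximately [3.]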

Excel Solver solution in Python. Scipy not working

I have a simple non-linear optimization project. I want to find the discount rate for future cash flows and terminal value so that the sum equals the specified NPV. Below are some experiments that I have tried.
Both companies have a fixed cash flow of 10 with different NPVs. The discount rate results should be 1.074 (7.4%) and 1.052 (5.2%) respectively. Excel Solver found the roots quickly, while SciPy returned NoConvergence.
import numpy as np
from scipy.optimize import newton_krylov
from scipy.optimize.nonlin import NoConvergence

cf_fy1 = [10]*2
cf_fy2 = [10]*2
cf_fy3 = [10]*2
cf_fy4 = [10]*2
cf_fy5 = [10]*2
cf_fy6 = [10]*2
npv = [200, 400]

def mydr(dr):
    terminal_value = np.divide(cf_fy6, np.subtract(dr, 1.03))
    ev = np.sum([np.divide(cf_fy1, np.power(dr, 1)),
                 np.divide(cf_fy2, np.power(dr, 2)),
                 np.divide(cf_fy3, np.power(dr, 3)),
                 np.divide(cf_fy4, np.power(dr, 4)),
                 np.divide(cf_fy5, np.power(dr, 5)),
                 np.divide(terminal_value, np.power(dr, 5))], axis=0)
    z = np.subtract(ev, npv)
    return abs(z)

try:
    sol = newton_krylov(mydr, [1.1] * len(npv))
    converged = True
except NoConvergence as e:
    sol = e.args[0]
    converged = False
Thanks all in advance!
According to the docs, the Newton-Krylov method is (only?) suitable for solving large-scale problems, and it doesn't converge from your initial point. Since it's a very simple problem, I'd use the general root method instead:
In [13]: from scipy.optimize import root
In [14]: root(mydr, x0 = [1.1, 1.1])
Out[14]:
fjac: array([[-9.99999700e-01, 7.74730469e-04],
[-7.74730469e-04, -9.99999700e-01]])
fun: array([9.03350275e-06, 1.53610404e-06])
message: 'The solution converged.'
nfev: 40
qtf: array([-9.03230997e-06, -1.54310211e-06])
r: array([ -4128.02172068, -37514.05364792, 19083.3896212 ])
status: 1
success: True
x: array([1.07391362, 1.05176871])
If needed, you can select the solver via the method option (note the different initial point):
In [15]: root(mydr, x0 = [1.05, 1.05], method="krylov")
Out[15]:
fun: array([3.97903932e-13, 1.68824954e-11])
message: 'A solution was found at the specified tolerance.'
nit: 7
status: 1
success: True
x: array([1.07391362, 1.05176871])
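As a quick hedged sanity check (assuming mydr from the question and root imported as above), you can plug the solution back into the residual function:
sol = root(mydr, x0=[1.1, 1.1])
print(sol.x)        # ~[1.07391362, 1.05176871], i.e. 7.4% and 5.2%
print(mydr(sol.x))  # residuals close to zero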

Scipy: How can I use Bounds with trust-constr?

For my constrained problem I want to use the SciPy trust-constr algorithm, as I have a multivariable, constrained problem. I don't want to / can't calculate the Jacobian/Hessian analytically, so I want it computed numerically.
However, when setting the bounds, the computation of the Jacobian crashes:
File "C:\Python27\lib\site-packages\scipy\optimize\_trustregion_constr\tr_interior_point.py", line 56, in __init__
self.jac0 = self._compute_jacobian(jac_eq0, jac_ineq0, s0)
File "C:\Python27\lib\site-packages\scipy\optimize\_trustregion_constr\tr_interior_point.py", line 164, in _compute_jacobian
[J_ineq, S]]))
File "C:\Python27\lib\site-packages\numpy\matrixlib\defmatrix.py", line 1237, in bmat
arr_rows.append(concatenate(row, axis=-1))
ValueError: all the input array dimensions except for the concatenation axis must match exactly
The error occurs both when using old style bounds and the newest Bounds object. I could reproduce the error with this code:
import numpy as np
import scipy.optimize as scopt

def RosenbrockN(x):
    result = 0
    for i in range(len(x)-1):
        result += 100*(x[i+1]-x[i]**2)**2 + (1-x[i])**2
    return result

x0 = [0.0, 0.0, 0.0]
#bounds = scopt.Bounds([-2.0,-0.5,-2.0],[2.0,0.8,0.7])
bounds = [(-2.0, 2.0), (-0.5, 0.8), (-2.0, 0.7)]
Res = scopt.minimize(RosenbrockN, x0,
                     method='trust-constr', bounds=bounds,
                     jac='2-point', hess=scopt.SR1())
I take it that I just misunderstand how bounds are set, but I can't find my mistake. Advice is appreciated.
EDIT: I also tried the code example from the documentation, which gave the same result. Other methods such as SLSQP work well with bounds.
SciPy version 1.1.0, Python version 2.7.4, OS Win 7 Ent.
I tried several times with the method "trust-constr" and the boundary constraints failed to be incorporated. I solved this issue by using linear constraints for the boundary conditions, following this example:
from scipy.optimize import minimize, LinearConstraint, Bounds

def RosenbrockN(x):
    result = 0
    for i in range(len(x)-1):
        result += 100*(x[i+1]-x[i]**2)**2 + (1-x[i])**2
    return result

x0 = [0.0, 0.0, 0.0]

# This will not work:
#bounds = Bounds([-2.0,-0.5,-2.0],[2.0,0.8,0.7])

# This works
lb = [-2.0, -0.5, -2.0]
ub = [2.0, 0.8, 0.7]
A = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
lcons = LinearConstraint(A, lb=lb, ub=ub, keep_feasible=True)
Res = minimize(RosenbrockN, x0, method='trust-constr', constraints=lcons)
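As a side note (a minimal sketch reusing RosenbrockN and x0 from above, and assuming a SciPy release newer than the 1.1.0 mentioned in the question), the Bounds object is documented to work directly with trust-constr, so the original call should then run as written:
from scipy.optimize import minimize, Bounds, SR1

bounds = Bounds([-2.0, -0.5, -2.0], [2.0, 0.8, 0.7])
Res = minimize(RosenbrockN, x0, method='trust-constr',
               bounds=bounds, jac='2-point', hess=SR1())
print(Res.x)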
I removed your jac and hess arguments and got it to work; perhaps the problem lies there?
import numpy as np
import scipy.optimize as scopt

def RosenbrockN(x):
    result = 0
    for i in range(len(x)-1):
        result += 100*(x[i+1]-x[i]**2)**2 + (1-x[i])**2
    return result

x0 = [0.0, 0.0, 0.0]
#bounds = scopt.Bounds([-2.0,-0.5,-2.0],[2.0,0.8,0.7])
bounds = [(-2.0, 2.0), (-0.5, 0.8), (-2.0, 0.7)]
Res = scopt.minimize(RosenbrockN, x0,
                     method='SLSQP', bounds=bounds)
print(Res)
Result is
fun: 0.051111012543332675
jac: array([-0.00297706, -0.50601892, -0.00621008])
message: 'Optimization terminated successfully.'
nfev: 95
nit: 18
njev: 18
status: 0
success: True
x: array([0.89475126, 0.8 , 0.63996894])

Optimization using scipy

I am trying to build an efficient frontier as in the Markowitz problem.
I have written the code below, but I get the error "ValueError: Objective function must return a scalar". I have tested 'fun' with some values, for example, I input to the console:
W = np.ones([n])/n # start optimization with equal weights
cov_matrix = returns.cov()
fun = 0.5*np.dot(np.dot(W, cov_matrix), W) # variance of the portfolio
fun
The output is 0.00015337622774133828, which is a scalar.
I don't know what might be wrong. Any help is appreciated.
Code:
from scipy.optimize import minimize
import pandas as pd
import numpy as np
from openpyxl import load_workbook

wb = load_workbook('path/Assets_3.xlsx') # in this workbook there is data for returns.
# The next lines clean the unnecessary first column and first row.
ws = wb.active
df = pd.DataFrame(ws.values)
df1 = df.drop(0, axis=1)
df1 = df1.drop(0)
df1 = df1.astype(float)

rf = 0.05
r_bar = 0.05
returns = df1.copy()

def efficient_frontier(rf, r_bar, returns):
    n = len(returns.transpose())
    W = np.ones([n])/n                  # start optimization with equal weights
    exp_ret = returns.mean()
    cov_matrix = returns.cov()
    fun = 0.5*np.dot(np.dot(W, cov_matrix), W)  # variance of the portfolio
    cons = ({'type': 'eq', 'fun': lambda W: sum(W) - 1. },
            {'type': 'ineq', 'fun': lambda W: np.dot(exp_ret, W) - r_bar })
    bnds = [(0., 1.) for i in range(n)] # weights between 0..1.
    res = minimize(fun, W, (returns, cov_matrix, rf),
                   method='SLSQP', bounds=bnds, constraints=cons)
    return res

x = efficient_frontier(rf, r_bar, returns)
x
Some Data
1 2 3
1 0.060206 0.005781 0.001117
2 0.006463 -0.007390 0.001133
3 -0.003211 -0.015730 0.001167
4 0.044227 -0.006250 0.001225
5 -0.040571 -0.006910 0.001292
6 -0.007900 -0.006160 0.001208
7 0.068702 0.013836 0.001300
8 0.039286 0.009854 0.001350
9 0.012457 -0.007950 0.001358
10 -0.013758 0.001021 0.001283
11 -0.002616 -0.013600 0.001300
12 0.059004 -0.006090 0.001442
13 0.015566 0.002818 0.001308
14 -0.036454 0.001395 0.001283
15 0.058899 0.011072 0.001325
16 -0.043086 0.017070 0.001308
17 0.023156 -0.003350 0.001392
18 0.063705 0.000301 0.001417
19 0.017628 -0.001960 0.001508
20 -0.014567 -0.006990 0.001525
21 -0.007191 -0.013000 0.001425
22 -0.000815 0.014773 0.001450
23 0.046493 -0.001540 0.001542
24 0.051832 -0.008580 0.001742
25 -0.007151 0.001177 0.001633
26 -0.018196 -0.008680 0.001642
27 -0.013513 -0.008810 0.001675
28 -0.026493 -0.010510 0.001825
29 -0.003249 -0.014750 0.001800
30 0.001222 0.022258 0.001758
This code is a mess and, while I can show you something which runs, that does not mean anything.
You will see convergence to your starting point, whatever that means for your task! It's a strong indicator that something is still very wrong (it might be the underlying theory)!
Some additional remarks:
scipy's optimizers are built to work with numpy arrays, not pandas DataFrame or Series objects!
the only things in your original question which hinted at pandas usage were a var name df and returns.cov(), which does not exist for numpy arrays!
rf is never used anywhere!
there are multiple things in optimize's args which are not used!
it does not feel like a problem one should use scipy's optimizers for! (but it's possible; here we are paying for numerical differentiation, for example)
cvxpy would probably be a much, much better approach (cleaner, faster, more accurate) if I interpret the problem correctly (did not analyze much); a minimal cvxpy sketch is shown after the output below
but the same rules apply: some python knowledge is needed!
Code:
from scipy.optimize import minimize
import numpy as np
import pandas as pd

rf = 0.05
r_bar = 0.05
returns = pd.DataFrame(np.random.randn(30, 3), columns=list('ABC'))  # PANDAS DF
cov_matrix = returns.cov().as_matrix()  # use PANDAS one last time, but result = np.array!
                                        # (in recent pandas .as_matrix() is gone; use .to_numpy())
returns = returns.as_matrix()           # From now on: np-only!

def fun(x, returns, cov_matrix, rf):
    return 0.5*np.dot(np.dot(x, cov_matrix), x)

def efficient_frontier(rf, r_bar, returns):
    n = len(returns.transpose())
    W = np.ones([n])/n                  # start optimization with equal weights
    exp_ret = returns.mean()
    cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1. },   # let's use numpy here
            {'type': 'ineq', 'fun': lambda x: np.dot(exp_ret, x) - r_bar })
    bnds = [(0., 1.) for i in range(n)] # weights between 0..1.
    res = minimize(fun, W, (returns, cov_matrix, rf),
                   method='SLSQP', bounds=bnds, constraints=cons)
    return res

x = efficient_frontier(rf, r_bar, returns)
print(x)
Output:
A B C
A 0.813375 -0.001370 0.173901
B -0.001370 1.482756 0.380514
C 0.173901 0.380514 1.285936
fun: 0.2604530793556774
jac: array([ 0.32863522, 0.62063321, 0.61345008])
message: 'Optimization terminated successfully.'
nfev: 35
nit: 7
njev: 3
status: 0
success: True
x: array([ 0.33333333, 0.33333333, 0.33333333])
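Since cvxpy was suggested in the remarks above, here is a minimal, hedged cvxpy sketch of the same kind of mean-variance problem on synthetic data (my own toy reformulation, not the original answer's code):
import cvxpy as cp
import numpy as np

np.random.seed(0)
returns = np.random.randn(30, 3)              # toy return data
cov_matrix = np.cov(returns, rowvar=False)    # sample covariance
exp_ret = returns.mean(axis=0)                # expected returns
r_bar = exp_ret.mean()                        # an attainable return target for this toy data

w = cp.Variable(3)
objective = cp.Minimize(0.5 * cp.quad_form(w, cov_matrix))   # portfolio variance
constraints = [cp.sum(w) == 1,        # fully invested
               exp_ret @ w >= r_bar,  # minimum expected return
               w >= 0, w <= 1]        # long-only weights in [0, 1]
prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.status, w.value)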

How to display progress of scipy.optimize function?

I use scipy.optimize to minimize a function of 12 arguments.
I started the optimization a while ago and still waiting for results.
Is there a way to force scipy.optimize to display its progress (like how much is already done, and what the current best point is)?
As mg007 suggested, some of the scipy.optimize routines allow for a callback function (unfortunately leastsq does not permit this at the moment). Below is an example using the "fmin_bfgs" routine where I use a callback function to display the current value of the arguments and the value of the objective function at each iteration.
import numpy as np
from scipy.optimize import fmin_bfgs

Nfeval = 1

def rosen(X):  # Rosenbrock function
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

def callbackF(Xi):
    global Nfeval
    print('{0:4d} {1: 3.6f} {2: 3.6f} {3: 3.6f} {4: 3.6f}'.format(Nfeval, Xi[0], Xi[1], Xi[2], rosen(Xi)))
    Nfeval += 1

print('{0:4s} {1:9s} {2:9s} {3:9s} {4:9s}'.format('Iter', ' X1', ' X2', ' X3', 'f(X)'))
x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
[xopt, fopt, gopt, Bopt, func_calls, grad_calls, warnflg] = \
    fmin_bfgs(rosen,
              x0,
              callback=callbackF,
              maxiter=2000,
              full_output=True,
              retall=False)
The output looks like this:
Iter X1 X2 X3 f(X)
1 1.031582 1.062553 1.130971 0.005550
2 1.031100 1.063194 1.130732 0.004973
3 1.027805 1.055917 1.114717 0.003927
4 1.020343 1.040319 1.081299 0.002193
5 1.005098 1.009236 1.016252 0.000739
6 1.004867 1.009274 1.017836 0.000197
7 1.001201 1.002372 1.004708 0.000007
8 1.000124 1.000249 1.000483 0.000000
9 0.999999 0.999999 0.999998 0.000000
10 0.999997 0.999995 0.999989 0.000000
11 0.999997 0.999995 0.999989 0.000000
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 11
Function evaluations: 85
Gradient evaluations: 17
At least this way you can watch as the optimizer tracks the minimum.
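The same callback also works with the newer scipy.optimize.minimize interface; a minimal hedged variant of the call above (reusing rosen, callbackF and x0) would be:
from scipy.optimize import minimize

# BFGS via minimize, with the same per-iteration callback and a final summary (disp)
res = minimize(rosen, x0, method='BFGS', callback=callbackF,
               options={'maxiter': 2000, 'disp': True})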
Following joel's example, there is a neat and efficient way to do a similar thing. The following example shows how we can get rid of global variables, callback functions and re-evaluating the target function multiple times.
import numpy as np
from scipy.optimize import fmin_bfgs

def rosen(X, info):  # Rosenbrock function
    res = (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
          (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

    # display information
    if info['Nfeval'] % 100 == 0:
        print('{0:4d} {1: 3.6f} {2: 3.6f} {3: 3.6f} {4: 3.6f}'.format(info['Nfeval'], X[0], X[1], X[2], res))
    info['Nfeval'] += 1
    return res

print('{0:4s} {1:9s} {2:9s} {3:9s} {4:9s}'.format('Iter', ' X1', ' X2', ' X3', 'f(X)'))
x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
[xopt, fopt, gopt, Bopt, func_calls, grad_calls, warnflg] = \
    fmin_bfgs(rosen,
              x0,
              args=({'Nfeval': 0},),
              maxiter=1000,
              full_output=True,
              retall=False,
              )
This will generate output like
Iter X1 X2 X3 f(X)
0 1.100000 1.100000 1.100000 2.440000
100 1.000000 0.999999 0.999998 0.000000
200 1.000000 0.999999 0.999998 0.000000
300 1.000000 0.999999 0.999998 0.000000
400 1.000000 0.999999 0.999998 0.000000
500 1.000000 0.999999 0.999998 0.000000
Warning: Desired error not necessarily achieved due to precision loss.
Current function value: 0.000000
Iterations: 12
Function evaluations: 502
Gradient evaluations: 98
However, there is no free lunch: here I used the number of function evaluations as a counter instead of the number of algorithmic iterations. Some algorithms may evaluate the target function multiple times in a single iteration.
Try using:
options={'disp': True}
to force scipy.optimize.minimize to print intermediate results.
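For example (a minimal sketch, not taken from any of the answers above):
from scipy.optimize import minimize, rosen

# for BFGS, disp=True prints a convergence summary once the solver finishes
res = minimize(rosen, [1.3, 0.7, 0.8], method='BFGS', options={'disp': True})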
Many of the optimizers in scipy indeed lack verbose output (the 'trust-constr' method of scipy.optimize.minimize being an exception). I faced a similar issue and solved it by creating a wrapper around the objective function and using the callback function. No additional function evaluations are performed here, so this should be an efficient solution.
import numpy as np

class Simulator:
    def __init__(self, function):
        self.f = function        # actual objective function
        self.num_calls = 0       # how many times f has been called
        self.callback_count = 0  # number of times callback has been called, also measures iteration count
        self.list_calls_inp = [] # input of all calls
        self.list_calls_res = [] # result of all calls
        self.decreasing_list_calls_inp = [] # input of calls that resulted in decrease
        self.decreasing_list_calls_res = [] # result of calls that resulted in decrease
        self.list_callback_inp = [] # only appends inputs on callback, as such they correspond to the iterations
        self.list_callback_res = [] # only appends results on callback, as such they correspond to the iterations

    def simulate(self, x, *args):
        """Executes the actual simulation and returns the result, while
        updating the lists too. Pass to optimizer without arguments or
        parentheses."""
        result = self.f(x, *args)  # the actual evaluation of the function
        if not self.num_calls:     # first call is stored in all lists
            self.decreasing_list_calls_inp.append(x)
            self.decreasing_list_calls_res.append(result)
            self.list_callback_inp.append(x)
            self.list_callback_res.append(result)
        elif result < self.decreasing_list_calls_res[-1]:
            self.decreasing_list_calls_inp.append(x)
            self.decreasing_list_calls_res.append(result)
        self.list_calls_inp.append(x)
        self.list_calls_res.append(result)
        self.num_calls += 1
        return result

    def callback(self, xk, *_):
        """Callback function that can be used by optimizers of scipy.optimize.
        The third argument "*_" makes sure that it still works when the
        optimizer calls the callback function with more than one argument. Pass
        to optimizer without arguments or parentheses."""
        s1 = ""
        xk = np.atleast_1d(xk)
        # search backwards in input list for input corresponding to xk
        for i, x in reversed(list(enumerate(self.list_calls_inp))):
            x = np.atleast_1d(x)
            if np.allclose(x, xk):
                break

        for comp in xk:
            s1 += f"{comp:10.5e}\t"
        s1 += f"{self.list_calls_res[i]:10.5e}"

        self.list_callback_inp.append(xk)
        self.list_callback_res.append(self.list_calls_res[i])

        if not self.callback_count:
            s0 = ""
            for j, _ in enumerate(xk):
                tmp = f"Comp-{j+1}"
                s0 += f"{tmp:10s}\t"
            s0 += "Objective"
            print(s0)
        print(s1)
        self.callback_count += 1
A simple test can be defined:
from scipy.optimize import minimize, rosen
ros_sim = Simulator(rosen)
minimize(ros_sim.simulate, [0, 0], method='BFGS', callback=ros_sim.callback, options={"disp": True})
print(f"Number of calls to Simulator instance {ros_sim.num_calls}")
resulting in:
Comp-1 Comp-2 Objective
1.76348e-01 -1.31390e-07 7.75116e-01
2.85778e-01 4.49433e-02 6.44992e-01
3.14130e-01 9.14198e-02 4.75685e-01
4.26061e-01 1.66413e-01 3.52251e-01
5.47657e-01 2.69948e-01 2.94496e-01
5.59299e-01 3.00400e-01 2.09631e-01
6.49988e-01 4.12880e-01 1.31733e-01
7.29661e-01 5.21348e-01 8.53096e-02
7.97441e-01 6.39950e-01 4.26607e-02
8.43948e-01 7.08872e-01 2.54921e-02
8.73649e-01 7.56823e-01 2.01121e-02
9.05079e-01 8.12892e-01 1.29502e-02
9.38085e-01 8.78276e-01 4.13206e-03
9.73116e-01 9.44072e-01 1.55308e-03
9.86552e-01 9.73498e-01 1.85366e-04
9.99529e-01 9.98598e-01 2.14298e-05
9.99114e-01 9.98178e-01 1.04837e-06
9.99913e-01 9.99825e-01 7.61051e-09
9.99995e-01 9.99989e-01 2.83979e-11
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 19
Function evaluations: 96
Gradient evaluations: 24
Number of calls to Simulator instance 96
Of course this is just a template; it can be adjusted to your needs. It does not provide all information about the status of the optimizer (like, e.g., the Optimization Toolbox of MATLAB does), but at least you have some idea of the progress of the optimization.
A similar approach can be found here, without using the callback function. In my approach the callback function is used to print output exactly when the optimizer has finished an iteration, and not on every single function call.
Which minimization function are you using exactly?
Most of the functions have progress reporting built in, including multiple levels of reports showing exactly the data you want, via the disp flag (for example, see scipy.optimize.fmin_l_bfgs_b).
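For instance, a minimal hedged sketch with fmin_l_bfgs_b and its disp flag:
from scipy.optimize import fmin_l_bfgs_b, rosen

# approx_grad=True lets the routine estimate the gradient numerically;
# disp=1 turns on the built-in progress report
x, f, info = fmin_l_bfgs_b(rosen, [1.3, 0.7, 0.8], approx_grad=True, disp=1)
print(x, f)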
Below is a solution that works for me:
from scipy import optimize

def f_(x):  # the Rosenbrock function
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def conjugate_gradient(x0, f):
    all_x_i = [x0[0]]
    all_y_i = [x0[1]]
    all_f_i = [f(x0)]
    def store(X):
        x, y = X
        all_x_i.append(x)
        all_y_i.append(y)
        all_f_i.append(f(X))
    optimize.minimize(f, x0, method="CG", callback=store, options={"gtol": 1e-12})
    return all_x_i, all_y_i, all_f_i
For example:
conjugate_gradient([2, -1], f_)
It is also possible to include a simple print() statement in the function to be minimized. If you import the function, you can create a wrapper.
import numpy as np
from scipy.optimize import minimize

def rosen(X):  # Rosenbrock function
    print(X)
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
minimize(rosen, x0)
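A hedged sketch of the wrapper idea, using scipy's built-in rosen instead of redefining it:
import numpy as np
from scipy.optimize import minimize, rosen

def rosen_verbose(X):
    print(X)          # log every point at which the objective is evaluated
    return rosen(X)

minimize(rosen_verbose, np.array([1.1, 1.1, 1.1]))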
There you go! (Beware: Most of the time, global variables are bad practice.)
from scipy.optimize import minimize
import numpy as np

f = lambda x, b=.1: x[0]**2 + b * x[1]**2
x0 = np.array([2., 2.])
P = [x0]

def save(x):
    global P
    P.append(x)

minimize(f, x0=x0, callback=save)
fun: 4.608946876190852e-13
hess_inv: array([[4.99995194e-01, 3.78976566e-04],
[3.78976566e-04, 4.97011817e+00]])
jac: array([ 5.42429092e-08, -4.27698767e-07])
message: 'Optimization terminated successfully.'
nfev: 24
nit: 7
njev: 8
status: 0
success: True
x: array([ 1.96708740e-08, -2.14594442e-06])
print(P)
[array([2., 2.]),
array([0.99501244, 1.89950125]),
array([-0.0143533, 1.4353279]),
array([-0.0511755 , 1.11283405]),
array([-0.03556007, 0.39608524]),
array([-0.00393046, -0.00085631]),
array([-0.00053407, -0.00042556]),
array([ 1.96708740e-08, -2.14594442e-06])]
