I have some questions and problems regarding scipy's optimize.minimize routine. I would like to minimize the function:
f(eta) = sum_i |eta*x_i - y_i|
with respect to eta. Since I am not familiar with the minimize routine and the corresponding methods, I tried several of them out. However, using the BFGS method raises the following error:
File "/usr/local/lib/python3.4/dist-packages/scipy/optimize/_minimize.py", line 441, in minimize return _minimize_bfgs(fun, x0, args, jac, callback, **options)
File "/usr/local/lib/python3.4/dist-packages/scipy/optimize/optimize.py", line 904, in _minimize_bfgs
A1 = I - sk[:, numpy.newaxis] * yk[numpy.newaxis, :] * rhok
IndexError: 0-d arrays can only use a single () or a list of newaxes (and a single ...) as an index
which I was not able to solve. Please find the code that causes the error below. I am using Python 3 with scipy 0.17.0 and numpy 1.8.2 on Ubuntu 14.04.3 LTS.
Furthermore, the conjugate gradient method ('CG') seems to perform worse than the other methods.
Last but not least, I favour estimating the minimum by finding the zero of the first derivative via scipy.optimize.brentq. Is this fine or do you recommend another approach? I prefer robustness over speed.
Here is some code illustrating the problems and questions:
from scipy import optimize
import numpy as np

def function(x, bs, cs):
    total = 0.
    for b, c in zip(bs, cs):
        total += np.abs(x*b - c)
    return total

def derivativeFunction(x, bs, cs):
    total = 0.
    for b, c in zip(bs, cs):
        if x*b > c:
            total += b
        else:
            total -= b
    return total
np.random.seed(1000)
bs = np.random.rand(10)
cs = np.random.rand(10)
eta0 = 0.5

res = optimize.minimize(fun=function, x0=eta0, args=(bs, cs), method='Nelder-Mead', tol=1e-6)
print('Nelder-Mead:\t', res.x[0], function(res.x[0], bs, cs))

res = optimize.minimize(fun=function, x0=eta0, args=(bs, cs), method='CG', jac=derivativeFunction, tol=1e-6)
print('CG:\t', res.x[0], function(res.x[0], bs, cs))

x = optimize.brentq(f=derivativeFunction, a=0., b=2., args=(bs, cs), xtol=1e-6, maxiter=100)
print('Brentq:\t', x, function(x, bs, cs))

# Throwing the error
res = optimize.minimize(fun=function, x0=eta0, args=(bs, cs), method='BFGS', jac=derivativeFunction, tol=1e-6)
print('BFGS:\t', res.x[0], function(res.x[0], bs, cs))
Its output is:
Nelder-Mead: 0.493537902832 3.71986334101
CG: 0.460178525461 3.72659733011
Brentq: 0.49353725172947666 3.71986347245
where the first value is the position of the minimum and the second value is the minimum itself. The traceback from the BFGS error above is omitted from this output.
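One likely cause of the BFGS error, offered here as an assumption rather than a confirmed diagnosis: derivativeFunction returns a Python float, while _minimize_bfgs expects the gradient as a 1-D ndarray, so its internal difference vectors (sk, yk) become 0-d arrays that cannot be indexed with newaxis. A minimal sketch of a wrapper that returns the gradient as an array instead:

def derivativeFunctionArray(x, bs, cs):
    # minimize passes x as a shape-(1,) array; extract the scalar first
    eta = np.atleast_1d(x)[0]
    # return a 1-D array, not a bare float
    return np.array([derivativeFunction(eta, bs, cs)])

res = optimize.minimize(fun=function, x0=eta0, args=(bs, cs),
                        method='BFGS', jac=derivativeFunctionArray, tol=1e-6)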
Thank you for your help!
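As a side note on the robustness question (my addition, not part of the original post): since f(eta) = sum_i |b_i| * |eta - c_i/b_i| whenever all b_i > 0 (true for np.random.rand), the exact minimizer is a weighted median of the ratios c_i/b_i with weights |b_i|, so no iterative solver is needed at all. A minimal sketch:

def weighted_median_minimizer(bs, cs):
    # minimizes sum_i |eta*b_i - c_i| exactly, assuming all b_i > 0
    ratios = cs / bs
    weights = np.abs(bs)
    order = np.argsort(ratios)
    csum = np.cumsum(weights[order])
    # first ratio at which the cumulative weight reaches half the total weight
    k = np.searchsorted(csum, 0.5 * csum[-1])
    return ratios[order][k]

print('Weighted median:', weighted_median_minimizer(bs, cs))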
I am a newbie at Python and I was writing code to compute, then fit, magnetization data.
First, I write the function for the energy to be minimized with respect to the parameter theta:
def E_uniaxial(H, phi, theta, Keff, Ms):
    e = Keff*(np.cos(theta))**2 - ((4*np.pi)**2*mu0)*Ms*H*np.cos(theta - phi)
    return e
Then, since the magnetization depends strongly on the previous equilibrium position of the system, I write a function for the next equilibrium position; the parameter H is the one supposed to change between the previous and the new equilibrium position.
def next_theta(Ms, phi, Keff, H, lasttheta, fctE):
    E = lambda x: fctE(H, phi, x, Keff, Ms)[0]
    result = scipy.optimize.minimize(E, lasttheta)
    return result.x
After this, I write a function that computes a whole hysteresis cycle. Given a known starting point, the function increases H and computes all the equilibrium positions, each depending on the previous one (then H is decreased and the same process is repeated).
def cycle_theta(Ms, desfield, Keff, Hmax, theta_init_1, theta_init_2, fctE):
    # forward sweep
    H1 = np.linspace(-Hmax, Hmax, 2000)
    sol1 = np.zeros(np.shape(H1))
    sol1[0] = theta_init_1
    for i in range(len(H1)-1):
        sol1[i+1] = next_theta(Ms, desfield, Keff, H1[i+1], sol1[i], fctE)
    # backward sweep
    H2 = np.linspace(Hmax, -Hmax, 2000)
    sol2 = np.zeros(np.shape(H2))
    sol2[0] = theta_init_2
    for i in range(len(H2)-1):
        sol2[i+1] = next_theta(Ms, desfield, Keff, H2[i+1], sol2[i], fctE)
    return H1, sol1, np.flip(sol2)
Then I have to fit data in order to find the Ms and Keff parameters. I defined this function:
def test_fit(H, Ms, Keff):
    a = cycle_theta(Ms, 1., Keff, 20, np.pi, 0., E_uniaxial)[1]
    idx = 0
    if isinstance(H, float):
        idx = find_nearest(a, H)
        print('float')
        return np.sin(a[idx])
    if isinstance(H, np.ndarray):
        c = np.zeros(np.shape(H))
        for i in range(len(H)):
            idx = find_nearest(a, H[i])
            c[i] = a[idx]
        print('array')
        return np.sin(c)
The condition on the type seemed to be required for the function to work with curve_fit.
I finally call popt = curve_fit(test_fit, b, sig), where b and sig are my experimental data.
But I got this error several times, coming from scipy.optimize.minimize rather than from curve_fit:
ValueError: setting an array element with a sequence.
I read that this message can come from the fact that my energy function E_uniaxial returns an array and not a scalar, but it is actually quite a regular function: if you input a scalar, you get a scalar, and if you input an array, you get an array.
So I really don't understand: am I not supposed to nest scipy.optimize.minimize inside scipy.optimize.curve_fit?
Thank you a lot for your help !!
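One common source of this ValueError, offered here as an assumption since the full traceback is not shown: a length-1 array ends up somewhere a plain scalar is expected. A defensive sketch of next_theta that keeps scalars flowing through the objective and back out (all names come from the code above):

def next_theta(Ms, phi, Keff, H, lasttheta, fctE):
    # minimize passes x as a shape-(1,) array: hand fctE a scalar
    # and make sure the objective returns a plain float
    E = lambda x: float(fctE(H, phi, x[0], Keff, Ms))
    result = scipy.optimize.minimize(E, lasttheta)
    return result.x[0]   # a scalar, not a shape-(1,) array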
Imagine I have two equations with one unknown and I want to use fsolve to solve them:
0 = 0.5*x**2 - 2
0 = 2 - x
Clearly the answer is x = 2. I have tried this:
import numpy as np
from scipy.optimize import fsolve

def f(x):
    r = np.zeros(2)
    r[0] = 0.5*x[0]**2 - 2
    r[1] = 2 - x[0]
    return r

fsolve(f, [0.5])
The error message is "The array returned by a function changed size between calls"
I can't see what is going wrong here. How do I solve this problem?
In general, how do I solve equations where the number of variables is less than the number of equations?
Here is the full message
Traceback (most recent call last):
  File "<ipython-input-37-e4f77791f3f6>", line 12, in <module>
    fsolve(f, [0.5])
  File "... anaconda3/lib/python3.7/site-packages/scipy/optimize/minpack.py", line 148, in fsolve
    res = _root_hybr(func, x0, args, jac=fprime, **options)
  File ".... /anaconda3/lib/python3.7/site-packages/scipy/optimize/minpack.py", line 227, in _root_hybr
    ml, mu, epsfcn, factor, diag)
ValueError: The array returned by a function changed size between calls
fsolve wraps MINPACK's hybrd routine, which only solves square systems: the number of equations must equal the number of unknowns, while your f returns two residuals for a single variable. In the case of an overdetermined system (more equations than variables) you need to use, for example, a least squares approach. In such cases there is usually no solution in the traditional sense, and we need to define what we should treat as a solution of the system.
Say you have a system of two equations in one scalar variable:
f(x) = 0
g(x) = 0
Such a system is usually inconsistent and has no solution in the traditional sense.
Let's add some values eps1 and eps2 to the right-hand side of the system:
f(x) = 0 + eps1
g(x) = 0 + eps2
Now let's find the x at which eps1^2 + eps2^2 attains its minimum value; that will be the solution of the system in the least squares sense.
To get such a solution with scipy you can use the least_squares function.
The following piece of code solves your system of equations:
import numpy as np
from scipy.optimize import least_squares

def f(x):
    r = np.zeros(2)
    r[0] = 0.5*x**2 - 2
    r[1] = 2 - x
    return r

least_squares(f, [0.0])
Result:
active_mask: array([0.])
cost: 5.175333019854869e-20
fun: array([ 2.87759150e-10, -1.43879575e-10])
grad: array([7.19397879e-10])
jac: array([[ 2.00000001],
[-1. ]])
message: '`gtol` termination condition is satisfied.'
nfev: 6
njev: 6
optimality: 7.193978788924559e-10
status: 1
success: True
x: array([2.])
I am trying to use scipy.optimize.minimize with simple a <= x <= b bounds. However, it often happens that my target function is evaluated just outside the bounds. To my understanding, this happens when minimize tries to determine the gradient of the target function at the boundary.
Minimal example:
import math
import numpy as np
from scipy.optimize import Bounds, minimize

constraint = Bounds([-1, -1], [1, 1], True)

def fun(x):
    print(x)
    return -math.exp(-np.dot(x, x))

result = minimize(fun, [-1, -1], bounds=constraint)
The output shows that the minimizer jumps to the point [1,1] and then tries to evaluate at [1.00000001, 1]:
[-1. -1.]
[-0.99999999 -1. ]
[-1. -0.99999999]
[-0.72932943 -0.72932943]
[-0.72932942 -0.72932943]
[-0.72932943 -0.72932942]
[-0.22590689 -0.22590689]
[-0.22590688 -0.22590689]
[-0.22590689 -0.22590688]
[1. 1.]
[1.00000001 1. ]
[1. 1.00000001]
[-0.03437328 -0.03437328]
...
Of course, there is no problem in this example, since fun can be evaluated there as well. But that might not always be the case...
In my actual problem, the minimum cannot be on the boundary, so I have the easy workaround of adding an epsilon to the bounds.
But one would expect there to be an easy solution to this issue that also works when the minimum can lie on the boundary.
PS: It would be strange if I were the first to have this problem -- sorry if this question has been asked before somewhere, but I didn't find it anywhere.
As discussed here (thanks #"Welcome to Stack Overflow" for the comment directing me there), the problem is indeed that the gradient routine doesn't respect the bounds.
I wrote a new one that does the job:
import math
import numpy as np
from scipy.optimize import minimize

def gradient_respecting_bounds(bounds, fun, eps=1e-8):
    """bounds: list of tuples (lower, upper)"""
    def gradient(x):
        fx = fun(x)
        grad = np.zeros(len(x))
        for k in range(len(x)):
            d = np.zeros(len(x))
            d[k] = eps if x[k] + eps <= bounds[k][1] else -eps
            grad[k] = (fun(x + d) - fx) / d[k]
        return grad
    return gradient

bounds = ((-1, 1), (-1, 1))

def fun(x):
    print(x)
    return -math.exp(-np.dot(x, x))

result = minimize(fun, [-1, -1], bounds=bounds,
                  jac=gradient_respecting_bounds(bounds, fun))
Note that this can be a bit less efficient, because fun(x) now gets evaluated twice at each point.
This seems to be unavoidable; here is the relevant snippet from _minimize_lbfgsb in lbfgsb.py:
if jac is None:
    def func_and_grad(x):
        f = fun(x, *args)
        g = _approx_fprime_helper(x, fun, epsilon, args=args, f0=f)
        return f, g
else:
    def func_and_grad(x):
        f = fun(x, *args)
        g = jac(x, *args)
        return f, g
As you can see, the value of f can only be reused by the internal _approx_fprime_helper function.
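If the double evaluation is costly, one possible workaround (my own sketch, not part of the original answer) is a small cache that remembers the most recently evaluated point, so the duplicated fun(x) call in func_and_grad reuses the value already computed:

def cached(fun):
    # remember the most recently evaluated point and its value
    last = {'x': None, 'f': None}
    def wrapper(x):
        if last['x'] is None or not np.array_equal(x, last['x']):
            last['x'] = np.copy(x)
            last['f'] = fun(x)
        return last['f']
    return wrapper

# reuse fun, bounds and gradient_respecting_bounds from the example above
cached_fun = cached(fun)
result = minimize(cached_fun, [-1, -1], bounds=bounds,
                  jac=gradient_respecting_bounds(bounds, cached_fun))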
I have used Python to perform optimization in the past; however, I am now trying to use a matrix as the input for the objective function as well as set bounds on the individual element values and the sum of the value of each row in the matrix, and I am encountering problems.
Specifically, I would like to pass the objective function ObjFunc three parameters - w, p, ret - and then minimize the value of this function (technically I am trying to maximize the function by minimizing the value of -1*ObjFunc) by adjusting the value of w subject to the bound that all elements of w should fall within the range [0, 1] and the constraint that sum of each row in w should sum to 1.
I have included a simplified piece of example code below to demonstrate the issue I'm encountering. As you can see, I am using the minimize function from scipy.optimize. The problems begin in the first line of the objective function, x = np.dot(p, w), in which the optimization procedure flattens the matrix into a one-dimensional vector - a problem that does not occur when the function is called outside the optimization. The bounds = b and constraints = c are both producing errors as well.
I know that I am making an elementary mistake in how I am approaching this optimization and would appreciate any insight that can be offered.
import numpy as np
from scipy.optimize import minimize

def objFunc(w, p, ret):
    x = np.dot(p, w)
    y = np.multiply(x, ret)
    z = np.sum(y, axis=1)
    r = z.mean()
    s = z.std()
    ratio = r/s
    return -1 * ratio

# CREATE MATRICES
# returns, ret, of each of the three assets in the 5 periods
ret = np.matrix([[0.10, 0.05, -0.03], [0.05, 0.05, 0.50], [0.01, 0.05, -0.10],
                 [0.01, 0.05, 0.40], [1.00, 0.05, -0.20]])
# probability, p, of being in each state {X, Y, Z} in each of the 5 periods
p = np.matrix([[0, 0.5, 0.5], [0, 0.6, 0.4], [0.2, 0.4, 0.4], [0.3, 0.3, 0.4], [1, 0, 0]])
# initial equal weights, w
w = np.matrix([[0.33333, 0.33333, 0.33333],
               [0.33333, 0.33333, 0.33333],
               [0.33333, 0.33333, 0.33333]])

# OPTIMIZATION
b = [(0, 1)]
c = ({'type': 'eq', 'fun': lambda w_: np.sum(w, 1) - 1})
result = minimize(objFunc, w, (p, ret), method='SLSQP', bounds=b, constraints=c)
Digging into the code a bit: minimize calls optimize._minimize._minimize_slsqp. One of the first things it does is:
x = asfarray(x0).flatten()
So you need to design your objFunc to work with the flattened version of w. It may be enough to reshape it at the start of that function.
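A minimal sketch of that reshape (my addition; it assumes w is the 3x3 matrix from the question, and that the bounds and constraint must likewise refer to the flattened vector):

def objFunc(w_flat, p, ret):
    w = np.asarray(w_flat).reshape(3, 3)   # SLSQP hands over w flattened
    x = np.dot(p, w)
    y = np.multiply(x, ret)
    z = np.sum(y, axis=1)
    return -z.mean() / z.std()

b = [(0, 1)] * 9   # one (lower, upper) pair per flattened element
c = ({'type': 'eq',
      'fun': lambda w_flat: np.asarray(w_flat).reshape(3, 3).sum(axis=1) - 1})
result = minimize(objFunc, np.asarray(w).ravel(), (p, ret),
                  method='SLSQP', bounds=b, constraints=c)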
I read the code in an IPython session, but you can also find it in your scipy directory:
/usr/local/lib/python3.5/dist-packages/scipy/optimize/_minimize.py
I have a set of data that I am trying to fit to an ODE model using scipy's leastsq function. My ODE has parameters beta and gamma, so that it looks for example like this:
# dS/dt = -beta*S*I
# dI/dt = beta*S*I - gamma*I
# dR/dt = gamma*I
# with y0 = y(t=0) = (S(0), I(0), R(0))
The idea is to find beta and gamma so that the numerical integration of my system of ODE's best approximates the data. I am able to do this just fine using leastsq if I know all the points in my initial condition y0.
Now I am trying to do the same thing, but passing one of the entries of y0 as an extra parameter. Here is where Python and I stop communicating...
I wrote a function so that the first entry of the parameters I pass to leastsq is the initial condition of my variable R.
I get the following message:
Traceback (most recent call last):
  File "/Users/Laura/Dropbox/SHIV/shivmodels/test.py", line 73, in <module>
    p1,success = optimize.leastsq(errfunc, initguess, args=(simpleSIR,[y0[0]],[Tx],[mydata]))
  File "/Library/Frameworks/Python.framework/Versions/7.2/lib/python2.7/site-packages/scipy/optimize/minpack.py", line 283, in leastsq
    gtol, maxfev, epsfcn, factor, diag)
TypeError: array cannot be safely cast to required type
Here is my code. It is a little more involved than it needs to be for this example, because in reality I want to fit another ODE with 7 parameters and fit several data sets at once. But I wanted to post something simpler here... Any help will be very much appreciated! Thank you very much!
import numpy as np
from matplotlib import pyplot as plt
from scipy import optimize
from scipy.integrate import odeint

#define the time span for the ODE integration:
Tx = np.arange(0, 50, 1)
num_points = len(Tx)

#define a simple ODE to fit:
def simpleSIR(y, t, params):
    dydt0 = -params[0]*y[0]*y[1]
    dydt1 = params[0]*y[0]*y[1] - params[1]*y[1]
    dydt2 = params[1]*y[1]
    dydt = [dydt0, dydt1, dydt2]
    return dydt

#generate noisy data:
y0 = [1000., 1., 0.]
beta = 12*0.06/1000.0
gamma = 0.25
myparam = [beta, gamma]
sir = odeint(simpleSIR, y0, Tx, (myparam,))
mydata0 = sir[:,0] + 0.05*(-1)**(np.random.randint(num_points, size=num_points))*sir[:,0]
mydata1 = sir[:,1] + 0.05*(-1)**(np.random.randint(num_points, size=num_points))*sir[:,1]
mydata2 = sir[:,2] + 0.05*(-1)**(np.random.randint(num_points, size=num_points))*sir[:,2]
mydata = np.array([mydata0, mydata1, mydata2]).transpose()

#define a function that will run the ode and fit it, the reason I am doing this
#is because I will use several ODE's to see which one fits the data the best.
def fitfunc(myfun, y0, Tx, params):
    myfit = odeint(myfun, y0, Tx, args=(params,))
    return myfit

#define a function that will measure the error between the fit and the real data:
def errfunc(params, myfun, y0, Tx, y):
    """
    INPUTS:
    params are the parameters for the ODE
    myfun is the function to be integrated by odeint
    y0 vector of initial conditions, so that y(t0) = y0
    Tx is the vector over which integration occurs; since I have several data sets
    and each one has its own vector of time points, Tx is a list of arrays.
    y is the data, a list of arrays since I want to fit to multiple data sets at once
    """
    res = []
    for i in range(len(y)):
        V0 = params[0][i]
        myparams = params[1:]
        initCond = np.zeros([3,])
        initCond[:2] = y0[i]
        initCond[2] = V0
        myfit = fitfunc(myfun, initCond, Tx[i], myparams)
        res.append(myfit[:,0] - y[i][:,0])
        res.append(myfit[:,1] - y[i][:,1])
        res.append(myfit[1:,2] - y[i][1:,2])
    #end for
    all_residuals = np.hstack(res).ravel()
    return all_residuals
#end errfunc

#example of the problem:
V0 = [0]
params = [V0, beta, gamma]
y0 = [1000, 1]

#this is just to test that my errfunc does work well.
errfunc(params, simpleSIR, [y0], [Tx], [mydata])

initguess = [V0, 0.5, 0.5]
p1, success = optimize.leastsq(errfunc, initguess, args=(simpleSIR, [y0[0]], [Tx], [mydata]))
The problem is with the variable initguess. The function optimize.leastsq has the following call signature:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.leastsq.html
Its second argument, x0, has to be an array. Your list
initguess = [V0, 0.5, 0.5]
won't be converted to an array because V0 is a list instead of an int or float, so you get an error when leastsq tries to convert initguess from a list to an array.
I would adjust the variable params from
def errfunc(params,myfun,y0,Tx,y):
so that it is a 1-D array: make the first few entries the values of V0, then append beta and gamma to that.
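A minimal sketch of that layout (my addition; it assumes a single data set, hence a single V0 entry, and keeps the residual structure from the question):

# flat 1-D initial guess: one V0 per data set, then beta and gamma
initguess = np.array([0.0, 0.5, 0.5])

def errfunc(params, myfun, y0, Tx, y):
    n = len(y)                # number of data sets
    V0s = params[:n]          # fitted initial conditions come first
    myparams = params[n:]     # remaining entries are beta and gamma
    res = []
    for i in range(n):
        initCond = np.zeros(3)
        initCond[:2] = y0[i]
        initCond[2] = V0s[i]
        myfit = fitfunc(myfun, initCond, Tx[i], myparams)
        res.append(myfit[:,0] - y[i][:,0])
        res.append(myfit[:,1] - y[i][:,1])
        res.append(myfit[1:,2] - y[i][1:,2])
    return np.hstack(res).ravel()

p1, success = optimize.leastsq(errfunc, initguess,
                               args=(simpleSIR, [[1000., 1.]], [Tx], [mydata]))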