[Homework] I am going to solve the linear system Ax=b by the Preconditioned Conjugate Gradient method, and I use the spilu function from scipy.sparse.linalg for the preconditioner. A is a sparse symmetric 162*162 matrix. spilu gives an approximation to the inverse of A: if M approximates A, then spilu(A) gives M^-1, which is the preconditioner. I found that we can pass the preconditioner directly to the Python Conjugate Gradient function, but my code below does not work.
M_inverse=scipy.sparse.linalg.spilu(A)
M2=scipy.sparse.linalg.LinearOperator((162,162),M_inverse.solve)
x3=scipy.sparse.linalg.cg(A,b,M2)
TypeError Traceback (most recent call last)
<ipython-input-84-86f8f91df8d2> in <module>()
----> 1 x3=scipy.sparse.linalg.cg(A,b,M2)
/Users/ruobinghan/anaconda/lib/python3.4/site-packages/scipy/sparse/linalg/isolve/iterative.py in cg(A, b, x0, tol, maxiter, xtype, M, callback)
/Users/ruobinghan/anaconda/lib/python3.4/site-packages/scipy/sparse/linalg/isolve/iterative.py in non_reentrant(func, *a, **kw)
83 try:
84 d['__entered'] = True
---> 85 return func(*a, **kw)
86 finally:
87 d['__entered'] = False
/Users/ruobinghan/anaconda/lib/python3.4/site-packages/scipy/sparse/linalg/isolve/iterative.py in cg(A, b, x0, tol, maxiter, xtype, M, callback)
219 #non_reentrant
220 def cg(A, b, x0=None, tol=1e-5, maxiter=None, xtype=None, M=None, callback=None):
--> 221 A,M,x,b,postprocess = make_system(A,M,x0,b,xtype)
222
223 n = len(b)
/Users/ruobinghan/anaconda/lib/python3.4/site-packages/scipy/sparse/linalg/isolve/utils.py in make_system(A, M, x0, b, xtype)
108 x = zeros(N, dtype=xtype)
109 else:
--> 110 x = array(x0, dtype=xtype)
111 if not (x.shape == (N,1) or x.shape == (N,)):
112 raise ValueError('A and x have incompatible dimensions')
TypeError: float() argument must be a string or a number, not 'LinearOperator'
Also, the question hints that I will need to use the LinearOperator interface. I do not understand what exactly LinearOperator does or why we need it here.
Any suggestion would be appreciated!
Thanks in advance!
I think the parameters are in the wrong order:
x3=scipy.sparse.linalg.cg(A,b,M2)
In the error message:
220 def cg(A, b, x0=None, tol=1e-5, maxiter=None, xtype=None, M=None,
callback=None):
--> 221 A,M,x,b,postprocess = make_system(A,M,x0,b,xtype)
M2 is in the place of x0, the initial guess of the solution, not the preconditioner.
On my machine, with the correct order, the LinearOperator class works fine.
Correct version:
x3 = scipy.sparse.linalg.cg(A, b, M=M2)
Please use keyword arguments as often as possible.
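For reference, here is a minimal sketch of the whole pattern (assuming A and b are already defined as in the question; names are illustrative). The point of LinearOperator is that cg never needs M^-1 as an explicit matrix, only a routine that applies it to a vector, which is exactly what spilu(A).solve provides:

import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = A.shape[0]                       # 162 in the question

# Incomplete LU factorization of A; ilu.solve(v) applies the approximate
# inverse M^-1 to a vector v without ever forming M^-1 explicitly.
ilu = spla.spilu(sp.csc_matrix(A))

# Wrap that vector routine so cg can treat it like a matrix.
M = spla.LinearOperator((n, n), matvec=ilu.solve)

# Pass the preconditioner by keyword; the third positional slot is x0.
x, info = spla.cg(A, b, M=M)
print(info)                          # 0 means the iteration converged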
I'm trying to obtain the signal-to-noise ratio of two variables I have cross-correlated. This involves using scipy.integrate.quad.
In short, I've got two functions: an integrand and the integral function.
The integrand takes four inputs: one array and three scalars.
This integrand function works fine: it swallows the array and spits out another.
I then vectorize the integrand using numpy.vectorize.
I then call scipy.integrate.quad in the following way:
integral, integral_err = quad(lambda array: integrand(array, scalar, scalar, scalar), 0, array)
Then I vectorize the integral as well. When I try to evaluate it on the actual values I have, it just returns
'TypeError: only size-1 arrays can be converted to Python scalars'.
Here's the code:
def ISNR(ell, var, NFRB):
    ad = CL_tauvals + NL_tauvals
    bd = CL_disvals + NFRB
    cd = CL_dtauvals*CL_dtauvals
    res = ell*((CL_dtauvals**2)/(ad*bd + cd))
    return res  # dimensionless

ISNR = np.vectorize(ISNR)
test_val = ISNR(1, 300, 4000)

def SNR(ell, var, NFRB, FoV):
    Integrand = lambda ell: ISNR(ell, var, NFRB)
    snr = quad(Integrand, 0, 100)
    prefactors = np.sqrt(4*np.pi*FoV)
    return prefactors*np.sqrt(snr)

SNR = np.vectorize(SNR)
SNR_test = SNR(l, 100, 100, 100)
Note that l is an array of 100 values I've defined using np.logspace. The error message I get is
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-31c52985c5c2> in <module>
22 SNR = np.vectorize(SNR)
23
---> 24 SNR_test = SNR(l, 100, 100, 100)
/Software/users/modules/7/software/anaconda3/2020.07/lib/python3.8/site-packages/numpy/lib/function_base.py in __call__(self, *args, **kwargs)
2089 vargs.extend([kwargs[_n] for _n in names])
2090
-> 2091 return self._vectorize_call(func=func, args=vargs)
2092
2093 def _get_ufunc_and_otypes(self, func, args):
/Software/users/modules/7/software/anaconda3/2020.07/lib/python3.8/site-packages/numpy/lib/function_base.py in _vectorize_call(self, func, args)
2159 res = func()
2160 else:
-> 2161 ufunc, otypes = self._get_ufunc_and_otypes(func=func, args=args)
2162
2163 # Convert args to object arrays first
/Software/users/modules/7/software/anaconda3/2020.07/lib/python3.8/site-packages/numpy/lib/function_base.py in _get_ufunc_and_otypes(self, func, args)
2119
2120 inputs = [arg.flat[0] for arg in args]
-> 2121 outputs = func(*inputs)
2122
2123 # Performance note: profiling indicates that -- for simple
<ipython-input-28-31c52985c5c2> in SNR(ell, var, NFRB, FoV)
16 def SNR(ell, var, NFRB, FoV):
17 Integrand = lambda ell: ISNR(ell, var, NFRB)
---> 18 snr = quad(Integrand, 0, 100)
19 prefactors = np.sqrt(4*np.pi*FoV)
20 return prefactors*np.sqrt(snr)
/Software/users/modules/7/software/anaconda3/2020.07/lib/python3.8/site-packages/scipy/integrate/quadpack.py in quad(func, a, b, args, full_output, epsabs, epsrel, limit, points, weight, wvar, wopts, maxp1, limlst)
349
350 if weight is None:
--> 351 retval = _quad(func, a, b, args, full_output, epsabs, epsrel, limit,
352 points)
353 else:
/Software/users/modules/7/software/anaconda3/2020.07/lib/python3.8/site-packages/scipy/integrate/quadpack.py in _quad(func, a, b, args, full_output, epsabs, epsrel, limit, points)
461 if points is None:
462 if infbounds == 0:
--> 463 return _quadpack._qagse(func,a,b,args,full_output,epsabs,epsrel,limit)
464 else:
465 return _quadpack._qagie(func,bound,infbounds,args,full_output,epsabs,epsrel,limit)
TypeError: only size-1 arrays can be converted to Python scalars
Can anyone explain what's going on and/or how to solve this? I've successfully done integrals of exactly the same type with some other code recently, and the main difference is just that three scalars are taken as inputs alongside the array and used by the integrand (which works) within the function. I really don't understand why it's trying to convert the array into a scalar: I just want the integral evaluated at each input value.
Some of the solutions I've found tell me to vectorize and use a lambda for the variable I'm integrating over... that's exactly what I've done. The other solutions I've found are just not that relevant to what I'm trying to do. If anyone has any tips, I'd be more than grateful.
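I am not sure of the exact shapes of the CL_* arrays, but two things in the posted code will trip quad regardless: the integrand must map a scalar ell to a single float (vectorizing does not change the fact that the module-level CL_* arrays make res an array), and quad returns a (value, error) tuple. A minimal sketch of the usual arrangement, with a hypothetical stand-in body for the integrand:

import numpy as np
from scipy.integrate import quad

def isnr_scalar(ell, var, NFRB):
    # Hypothetical stand-in: in the real code this must reduce to one float
    # for one ell (e.g. by interpolating the tabulated CL_* quantities at
    # this ell), not return a whole array.
    return ell / (ell**2 + var + NFRB)

def SNR(var, NFRB, FoV):
    # ell is the integration variable, so it is not an argument of SNR and
    # there is nothing to vectorize over; quad returns (value, abserr).
    value, abserr = quad(isnr_scalar, 0, 100, args=(var, NFRB))
    return np.sqrt(4*np.pi*FoV) * np.sqrt(value)

print(SNR(100.0, 4000.0, 100.0))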
I've been running an optimization process using the legacy scipy.optimize.leastsq
Now I want to switch to scipy.optimize.least_squares (I need to introduce bounds).
But least_squares throws an error which I can't debug. Below is my code; I am doing exactly the same with least_squares as with leastsq.
import numpy as np
import scipy
from scipy.optimize import leastsq, least_squares

print(scipy.__version__)

def residuals_cmrset_as_2009JoH(x0, df):
    k_max = x0[0]
    a = x0[1]
    alpha = x0[2]
    b = x0[3]
    beta = x0[4]
    k_Ei_max = x0[5]
    k_CMI = x0[6]
    C_CMI = x0[7]
    CMI_max = x0[8]
    EVI_min = x0[9]
    EVI_max = x0[10]
    df['aet_cmrset'] = aet_cmrset_as_2009JoH(df.evi, df.gvmi, df.pet, df.rain,
                                             k_max, a, alpha, b, beta, k_Ei_max,
                                             k_CMI, C_CMI, CMI_max, EVI_min, EVI_max)
    return (df.aet_cmrset - df.AET_observed)

print('run calibration with leastsq')
x, flag = leastsq(residuals_cmrset_as_2009JoH,
                  np.transpose(x0),
                  args=(df_calibration))
print('this is the result from leastsq')
print(x)

print('run calibration with least_squares')
x, flag = least_squares(residuals_cmrset_as_2009JoH,
                        np.transpose(x0),
                        args=(df_calibration))
print('this is the result from least_squares')
print(x)
and this is the output:
1.2.0
run calibration with leastsq
this is the result from leastsq
[ 0.99119625 1.44145154 1.12799561 27.41023799 2.60102797 0.09771226
1.14979708 -0.24298292 1. 0. 0.9 ]
run calibration with least_squares
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-16-bc305703822b> in <module>
30 x, flag = least_squares(residuals_cmrset_as_2009JoH,
31 np.transpose(x0),
---> 32 args=(df_calibration))
33 print('this is the result from least_squares')
34 print(x)
/apps/python/3.7.2/lib/python3.7/site-packages/scipy-1.2.0-py3.7-linux-x86_64.egg/scipy/optimize/_lsq/least_squares.py in least_squares(fun, x0, jac, bounds, method, ftol, xtol, gtol, x_scale, loss, f_scale, diff_step, tr_solver, tr_options, jac_sparsity, max_nfev, verbose, args, kwargs)
796 x0 = make_strictly_feasible(x0, lb, ub)
797
--> 798 f0 = fun_wrapped(x0)
799
800 if f0.ndim != 1:
/apps/python/3.7.2/lib/python3.7/site-packages/scipy-1.2.0-py3.7-linux-x86_64.egg/scipy/optimize/_lsq/least_squares.py in fun_wrapped(x)
791
792 def fun_wrapped(x):
--> 793 return np.atleast_1d(fun(x, *args, **kwargs))
794
795 if method == 'trf':
TypeError: residuals_cmrset_as_2009JoH() takes 2 positional arguments but 11 were given
Any help will be welcome.
Both functions specify that args is supposed to be a tuple. But leastsq has, near the start, this:
if not isinstance(args, tuple):
    args = (args,)
I don't see anything equivalent in least_squares. That step "protects" leastsq in case the user makes a mistake and passes an array instead of the specified tuple.
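So one workaround, sticking with the documented interface, is to pass args as a proper one-element tuple. A sketch (x0, df_calibration and the residuals function are the ones from the question; note that least_squares returns an OptimizeResult rather than an (x, flag) pair):

import numpy as np
from scipy.optimize import least_squares

res = least_squares(residuals_cmrset_as_2009JoH,
                    np.transpose(x0),
                    args=(df_calibration,))   # trailing comma makes it a tuple
print(res.x)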
First I define some matrices and vectors with the proper shapes.
Initialization
import numpy as np
from numpy import random

I = np.eye(24)
Z = np.zeros((24,24))
a = 0.012
b = 1.1
gamma1 = 0.9/80
gamma2 = 1.1/80
MM = np.eye(24)
for i in range(22):
    MM[i+1,i] = -1
MM[0,23] = -1
M = random.randint(200, 300, size=(24,1))
max_pch = 5
max_pdch = 5
ppp = random.randint(150, 200, size=(24,))
Define the matrices of the objective function
Q and C are the matrix and the vector of the objective function (1/2) x^T Q x + C^T x, respectively.
Q=np.asarray(np.bmat([[a*I,Z,Z,Z],[Z,a*I,Z,Z],[Z,Z,Z,Z],[Z,Z,Z,Z] ]))
C=np.asarray(np.bmat([[b*np.ones(24),b*np.ones(24),0*np.ones(24),ppp]]))
## Create the equality constraints
In my problem, I have only equality constraints, plus upper and lower bounds, which are defined below.
Aeq=np.asarray(np.bmat([[-I,I,Z,I], [-gamma1*I, gamma2*I,MM,Z],[np.zeros((48,96))]]))
beq=np.asarray(np.bmat([[M],[np.zeros((72,1))]]))
## Create the upper and lower bounds with shape (1,96)
lb=np.asarray(np.bmat([[0*np.ones(24),0*np.ones(24),[0.1],0.1*np.ones(22),
[0.1],100*np.ones(24)]]))
ub=np.asarray(np.bmat([[max_pch*np.ones(24),max_pdch*np.ones(24),[0.1],0.9*np.ones(22),
[0.9],500*np.ones(24)]]))
x = solve_qp(P=matrix(Q), q=C.T,
G=None,h=None, A=matrix(Aeq), b=beq, lb=lb.T, ub=ub.T,solver='quadprog')
## Error
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-48-111d9695d5a8> in <module>
25
26 x = solve_qp(P=matrix(Q), q=C.T,
---> 27 G=None,h=None, A=matrix(Aeq), b=beq, lb=lb.T, ub=ub.T,solver='quadprog')
28
29
~\Anaconda3\lib\site-packages\qpsolvers\__init__.py in solve_qp(P, q, G, h, A, b, lb, ub,
solver, initvals, sym_proj, verbose, **kwargs)
271 kwargs["verbose"] = verbose
272 try:
--> 273 return __solve_function__[solver](*args, **kwargs)
274 except KeyError:
275 raise SolverNotFound(f"solver '{solver}' is not available")
~\Anaconda3\lib\site-packages\qpsolvers\quadprog_.py in quadprog_solve_qp(P, q, G, h, A, b,
initvals, verbose)
85 else:
86 qp_C = -vstack([A, G]).T
---> 87 qp_b = -hstack([b, h])
88 meq = A.shape[0]
89 else: # no equality constraint
~\Anaconda3\lib\site-packages\numpy\core\shape_base.py in hstack(tup)
338 return _nx.concatenate(arrs, 0)
339 else:
--> 340 return _nx.concatenate(arrs, 1)
341
342
ValueError: all the input array dimensions except for the concatenation axis must match exactly
If anybody can help me, I would be glad.
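I cannot run this without your data, but the hstack failure suggests the vectors reach the quadprog backend as 2-D column matrices of different lengths. A sketch of the same call with every vector flattened to 1-D and plain NumPy arrays passed straight to solve_qp (my assumption is that this is the layout the quadprog backend expects; it only addresses the shape error, not the feasibility of the problem itself):

import numpy as np
from qpsolvers import solve_qp

# Matrices P and A stay 2-D; every vector is flattened to 1-D.
x = solve_qp(P=Q, q=C.ravel(),
             G=None, h=None,
             A=Aeq, b=beq.ravel(),
             lb=lb.ravel(), ub=ub.ravel(),
             solver='quadprog')
print(x)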
I need to compute an integral of the type g(u) jn(u), where g(u) is a smooth function without zeros and jn(u) is the Bessel function with infinitely many zeros, but I got the following error:
TypeError: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe'
First I need to change variables from x to u and perform the integration in the new variable u, but since the function u(x) is not analytically invertible, I need to use interpolation to perform this inversion numerically.
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline
x = np.linspace(0.1, 100, 1000)
u = lambda x: x*np.exp(x)
dxdu_x = lambda x: 1/((1+x) * np.exp(x)) ## dxdu as function of x: not invertible
dxdu_u = InterpolatedUnivariateSpline(u(x), dxdu_x(x)) ## dxdu as function of u: change of variable
After this, the integral is:
from mpmath import mp

def f(n):
    integrand = lambda U: dxdu_u(U) * mp.besselj(n,U)
    bjz = lambda nth: mp.besseljzero(n, nth)
    return mp.quadosc(integrand, [0,mp.inf], zeros=bjz)
I use quadosc from mpmath and not quad from scipy because quadosc is more appropriate for integrals of rapidly oscillating functions, like Bessel functions. On the other hand, this forces me to use two different packages: scipy to calculate dxdu_u by interpolation, and mpmath to calculate the Bessel functions mp.besselj(n,U) and the integral of the product dxdu_u(U) * mp.besselj(n,U). So I suspect that this mix of two different packages may cause some issue or conflict. So when I run:
print(f(0))
I got the error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-38-ac2976a6b736> in <module>
12 return mp.quadosc(integrand, [0,mp.inf], zeros=bjz)
13
---> 14 f(0)
<ipython-input-38-ac2976a6b736> in f(n)
10 integrand = lambda U: dxdu_u(U) * mp.besselj(n,U)
11 bjz = lambda nth: mp.besseljzero(n, nth)
---> 12 return mp.quadosc(integrand, [0,mp.inf], zeros=bjz)
13
14 f(0)
TypeError: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe'
Does anyone know how I can solve this problem?
Thanks
The full traceback (the part you snipped) shows that the error is in the __call__ method of the UnivariateSpline object. So indeed the problem is that the mpmath integration routine feeds in its mpf decimals, and scipy has no way of dealing with them.
The simplest fix is then to manually cast the offending part of the argument of the integrand to a float:
integrand = lambda U: dxdu_u(float(U)) * mp.besselj(n,U)
In general this is prone to numerical errors (mpmath uses its high-precision variables on purpose!) so proceed with caution. In this specific case it might be OK, because the interpolation is actually done in double precision. Still, best check the results.
A possible alternative might be to avoid mpmath and use the weight argument of scipy.integrate.quad; see the docs (scroll down to the weight="sin" part).
Another alternative is to stick with mpmath all the way and implement the interpolation yourself in pure Python (this way, mpf objects are probably fine, since they support the usual arithmetic). It's likely that a simple linear interpolation is enough. If it's not, it's not too big a deal to code up your own cubic spline interpolator.
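For that last option, here is a minimal sketch of a pure-Python linear interpolator that stays in mpmath arithmetic the whole way through (the function name and the final line are illustrative, built from the arrays in the question):

import bisect
from mpmath import mp

def make_linear_interp(xs, ys):
    # xs must be sorted; everything is converted to mpf once, so the
    # evaluation below never leaves mpmath arithmetic.
    xs = [mp.mpf(v) for v in xs]
    ys = [mp.mpf(v) for v in ys]

    def interp(x):
        x = mp.mpf(x)
        if x <= xs[0]:                 # clamp outside the tabulated range
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        i = bisect.bisect_right(xs, x) - 1
        t = (x - xs[i]) / (xs[i+1] - xs[i])
        return ys[i] + t*(ys[i+1] - ys[i])

    return interp

# e.g. dxdu_u_mp = make_linear_interp(u(x).tolist(), dxdu_x(x).tolist())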
The full traceback:
In [443]: f(0)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-443-6bfbdbfff9c4> in <module>
----> 1 f(0)
<ipython-input-440-7ebeff3611f6> in f(n)
2 integrand = lambda U: dxdu_u(U) * mp.besselj(n,U)
3 bjz = lambda nth: mp.besseljzero(n, nth)
----> 4 return mp.quadosc(integrand, [0,mp.inf], zeros=bjz)
5
/usr/local/lib/python3.6/dist-packages/mpmath/calculus/quadrature.py in quadosc(ctx, f, interval, omega, period, zeros)
998 # raise ValueError("zeros do not appear to be correctly indexed")
999 n = 1
-> 1000 s = ctx.quadgl(f, [a, zeros(n)])
1001 def term(k):
1002 return ctx.quadgl(f, [zeros(k), zeros(k+1)])
/usr/local/lib/python3.6/dist-packages/mpmath/calculus/quadrature.py in quadgl(ctx, *args, **kwargs)
807 """
808 kwargs['method'] = 'gauss-legendre'
--> 809 return ctx.quad(*args, **kwargs)
810
811 def quadosc(ctx, f, interval, omega=None, period=None, zeros=None):
/usr/local/lib/python3.6/dist-packages/mpmath/calculus/quadrature.py in quad(ctx, f, *points, **kwargs)
740 ctx.prec += 20
741 if dim == 1:
--> 742 v, err = rule.summation(f, points[0], prec, epsilon, m, verbose)
743 elif dim == 2:
744 v, err = rule.summation(lambda x: \
/usr/local/lib/python3.6/dist-packages/mpmath/calculus/quadrature.py in summation(self, f, points, prec, epsilon, max_degree, verbose)
230 print("Integrating from %s to %s (degree %s of %s)" % \
231 (ctx.nstr(a), ctx.nstr(b), degree, max_degree))
--> 232 results.append(self.sum_next(f, nodes, degree, prec, results, verbose))
233 if degree > 1:
234 err = self.estimate_error(results, prec, epsilon)
/usr/local/lib/python3.6/dist-packages/mpmath/calculus/quadrature.py in sum_next(self, f, nodes, degree, prec, previous, verbose)
252 case the quadrature rule is able to reuse them.
253 """
--> 254 return self.ctx.fdot((w, f(x)) for (x,w) in nodes)
255
256
/usr/local/lib/python3.6/dist-packages/mpmath/ctx_mp_python.py in fdot(ctx, A, B, conjugate)
942 hasattr_ = hasattr
943 types = (ctx.mpf, ctx.mpc)
--> 944 for a, b in A:
945 if type(a) not in types: a = ctx.convert(a)
946 if type(b) not in types: b = ctx.convert(b)
/usr/local/lib/python3.6/dist-packages/mpmath/calculus/quadrature.py in <genexpr>(.0)
252 case the quadrature rule is able to reuse them.
253 """
--> 254 return self.ctx.fdot((w, f(x)) for (x,w) in nodes)
255
256
<ipython-input-440-7ebeff3611f6> in <lambda>(U)
1 def f(n):
----> 2 integrand = lambda U: dxdu_u(U) * mp.besselj(n,U)
3 bjz = lambda nth: mp.besseljzero(n, nth)
4 return mp.quadosc(integrand, [0,mp.inf], zeros=bjz)
5
at this point it starts using the scipy interpolation code
/usr/local/lib/python3.6/dist-packages/scipy/interpolate/fitpack2.py in __call__(self, x, nu, ext)
310 except KeyError:
311 raise ValueError("Unknown extrapolation mode %s." % ext)
--> 312 return fitpack.splev(x, self._eval_args, der=nu, ext=ext)
313
314 def get_knots(self):
/usr/local/lib/python3.6/dist-packages/scipy/interpolate/fitpack.py in splev(x, tck, der, ext)
366 return tck(x, der, extrapolate=extrapolate)
367 else:
--> 368 return _impl.splev(x, tck, der, ext)
369
370
/usr/local/lib/python3.6/dist-packages/scipy/interpolate/_fitpack_impl.py in splev(x, tck, der, ext)
596 shape = x.shape
597 x = atleast_1d(x).ravel()
--> 598 y, ier = _fitpack._spl_(x, der, t, c, k, ext)
599
600 if ier == 10:
TypeError: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe'
_fitpack._spl_ is probably compiled code (for speed). It can't take the mpmath objects directly; it has to receive their values as C-compatible doubles.
To illustrate the problem, make a numpy array of mpmath objects:
In [444]: one,two = mp.mpmathify(1), mp.mpmathify(2)
In [445]: arr = np.array([one,two])
In [446]: arr
Out[446]: array([mpf('1.0'), mpf('2.0')], dtype=object)
In [447]: arr.astype(float) # default 'unsafe' casting
Out[447]: array([1., 2.])
In [448]: arr.astype(float, casting='safe')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-448-4860036bcca8> in <module>
----> 1 arr.astype(float, casting='safe')
TypeError: Cannot cast array from dtype('O') to dtype('float64') according to the rule 'safe'
With integrand = lambda U: dxdu_u(float(U)) * mp.besselj(n,U),
In [453]: f(0) # a minute or so later
Out[453]: mpf('0.61060303588231069')
I'm using Python 2.7 in Canopy, and I'm trying to fit 6 parameters of a model by minimising the mean squared error between data and model predictions. I'm using COBYLA since I need bounds on the parameter values and I don't have a gradient.
Currently, I have:
import numpy as np
import scipy.optimize as opt

def cost_func(pars, y, x):
    y_hat = model_output(pars, x)
    mse = np.mean((y - y_hat)**2)
    return mse

def make_constraints(par_min, par_max):
    cons = []
    for (i, (a, b)) in enumerate(zip(par_min, par_max)):
        lower = lambda x: x[i] - a
        upper = lambda x: b - x[i]
        cons = cons + [lower] + [upper]
    return cons

def estimate_parameters(par_min, par_max, par_init, x, y):
    cons = make_constraints(par_min, par_max)
    opt_pars = opt.fmin_cobyla(cost_func, par_init, cons, args=([y, x]))
    return opt_pars
However I get the error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-63-9e84e10303e1> in <module>()
----> 1 opt_pars = estimate_parameters(par_min,par_max,par_init,x,y)
<ipython-input-61-f38615d82ee5> in estimate_parameters(par_min,par_max,par_init,x,y)
9 cons = make_constraints(par_min,par_max)
10
---> 11 opt_pars = opt.fmin_cobyla(cost_func,par_init,cons,args=([y,x]))
12 return opt_pars
/home/luke/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/scipy/optimize/cobyla.pyc in fmin_cobyla(func, x0, cons, args, consargs, rhobeg, rhoend, iprint, maxfun, disp, catol)
169
170 sol = _minimize_cobyla(func, x0, args, constraints=con,
--> 171 **opts)
172 if iprint > 0 and not sol['success']:
173 print("COBYLA failed to find a solution: %s" % (sol.message,))
/home/luke/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/scipy/optimize/cobyla.pyc in _minimize_cobyla(fun, x0, args, constraints, rhobeg, tol, iprint, maxiter, disp, catol, **unknown_options)
244 xopt, info = _cobyla.minimize(calcfc, m=m, x=np.copy(x0), rhobeg=rhobeg,
245 rhoend=rhoend, iprint=iprint, maxfun=maxfun,
--> 246 dinfo=info)
247
248 if info[3] > catol:
/home/luke/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/scipy/optimize/cobyla.pyc in calcfc(x, con)
238 f = fun(x, *args)
239 for k, c in enumerate(constraints):
--> 240 con[k] = c['fun'](x, *c['args'])
241 return f
242
TypeError: <lambda>() takes exactly 1 argument (3 given)
This error isn't totally clear to me, but my understanding is that 3 arguments are being passed to my constraint functions. However, I can't work out where these 3 arguments are coming from.
I've looked at other Stack Overflow questions about this and taken what I can from them, but I am still having this problem:
Specifying constraints for fmin_cobyla in scipy
Python SciPy: optimization issue fmin_cobyla : one constraint is not respected
Python: how to create many constraints for fmin_cobyla optimization using lambda functions
If the argument consargs of fmin_cobyla is None, the constraint functions are also passed *args, where args is the argument given to fmin_cobyla. To pass no additional arguments to the constraint functions, use consargs=().
Alternatively, in the function make_constraints, change this:
lower = lambda x: x[i] - a
upper = lambda x: b - x[i]
to
lower = lambda x, *args: x[i] - a
upper = lambda x, *args: b - x[i]
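With the consargs=() route, the call in estimate_parameters would look something like this (a sketch; passing args as a plain tuple of the two extra arguments also keeps cost_func happy):

opt_pars = opt.fmin_cobyla(cost_func, par_init, cons,
                           args=(y, x), consargs=())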