How can one optimize a multivariate function with differential_evolution? - Python

I would like to optimize a multivariate function by differential evolution, using a lambda function. The parameters over which I want to optimize the function are matrices, each of which has a different dimension.
This is my code:
R0 = [[(0, 1)]]
Q0 = [[(0, 1)]]
c0 = (0, 1)
beta0 = [[(-6, 9)]]
B0 = [(-2, 2), (-2, 2), (-2, 2), (-2, 2), (-2, 2), (-2, 2), (-2, 2),
      (-2, 2), (-2, 2), (-2, 2), (-2, 2), (-2, 2), (-2, 2), (-2, 2)]
B0 = np.array([B0])
B0 = B0.T
kal_ = lambda R, Q, c, beta, B: myfunc(R, Q, c, beta, B, Y, X)
opt = scipy.optimize.differential_evolution(kal_(R0, Q0, c0, beta0, B0), maxiter=1000000, tol=1e-6)
Python raises the following error, which I believe is due to the bounds being passed as the initial values:
ValueError: setting an array element with a sequence.
Can anyone let me know what is wrong with the code?
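The immediate problem is that kal_(R0, Q0, c0, beta0, B0) calls the function and passes its return value, whereas differential_evolution(func, bounds) expects a callable plus a single flat list of (low, high) pairs; the matrix-shaped parameters have to be flattened into one decision vector that the objective unpacks. A minimal sketch of that pattern, with a hypothetical stand-in for myfunc (the real one, and Y and X, are not shown in the question):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical stand-in for the real myfunc(R, Q, c, beta, B, Y, X);
# it just returns a scalar so the sketch runs.
def myfunc(R, Q, c, beta, B):
    return float(R**2 + Q**2 + c**2 + beta**2 + np.sum(B**2))

# One flat list of (low, high) pairs: R, Q, c, beta, then the 14 B entries.
bounds = [(0, 1), (0, 1), (0, 1), (-6, 9)] + [(-2, 2)] * 14

def objective(x):
    # Unpack the flat decision vector back into the individual parameters.
    R, Q, c, beta = x[0], x[1], x[2], x[3]
    B = x[4:].reshape(-1, 1)   # 14x1 column vector, like B0 in the question
    return myfunc(R, Q, c, beta, B)

result = differential_evolution(objective, bounds, seed=0)
```

The objective receives a flat 1D array of 18 values per evaluation, so it is responsible for reshaping slices of it back into the original matrix shapes before calling the real model.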

Related

Python issue with fitting a custom function containing double integrals

I want to fit some data using a custom function which contains a double integral. a, b, and c are pre-defined parameters, and alpha and beta are the two angles over which the function must be integrated.
import numpy as np
from scipy import integrate

x = np.linspace(0, 100, 100)
a = 100
b = 5
c = 1

def custom_function(x, a, b, c):
    f = lambda alpha, beta: (np.pi/2)*(np.sin(x*a*np.sin(alpha)*np.cos(beta))/x*a*np.sin(alpha)*np.cos(beta))*(np.sin(x*b*np.sin(alpha)*np.sin(beta))/x*b*np.sin(alpha)*np.sin(beta))*(np.sin(x*c*np.cos(alpha))/x*c*np.cos(alpha))*np.sin(alpha)
    return integrate.dblquad(f, 0, np.pi/2, 0, np.pi/2)
When running the code, I get the following error:
TypeError: cannot convert the series to <class 'float'>
I've tried simplifying the function, but I still get the same issue. Could anyone help me locate the problem?
Are you sure you are not trying to multiply sinc functions, sin(x*u)/(x*u)? Currently you are multiplying terms like u * sin(x*u) / x, because there are no parentheses around the denominator.
You should be able to fit your function for small a, b, c. But with a = 100 you need much higher resolution, i.e. more steps.
I am assuming you are trying to fit using some local minimization method. If your function has many maxima and minima, such a fit is likely to get stuck. You could also try one of the non-convex (global) optimization methods available.
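A sketch of the corrected integrand, with the denominators parenthesized so each factor really is sin(x*u)/(x*u). The parameter values here are assumptions (a smaller a than the question's 100, so the quadrature stays cheap):

```python
import numpy as np
from scipy import integrate

a, b, c = 5.0, 5.0, 1.0   # assumption: smaller a than the question's a = 100

def sinc(u):
    # Unnormalized sinc, sin(u)/u; np.sinc(u) is sin(pi*u)/(pi*u), hence the rescale.
    return np.sinc(u / np.pi)

def custom_function(x, a, b, c):
    # Each factor is now sin(x*u)/(x*u), with the denominator parenthesized.
    f = lambda alpha, beta: ((np.pi / 2)
                             * sinc(x * a * np.sin(alpha) * np.cos(beta))
                             * sinc(x * b * np.sin(alpha) * np.sin(beta))
                             * sinc(x * c * np.cos(alpha))
                             * np.sin(alpha))
    val, abserr = integrate.dblquad(f, 0, np.pi / 2, 0, np.pi / 2)
    return val

# dblquad returns one scalar per call, so evaluate the model point by point;
# passing a whole array (or pandas Series) for x is what triggers the
# "cannot convert the series to <class 'float'>" error.
xs = np.linspace(0.1, 2.0, 5)
ys = np.array([custom_function(xi, a, b, c) for xi in xs])
```

Evaluating pointwise like this also makes the function usable inside curve_fit, which calls the model with an array of x values.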

Bound Scipy optimisation of a function returning 1D data

This is more a question about what an appropriate approach to my problem would be.
I have a function that takes a 1D vector as input and returns a 1D array (in actuality it's a 2D array that is flattened). I am looking to do a least-squares optimisation of this function. I already have my bounds and constraints on x sorted out, and had thought about doing something like this:
result = optimize.minimize(func,x0,method='SLSQP',bounds=my_bounds,constraints=dict_of_constraints,args=(my_args,))
However, this approach uses _minimize_slsqp, which requires that the objective function return a scalar. Is there an approach that would work similarly to the above, but with an objective function that can return 1D (or 2D?) data?
Cheers
You need to form a scalar objective (a function that returns a single scalar value), likely something like
||F(x)||
where ||.|| is a norm. This new scalar function can then be passed to optimize.minimize.
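A sketch of that pattern, with a hypothetical residual function func standing in for the real one. scipy.optimize.least_squares is shown as an alternative, since it consumes the vector residual directly and supports bounds (though not general constraints):

```python
import numpy as np
from scipy import optimize

# Hypothetical vector-valued model: returns a flattened 1D residual array.
def func(x):
    return np.array([x[0] - 1.0, x[1] - 2.0, (x[0] + x[1]) - 3.0])

# Scalar wrapper: squared 2-norm of the residual vector.
def scalar_objective(x):
    r = func(x)
    return r @ r

result = optimize.minimize(scalar_objective, x0=np.array([0.5, 0.5]),
                           method='SLSQP', bounds=[(0, 5), (0, 5)])

# Alternatively, least_squares consumes the vector residual directly;
# it supports bounds, though not general constraints.
ls = optimize.least_squares(func, x0=np.array([0.5, 0.5]),
                            bounds=([0, 0], [5, 5]))
```

Using the squared norm rather than the norm itself keeps the objective smooth at a zero-residual solution, which gradient-based methods like SLSQP prefer.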

Vector to matrix function in NumPy without accessing elements of vector

I would like to create a NumPy function that computes the Jacobian of a function at a certain point - with the Jacobian hard coded into the function.
Say I have a vector containing two arbitrary scalars, X = np.array([[x],[y]]), and a function f(X) = np.array([[2xy],[3xy]]).
This function has the Jacobian J = np.array([[2y, 2x],[3y, 3x]]).
How can I write a function that takes the array X and returns the Jacobian? Of course, I could do this using array indices (e.g. x = X[0,0]), but I am wondering if there is a way to do this directly, without accessing the individual elements of X.
I am looking for something that works like this:
def foo(x, y):
    return np.array([[2*y, 2*x], [3*y, 3*x]])

X = np.array([[3], [7]])
J = foo(X)
Something like this is possible with 1-dimensional arrays; e.g. the following works:
def foo(x):
    return np.array([x, x, x])

X = np.array([1, 2, 3, 4])
J = foo(X)
You want the Jacobian, which is the differential of the function. Is that correct? I'm afraid NumPy is not the right tool for that.
NumPy works with fixed numbers, not with variables. That is, given some numbers you can calculate the value of a function. The differential is a different function that has a special relationship to the original function, but is not the same. You cannot just calculate the differential; you must deduce it from the functional form of the original function using differentiation rules. NumPy cannot do that.
As far as I know you have three options:
1. Use a numeric library to calculate the differential at a specific point. However, you will only get the Jacobian at a specific point (x, y), not a formula for it.
2. Take a look at a Python CAS library, e.g. sympy. There you can define expressions in terms of variables and compute the differential with respect to those variables.
3. Use a library that performs automatic differentiation. Machine learning toolkits like pytorch or tensorflow have excellent support for automatic differentiation and good integration with NumPy arrays. They essentially calculate the differential by knowing the differential of all basic operations, like multiplication or addition; for composed functions the chain rule is applied, and the differential can be calculated for arbitrarily complex functions.
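A sketch of the first option, a numeric Jacobian at a specific point via central differences, using the f from the question. The helper numeric_jacobian is hypothetical, not a NumPy API:

```python
import numpy as np

# f from the question: X is a 2x1 column vector [[x], [y]].
def f(X):
    x, y = X.ravel()
    return np.array([[2 * x * y], [3 * x * y]])

def numeric_jacobian(f, X, eps=1e-6):
    # Central differences: one column of J per component of X.
    X = X.astype(float)
    m = f(X).size
    J = np.empty((m, X.size))
    for j in range(X.size):
        d = np.zeros_like(X)
        d.flat[j] = eps
        J[:, j] = (f(X + d).ravel() - f(X - d).ravel()) / (2 * eps)
    return J

X = np.array([[3.0], [7.0]])
J = numeric_jacobian(f, X)   # approximately [[14, 6], [21, 9]]
```

This only ever evaluates f, so it works for any shape of input without accessing X's elements in the caller, but it gives numbers at one point rather than a formula.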

Minimize function with trust-ncg method proposes value greater than max_trust_radius

As far as I understand the minimize function with the trust-ncg method, the method-specific parameter max_trust_radius is the maximum size of a new optimization step.
However, I am experiencing weird behaviour.
I am working on my doctorate data and have code that invokes the minimize function (with the trust-ncg method), passing the parameters
{
    'initial_trust_radius': 0.1,
    'max_trust_radius': 1,
    'eta': 0.15,
    'gtol': 1e-5,
    'disp': True
}
I invoke the minimize function as:
res = minimize(bbox, x0, method='trust-ncg',jac=bbox_der, hess=bbox_hess,options=opt_par)
where
bbox is the function that evaluates the objective,
x0 is the initial guess,
bbox_der is the gradient function,
bbox_hess is the Hessian function, and
opt_par is the dictionary above with the parameters.
bbox invokes the simulation code and gets the data. It works: minimize goes back and forth proposing new values, and bbox invokes the simulation.
Everything works well until I hit a weird issue.
The x vector contains 8 values. I noticed that in one of the iterations the last value is greater than 1.
Per max_trust_radius, I think it should be less than 1, but it is 1.0621612802208713.
This causes problems because bbox cannot receive a value of 1 or greater: it invokes a simulation program that has a constraint that its input must be strictly less than 1.
I looked at the scipy code and tried to find a bug or something wrong, but I could not.
My main concerns are:
Is there a bug in the scipy minimize code, given that the new value is greater than max_trust_radius?
How can I control the values to prevent them from becoming greater than 1?
What do you suggest to investigate the issue?
The max_trust_radius controls how large a step you are allowed to take:
max_trust_radius : float
Maximum value of the trust-region radius.
No steps that are longer than this value will be proposed.
Since you are very likely to take many steps during the minimization, each of which can be up to 1 long, it is not strange at all that (assuming ||x0|| = 0) you end up with ||x|| > 1.
If your problem is strictly bounded then you need to apply an optimization algorithm that supports bounds on the parameters.
For scipy.optimize.minimize only L-BFGS-B, TNC and SLSQP methods seem to support the bounds= keyword.
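A sketch of that approach, with a hypothetical stand-in for the simulation-backed objective and bounds that keep every component strictly below 1 (L-BFGS-B only ever evaluates the objective at feasible points, so the simulator never sees an invalid input):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for bbox: its unconstrained minimum sits at
# x = 1.3 in every component, outside the region the simulator accepts.
def bbox(x):
    return np.sum((x - 1.3) ** 2)

x0 = np.zeros(8)
# Keep every component strictly inside the simulator's valid range [0, 1).
bounds = [(0.0, 1.0 - 1e-9)] * 8

res = minimize(bbox, x0, method='L-BFGS-B', bounds=bounds)
```

Here the solver pushes each component to the upper edge of the box rather than ever proposing a value of 1 or more, which is exactly the behaviour trust-ncg cannot guarantee.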

Differential_evolution in Scipy not giving a jacobian

I'm using the differential_evolution algorithm in scipy to fit some data with various exponential functions convolved with Gaussian functions. This in itself is not a problem; the function fits the data well.
However, it is not giving the jacobian in the result dictionary (which I would like to use to calculate the errors on my fit constants), despite the fact that I have set polish (i.e. use scipy.optimize.minimize with the L-BFGS-B method to polish the best population member at the end) to True, in which case the documentation states it should give the jacobian. My function takes the Gaussian width and any number of exponents, and is being fit like so:
result = differential_evolution(exponentialfit, bounds, args=(avgspectra, c, fitfrom, errors, numcomponents, 1), tol=0.000000000001, disp=True, polish=True)
Is there any reason it is not giving the jacobian in the result output?
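No answer is recorded here. As a starting point, a minimal reproduction makes it easy to check which attributes your scipy version actually attaches to the polished result; the quadratic objective below is just a stand-in for the real model:

```python
import numpy as np
from scipy.optimize import differential_evolution, approx_fprime

# Stand-in objective with a known minimum at (0.5, 0.5).
def objective(x):
    return np.sum((x - 0.5) ** 2)

result = differential_evolution(objective, bounds=[(-1, 1)] * 2,
                                polish=True, seed=0)
print(sorted(result.keys()))   # check whether 'jac' is in the list

# If 'jac' is absent in your scipy version, approximate it at the solution:
jac = approx_fprime(result.x, objective, 1.49e-8)
```

The result is an OptimizeResult (a dict subclass), so inspecting its keys directly shows whether the polishing step propagated a jac field, and approx_fprime gives a finite-difference fallback at the fitted point.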
