Fractional integral in Python - python

I'm trying to compute the triple integral of the function abs(x-y)**(H-1)*abs(y-z)**(H-1)*abs(z-x)**(H-1)
over [0,1]^3, for example with H in (0.5, 1). However, it seems hard for Python to compute it.
First I tried integrate.tplquad from scipy, but it's unable to do it: it only reports that the integral is probably divergent or slowly convergent.
To avoid this, I re-coded a Riemann sum by the well-known method and added a small positive "epsilon" such as 10**-6 inside every absolute value in the function itself; it's also necessary to take another epsilon on the interval.
The first error I noticed is that 0.0 cannot be raised to a negative power. But after making that transformation, and knowing the answer should be around 29.7, Python returns a value that doesn't match.
I think the problem is a numerical issue, or the integration scheme itself; even though my riemann function isn't optimized, I expected its result to be close to the true value.
Here's the code:
def f(H):
    eps = 10**-12
    return lambda x, y, z: (abs(x-y)+eps)**(H-1)*abs(y-z+eps)**(H-1)*abs(z-x+eps)**(H-1)

def riemann(H, g, a, b, c, d, e, h, n):
    s = 0
    du = (b-a)/n
    dv = (c-d)/n
    dw = (e-h)/n
    for i in range(n):
        for j in range(n):
            for k in range(n):
                s += g(du*i, dv*j, dw*k)
    s = s/(n**3)
    return s
Thanks for your help.
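For reference, here is a minimal Monte Carlo sketch of one workaround (my own illustration, not a fix of the code above): for H in (0.5, 1) the singularities on the diagonals x=y, y=z, z=x are integrable, and random sample points avoid them almost surely, so no epsilon is needed. Convergence becomes slow as H approaches 0.5.

import numpy as np

def mc_estimate(H, n_samples=10**7, seed=0):
    # Plain Monte Carlo average of the integrand over [0,1]^3.
    rng = np.random.default_rng(seed)
    x, y, z = rng.random((3, n_samples))
    vals = (np.abs(x - y) * np.abs(y - z) * np.abs(z - x)) ** (H - 1)
    return vals.mean()

print(mc_estimate(0.75))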

Related

Formulating an optimisation problem with Nlopt in Python

I am looking into using Nlopt for solving optimisation problems in Python.
I have a series of simultaneous equations of the form
Ax = b
where A is an NxM matrix, with x the solution. Another way to think about this is that I have N simultaneous equations of the form x_1*c_1m + x_2*c_2m + ... + x_N*c_Nm = k_m, where the x_i are the variables to solve for, c_im is the constant associated with x_i in equation M = m, and k_m is some constant in equation M = m. The c_im and k_m are all known.
What confuses me is how to even approach this in Nlopt. Nlopt requires you to have actual callable functions, which I don't have? I suppose I could generalise each of the equations in that matrix equation above to something like:
def fn(x, c_m, k_m):
    val = 0
    for x_i, c_im in zip(x, c_m):
        val += x_i * c_im
    return val - k_m
where c_m and k_m would already be known, with the variables to solve for in x. All the examples I've seen have only looked at a single-variable problem, which has thrown me a little. Would I then have to somehow define M copies of this function, and set each copy of fn as an equality constraint in the Nlopt optimisation object? It's all rather confusing. I'm looking to solve for x, which itself has multiple solutions, and I want to find the minimum values of x (or at least an approximate solution if an exact solution cannot be found). Would I then have to set multiple objective functions, i.e. obj_fn_i = min(x_i) or something like that? It's all a little confusing to me in terms of what needs to be presented to the solver. I already have an analytical solution to the above problem, so I can check my results reliably. Any help appreciated.
Cheers!
I have been using NLopt for a couple of problems, and what I have come to understand is that the solver requires an objective function that returns a single float value to be minimized, so you must reformulate your system, e.g. as a sum of squared errors (MSE-style), or in some other way that yields one number. It can solve for an array of variables x, on which both the objective function and the constraints depend. The equations of your system can go either directly into the objective function or in as constraints.
Hope this was helpful somehow!
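For illustration, a minimal sketch of that idea: pack the residuals of Ax = b into a single sum-of-squares objective and let NLopt minimize it over the vector x. The matrix A and vector b below are made up purely for the example.

import nlopt
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # hypothetical data
b = np.array([1.0, 2.0, 3.0])

def objective(x, grad):
    r = A @ x - b
    if grad.size > 0:
        grad[:] = 2.0 * A.T @ r      # gradient of the sum of squared residuals
    return float(r @ r)

opt = nlopt.opt(nlopt.LD_LBFGS, A.shape[1])
opt.set_min_objective(objective)
opt.set_xtol_rel(1e-8)
x = opt.optimize(np.zeros(A.shape[1]))
print(x, opt.last_optimum_value())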

Constraints seem to be ignored using basinhopping with COBYLA method

I'm having trouble specifying constraints using basinhopping with method='COBYLA'. Here is a test case where things go wrong: essentially, the constraints are ignored and there are function evaluations outside the specified range. I specify a simple quadratic with its minimum at [0, 0] and search subject to -3 < x[0], but as you can see from the output, there are lots of evaluations outside that range (I increased the stepsize to make it obvious):
import numpy as np
from scipy.optimize import basinhopping
def f(x):
    if x[0] < -3:
        print('outside range ', x[0])
    return x[0]**2 + x[1]**2
cons = [{'type':'ineq','fun': lambda x: x[0]+3}]
kwargs = {'method':'COBYLA','constraints':cons}
ret=basinhopping(f, [5,1],T=1,stepsize=1000,niter=1,minimizer_kwargs=kwargs)
print(ret)
runfile('py/cobyla_test', wdir='/py', post_mortem=True)
outside range -446.14581341127945
outside range -445.14581341127945
outside range -445.14581341127945
outside range -444.14581341127945
[etc... lots of output deleted]
[-4.81217825e-05 -5.23242054e-05] 5.0535284302996725e-09
As written at scipy.optimize.basinhopping — SciPy v1.1.0 Reference Guide, each basin-hopping iteration works in steps:
first, a random jump is made (the take_step callback);
then a local minimum is found from that point using the specified minimization method;
finally, it is decided whether the step is accepted (the accept_test callback).
The constraints you've specified apply to the minimization method; they don't affect the jump step. For the jump step, either adjust stepsize (the maximum displacement for the random jump) or define your own take_step.
"I thought the point of the constraint is that it would never try an x outside the constraint" -- constraints in mathematical problems, including a constrained optimization problem, don't work that way. They only specify what conditions the solution itself must satisfy; they don't limit which points may be evaluated while obtaining that solution. It is entirely up to the algorithm to choose those points.
The way to limit the area in which a numerical method searches is to tweak method parameters, in some way specific to the nature of the function and the method, to "guide" the method in the right direction.
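A minimal sketch of the second option, using the quadratic from the question (an illustration, not the only way to do it): a custom take_step that clips the random jump so it never leaves x[0] >= -3.

import numpy as np
from scipy.optimize import basinhopping

def f(x):
    return x[0]**2 + x[1]**2

class BoundedStep:
    # Random displacement clipped to x[0] >= -3, so the jump phase itself
    # stays inside the region of interest.  basinhopping adapts `stepsize`.
    def __init__(self, stepsize=1.0):
        self.stepsize = stepsize

    def __call__(self, x):
        x_new = x + np.random.uniform(-self.stepsize, self.stepsize, np.shape(x))
        x_new[0] = max(x_new[0], -3.0)
        return x_new

cons = [{'type': 'ineq', 'fun': lambda x: x[0] + 3}]
kwargs = {'method': 'COBYLA', 'constraints': cons}
ret = basinhopping(f, [5, 1], T=1, niter=10,
                   minimizer_kwargs=kwargs, take_step=BoundedStep())
print(ret.x, ret.fun)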

Minimize function with trust-ncg method proposes value greater than max_trust_radius

As far as I understand the minimize function with method='trust-ncg', the method-specific parameter max_trust_radius is the maximum size of an optimization step.
However, I am seeing weird behaviour.
In my doctorate work I have code that invokes the minimize function (with the trust-ncg method),
passing the options
opt_par = {
    'initial_trust_radius': 0.1,
    'max_trust_radius': 1,
    'eta': 0.15,
    'gtol': 1e-5,
    'disp': True
}
I invoke the minimize function as:
res = minimize(bbox, x0, method='trust-ncg',jac=bbox_der, hess=bbox_hess,options=opt_par)
where
bbox is the function that evaluates the objective,
x0 is the initial guess,
bbox_der is the gradient function,
bbox_hess is the Hessian function,
opt_par is the dictionary of options shown above.
bbox invokes simulation code and collects the data. It works: minimize goes back and forth proposing new values, and bbox invokes the simulation.
Everything worked well until I hit a weird issue.
The x vector contains 8 values. I noticed that in one of the iterations the last value is greater than 1.
Given max_trust_radius, I think it should be less than 1, but it is 1.0621612802208713e+00.
This causes problems because bbox cannot accept a value greater than or equal to 1: it invokes a simulation program that has this constraint.
I went through the scipy code to see whether I could find a bug or something wrong, but I couldn't.
My main concerns are:
My understanding is that there is a bug in the scipy minimize code, since the new value is greater than max_trust_radius.
How can I control the values so they never become greater than 1?
What do you suggest to investigate the issue?
The max_trust_radius controls how large steps you are allowed to take:
max_trust_radius : float
Maximum value of the trust-region radius.
No steps that are longer than this value will be proposed.
Since you are very likely to take many steps during the minimization, each of which can be up to 1 long, it is not strange at all that (assuming ||x0|| = 0) you end up with ||x|| > 1.
If your problem is strictly bounded then you need to apply an optimization algorithm that supports bounds on the parameters.
For scipy.optimize.minimize only L-BFGS-B, TNC and SLSQP methods seem to support the bounds= keyword.
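A sketch of that suggestion (with a stand-in objective and gradient, since bbox and bbox_der aren't shown in the question): a bounds-aware method can cap every component just below 1.

import numpy as np
from scipy.optimize import minimize

def objective(x):                     # stand-in for bbox
    return float(np.sum((x - 0.5)**2))

def gradient(x):                      # stand-in for bbox_der
    return 2.0 * (x - 0.5)

x0 = np.full(8, 0.1)
bounds = [(None, 1.0 - 1e-9)] * 8     # keep each x[i] strictly below 1
res = minimize(objective, x0, jac=gradient, method='L-BFGS-B', bounds=bounds)
print(res.x)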

Dealing with SciPy fmin_bfgs precision loss

I'm currently trying to solve a minimization problem numerically, and I tried to use the optimization library available in SciPy.
My function and derivative are a bit too complicated to present here, but they are based on the following functions, the minimization of which does not work either:
import numpy as np

def func(x):
    return np.log(1 + np.abs(x))

def grad(x):
    return np.sign(x) / (1.0 + np.abs(x))
When calling the fmin_bfgs function (and initializing the descent method to x=10), I get the following message:
Warning: Desired error not necessarily achieved due to precision loss.
Current function value: 2.397895
Iterations: 0
Function evaluations: 24
Gradient evaluations: 22
and the output is equal to 10 (i.e. the initial point). I suppose that this error may be caused by one of two problems:
The objective function is not convex: however, I checked with other non-convex functions and the method gave me the right result.
The objective function is "very flat" when far from the minimum because of the log.
Are my suppositions correct? Or does the problem come from something else?
Whatever the cause, what can I do to correct this? In particular, is there any other available minimization method I could use?
Thanks in advance.
abs(x) is always somewhat dangerous as it is non-differentiable, and most solvers expect problems to be smooth. Note that we can drop the log from your objective function and then drop the 1, so we are left with minimizing abs(x). This can often be handled better by the following reformulation.
Instead of min abs(x), use
min t
s.t. -t <= x <= t
Of course, this requires a solver that can handle (linearly) constrained NLPs.
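A minimal sketch of that reformulation with scipy (SLSQP handles the linear inequality constraints; the variables are packed as v = [x, t]):

from scipy.optimize import minimize

cons = [
    {'type': 'ineq', 'fun': lambda v: v[1] - v[0]},   # t - x >= 0
    {'type': 'ineq', 'fun': lambda v: v[1] + v[0]},   # t + x >= 0
]
res = minimize(lambda v: v[1], x0=[10.0, 20.0], constraints=cons, method='SLSQP')
print(res.x)   # x is driven to 0 (and t with it)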

scipy 'Minimize the sum of squares of a set of equations'

I'm facing a problem with scipy's 'leastsq' optimisation routine. If I execute the following program, it says
raise errors[info][1], errors[info][0]
TypeError: Improper input parameters.
and sometimes "index out of range" for an array...
from scipy import *
import numpy
from scipy import optimize
from numpy import asarray
from math import *
def func(apar):
    apar = numpy.asarray(apar)
    x = apar[0]
    y = apar[1]
    eqn = abs(x - y)
    return eqn
Init = numpy.asarray([20.0, 10.0])
x = optimize.leastsq(func, Init, full_output=0, col_deriv=0, factor=100, diag=None, warning=True)
print 'optimized parameters: ',x
print '******* The End ******'
I don't know what the problem is with my func / optimize.leastsq() call; please help me.
leastsq works with vectors so the residual function, func, needs to return a vector of length at least two. So if you replace return eqn with return [eqn, 0.], your example will work. Running it gives:
optimized parameters: (array([10., 10.]), 2)
which is one of the many correct answers for the minimum of the absolute difference.
If you want to minimize a scalar function, fmin is the way to go, optimize.fmin(func, Init).
The issue here is that these two functions, although they look the same for scalars, are aimed at different goals. leastsq finds the least-squared error, generally from a set of idealized curves, and is just one way of doing a "best fit". On the other hand, fmin finds the minimum value of a scalar function.
Obviously yours is a toy example, for which neither of these really makes sense, so which way you go will depend on what your final goal is.
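Side by side, a small sketch of both suggestions (written for Python 3 and a current scipy, with the question's starting point):

import numpy as np
from scipy import optimize

def func_vec(apar):
    x, y = apar
    return [abs(x - y), 0.0]          # padded to length two for leastsq

def func_scalar(apar):
    x, y = apar
    return abs(x - y)

Init = np.asarray([20.0, 10.0])
print(optimize.leastsq(func_vec, Init))     # e.g. (array([10., 10.]), 2)
print(optimize.fmin(func_scalar, Init))     # scalar minimization alternative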
Since you want to minimize a simple scalar function (func() returns a single value, not a list of values), scipy.optimize.leastsq() should be replaced by a call to one of the fmin functions (with the appropriate arguments):
x = optimize.fmin(func, Init)
works correctly!
In fact, leastsq() minimizes the sum of squares of a list of values. It does not appear to work on a (list containing a) single value, as in your example (even though it could, in theory).
Just looking at the least squares docs, it might be that your function func is defined incorrectly. You're assuming that you always receive an array of at least length 2, but the optimize function is insanely vague about the length of the array you will receive. You might try writing to screen whatever apar is, to see what you're actually getting.
If you're using something like ipython or the python shell, you ought to be getting stack traces that show you exactly which line the error is occurring on, so start there. If you can't figure it out from there, posting the stack trace would probably help us.
