Differential evolution doesn't give back the global minimum - Python

I am trying to find the global minimum of the sum of squared differences, which contains five different parameters x[0], x[1], x[2], x[3], x[4] coming from an affine transformation of data. Since I have a lot of data to compare, the differential_evolution approach was sped up with numba, as you can see in the code below. (More details on the code can be found here: How to speed up differential_evolution to find the minimum of Sum of Squared Errors with five variables)
My problem now is that I get different values for my parameters when running the same code several times. This problem is also described in Differential evolution algorithm different results for different runs, but I use the scipy.optimize package to import differential_evolution instead of mystic. I tried to use mystic instead, but then I get the following error message:
TypeError: CPUDispatcher(<function tempsum3 at 0x000002AA043999D0>) is not a Python function
My question now is whether there is any approach that really gives me back the global minimum of my function. An approach using the Powell method with the minimize command gives back worse values for my parameters than differential_evolution (it gets stuck in local minima even faster). And if there is no other way than differential_evolution, how can I be sure that I really have the global minimum at some point? And in case the mystic approach is expedient, how do I get it going?
I would be thankful for any kind of advice.
Here's my function to minimize, using numba to speed it up, and the mystic package, which gives me back the error message:
from math import floor
import matplotlib.pyplot as plt
import numpy as np
import mystic as my
from mystic.solvers import diffev
import numba as nb
w1=5 #A1
h1=3
w2=8#A2
h2=5
hw1=w1*h1
hw2=w2*h2
A1=np.ones((h1,w1))
A2=np.ones((h2,w2))
for n1 in np.arange(2,4):
    A1[1][n1]=1000
for n2 in np.arange(5,7):
    A2[3][n2]=1000
    A2[4][n2]=1000
#Norm the raw data
maxA1=np.max(A1)
A1=1/maxA1*A1
#Norm the raw data
maxA2=np.max(A2)
A2=1/maxA2*A2
fig, axes = plt.subplots(1, 2)
axes[0].imshow(A2)
axes[0].set_title('original-A2')
axes[1].imshow(A1)
axes[1].set_title('shifted-A1')
plt.show()
@nb.njit
def xy_to_pixelvalue(x, y, dataArray): #getting the values of the normed pixels
    height = dataArray.shape[0]
    width = dataArray.shape[1]
    xcoord = floor((x+1)/2*(width-1))
    ycoord = floor((y+1)/2*(height-1))
    if -1 <= x <= 1 and -1 <= y <= 1:
        return dataArray[ycoord][xcoord]
    else:
        return 0
#norming pixel coordinates
A1x=np.linspace(-1,1,w1)
A1y=np.linspace(-1,1,h1)
A2x=np.linspace(-1,1,w2)
A2y=np.linspace(-1,1,h2)
#normed coordinates of A2 in a matrix
Ap2=np.zeros((h2,w2,2))
for i in np.arange(0,h2):
    for j in np.arange(0,w2):
        Ap2[i][j] = (A2x[j], A2y[i])
#defining a vector with the coordinates of A2
d=[]
cdata=Ap2[:,:,0:2]
for i in np.arange(0,h2):
    for j in np.arange(0,w2):
        d.append((cdata[i][j][0], cdata[i][j][1]))
d=np.asarray(d)
coora2=d.transpose()
coora2=np.array(coora2, dtype=np.float64)
@nb.njit('(float64[::1],)')
def tempsum3(x):
    tempsum = 0
    for l in np.arange(0,hw2):
        tempsum += (xy_to_pixelvalue(np.cos(np.radians(x[2]))*x[0]*coora2[0][l] - np.sin(np.radians(x[2]))*x[1]*coora2[1][l] + x[3],
                                     np.sin(np.radians(x[2]))*x[0]*coora2[0][l] + np.cos(np.radians(x[2]))*x[1]*coora2[1][l] + x[4],
                                     A1)
                    - xy_to_pixelvalue(coora2[0][l], coora2[1][l], A2))**2
    return tempsum
x0 = np.array([1,1,-0.5,0,0])
bounds = [(0.1,5),(0.1,5),(-1,2),(-4,4),(-4,4)]
result = diffev(tempsum3, x0, npop = 5*15, bounds = bounds, ftol = 1e-11, gtol = 3500, maxiter = 1024**3, maxfun = 1024**3, full_output=True, scale = 0.8)
print(result)
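One hedged workaround sketch, not from the original post: mystic seems to reject the numba CPUDispatcher object simply because it is not a plain Python function, so wrapping the compiled objective in an ordinary Python function may be enough to get diffev going (assuming the rejection is only a type check). Separately, scipy's differential_evolution accepts a seed, which at least makes successive runs reproducible; running from several seeds and keeping the best result is a common sanity check, though no stochastic method can certify a global minimum.
import numpy as np
from scipy.optimize import differential_evolution

# Plain Python wrapper so mystic's diffev sees an ordinary function
# instead of a numba CPUDispatcher (tempsum3, x0, bounds as defined above).
def tempsum3_wrapped(x):
    return tempsum3(np.asarray(x, dtype=np.float64))

result = diffev(tempsum3_wrapped, x0, npop=5*15, bounds=bounds, ftol=1e-11,
                gtol=3500, maxiter=1024**3, maxfun=1024**3,
                full_output=True, scale=0.8)

# With scipy, a fixed seed makes the stochastic search reproducible.
res = differential_evolution(tempsum3, bounds, seed=1, tol=1e-11)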

Related

How to fit experimental data in Python to inverse trigonometric function with limited definition area using scipy.curve_fit?

I am trying to fit some experimental data to a nonlinear function with one parameter that includes an arcus cosine function which therefore is limited in its area of definition from -1 to 1. I use scipy's curve_fit to find the parameter of the function, but it returns the following error:
RuntimeError: Optimal parameters not found: Number of calls to function has reached maxfev = 400.
The function I want to fit is this one:
def fitfunc(x, a):
    y = np.rad2deg(np.arccos(x*np.cos(np.deg2rad(a))))
    return y
For the fitting, I provide numpy arrays for x and y respectively, which contain values in degrees (which is why the function contains conversions to and from radians).
param, param_cov = curve_fit(fitfunc, xs, ys)
When I use other fit functions, for example a polynomial, curve_fit returns some values; the error mentioned above only occurs when I use this function, which includes an arccosine.
I suspect that it cannot fit the data points because, depending on the parameter of the arccosine function, some data points do not lie inside the area of definition of the arccosine. I have tried raising the number of iterations (maxfev) but without success.
Sample data:
ys = np.array([113.46125, 129.4225, 140.88125, 145.80375, 145.4425,
146.97125, 97.8025, 112.91125, 114.4325, 119.16125,
130.13875, 134.63125, 129.4375, 141.99, 139.86,
138.77875, 137.91875, 140.71375])
xs = np.array([2.786427013, 3.325624466, 3.473013087, 3.598247534, 4.304280248,
4.958273121, 2.679526725, 2.409388637, 2.606306639, 3.661558062,
4.569923009, 4.836843789, 3.377013596, 3.664550526, 4.335401233,
3.064199519, 3.97155254, 4.100567011])
As HS-nebula mentioned in the comments, you need to define an initial value a0 of a as a starting guess for the curve fit. Moreover, you need to be careful when choosing a0, as np.arccos() is only defined on [-1, 1] and choosing the wrong a0 results in an error.
import numpy as np
from scipy.optimize import curve_fit
ys = np.array([113.46125, 129.4225, 140.88125, 145.80375, 145.4425, 146.97125,
97.8025, 112.91125, 114.4325, 119.16125, 130.13875, 134.63125,
129.4375, 141.99, 139.86, 138.77875, 137.91875, 140.71375])
xs = np.array([2.786427013, 3.325624466, 3.473013087, 3.598247534, 4.304280248, 4.958273121,
2.679526725, 2.409388637, 2.606306639, 3.661558062, 4.569923009, 4.836843789,
3.377013596, 3.664550526, 4.335401233, 3.064199519, 3.97155254, 4.100567011])
def fit_func(x, a):
    a_in_rad = np.deg2rad(a)
    cos_a_in_rad = np.cos(a_in_rad)
    arccos_xa_product = np.arccos(x * cos_a_in_rad)
    return np.rad2deg(arccos_xa_product)
a0 = 80
param, param_cov = curve_fit(fit_func, xs, ys, a0, bounds = (0, 360))
print('Using curve we retrieve a value of a = ', param[0])
Output:
Using curve we retrieve a value of a = 100.05275506147824
However if you choose a0=60, you get the following error:
ValueError: Residuals are not finite in the initial point.
To be able to use the data with all possible values of a, a normalization as HS-nebula suggested is a good idea.
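A minimal sketch of that normalization, assuming fit_func and the arrays from the answer above: scaling xs by its maximum keeps x*cos(a) inside the [-1, 1] domain of np.arccos for any a, and the scale factor can be undone after the fit.
import numpy as np
from scipy.optimize import curve_fit

xmax = np.max(np.abs(xs))
xs_norm = xs / xmax      # now |xs_norm| <= 1, so |x*cos(a)| <= 1 for any a

param, param_cov = curve_fit(fit_func, xs_norm, ys, p0=80, bounds=(0, 360))
a_fit = param[0]
# Undo the normalization: cos(a) = cos(a_fit) / xmax
a = np.rad2deg(np.arccos(np.cos(np.deg2rad(a_fit)) / xmax))
print(a)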

How to pass the source term when solving ODE using odeint

The differential equation of a forced harmonic oscillator is given as Mx'' + Lx' + (w^2)x = F(t). Here F(t) is a source term. To solve this problem I wrote a code where I define the differential equation in a function diff. I wrote another function generate_pulse that gives F(t).
Then I use odeint, which solves the differential equation by calling the diff function along with other parameters. Now I don't get any error message if I put F=0 inside the diff function (i.e., ignore the F(t) term). Please have a look inside the diff function:
F=0 #No error detected if I put F=0 here. Comment out this line to see the error
Once I keep F(t), I get an error message: 'ValueError: setting an array element with a sequence.'
How to solve the problem?
Code:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import math
import itertools
def diff(y, t, param):
    M = param[0]
    L = param[1]
    w2 = param[2]
    F = param[3]
    F = 0 #No error detected if I put F=0 here. Comment out this line to see the error
    x, v = y
    dydt = [v, (F - L*v - w2*x)/M]
    return dydt
def generate_pulse(t, Amp, init_delay, rtime, PW):
    L = len(t)
    t0 = t[0:math.ceil(init_delay*L/100)]
    t1 = t[len(t0):len(t0)+math.ceil(rtime*L/100)]
    t2 = t[len(t0)+len(t1):len(t0)+len(t1)+math.ceil(PW*L/100)]
    t3 = t[len(t0)+len(t1)+len(t2):len(t0)+len(t1)+len(t2)+2*math.ceil(rtime*L/100)]
    t4 = t[len(t0)+len(t1)+len(t2)+len(t3):len(t0)+len(t1)+len(t2)+len(t3)+math.ceil(PW*L/100)]
    t5 = t[len(t0)+len(t1)+len(t2)+len(t3)+len(t4):len(t0)+len(t1)+len(t2)+len(t3)+len(t4)+math.ceil(rtime*L/100)]
    t6 = t[len(t0)+len(t1)+len(t2)+len(t3)+len(t4)+len(t5):]
    s0 = 0*t0
    s1 = (Amp/(t1[-1]-t1[0]))*(t1-t1[0])
    s2 = np.full((1,len(t2)), Amp)
    s2 = list(itertools.chain.from_iterable(s2)) #flatten the 2D array into a list
    s3 = -Amp/(t1[-1]-t1[0])*(t3-t3[0])+Amp
    s4 = np.full((1,len(t4)), -Amp)
    s4 = list(itertools.chain.from_iterable(s4)) #flatten the 2D array into a list
    s5 = (Amp/(t5[-1]-t5[0]))*(t5-t5[0])-Amp
    s6 = np.full((1,len(t6)), 0)
    s6 = list(itertools.chain.from_iterable(s6)) #flatten the 2D array into a list
    s = [s0,s1,s2,s3,s4,s5,s6]
    s = list(itertools.chain.from_iterable(s))
    return s
###############################################################################
# Main code from here
t = np.linspace(0, 30, 200)
y0 = [- 10, 0.0]
M=5
L = 1
w2 = 15.0
Amp=5
init_delay=10
rtime=10
PW=10
F=generate_pulse(t,Amp,init_delay,rtime,PW)
Param = ([M,L,w2,F],) #Making 'Param' a tuple, because the args parameter of odeint takes a tuple.
sol = odeint(diff, y0, t, args=Param)
plt.plot(t, sol[:, 0], 'b', label='x(t)')
plt.plot(t,F,'g',label='Force(t)')
plt.legend(loc='best')
plt.show()
You get the error because the value of F that you pass is the array that you generated.
Use the interpolation functions of numpy or scipy to make an actual function out of the arrays. Then evaluate that function at time t. Or directly implement the forcing term as a piecewise defined function of the scalar t.
Also note that the list of sampling times you give in t to odeint has (almost) nothing to do with the times at which odeint calls the ODE function diff. If you want to control that you would have to implement your own fixed-step method, probably not Runge-Kutta but some multi-step method.
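A minimal sketch of the interpolation approach, assuming the arrays t and F from the question (and that they have the same length): np.interp turns the sampled pulse into a function of the scalar time that odeint actually passes to diff.
import numpy as np
from scipy.integrate import odeint

def diff(y, t, M, L, w2, F_func):
    x, v = y
    # F_func(t) evaluates the interpolated pulse at the solver's own time t
    return [v, (F_func(t) - L*v - w2*x)/M]

F_func = lambda tau: np.interp(tau, t, F)   # t, F: sampled pulse from the question
sol = odeint(diff, y0, t, args=(M, L, w2, F_func))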

Python rfft does not yield correct phase information

So I'm trying to write code to implement the Forman phase correction to correct phase-related asymmetries in an interferogram. The premise of the routine is to convolve the Fourier transform of exp(-i*PhaseAngle) with your interferogram. This technique has been around since the 60s and it should work, but I've been having difficulties.
As such, I used Python to generate an interferogram with exactly 0 phase angle to debug my code. Once I got that working, I generated an interferogram with 0.3 radians of phase, and my phase array does not look like a constant 0.3, and I'm not sure what's happening. I've tried numpy's arctan, arctan2, and angle, all with bad results. I've also tried apodizing my data and that didn't help, so I removed it for clarity.
Any ideas why I'm not getting the correct phase information from my program? If you plug this code into Python you get a wonky phase graph.
I'm fairly confident in my processing functions, and the true meat of the code is below those, where I call those functions to process my interferogram. The basic routine to calculate the phase angle is:
-Truncate the interferogram to some small number of points (I used 401)
-Rotate it so that the zero path difference (ZPD) is the first data point
-Multiply the ZPD value by 0.5 (not sure the reason for this, but academic papers say that you have to do this)
-Take the real (one sided) fft of the resulting data
-Take the arc tangent of the imaginary part divided by the real part to get the phase angle
So I'm following the above routine for an artificially generated interferogram with 0.3 radians of phase shift, but have been unable to reproduce that phase error with the routine. Any ideas? Thanks for any help you all can offer!
EDIT: A lot of what this question boils down to is: how should I arrange my signal (which has a known, constant phase error of 0.3) before applying the fft such that the extracted phase of my result is a constant 0.3? And further, what type of fft in Python should I use to achieve this result? I'm highly confident that I'm successfully generating a signal with a constant phase error of 0.3.
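As a sanity check for that EDIT (this sketch is not from the original post): for a pure cosine sampled over an integer number of periods, np.angle applied to the np.fft.rfft output recovers the phase offset exactly at the signal's frequency bin. Note that arctan(imag/real) loses the quadrant, so np.angle or np.arctan2 is generally safer.
import numpy as np

# Pure cosine with a known 0.3 rad phase offset; an integer number of
# periods in the record avoids spectral leakage, so the phase at the
# signal's bin is exactly the offset.
n = 400                          # number of samples
cycles = 20                      # integer number of periods in the record
t = np.arange(n) / n
sig = np.cos(2*np.pi*cycles*t + 0.3)

spectrum = np.fft.rfft(sig)
print(np.angle(spectrum[cycles]))   # ~0.3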
Code:
## Importer
import numpy as np
import scipy.integrate
import matplotlib.pyplot as plt
plt.style.use('ggplot')
from scipy.fftpack import dct,idct,dst,idst
import cmath
from scipy import signal
from scipy.integrate import simps
import csv
## Generate interferogram
xlist = np.arange(-0.03, 0.03,(1/15798.69))
numPhase = 200
print('generating interferogram...')
def infGen(nu, xval):
    h = 6.626*10**-34 #J*s
    c = 3*10**8 #m/s
    kB = 1.38*10**-23 #J/K
    T = 300.0 # K
    return 0.5*2*h*c*c*nu*nu*nu*(100.0**3)*1/(np.exp(h*c*nu*100/(kB*T))-1)*np.cos(2*np.pi*nu*xval+0.3)
interferogram = list()
intError = list()
for i in range(len(xlist)):
    xval = xlist[i]
    value, error = scipy.integrate.quad(infGen, 500, 4000, args=(xval,), limit=150)
    interferogram.append(value)
    intError.append(error)
## Processing functions
def truncate(interferogram, centerburst):
    truncated = list()
    truncatedLength = 2*numPhase + 1
    for i in range(truncatedLength):
        # integer division so the index stays an int
        truncated.append(interferogram[centerburst - (truncatedLength//2) + i])
    #Compute the interferogram average to normalize
    truncavg = np.average(truncated)
    #Normalize the interferogram by its average
    for i in range(len(truncated)):
        truncated[i] = truncated[i] - truncavg
    return truncated
def rotateMertz(interferogram):
    rotated = list()
    for i in range(len(interferogram) - numPhase):
        rotated.append(interferogram[i + numPhase])
    for i in range(numPhase):
        rotated.append(interferogram[i])
    return rotated
def getphaseangle(fftdata):
    phaseangle = list()
    for i in range(len(fftdata)):
        #phaseangle.append(np.angle(fftdata[i]))
        """if fftdata[i].imag < 0:
            phaseangle[i] = phaseangle[i]+2*np.pi
        if fftdata[i].real>0 and np.abs(fftdata[i].imag)<(10**-10) and fftdata[i].imag<0:
            phaseangle[i] = 0"""
        phaseangle.append(np.arctan(fftdata[i].imag/fftdata[i].real))
    return phaseangle
def findZPD(inf):
    # finds the index of the ZPD for a given interferogram
    maxVal = max(np.abs(inf))
    x = [i for i, j in enumerate(inf) if j == maxVal]
    return x[0]
## Process Interferogram
inf = interferogram
phasedataTrunc = truncate(inf, findZPD(inf))
phasedataRot = rotateMertz(phasedataTrunc) # Rotates so ZPD is first value
phasedataRot[0] = phasedataRot[0]*0.5 # Multiply ZPD value by 0.5
phasedataFFT = np.fft.rfft(phasedataRot)
plt.figure('real part')
plt.plot(phasedataFFT.real)
plt.figure('imaginary part')
plt.plot(phasedataFFT.imag)
plt.figure('imaginary/real')
plt.plot(phasedataFFT.imag/phasedataFFT.real)
phaseAngle = getphaseangle(phasedataFFT)
plt.figure('phase angle')
plt.plot(phaseAngle)
plt.show()

How to minimize a multivariable function using scipy

So I have the function
f(x) = I_0 * (exp(Q*x/(n*K*T)) - 1)
Where Q, K and T are constants, for the sake of clarity I'll add the values
Q = 1.6x10^(-19)
K = 1.38x10^(-23)
T = 77.6
and n and I_0 are the two parameters that I'm trying to fit.
My xdata is a list of 50 datapoints, as is my ydata. So as of yet this is my code:
from __future__ import division
import scipy.optimize as optimize
import numpy
xdata = numpy.array([1.07,1.07994,1.08752,1.09355,
1.09929,1.10536,1.10819,1.11321,
1.11692,1.12099,1.12435,1.12814,
1.13181,1.13594,1.1382,1.14147,
1.14443,1.14752,1.15023,1.15231,
1.15514,1.15763,1.15985,1.16291,1.16482])
ydata = [0.00205,
0.004136,0.006252,0.008252,0.010401,
0.012907,0.014162,0.016498,0.018328,
0.020426,0.022234,0.024363,0.026509,
0.029024,0.030457,0.032593,0.034576,
0.036725,0.038703,0.040223,0.042352,
0.044289,0.046043,0.048549,0.050146]
#xdata and ydata are experimental data: xdata is voltage and ydata is current
def f(x, I0, N):
    # I0 = 7.85E-07
    # N = 3.185413895
    Q = 1.66E-19
    K = 1.38065E-23
    T = 77.3692
    return I0*(numpy.e**((Q*x)/(N*K*T))-1)
result = optimize.curve_fit(f, xdata, ydata) #trying to fit I0 and N
But the answer doesn't give suitably optimized parameters.
Any help would be hugely appreciated. I realize there may be something obvious I am missing, I just can't see what it is!
I have tried this, and for some reason if you throw out those constants, so the function becomes
def f(x, I0, N):
    return I0*(numpy.exp(x/N)-1)
you get something reasonable:
1.86901114e-13, 4.41838309e-02
It's true that it works better when we get rid of the constants. Define the function as:
def f(x, A, B):
    return A*(np.e**(B*x)-1)
and fit it with curve_fit; you'll be able to get A, which is explicitly I0 (A = I0), and B (you can obtain N simply via N = Q/(B*K*T)). I managed to get a pretty good fit.
I think if there are too many constants, the algorithm gets confused in some way.
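A hedged sketch of that reparameterized fit, reusing the data from the question; the starting guess p0 is a hypothetical choice informed by the first answer's result, not something from the original answers:
import numpy as np
from scipy.optimize import curve_fit

# Constants and data from the question
Q = 1.66e-19
K = 1.38065e-23
T = 77.3692
xdata = np.array([1.07, 1.07994, 1.08752, 1.09355, 1.09929, 1.10536, 1.10819,
                  1.11321, 1.11692, 1.12099, 1.12435, 1.12814, 1.13181, 1.13594,
                  1.1382, 1.14147, 1.14443, 1.14752, 1.15023, 1.15231, 1.15514,
                  1.15763, 1.15985, 1.16291, 1.16482])
ydata = np.array([0.00205, 0.004136, 0.006252, 0.008252, 0.010401, 0.012907,
                  0.014162, 0.016498, 0.018328, 0.020426, 0.022234, 0.024363,
                  0.026509, 0.029024, 0.030457, 0.032593, 0.034576, 0.036725,
                  0.038703, 0.040223, 0.042352, 0.044289, 0.046043, 0.048549,
                  0.050146])

def f(x, A, B):
    # Rescaled diode equation: A = I0 and B = Q/(N*K*T)
    return A*(np.exp(B*x) - 1)

popt, pcov = curve_fit(f, xdata, ydata, p0=(1e-13, 20))  # hypothetical p0
A, B = popt
I0 = A
N = Q/(B*K*T)   # recover n from the fitted exponent
print(I0, N)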

Using adaptive step sizes with scipy.integrate.ode

The (brief) documentation for scipy.integrate.ode says that two methods (dopri5 and dop853) have stepsize control and dense output. Looking at the examples and the code itself, I can only see a very simple way to get output from an integrator. Namely, it looks like you just step the integrator forward by some fixed dt, get the function value(s) at that time, and repeat.
My problem has pretty variable timescales, so I'd like to just get the values at whatever time steps it needs to evaluate to achieve the required tolerances. That is, early on, things are changing slowly, so the output time steps can be big. But as things get interesting, the output time steps have to be smaller. I don't actually want dense output at equal intervals, I just want the time steps the adaptive function uses.
EDIT: Dense output
A related notion (almost the opposite) is "dense output", whereby the steps taken are as large as the stepper cares to take, but the values of the function are interpolated (usually with accuracy comparable to the accuracy of the stepper) to whatever you want. The Fortran code underlying scipy.integrate.ode is apparently capable of this, but ode does not expose the interface. odeint, on the other hand, is based on a different code, and evidently does do dense output. (You can print something every time your right-hand side is called to see when that happens, and see that it has nothing to do with the output times.)
So I could still take advantage of adaptivity, as long as I could decide on the output time steps I want ahead of time. Unfortunately, for my favorite system, I don't even know what the approximate timescales are as functions of time, until I run the integration. So I'll have to combine the idea of taking one integrator step with this notion of dense output.
EDIT 2: Dense output again
Apparently, scipy 1.0.0 introduced support for dense output through a new interface. In particular, they recommend moving away from scipy.integrate.odeint and towards scipy.integrate.solve_ivp, which has a keyword dense_output. If set to True, the returned object has an attribute sol that you can call with an array of times, which then returns the integrated function's values at those times. That still doesn't solve the problem for this question, but it is useful in many cases.
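A minimal sketch of that newer interface, using the logistic example from the answers below (the tolerances are illustrative assumptions): with dense_output=True, sol.t holds the solver's adaptively chosen step times and sol.sol interpolates anywhere in between.
import numpy as np
from scipy.integrate import solve_ivp

def logistic(t, y, r=0.01):
    return r * y * (1.0 - y)

sol = solve_ivp(logistic, (0, 5000), [1e-5], dense_output=True,
                rtol=1e-8, atol=1e-10)
print(sol.t)                                   # adaptive internal step times
y_dense = sol.sol(np.linspace(0, 5000, 100))   # interpolated values anywhere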
Since SciPy 0.13.0,
The intermediate results from the dopri family of ODE solvers can
now be accessed by a solout callback function.
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt
def logistic(t, y, r):
    return r * y * (1.0 - y)
r = .01
t0 = 0
y0 = 1e-5
t1 = 5000.0
backend = 'dopri5'
# backend = 'dop853'
solver = ode(logistic).set_integrator(backend)
sol = []
def solout(t, y):
    sol.append([t, *y])
solver.set_solout(solout)
solver.set_initial_value(y0, t0).set_f_params(r)
solver.integrate(t1)
sol = np.array(sol)
plt.plot(sol[:,0], sol[:,1], 'b.-')
plt.show()
Result:
The result seems to be slightly different from Tim D's, although they both use the same backend. I suspect this has to do with the FSAL property of dopri5. In Tim's approach, I think the result k7 from the seventh stage is discarded, so k1 is calculated afresh.
Note: There's a known bug with set_solout not working if you set it after setting initial values. It was fixed as of SciPy 0.17.0.
I've been looking at this to try to get the same result. It turns out you can use a hack to get the step-by-step results by setting nsteps=1 in the ode instantiation. It will generate a UserWarning at every step (this can be caught and suppressed).
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt
import warnings
def logistic(t, y, r):
    return r * y * (1.0 - y)
r = .01
t0 = 0
y0 = 1e-5
t1 = 5000.0
#backend = 'vode'
backend = 'dopri5'
#backend = 'dop853'
solver = ode(logistic).set_integrator(backend, nsteps=1)
solver.set_initial_value(y0, t0).set_f_params(r)
# suppress Fortran-printed warning
solver._integrator.iwork[2] = -1
sol = []
warnings.filterwarnings("ignore", category=UserWarning)
while solver.t < t1:
    solver.integrate(t1, step=True)
    sol.append([solver.t, solver.y])
warnings.resetwarnings()
sol = np.array(sol)
plt.plot(sol[:,0], sol[:,1], 'b.-')
plt.show()
result:
The integrate method accepts a boolean argument step that tells the method to return a single internal step. However, it appears that the 'dopri5' and 'dop853' solvers do not support it.
The following code shows how you can get the internal steps taken by the solver when the 'vode' solver is used:
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt
def logistic(t, y, r):
    return r * y * (1.0 - y)
r = .01
t0 = 0
y0 = 1e-5
t1 = 5000.0
backend = 'vode'
#backend = 'dopri5'
#backend = 'dop853'
solver = ode(logistic).set_integrator(backend)
solver.set_initial_value(y0, t0).set_f_params(r)
sol = []
while solver.successful() and solver.t < t1:
    solver.integrate(t1, step=True)
    sol.append([solver.t, solver.y])
sol = np.array(sol)
plt.plot(sol[:,0], sol[:,1], 'b.-')
plt.show()
Result:
FYI, although an answer has been accepted already, I should point out for the historical record that dense output and arbitrary sampling from anywhere along the computed trajectory is natively supported in PyDSTool. This also includes a record of all the adaptively-determined time steps used internally by the solver. This interfaces with both dopri853 and radau5 and auto-generates the C code necessary to interface with them rather than relying on (much slower) python function callbacks for the right-hand side definition. None of these features are natively or efficiently provided in any other python-focused solver, to my knowledge.
Here's another option that should also work with dopri5 and dop853. Basically, the solver will call the logistic() function as often as it needs to calculate intermediate values, so that's where we store the results:
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt
sol = []
def logistic(t, y, r):
    sol.append([t, y])
    return r * y * (1.0 - y)
r = .01
t0 = 0
y0 = 1e-5
t1 = 5000.0
# Maximum number of steps that the integrator is allowed
# to do along the whole interval [t0, t1].
N = 10000
#backend = 'vode'
backend = 'dopri5'
#backend = 'dop853'
solver = ode(logistic).set_integrator(backend, nsteps=N)
solver.set_initial_value(y0, t0).set_f_params(r)
# Single call to solver.integrate()
solver.integrate(t1)
sol = np.array(sol)
plt.plot(sol[:,0], sol[:,1], 'b.-')
plt.show()
