SciPy differential evolution - Python

The thing is, I'm trying to design a fitting procedure for my purposes and want to use SciPy's differential evolution algorithm as a general estimator of initial values, which will then be used in the LM algorithm for better fitting. The function I want to minimize with DE is the least squares between an analytically defined non-linear function and some experimental values. The point at which I'm stuck is the function design. As stated in the SciPy reference: "function must be in the form f(x, *args), where x is the argument in the form of a 1-D array and args is a tuple of any additional fixed parameters needed to completely specify the function"
Here is an ugly example of code which I wrote just for illustrative purposes:
def func(x, *args):
    """args[0] = x
    args[1] = y"""
    result = 0
    for i in range(len(args[0][0])):
        result += (x[0]*(args[0][0][i]**2) + x[1]*(args[0][0][i]) + x[2] - args[0][1][i])**2
    return result**0.5

if __name__ == '__main__':
    bounds = [(1.5, 0.5), (-0.3, 0.3), (0.1, -0.1)]
    x = [0, 1, 2, 3, 4]
    y = [i**2 for i in x]
    args = (x, y)
    result = differential_evolution(func, bounds, args=args)
    print(func(bounds, args))
I wanted to supply the raw data as a tuple to the function, but it seems that's not how it's supposed to be done, since the interpreter isn't happy with the function. The problem should be easily solvable, but I'm really frustrated, so advice would be much appreciated.

This is a fairly straightforward solution which shows the idea; the code isn't very Pythonic, but for simplicity I think it's good enough. As an example, we want to fit an equation of the form y = ax^2 + bx + c to data obtained from the equation y = x^2. It is obvious that the parameter a should be 1, and b, c should equal 0. Since the differential evolution algorithm finds the minimum of a function, we want to find the minimum of the root-mean-square deviation (again, for simplicity) of the general equation (y = ax^2 + bx + c) with the given parameters versus the "experimental" data. So, to the code:
from scipy.optimize import differential_evolution

def func(parameters, *data):
    # we have 3 parameters which will be passed as `parameters`, and the
    # "experimental" x, y which will be passed as `data`
    a, b, c = parameters
    x, y = data
    result = 0
    for i in range(len(x)):
        result += (a*x[i]**2 + b*x[i] + c - y[i])**2
    return result**0.5

if __name__ == '__main__':
    # bounds for the variation of the parameters; note that scipy expects
    # each pair in (min, max) order
    #           a            b            c
    bounds = [(0.5, 1.5), (-0.3, 0.3), (-0.1, 0.1)]
    # producing "experimental" data
    x = [i for i in range(6)]
    y = [i**2 for i in x]
    # packing "experimental" data into args
    args = (x, y)
    result = differential_evolution(func, bounds, args=args)
    print(result.x)
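The loop above is the non-Pythonic part; it can be replaced with vectorized NumPy operations. A minimal standalone sketch of the same objective (same data, with the bounds in (min, max) order as above):

import numpy as np
from scipy.optimize import differential_evolution

def func_vec(parameters, *data):
    # same objective as above, written with array operations
    a, b, c = parameters
    x, y = data
    return np.sqrt(np.sum((a*x**2 + b*x + c - y)**2))

x = np.arange(6, dtype=float)
y = x**2
bounds = [(0.5, 1.5), (-0.3, 0.3), (-0.1, 0.1)]
result = differential_evolution(func_vec, bounds, args=(x, y))
print(result.x)  # should come out close to [1, 0, 0]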

Related

How can I solve this matrix ODE using python?

I would like to numerically compute this ODE from time 0 -> T:

P'(t) = Q + Y^T P(t) + P(t) Y + P(t) U P(t),  with terminal condition P(T) = P_T,

where all of the sub-matrices are numerically given in a paper. Here are all of the variables:
import numpy as np
from scipy import linalg  # needed for linalg.inv below
T = 1
eta = np.diag([2e-7, 2e-7])
R = [[0.33, 3.95],
[-2.52, 10.23]]
R = np.array(R)
gamma = 2e-5
GAMMA = 100
S_bar = [54.23, 27.45]
cov = [[0.47, 0.2],
[0.2, 0.14]]
cov = np.array(cov)
shape = cov.shape
Q = 0.5*np.block([[gamma*cov, R],
[np.transpose(R), np.zeros(shape)]])
Y = np.block([[np.zeros(shape), np.zeros(shape)],
[gamma*cov, R]])
U = np.block([[-linalg.inv(eta), np.zeros(shape)],
[np.zeros(shape), 2*gamma*cov]])
P_T = np.block([[-GAMMA*np.ones(shape), np.zeros(shape)],
[np.zeros(shape), np.zeros(shape)]])
Now I define the function f so that P' = f(t, P):
n = len(P_T)

def f(t, X):
    X = X.reshape([n, n])
    return (Q + np.transpose(Y) @ X + X @ Y + X @ U @ X).reshape(-1)
Now my goal is to numerically solve this ODE. I'm trying to figure out the right solve function so that if I integrate the ODE from T to 0, and then, using the final value I get, integrate back from 0 to T, the two matrices I get are actually (nearly) the same. Here is my solve function:
from scipy import integrate

def solve(interval, initial_value):
    return integrate.solve_ivp(f, interval, initial_value, method="LSODA", max_step=1e-4)
Now I can test whether the computation is right:
solv = solve([T, 0], P_T.reshape(-1))
y = np.array(solv.y)
solv2 = solve([0, T], y[:, -1])
y2 = np.array(solv2.y)
# print(solv.status)
# print(solv2.status)
# this line shows the difference between the initial matrix at T and the
# final matrix computed at T; the smaller the value, the better the computation
print(sum(sum(abs(P_T - y2[:, -1].reshape([n, n])))))
My issue is: no matter what solve function I use (different methods, different step sizes, testing all the parameters...), I always get either errors or very bad convergence (the difference between the two matrices is too high).
Knowing that, according to the paper this ODE comes from ((23) in https://arxiv.org/pdf/2103.13773v4.pdf), a solution exists, how can I numerically compute it?

Is there a way to easily integrate a set of differential equations over a full grid of points?

The problem is that I would like to be able to integrate the differential equations starting for each point of the grid at once, instead of having to loop over the scipy integrator for each coordinate. (I'm sure there's an easy way.)
As background: I'm trying to solve for the trajectories of a Couette flow that alternates the direction of the velocity every certain period, which is a well-known dynamical system that produces chaos. I don't think the rest of the code really matters, only the integration with scipy and my usage of numpy's meshgrid function.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, writers
from scipy.integrate import solve_ivp

start_T = 100
L = 1
V = 1
total_run_time = 10*3
grid_points = 10
T_list = np.arange(start_T, 1, -1)
x = np.linspace(0, L, grid_points)
y = np.linspace(0, L, grid_points)
X, Y = np.meshgrid(x, y)
condition = True
totals = np.zeros((start_T, total_run_time, 2))
alphas = np.zeros(start_T)
i = 0
for T in T_list:
    alphas[i] = L / (V * T)
    solution = np.array([X, Y])
    for steps in range(int(total_run_time/T)):
        t = steps*T
        if condition:
            def eq(t, x):
                return V * np.sin(2 * np.pi * x[1] / L), 0.0
            condition = False
        else:
            def eq(t, x):
                return 0.0, V * np.sin(2 * np.pi * x[1] / L)
            condition = True
        time_steps = np.arange(t, t + T)
        xt = solve_ivp(eq, time_steps, solution)
        solution = np.array([xt.y[0], xt.y[1]])
        totals[i][t: t + T][0] = solution[0]
        totals[i][t: t + T][1] = solution[1]
    i += 1
np.save('alphas.npy', alphas)
np.save('totals.npy', totals)
The error given is:
ValueError: y0 must be 1-dimensional.
It comes from scipy's solve_ivp function, because it doesn't accept the format produced by numpy's meshgrid function. I know I could run some loops and get around it, but I'm assuming there must be a 'good' way to do it using numpy and scipy. I accept advice on the rest of the code too.
Yes, you can do that, in several variants. The question remains whether it is advisable.
To implement a generally usable ODE integrator, it needs to be abstracted from the models. Most implementations do that by making the state space a flat-array vector space; some allow a vector-space engine to be passed as a parameter, so that structured vector spaces can be used. The scipy integrators are not of this type.
So you need to translate the states to flat vectors for the integrator, and back to the structured state for the model.
def encode(X,Y): return np.concatenate([X.flatten(),Y.flatten()])
def decode(U): return U.reshape([2,grid_points,grid_points])
Then you can implement the ODE function as
def eq(t, U):
    X, Y = decode(U)
    # the velocity field on the grid; note that `x[1]` from the question refers
    # to the y coordinate, which after decoding is the grid Y
    Vec = V * np.sin(2 * np.pi * Y / L)
    if int(t/T) % 2 == 0:
        return encode(Vec, np.zeros(Vec.shape))
    else:
        return encode(np.zeros(Vec.shape), Vec)
with initial value
U0 = encode(X,Y)
Then this can be directly integrated over the whole time span.
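A minimal sketch of that call, reusing eq, U0, decode, and total_run_time from above (the time span and the default solver settings are my assumptions):

from scipy.integrate import solve_ivp

# integrate the whole ensemble of grid points in one call
sol = solve_ivp(eq, (0, total_run_time), U0)

# recover the structured state at the final time
X_end, Y_end = decode(sol.y[:, -1])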
Why this might not be such a good idea: thinking of each grid point and its trajectory separately, each trajectory has its own sequence of adapted time steps for the given error level. When integrating all of them simultaneously, the adapted step size is the minimum over all trajectories at the given time. Thus, while the individual trajectories might have only short intervals with very small step sizes amid long intervals with sparse time steps, these can overlap in the ensemble to produce very small step sizes everywhere.
If you go beyond the testing stage, switch to a more compiled solver implementation: odeint is Fortran code with wrappers, so half a solution. JiTCODE translates to C code and links with the compiled solver behind odeint. Leaving Python, you get SUNDIALS, the DifferentialEquations.jl ecosystem of Julia, or boost::odeint.
TL;DR
I don't think you can "integrate the differential equations starting for each point of the grid at once".
MWE
Please try to provide an MWE to reproduce your problem. You said "I don't think the rest of the code really matters", but leaving everything in makes it harder for people to understand your problem.
Understanding how to talk to the solver
Before answering your question, there are several things that seem to be misunderstood:
by defining time_steps = np.arange(t, t + T) and then calling solve_ivp(eq, time_steps, solution): the second argument of solve_ivp is the time span you want the solution for, i.e., the "start" and "stop" times as a 2-tuple. Here your time_steps is a whole array of times, so I would replace it by (t, t + T). Look for t_span in the doc.
from what I understand, it seems like you want to control each iteration of the numerical solution: that's not how solve_ivp works. Moreover, I think you want to switch the function eq at each iteration. Since you have to pass the right-hand side of the equation, you would need to wrap this behavior inside a single function. It would not work (see right after), but conceptually something like this:
def RHS(t, x):
    # unwrap your variables; condition acts like an additional variable of your
    # problem, with a very simple differential equation
    x0, x1, condition = x
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # compute new result for condition
    condition_out = not condition
    return [x0_out, x1_out, condition_out]
This would not work, because the evolution of condition doesn't satisfy the continuity/differentiability properties that the solver assumes. Since condition is really a boolean switch that parametrizes the model, we can use global to control the state of this boolean:
condition = True

def RHS_eq(t, y):
    global condition
    x0, x1 = y
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # update condition
    condition = 0 if condition == 1 else 1
    return [x0_out, x1_out]
Finally, and this is the ValueError you mentioned in your post: you define solution = np.array([X, Y]), which is actually the initial condition and is supposed to be "y0: array_like, shape (n,)", where n is the number of variables of the problem (in the case of [x0_out, x1_out] that would be 2).
An MWE for a single initial condition
All that being said, let's start with a simple MWE for a single starting point (0.5, 0.5), so we have a clear view of how to use the solver:
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# initial conditions for x0 and x1; condition is handled as a global switch
initial = [0.5, 0.5]
condition = True
# time span
t_span = (0, 100)
# constants
V = 1
L = 1

# define the "model", i.e. the set of equations of t
def RHS_eq(t, y):
    global condition
    x0, x1 = y
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # update condition
    condition = 0 if condition == 1 else 1
    return [x0_out, x1_out]

solution = solve_ivp(RHS_eq,   # right-hand side of the equation(s)
                     t_span,   # time span, a 2-tuple
                     initial,  # initial conditions
                     )

fig, ax = plt.subplots()
ax.plot(solution.t, solution.y[0], label="x0")
ax.plot(solution.t, solution.y[1], label="x1")
ax.legend()
plt.show()
Final answer
Now, what we want is to do the exact same thing but for various initial conditions, and from what I understand, we can't; again, quoting the doc:
y0: array_like, shape (n,): Initial state. The solver's initial condition only allows one starting point vector.
So, to answer the initial question: I don't think you can "integrate the differential equations starting for each point of the grid at once".

Python lambda function with arrays as parameters

I am trying to define a function of n variables to fit to a data set. The function (the "Kelly function", reconstructed here from the code below) looks like this:
y(x) = (a_0 + a_1*x^2 + ... + a_{n-1}*x^(2(n-1))) / (b_0 + b_1*x^2 + ... + b_{n-1}*x^(2(n-1)))
I then want to find the optimal a_i's and b_j's to fit my data set using scipy.optimize.leastsq.
Here's my code so far.
from scipy.optimize import leastsq
import numpy as np

def kellyFunc(a, b, x):  # function to fit
    top = 0
    bot = 0
    a = [a]
    b = [b]
    for i in range(len(a)):
        top = top + a[i]*x**(2*i)
        bot = bot + b[i]*x**(2*i)
    return(top/bot)

def fitKelly(x, y, n):
    line = lambda params, x: kellyFunc(params[0,:], params[1,:], x)  # lambda function to minimize
    error = lambda params, x, y: line(params, x) - y  # Kelly - dataset
    paramsInit = [[1 for x in range(n)] for y in range(2)]  # define all ai and bi = 1 for initial guess
    paramsFin, success = leastsq(error, paramsInit, args=(x, y))  # run leastsq optimization
    # line of best fit
    xx = np.linspace(x.min(), x.max(), 100)
    yy = line(paramsFin, xx)
    return(paramsFin, xx, yy)
At the moment it's giving me the error "IndexError: too many indices", because of the way I've defined my initial lambda function with params[0,:] and params[1,:].
There are a few problems with your approach that make me write a full answer.
As for your specific question: leastsq doesn't really expect multidimensional arrays as parameter input. The documentation doesn't make this clear, but parameter inputs are flattened when passed to the objective function. You can verify this by using full functions instead of lambdas:
from scipy.optimize import leastsq
import numpy as np

def kellyFunc(a, b, x):  # function to fit
    top = 0
    bot = 0
    for i in range(len(a)):
        top = top + a[i]*x**(2*i)
        bot = bot + b[i]*x**(2*i)
    return(top/bot)

def line(params, x):
    print(repr(params))            # params is 1d!
    params = params.reshape(2, -1) # need to reshape back
    return kellyFunc(params[0,:], params[1,:], x)

def error(params, x, y):
    print(repr(params))         # params is 1d!
    return line(params, x) - y  # pass it on, reshape in line()

def fitKelly(x, y, n):
    #paramsInit = [[1 for x in range(n)] for y in range(2)] # define all ai and bi = 1 for initial guess
    paramsInit = np.ones((n, 2))  # better
    paramsFin, success = leastsq(error, paramsInit, args=(x, y))  # run leastsq optimization
    # line of best fit
    xx = np.linspace(x.min(), x.max(), 100)
    yy = line(paramsFin, xx)
    return(paramsFin, xx, yy)
Now, as you see, the shape of the params array is (2*n,) instead of (2,n). By doing the re-reshape ourselves, your code (almost) works. Of course, the print calls are only there to show you this fact; they are not needed for the code to run (and will produce a bunch of needless output in each iteration).
See my other changes, related to other errors: you had a=[a] and b=[b] in your kellyFunc, for no good reason. This turned the input arrays into lists containing arrays, which made the next loop do something very different from what you intended.
Finally, the sneakiest error: you have input variables named x, y in fitKelly, and then you use x and y as loop variables in a list comprehension. Please be aware that this only works as you expect in Python 3; in Python 2 the internal variables of list comprehensions leak into the enclosing scope, overwriting your input variables named x and y.
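A minimal usage sketch with synthetic data (the sample data and n = 2 are my assumptions, not from the question; note also that a rational function like this is only determined up to a common scaling of the a and b coefficients, so don't expect the raw parameters to match the generating values exactly):

import numpy as np

x = np.linspace(1.0, 5.0, 50)
# synthetic data generated from a = [2, 0.5], b = [1, 0.1]
y = (2 + 0.5 * x**2) / (1 + 0.1 * x**2)
params, xx, yy = fitKelly(x, y, 2)
print(params.reshape(2, -1))  # row 0: a coefficients, row 1: b coefficients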

Python: scipy.optimize.curve_fit does not fit a linear equation where the slope is known

I think I have a relatively simple problem, but I have been trying for a few hours now without luck. I am trying to fit a linear function (linearf) or a power-law function (plaw) where I already know the slope of these functions (b; I have to keep it constant in this study). The results should give an intercept around 1.8, something I have not managed to get. I must be doing something wrong, but I cannot put my finger on it. Does somebody have an idea how to get around this problem?
Thank you in advance!
import numpy as np
from scipy import optimize

p2 = np.array([8.08543600e-06, 1.61708700e-06, 1.61708700e-05,
               4.04271800e-07, 4.04271800e-06, 8.08543600e-07])
pD = np.array([12.86156, 16.79658, 11.52103, 21.092, 14.47469, 18.87318])

# power-law function
def plaw(a, x):
    b = -0.1677  # known slope
    y = a*(x**b)
    return y

# linear function
def linearf(a, x):
    b = -0.1677  # known slope
    y = b*x + a
    return y

## First way, via the power-law function ##
popt, pcov = optimize.curve_fit(plaw, p2, pD, p0=1.8)
# array([ 7.12248200e-37]), wrong
popt, pcov = optimize.curve_fit(plaw, p2, pD)
# returns 0.9, which is wrong too (the result should be around 1.8)

## Second way, via log10 and the linear function ##
x = np.log10(p2)
y = np.log10(pD)
popt, pcov = optimize.curve_fit(linearf, x, y, p0=0.3)
K = 10**popt[0]
# returns 3.4712954470408948e-41, which is wrong
I just discovered an error in the functions: curve_fit expects the independent variable to be the first argument of the model function, f(x, ...), and I had the argument order reversed.
It should be:
def plaw(x, a):
    b = -0.1677  # known slope
    y = a*(x**b)
    return y
and not
def plaw(a, x):
    b = -0.1677  # known slope
    y = a*(x**b)
    return y
Stupid mistake.
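For completeness, a quick sketch of re-running the first fit with the corrected signature (the expected intercept near 1.8 is the figure from the question, not verified here):

popt, pcov = optimize.curve_fit(plaw, p2, pD, p0=1.8)
print(popt)  # intercept a, expected to come out around 1.8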

Solving an ordinary differential equation on a fixed grid (preferably in Python)

I have a differential equation of the form
dy(x)/dx = f(y,x)
that I would like to solve for y.
I have an array xs containing all of the values of x for which I need ys.
For only those values of x, I can evaluate f(y,x) for any y.
How can I solve for ys, preferably in Python?
MWE
import numpy as np
# these are the only x values that are legal
xs = np.array([0.15, 0.383, 0.99, 1.0001])
# some made-up function --- I don't actually have an analytic form like this
def f(y, x):
    if not np.any(np.isclose(x, xs)):
        return np.nan
    return np.sin(y + x**2)
# now I want to know which array of ys satisfies dy(x)/dx = f(y,x)
Assuming you can use something simple like Forward Euler...
Numerical solutions will rely on approximate solutions at previous times. So if you want a solution at t = 1 it is likely you will need the approximate solution at t<1.
My advice is to figure out what step size will allow you to hit the times you need, and then find the approximate solution on an interval containing those times.
import numpy as np

# from your example, the smallest step size required to hit all points would be 0.0001
a = 0       # start point
b = 1.5     # possible end point
h = 0.0001  # step size
n = int(round((b - a) / h)) + 1  # number of grid points

y = np.zeros(n)
t = np.linspace(a, b, n)
y[0] = 0.1  # initial condition here

# forward Euler step: y_i = y_{i-1} + h * f(y_{i-1}, t_{i-1})
for i in range(1, n):
    y[i] = y[i-1] + h * f(y[i-1], t[i-1])
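Since the step size was chosen so that every value in xs lands exactly on the grid, you can then read the solution off at those points (a sketch, assuming the arrays from above):

# map each requested x onto its grid index and read off the solution there
indices = np.round((xs - a) / h).astype(int)
ys = y[indices]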
Alternatively, you could use an adaptive step method (which I am not prepared to explain right now) to take larger steps between the times you need.
Or, you could find an approximate solution over an interval using a coarser mesh and interpolate the solution.
Any of these should work.
I think you should first solve the ODE on a regular grid, and then interpolate the solution onto your fixed grid. Approximate code for your problem:
import numpy as np
from scipy.integrate import odeint
from scipy import interpolate

xs = np.array([0.15, 0.383, 0.99, 1.0001])

# dy/dx = f(x, y)
def dy_dx(y, x):
    return np.sin(y + x ** 2)

y0 = 0.0  # initial condition
x = np.linspace(0, 10, 200)  # here you can control the accuracy
sol = odeint(dy_dx, y0, x)
f = interpolate.interp1d(x, np.ravel(sol))
ys = f(xs)
But dy_dx(y, x) should always return something reasonable (not np.nan).
Here is the plot for this case: [figure omitted]
