Hi, I'm writing code that applies an RK4 function to different types of ODEs. I have gotten it to work for the first function but not for the second, which is 2 equations and outputs two answers into a vector. I was able to get the whole code to work for one that outputs a single scalar variable (f2a), but I'm not sure how to change the code so that it can still do that and also handle this second function (fb2). I'm extremely new to coding and don't really know what I'm doing, so all advice and help is welcome. Code posted below:
import numpy as np
import math
import matplotlib.pyplot as plt
#defining functions
H0=7 #initial height, meters
def f2a(t,H,k,Vin,D):
    dhdt=4/(math.pi*D**2)*(Vin-k*np.sqrt(H))
    return(dhdt)

def fb2(J,t):
    x=J[0]
    y=J[1]
    dxdt=0.25*y-x
    dydt=3*x-y
    #X0,Y0=1,1 initial conditions
    return([dxdt,dydt])
#x0 and y0 are initial conditions
def odeRK4(function,tspan,R,h,*args):
    #R is vector of initial conditions
    x0=R[0]
    y0=R[1]
    #writing statement for what to do if h isn't given/other thing
    if h==None:
        h=.01*(tspan[1]-tspan[0])
    elif h> tspan[1]-tspan[0]:
        h=.01*(tspan[1]-tspan[0])
    else:
        h=h
    #defining the 2-element array (i hope)
    #pretty sure tspan is range of t values
    x0=tspan[0] #probably 0 if this is meant for time
    xn=tspan[1] #whatever time we want it to end at?
    #xn is final x value-t
    #x0 is initial
    t_values=np.arange(x0,21,1) #0-20
    N=len(t_values)
    y_val=np.zeros(N)
    y_val[0]=y0
    #I am trying to print all the Y values into this array
    for i in range(1,N):
        #rk4 method
        #k1
        t1=t_values[i-1] #started range # 1, n-1 starts at 0
        y1=y_val[i-1]
        k1=function(t1,y1,*args)
        #k2
        t2=t_values[i-1]+0.5*h
        y2=y_val[i-1]+0.5*k1*h
        k2=function(t2,y2,*args)
        #k3
        t3=t_values[i-1]+0.5*h
        y3=y_val[i-1]+0.5*k2*h
        k3=function(t3,y3,*args)
        #k4
        t4=t_values[i-1]+h
        y4=y_val[i-1]+h*k3
        k4=function(t4,y4,*args)
        y_val[i]=y_val[i-1]+(1/6)*h*(k1+2*k2+2*k3+k4)
        #this fills the t_val array and keeps the loop going
    a=np.column_stack((t_values,y_val))
    print('At time t, Y= (t on left,Y on right)')
    print(a)
    plt.plot(t_values,y_val)
print('For 3A:')
#k=10, told by professor bc not included in instructions
odeRK4(f2a, [0,20],[0,7], None, 10,150,7)
print('for 3B:')
odeRK4(fb2,[0,20],[1,1],None)
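(Not part of the original post: one common way to make the same loop handle both a scalar ODE like f2a and a system like fb2 is to store the state as a NumPy array and convert each slope to an array, sketched below. It assumes both right-hand sides take their arguments in the order (t, y), so fb2 would need its parameters swapped to (t, J) first; the function name odeRK4_vec is purely illustrative.)
import numpy as np

def odeRK4_vec(function, tspan, R, h=None, *args):
    # sketch only: keep the state as an array so a scalar ODE and a
    # system of ODEs both go through the same RK4 update
    y0 = np.atleast_1d(np.asarray(R, dtype=float))
    if h is None or h > tspan[1] - tspan[0]:
        h = 0.01*(tspan[1] - tspan[0])
    n_steps = int(round((tspan[1] - tspan[0])/h))
    t_values = np.linspace(tspan[0], tspan[1], n_steps + 1)
    h = (tspan[1] - tspan[0])/n_steps          # actual step size used
    y_val = np.zeros((len(t_values), len(y0)))
    y_val[0] = y0
    for i in range(1, len(t_values)):
        t, y = t_values[i-1], y_val[i-1]
        k1 = np.asarray(function(t, y, *args))
        k2 = np.asarray(function(t + 0.5*h, y + 0.5*h*k1, *args))
        k3 = np.asarray(function(t + 0.5*h, y + 0.5*h*k2, *args))
        k4 = np.asarray(function(t + h, y + h*k3, *args))
        y_val[i] = y + (h/6)*(k1 + 2*k2 + 2*k3 + k4)
    return t_values, y_val
With that, t, Y = odeRK4_vec(fb2, [0, 20], [1, 1]) would give Y with shape (N, 2), one column per variable, while odeRK4_vec(f2a, [0, 20], 7, None, 10, 150, 7) still covers the scalar case.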
I have a complex function that includes (sin(x)/x). At x=0 this function has a limit of 1, but when evaluated numerically it is NaN.
This function is vectorized to get high performance when evaluating a large number of values, but fails when x=0.
A simplified version of this problem is shown below.
import numpy as np
def f(x):
    return np.sin(x)/x
x = np.arange(-5,6)
y = f(x)
print(y)
When executed, this returns:
... RuntimeWarning: invalid value encountered in true_divide
return np.sin(x)/x
[-0.19178485 -0.18920062 0.04704 0.45464871 0.84147098 nan
0.84147098 0.45464871 0.04704 -0.18920062 -0.19178485]
This can be addressed by trapping the error, finding nan and substituting for the limit.
Is there a better way to handle a function like this?
Note: The function is more complex than sin(x)/x. The limit is known. The use of sinc is not an option. sin(x)/x is used only to illustrate the problem.
You could try to use true_divide with where to specify the places where you want to divide, and out to pass in an output array that already contains the result you expect at the places where you don't divide. Not sure if this is the most optimal solution, but it would work. In code that should read like
res = np.true_divide(np.sin(x), x, where=x!=0, out=np.ones_like(x, dtype=float))
This is the option I usually use when doing my plots:
x = np.arange(-5, 6, dtype=float)
domain = x!=0
fill_with = np.nan
f = np.divide(np.sin(x), x, out=np.full_like(x, fill_with), where=domain)
You can customize any domain and value to fill with outside the domain.
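For instance, to substitute the known limit (1 here) directly at x = 0 instead of nan, the same pattern would be:
import numpy as np

x = np.arange(-5, 6, dtype=float)
# evaluate sin(x)/x only where x != 0; the x == 0 slot keeps the
# pre-filled limit value 1.0, so no RuntimeWarning is raised
y = np.divide(np.sin(x), x, out=np.full_like(x, 1.0), where=(x != 0))
print(y)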
There's also an implementation of sinc in numpy (np.sinc) that you can use.
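Note that np.sinc is the normalized sinc, sin(pi*z)/(pi*z), so getting sin(x)/x out of it takes a rescale of the argument (only relevant if sinc were allowed, which the question rules out for the real function). A quick sketch:
import numpy as np

x = np.arange(-5, 6, dtype=float)
# np.sinc(z) = sin(pi*z)/(pi*z), so np.sinc(x/pi) = sin(x)/x,
# and it already evaluates to 1 at x == 0
y = np.sinc(x / np.pi)
print(y)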
I want to find the x value for a given y (I want to know at what t the conversion, X, reaches 0.9). There are questions like this all over SO and they say use np.interp, but I did that in two ways and both were wrong. The code is:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
# Create time domain
t = np.linspace(0,4000,100)
# Parameters
A = 1.5*10**(-3) # Arrhenius constant
T = 300 # Temperature [K]
R = 8.31 # Ideal gas constant [J/molK]
E_a= 1000 # Activation energy [J/mol]
V = 5 # Reactor volume [m3]
# Initial condition
C_A0 = 0.1 # Initial concentration [mol/m3]
def dNdt(C_A,t):
    r_A = (-k*C_A)/V
    dNdt = r_A*V
    return dNdt
k=A*np.exp(-E_a/(R*T))
C_A = odeint(dNdt,C_A0,t)
N_A0 = C_A0*V
N_A = C_A*V
X = (N_A0 - N_A)/N_A0
# Plot
plt.figure()
plt.plot(t,X,'b-',label='Conversion')
plt.plot(t,C_A,'r--',label='Concentration')
plt.legend(loc='best')
plt.grid(True)
plt.xlabel('Time [s]')
plt.ylabel('Conversion')
Looking at the graph, at roughly t=2300, the conversion is 0.9.
Method 1:
I wrote this function so I can ask for any given point and get the x-value:
def find(x_val,f):
    f = np.reshape(f,len(f))
    global t
    t = np.reshape(t,len(t))
    return np.interp(x_val,t,f)
print('Conversion of 0.9 is reached at: ',int(find(0.9,X)),'s')
When I call the function at 0.9 I get 0.0008858 which gets rounded to 0 which is wrong. I thought maybe something is going wrong when I declare global t??
Method 2:
When I do it outside the function, i.e. I manually reshape X and t and use np.interp(0.9,t,X), the output is 0.9.
X = np.reshape(X,len(X))
t = np.reshape(t,len(t))
print(np.interp(0.9,t,X))
I thought I made a mistake in the order of the variables so I did np.interp(0.9,X,t), and again it surprised me with 0.9.
I'm unsure as to where I'm going wrong. Any help would be appreciated. Many thanks :)
On your plot, t is horizontal and X is vertical. You want to find the horizontal coordinate where the vertical one is 0.9. That is, find t for a given X. Saying
find x value for a given y
is bound to lead to confusion, as it did here.
The problem is solved with
print(np.interp(0.9, X.ravel(), t)) # prints 2292.765497278863
(It's better to use ravel for flattening, instead of the reshape as you did). There is no need to reshape t, which is already one-dimensional.
I did np.interp(0.9,X,t), and again it surprised me with 0.9.
That sounds unlikely, you probably mistyped. This was the correct order.
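For reference, np.interp(q, xp, fp) treats its second argument as the sample coordinates and expects it to be increasing; since the conversion X here rises monotonically from 0 towards 1, using it as xp is valid. A tiny self-contained illustration (numbers made up):
import numpy as np

t = np.linspace(0, 10, 6)                          # "time"
X = np.array([0.0, 0.3, 0.55, 0.75, 0.9, 0.98])    # increasing "conversion"
# time at which the conversion reaches 0.8: interpolate t as a function of X
print(np.interp(0.8, X, t))   # 6.666..., i.e. between t=6 and t=8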
I would like to solve the ODE dy/dt = -2y + data(t), between t=0..3, for y(t=0)=1.
I wrote the following code:
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d
t = np.linspace(0, 3, 4)
data = [1, 2, 3, 4]
linear_interpolation = interp1d(t, data)
def func(y, t0):
    print 't0', t0
    return -2*y + linear_interpolation(t0)
soln = odeint(func, 1, t)
When I run this code, I get several errors like this:
ValueError: A value in x_new is above the interpolation range.
odepack.error: Error occurred while calling the Python function named
func
My interpolation range is between 0.0 and 3.0.
Printing the value of t0 in func, I realized that t0 is actually sometimes above my interpolation range: 3.07634612585, 3.0203768998, 3.00638459329, ... It's why linear_interpolation(t0) raises the ValueError exceptions.
I have a few questions:
how does integrate.ode make t0 vary? Why does it make t0 exceed the upper bound (3.0) of my interpolation range?
in spite of these errors, integrate.ode returns an array which seems to contain correct values. So, should I just catch and ignore these errors? Should I ignore them whatever the differential equation(s), the t range and the initial condition(s)?
if I shouldn't ignore these errors, what is the best way to avoid them? 2 suggestions:
in interp1d, I could set bounds_error=False and fill_value=data[-1], since the t0 values outside of my interpolation range seem to be close to t[-1]:
linear_interpolation = interp1d(t, data, bounds_error=False, fill_value=data[-1])
But first I would like to be sure that with any other func and any other data the t0 will always remain close to t[-1]. For example, if integrate.ode chooses a t0 below my interpolation range, the fill_value would still be data[-1], which would not be correct. Maybe knowing how integrate.ode makes t0 vary would help me be sure of that (see my first question).
in func, I could enclose the linear_interpolation call in a try/except block and, when I catch a ValueError, call linear_interpolation again with t0 truncated:
def func(y, t0):
    try:
        interpolated_value = linear_interpolation(t0)
    except ValueError:
        interpolated_value = linear_interpolation(int(t0)) # truncate t0
    return -2*y + interpolated_value
At least this solution lets linear_interpolation still raise an exception if integrate.ode makes t0 >= 4.0 or t0 <= -1.0, so I would be alerted to incoherent behavior. But it is not really readable, and the truncation seems a little arbitrary to me.
Maybe I'm just overthinking these errors. Please let me know.
It is normal for the odeint solver to evaluate your function at time values past the last requested time. Most ODE solvers work this way--they take internal time steps with sizes determined by their error control algorithm, and then use their own interpolation to evaluate the solution at the times requested by the user. Some solvers (e.g. the CVODE solver in the Sundials library) allow you to specify a hard bound on the time, beyond which the solver is not allowed to evaluate your equations, but odeint does not have such an option.
If you don't mind switching from scipy.integrate.odeint to scipy.integrate.ode, it looks like the "dopri5" and "dop853" solvers don't evaluate your function at times beyond the requested time. Two caveats:
The ode solvers use a different convention for the order of the arguments that define the differential equation. In the ode solvers, t is the first argument. (Yeah, I know, grumble, grumble...)
The "dopri5" and "dop853" solvers are for non-stiff systems. If your problem is stiff, they should still give correct answers, but they will do a lot more work than a stiff solver would do.
Here's a script that shows how to solve your example. To emphasize the change in the arguments, I renamed func to rhs.
import numpy as np
from scipy.integrate import ode
from scipy.interpolate import interp1d
t = np.linspace(0, 3, 4)
data = [1, 2, 3, 4]
linear_interpolation = interp1d(t, data)
def rhs(t, y):
    """The "right-hand side" of the differential equation."""
    #print 't', t
    return -2*y + linear_interpolation(t)
# Initial condition
y0 = 1
solver = ode(rhs).set_integrator("dop853")
solver.set_initial_value(y0)
k = 0
soln = [y0]
while solver.successful() and solver.t < t[-1]:
    k += 1
    solver.integrate(t[k])
    soln.append(solver.y)
# Convert the list to a numpy array.
soln = np.array(soln)
The rest of this answer looks at how you could continue to use odeint.
If you are only interested in linear interpolation, you could simply extend your data linearly, using the last two points of the data. A simple way to extend the data array is to append the value 2*data[-1] - data[-2] to the end of the array, and do the same for the t array. If the last time step in t is small, this might not be a sufficiently long extension to avoid the problem, so in the following, I've used a more general extension.
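For reference, that simple one-point extension would be just (a sketch, using the t and data arrays from the question):
# extend the arrays by one extra point along the slope of the last segment
extended_t = np.append(t, 2*t[-1] - t[-2])
extended_data = np.append(data, 2*data[-1] - data[-2])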
Example:
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d
t = np.linspace(0, 3, 4)
data = [1, 2, 3, 4]
# Slope of the last segment.
m = (data[-1] - data[-2]) / (t[-1] - t[-2])
# Amount of time by which to extend the interpolation.
dt = 3.0
# Extended final time.
t_ext = t[-1] + dt
# Extended final data value.
data_ext = data[-1] + m*dt
# Extended arrays.
extended_t = np.append(t, t_ext)
extended_data = np.append(data, data_ext)
linear_interpolation = interp1d(extended_t, extended_data)
def func(y, t0):
    print 't0', t0
    return -2*y + linear_interpolation(t0)
soln = odeint(func, 1, t)
If simply using the last two data points to extend the interpolator linearly is too crude, then you'll have to use some other method to extrapolate a little beyond the final t value given to odeint.
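(Not mentioned in the original answer: with scipy 0.17 or newer, interp1d can also do that linear extrapolation itself, which avoids hand-extending the arrays.)
from scipy.interpolate import interp1d

# fill_value="extrapolate" extends the first/last linear segments
# beyond the data range instead of raising ValueError
linear_interpolation = interp1d(t, data, bounds_error=False,
                                fill_value="extrapolate")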
Another alternative is to include the final t value as an argument to func, and explicitly handle t values larger than it in func. Something like this, where extrapolation is something you'll have to figure out:
def func(y, t0, tmax):
    if t0 > tmax:
        f = -2*y + extrapolation(t0)
    else:
        f = -2*y + linear_interpolation(t0)
    return f
soln = odeint(func, 1, t, args=(t[-1],))
I'm having trouble with the scipy.optimize.fmin and scipy.optimize.minimize functions. I've checked and confirmed that all the arguments passed to the function are of type numpy.array, as well as the return value of the error function. Also, the carreau function returns a scalar value.
The reason for some of the extra arguments, such as size, is this: I need to fit data with a given model (Carreau). The data is taken at different temperatures, which are corrected with a shift factor (which is also fitted by the model), so I end up with several sets of data that should all be used to calculate the same 4 constants (parameters p).
I read that I can't pass the fmin function a list of arrays, so I had to concatenate all data into x_data_lin, keeping track of the different sets with the size parameter. t holds different test temperatures, while t_0 is a one-element array which holds the reference temperature.
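(Roughly, that concatenation bookkeeping looks like the following; the numbers are made up just to show the shape of the data.)
import numpy as np

# one 1-D array per test temperature (made-up values)
x_sets = [np.array([0.02, 0.04, 0.2]), np.array([0.5, 1.0, 4.3])]
x_data_lin = np.concatenate(x_sets)        # single flat 1-D array for fmin
size = np.array([len(s) for s in x_sets])  # lengths of the original sets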
I am positive (triple checked) that all the arguments passed to the function, as well as the result, are one-dimensional arrays. Here's the code aside from that:
import numpy as np
import scipy.optimize
from scipy.optimize import fmin as simplex
def err_func2(p, x, y, t, t_0, size):
    result = np.array([])
    temp = 0
    for i in range(0, int(len(size)-1)):
        for j in range(int(temp), int(temp+size[i])):
            result = np.append(result, (carreau(p, x[j], t[i], t_0[0])-y[i]))
        temp += size[i]
    return result
p1 = simplex(err_func2, initial_guess,
args=(x_data_lin, y_data_lin, t_list, t_0, size), full_output=0)
Here's the error:
Traceback (most recent call last):
File "C:\Python27\Scripts\projects\Carreau - WLF\carreau_model_fit.py", line 146, in <module>
main()
File "C:\Python27\Scripts\projects\Carreau - WLF\carreau_model_fit.py", line 105, in main
args=(x_data_lin, y_data_lin, t_list, t_0, size), full_output=0)
File "C:\Python27\lib\site-packages\scipy\optimize\optimize.py", line 351, in fmin
res = _minimize_neldermead(func, x0, args, callback=callback, **opts)
File "C:\Python27\lib\site-packages\scipy\optimize\optimize.py", line 415, in _minimize_neldermead
fsim[0] = func(x0)
ValueError: setting an array element with a sequence.
It's worth noting that I got the leastsq function working while passing it lists of arrays. Unfortunately, it did a poor job of fitting the data. But, as it took me a lot of time and research to get to that point, I'll post the code as follows. If somebody is interested in seeing all of the code, I would gladly post it if you can recommend somewhere to upload a few files (as it includes another imported script and of course sample data):
##def error_function(p, x, y, t, t_0):
##    result = array([])
##    for index in range(len(x)):
##        result = np.append(result, (carreau(p, x[index],
##                                    t[index], t_0) - y[index]))
##    return result
##p1, success = scipy.optimize.leastsq(error_function, initial_guess,
##                                     args=(x_list, y_list, t_list, t_0),
##                                     maxfev=10000)
:( I was going to post a picture of the graphed data with the leastsq fit, but I don't have the requisite 10 points.
Late Edit: I have now gotten optimize.curve_fit and optimize.leastsq to work (which probably not-so-coincidentally give the same answer), but the curve is bad. I've been trying to figure out optimize.minimize, but it's been a bit of a headache. The simplex (fmin, Nelder-Mead, whatever you want to call it) will run, but produces a crazy answer nowhere close. I've never worked with nonlinear optimization problems before, and I don't really know what direction to head.
Here's the working curve_fit code:
def temp_shift(t_s, t, t_0):
    """ This function calculates the a_t temperature shift factor for polymer
    viscosity curves. Variable is the standard temperature, t_s
    """
    C_1 = 8.86
    C_2 = 101.6
    return(np.exp(
        (C_1*(t_0-t_s) / (C_2+(t_0-t_s))) - (C_1*(t-t_s) / (C_2 + (t-t_s)))
    ))

def pass_data(t, t_0):
    def carreau_2(x, p0, p1, p2, p3):
        visc_0 = p0
        m = p1
        n = p2
        t_s = p3
        a_T = temp_shift(p3, t, t_0)
        return (visc_0 * a_T / (1 + m * x * a_T)**n)
    return carreau_2

initial_guess = np.array([20000, 3, 0.94, -20])
p1, conv = scipy.optimize.curve_fit(pass_data(t_all, t_0), x_data_lin,
                                    y_data_lin, initial_guess)
Here's some sample data:
x_data_lin = np.array([0.01998, 0.04304, 0.2004, 0.43160, 0.92870, 2.0000, 4.30900,
                       9.28500, 15.51954, 21.94936, 37.52960, 90.41786, 204.35230,
                       331.58495, 811.92250, 1694.55309, 3464.27648, 8826.65738,
                       14008.00242])
y_data_lin = np.array([13520.00000, 13740.00000, 12540.00000, 9384.00000, 5201,
                       3232.00000, 2094.00000, 1484.00000, 999.00000, 1162.05088,
                       942.56946, 705.62489, 429.47341, 254.15136, 185.22916,
                       122.07113, 76.46324, 47.85064, 25.74315, 18.84875])
t_all = np.array([190, 190, 190, 190, 190, 190, 190, 190, 190, 190, 190, 190,
                  190, 190, 190, 190, 190, 190, 190])
t_0 = 80
Here's a picture of the result of curve_fit (now that I have 10 points and can post!). Note there are 3 curves drawn because I used 3 sets of data to optimize the curve, at 3 different temperatures. Polymers have the property that the shear_rate - viscosity relationship stays the same, just shifted by a temperature factor a_T:
I'd really appreciate any suggestions about how to improve the fit, or how to define the function so that optimize.minimize works, and which method (Nelder-Mead, Powell, BFGS) might work.
Another edit to add: I got the Nelder-Mead (optimize.fmin, and the default of optimize.minimize) function to work - I'll include the revised error function below. Before, I simply summed the result array and returned it. This led to extremely negative values (obviously, since the function's goal is to minimize). Squaring the result before summing it solved that problem. Note that I also changed the function completely to take advantage of numpy's array broadcasting, as suggested by JaminSore (Thanks Jamin!)
def err_func2(p, x, y, t, t_0):
    return ((carreau(p, x, t, t_0)-y)**2).sum()
Unfortunately, the Nelder-Mead function gives me the same result as leastsq and curve_fit. You can see in the graph above that it's not the optimal fit; in fact, at this point, Microsoft Excel's solver function is doing a better job on the data.
At least, I hope this thread can be useful for beginners to scipy.optimize in the future, since it's taken me quite awhile to discover all of this.
Unlike leastsq, fmin can only deal with error functions that return a scalar, so if possible you have to rewrite your error function so that it returns a scalar. Here is a simple working example.
Import the necessary libraries
import numpy as np
from scipy.optimize import fmin
Define a helper function (you'll see later)
def prob(a, b):
    return (1 + np.exp(b - a))**-1
Simulate some data
true_ = np.random.normal(size = 100) #parameters we're trying to recover
b = np.random.normal(size = 20)
exp_ = prob(true_[:, None], b) #expected
a_s, b_s = true_.shape[0], b.shape[0]
noise = np.random.uniform(size = (a_s, b_s))
response = (noise > (1 - exp_)).astype(int)
Define our error function (I'm using lambdas but this is not recommended in practice)
# sum of the squared residuals
err_func = lambda a : ((prob(a[:, None], b) - response) ** 2).sum()
result = fmin(err_func, np.zeros_like(true_)) #solve
If I remove the .sum() at the end of my error function definition, I get the same error.
OK, now I finally know the answer! First, the final piece, then a recap. The problem of the fit wasn't the fault of curve_fit, leastsq, Nelder_Mead, or Powell (the methods I've tried). It has to do with the relative weights of the errors. Since this data is on a log scale, the errors in the fit near the high y values are very costly, while errors near the low y values are insignificant. To correct this, I made the error relative by dividing by the y value of the data, as follows:
def err_func2(p, x, y, t, t_0):
    return (((carreau(p, x, t, t_0)-y)/y)**2).sum()
Now, each relative error is squared, summed, then minimized, giving the following fit (using optimize.minimize with the Powell method, although it should be the same for the other methods as well.)
So now a recap of the answers found in this thread:
The easiest way (or at least for me, the most fool-proof) to deal with curve fitting is to collect all the data into 1D numpy.arrays. Then, you can rely on numpy's array broadcasting to perform all operations. This means that arithmetic operations are applied element-wise. For example, if array_1 = [a, b] and array_2 = [c, d], then array_1 + array_2 = [a+c, b+d]. This works for addition, subtraction, multiplication, division, and powers: array_1**array_2 = [a**c, b**d]. (A short code illustration of this appears after the recap.)
For the optimize.leastsq function, you need to let the objective function return an array; i.e. return result where result is an array. For optimize.curve_fit, you also return an array. In this case, it's a bit more complicated to pass extra arguments (think other constants), but you can do it with a nested function, as I demonstrated above in the pass_data function.
For optimize.minimize, you need to return a scalar - that is, a single number. You can also return an array of answers, I think, but I avoided this by getting all the data into 1D arrays, as I mentioned earlier. To get this scalar, you can simply square and sum the result (like I have written in this post under err_func2). Squaring the errors is very important, otherwise negative errors take over and drive the resulting scalar extremely negative.
Finally, as mentioned, when your data crosses several scales (10**5, 10**4, 10**3, etc.), it may be necessary to normalize the errors. I did this by dividing each error by the y value.
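A short illustration of the broadcasting point above (plain element-wise arithmetic on equal-length arrays):
import numpy as np

array_1 = np.array([2.0, 3.0])
array_2 = np.array([4.0, 5.0])
print(array_1 + array_2)    # [6. 8.]
print(array_1 * array_2)    # [ 8. 15.]
print(array_1 ** array_2)   # [ 16. 243.]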
So... I guess that's it? Finally?