I'm transitioning my code from scipy's ode class to scipy's solve_ivp. When using ode I would use a while loop as follows:
while solver.successful():
    solver.integrate(t_final, step=True)
    # do other operations
This method allowed me to store values that depend on the solution after each timestep.
I'm now switching to solve_ivp, but I am not sure how to reproduce this behaviour with the solve_ivp solver. Has anyone accomplished this with solve_ivp?
Thanks!
I think I know what you're trying to ask. I had a program that used solve_ivp to integrate between each time step individually, then used the results to calculate the values for the next iteration (e.g. heat transfer coefficients, transport coefficients, etc.). I used two nested for loops: the inner loop performs the operations you need at each step, saves each value in a list or array, and then terminates; the outer loop should only feed in time values and possibly reload necessary constants.
For example:
for i in range(start_value, end_value):          # outer loop: whole time units
    for j in range(int(1/time_step)):            # inner loop: sub-steps within one unit
        start_time = i + j*time_step
        end_time = start_time + time_step
        # load initial values, reusing the most recent results
        answer = solve_ivp(function, (start_time, end_time), [initial_values])
        # save the new values at the end of a list storing all calculated values
Say you have a system such as
d(Y1)/dt = a1*Y2 + Y1
d(Y2)/dt = a2*Y1 + Y2
and you want to solve it from t = 0 to t = 10 with a 0.1 time step, where a1 and a2 are values calculated or determined elsewhere. This code would work:
from scipy.integrate import solve_ivp
import numpy as np
import math
import matplotlib.pyplot as plt
def a1(n):
    return 1E-10*math.exp(n)

def a2(n):
    return 2E-10*math.exp(n)

def rhs(t, y, *args):
    a1, a2 = args
    return [a1*y[1] + y[0], a2*y[0] + y[1]]
Y1 = [0.02]
Y2 = [0.01]
A1 = []
A2 = []
endtime = 10
time_step = 0.1
times = np.linspace(0,endtime, int(endtime/time_step)+1)
for i in range(0, endtime, 1):
    for j in range(0, int(1/time_step), 1):
        tstart = i + j*time_step
        tend = tstart + time_step
        A1.append(a1(tstart/100))
        A2.append(a2(tstart/100))
        Y0 = [Y1[-1], Y2[-1]]                      # restart from the most recent values
        args = [A1[-1], A2[-1]]
        answer = solve_ivp(lambda t, y: rhs(t, y, *args), (tstart, tend), Y0)
        Y1.append(answer.y[0][-1])
        Y2.append(answer.y[1][-1])
fig = plt.figure()
plt1 = plt.plot(times,Y1, label = "Y1")
plt2 = plt.plot(times,Y2, label = "Y2")
plt.xlabel('Time')
plt.ylabel('Y Values')
plt.legend()
plt.grid()
plt.show()
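As a side note: if the goal is literally "do something after every internal step", the stepper classes that power solve_ivp can also be driven directly. Here is a minimal sketch (my own illustration, with a made-up right-hand side) using scipy.integrate.RK45:
import numpy as np
from scipy.integrate import RK45

# made-up rhs for illustration: dy/dt = -0.3*y
solver = RK45(lambda t, y: -0.3*y, t0=0.0, y0=np.array([1.0]), t_bound=10.0)
ts, ys = [solver.t], [solver.y[0]]
while solver.status == 'running':
    solver.step()              # advance one adaptive internal step
    ts.append(solver.t)
    ys.append(solver.y[0])
    # do other operations that depend on the newest solution here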
Related
I was tasked with plotting a possible trajectory given the derivative of a function, X' = 0.05*X*(1 - X/100), using the Euler method. So I came up with the following:
import numpy as np
import matplotlib.pyplot as plt
##defining derivative
def changevector(X):
    dX = 0.05*X*(1-(X/100))
    return dX
#defining initial values
timestep = 0.001
itera = 1000
x = np.zeros(itera+1)
passedtime = np.zeros(itera+1)
x[0] = 100
#stepping through time
for i in range(itera):
    x[i+1] = x[i] + timestep*changevector(x[i])
    passedtime[i] = i * timestep
fig1 = plt.figure(figsize=(5,4))
ax = fig1.add_axes(([0.1, 0.1, 0.9, 0.9]))
plt.plot(passedtime, x)
plt.show()
This gives me a flat line instead of a trajectory, and I can't figure out why or how to fix it. Thank you in advance.
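A likely explanation, offered as a sketch rather than a definitive diagnosis: with x[0] = 100 the right-hand side 0.05*X*(1 - X/100) evaluates to exactly zero, so the start value sits on the logistic equilibrium and every Euler step adds zero, hence the flat line (note also that the last entry of passedtime is never filled in). Starting off the equilibrium gives a real trajectory:
import numpy as np
import matplotlib.pyplot as plt

def changevector(X):
    return 0.05*X*(1-(X/100))

timestep = 0.1                               # larger step so growth is visible over the run
itera = 1000
x = np.zeros(itera+1)
passedtime = np.arange(itera+1) * timestep   # fills every element, including the last
x[0] = 10                                    # away from the X = 100 equilibrium, so dX != 0

for i in range(itera):
    x[i+1] = x[i] + timestep*changevector(x[i])

plt.plot(passedtime, x)
plt.show()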
I am experimenting with the RK45/RK23 solvers from Python's scipy module, using them to solve simple ODEs, but they are not giving me correct results. When I code Runge-Kutta 4th order manually it works perfectly, as does the odeint solver in the module, but RK23/RK45 does not. If someone could help me figure out the issue it would be helpful. So far I have implemented only simple ODEs such as
dy/dt = -k*y^2
Code:
import numpy as np
from scipy.integrate import solve_ivp,RK45,odeint
import matplotlib.pyplot as plt
# function model fun(y,t)
def model(y, t):
    k = 0.3
    dydt = -k*(y**2)
    return dydt

# initial condition
y0 = np.array([0.5])
y = np.zeros(100)
t = np.linspace(0,20)
t_span = [0,20]
#RK45 implementation
yr = solve_ivp(fun=model,t_span=t_span,y0=y,t_eval=t,method=RK45)
##odeint solver
yy = odeint(func=model,y0=y0,t=t)
##manual implementation
t1 = 0
h = 0.05
y = np.zeros(21)
y[0] = y0
i = 0
k = 0.3
# Runge-Kutta 4th order implementation
while t1 < 1.:
    m1 = -k*(y[i]**2)
    y1 = y[i] + m1*h/2
    m2 = -k*(y1**2)
    y2 = y1 + m2*h/2
    m3 = -k*(y2**2)
    y3 = y2 + m3*h/2
    m4 = -k*(y3**2)
    i = i + 1
    y[i] = y[i-1] + (m1 + 2*m2 + 2*m3 + m4)/6
    t1 = t1 + h
#plotting
t2 = np.linspace(0,20,num=21)
plt.plot(t2,y,'r-',label='RK4')
plt.plot(t,yy,'b--',label='odeint')
#plt.plot(t2,yr.y[0],'g:',label='RK45')
plt.xlabel('time')
plt.ylabel('y(t)')
plt.legend()
plt.show()
Output (without the RK45 result shown): [plot]
Output (with only the RK45 plot displayed): [plot]
I can't figure out where I am making a mistake.
Okay, I have figured out the solution: RK45 requires the function signature to be fun(t, y), while odeint requires func(y, t), so passing the same function to both will produce different results.
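A minimal sketch of the fix (assuming scipy >= 1.1, which added the tfirst flag to odeint) is to write the model in (t, y) order once and tell odeint about it:
import numpy as np
from scipy.integrate import solve_ivp, odeint

def model(t, y):               # (t, y) order, as solve_ivp expects
    return -0.3 * y**2

t = np.linspace(0, 20)
yr = solve_ivp(fun=model, t_span=[0, 20], y0=[0.5], t_eval=t, method='RK45')
yy = odeint(func=model, y0=[0.5], t=t, tfirst=True)   # tfirst=True makes odeint call model(t, y) too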
I am trying to fit the two equations below using Python's leastsq method, but I am not sure whether this is the right approach. The first equation contains an incomplete gamma function, while the second is slightly more complex: along with an exponential, it contains a term g obtained from a separate fitting formula.
J_mag = Gamma_incomplete(hw/T_mag)
J_nonmag = exp(-hw/T) * g(w, T)
Here g is a function of w and T and is calculated using a given fitting formula.
I am following the steps outlined in this question.
Here is what I have done:
import numpy as np
from scipy.optimize import leastsq
from scipy.special import gammaincc
from scipy.special import gamma
from matplotlib.pyplot import plot
# generating data
NPTS = 10
hw = np.linspace(0.5, 10, NPTS)
j1 = np.linspace(0.001,10,NPTS)
j2 = np.linspace(0.003,10,NPTS)
T_mag = np.linspace(0.3,0.5,NPTS)
#defining functions
def calc_gaunt_factor(hw, T):
    fitting_coeff = np.loadtxt('fitting_coeff.txt', skiprows=1)
    # T is in keV
    # K_b = 8.6173303e-5 eV/K
    g = 0
    gamma = 0.0136/T
    theta = hw/T
    A = (np.log10(gamma**2) + 0.5)*0.4
    B = (np.log10(theta) + 1.5)*0.4
    for i in range(11):
        for j in range(11):
            g_ij = fitting_coeff[i][j]*(A**i)*(B**j)
            g = g_ij + g
    return g

def j_w_mag(hw, T_mag):
    order = 0.001
    return np.sqrt(1/T_mag)*gamma(order)*gammaincc(order, hw/T_mag)
def j_w_nonmag(hw, T):
    gamma = 0.0136/T
    theta = hw/T
    return np.sqrt(1/T)*np.exp((-hw)/T)*calc_gaunt_factor(hw, T)

def residual_func(T, T_mag, hw, j1, j2):
    err_unmag = np.nan_to_num(j1 - j_w_nonmag(hw, T))
    err_mag = np.nan_to_num(j2 - j_w_mag(hw, T_mag))
    err = np.concatenate((err_unmag, err_mag))
    return err
par_init = np.array([.35])
best, cov, info, message, ler = leastsq(residual_func,par_init,args=(T_mag,hw,j1,j2),full_output=True)
print("Best-Fit Parameters:")
print("T=%s" %(best[0]))
I am getting a weird value for my fitting parameter T. Is this the right approach? Thanks.
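One thing that stands out, offered as an observation rather than a definitive answer: in residual_func the magnetic branch j_w_mag(hw, T_mag) never uses the fit parameter T, so that half of the residual vector is constant and only err_unmag actually constrains the fit. For reference, a minimal sketch of the concatenated-residual pattern where both branches share the parameter (with made-up toy models):
import numpy as np
from scipy.optimize import leastsq

x = np.linspace(0.5, 10, 10)
y1_obs = np.exp(-x/2.0)              # toy "data" generated with T = 2
y2_obs = 3.0*np.exp(-x/2.0)

def residuals(params, x, y1_obs, y2_obs):
    T = params[0]
    r1 = y1_obs - np.exp(-x/T)       # both branches depend on T ...
    r2 = y2_obs - 3.0*np.exp(-x/T)   # ... so both constrain the fit
    return np.concatenate((r1, r2))

best, _ = leastsq(residuals, [1.0], args=(x, y1_obs, y2_obs))
print(best)   # should recover T close to 2.0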
I wrote a program that uses odeint to solve a differential equation, but it has a problem. When I set Cosmopara to np.array([70.0,0.3,0,-1.0,0]), it warns "invalid value encountered in sqrt" and "invalid value encountered in double_scalars" at the line h = np.sqrt(y1**2 + Omega_M*t**(-3) + Omega_DE*y2), but I checked that line and didn't find any error. With Cosmopara = np.array([70.0,0.3,0.0,-1.0,0.0]), Y shouldn't change, but it does. However, if I choose Cosmopara = np.array([70.0,0.3,0.1,-1.0,0.1]), the program gives the right result.
What am I doing wrong?
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
Cosmopara = np.array([70.0, 0.3, 0.0, -1.0, 0.0])
def derivfun(Y, t):
    Omega_M = Cosmopara[1]
    Sigma_0 = Cosmopara[2]
    omega = Cosmopara[3]
    delta = Cosmopara[4]
    Omega_DE = 1 - Omega_M - Sigma_0**2
    y1 = Y[0]
    y2 = Y[1]
    h = np.sqrt(y1**2 + Omega_M*t**(-3) + Omega_DE*y2)
    dy1dt = -3.0*y1/t + (delta*Omega_DE*y2)/(t*h)
    dy2dt = -(3.0*(1+omega) + 2.0*delta*y1/h)*y2/t
    return np.array([dy1dt, dy2dt])
z = np.linspace(1,2.5,15001)
time = 1.0/z
Omega_M = Cosmopara[1]
Sigma_0 = Cosmopara[2]
omega = Cosmopara[3]
delta = Cosmopara[4]
Omega_DE = 1-Omega_M-Sigma_0**2
y1init = Sigma_0
y2init = 1.0
Yinit = np.array([y1init,y2init])
Y = odeint(derivfun,Yinit,time)
y1 = Y[:,0]
y2 = Y[:,1]
h = np.sqrt(y1**2 + Omega_M*time**(-3) + Omega_DE*y2)
plt.figure()
plt.plot(z,h)
plt.show()
The error is caused by the value inside sqrt() becoming negative, and it becomes negative because the time value t suddenly decreases to about -0.0000999.
Reason
I tested several cases to find out the reason, and I found that the spacing of the time values t passed to derivfun differs from the spacing of the global array time, even when odeint works fine. I suspect this is a mechanism for speeding up odeint: when the derivatives dy1dt and dy2dt change slowly, the solution is locally close to linear, so the solver can increase the next step's time spacing and solve the problem in fewer steps. In the case you provided, the derivatives are always zero and never change, and this situation produces the erroneous time value.
Based on this result, we can see that odeint may pass the function time values outside the range of the time array the user supplies when the derivatives do not change. If you want to avoid this situation, you can use the global time values instead of the t value passed in, but it will cost more time when using odeint to solve ordinary differential equations.
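To watch this happen without the full cosmology setup, here is a minimal sketch (my own illustration, not part of the original test code) that records every t at which odeint evaluates the right-hand side:
import numpy as np
from scipy.integrate import odeint

calls = []

def rhs(y, t):
    calls.append(t)       # record every evaluation time odeint requests
    return 0.0*y          # derivative identically zero, as in the failing case

t_grid = np.linspace(1.0, 0.4, 101)   # decreasing grid, like time = 1/z
odeint(rhs, np.array([0.0]), t_grid)
print(min(calls), max(calls))         # the evaluation times need not coincide with t_grid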
Code
The following code is the code you provided, with some lines added for testing. You can change the parameters and check the result.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
Cosmopara = np.array([70.0,0.3,0.1,-1.0,0.1])
i = 0
def derivfun(Y, t):
    global i
    Omega_M = Cosmopara[1]
    Sigma_0 = Cosmopara[2]
    omega = Cosmopara[3]
    delta = Cosmopara[4]
    Omega_DE = 1 - Omega_M - Sigma_0**2
    y1 = Y[0]
    y2 = Y[1]
    h = np.sqrt(y1**2 + Omega_M*t**(-3) + Omega_DE*y2)
    dy1dt = -3.0*y1/t + (delta*Omega_DE*y2)/(t*h)
    dy2dt = -(3.0*(1+omega) + 2.0*delta*y1/h)*y2/t
    print("Time: %14.12f / Global time: %14.12f / dy1dt: %8.5f / dy2dt: %8.5f" % (t, time[i], dy1dt, dy2dt))
    i += 1
    return np.array([dy1dt, dy2dt])
z = np.linspace(1,2.5,15001)
time = 1.0/z
Omega_M = Cosmopara[1]
Sigma_0 = Cosmopara[2]
omega = Cosmopara[3]
delta = Cosmopara[4]
Omega_DE = 1-Omega_M-Sigma_0**2
y1init = Sigma_0
y2init = 1.0
Yinit = np.array([y1init,y2init])
Y = odeint(derivfun,Yinit,time)
y1 = Y[:,0]
y2 = Y[:,1]
h = np.sqrt(y1**2 + Omega_M*time**(-3) + Omega_DE*y2)
plt.figure()
plt.plot(z,h)
plt.show()
Testing results
The following picture shows that the time values inside the function and the global time values have different spacing when the function works fine (using the parameters you provided):
The following picture shows that when the derivatives dy1dt and dy2dt do not change (again with the parameters you provided), the time value inside the function suddenly decreases below 0:
I had a similar problem. It seems to arise because, as previously stated, odeint wants to take a shortcut by increasing the step size. I fixed this for myself by capping the step size, which you can do very easily:
y = odeint(model, x, t, hmax=max_step)   # max_step = the largest internal step size you allow
This does make the computation quite a bit slower, so for big computations over long time ranges you want to avoid it if possible. I found it also worked to decrease the time range, but this is not always possible. (In my case, I wanted a time range of 2000-20000 seconds, but I couldn't go any higher than 200 before the values were all over the place.)
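Applied to the code in the question, a sketch of that fix (untested, with the cap chosen as the output-grid spacing) would be:
# cap odeint's internal step at the output-grid spacing so it cannot overshoot past t = 0
max_step = np.abs(time[1] - time[0])
Y = odeint(derivfun, Yinit, time, hmax=max_step)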
I'm new to Python, and I have this code for calculating the potential inside a 1x1 box using a Fourier series, but part of it runs far too slowly (marked in the code below).
If someone could help me with this I'd appreciate it; I suspect I could have done something with the numpy library, but I'm not that familiar with it.
import numpy as np
from scipy.integrate import quad
import matplotlib.pyplot as plt
import sys
from matplotlib import rc
rc('text', usetex=False)
rc('font', family='serif')
#One of the boundary conditions for the potential.
def func1(x, n):
    V_c = 1
    V_0 = V_c * np.sin(n*np.pi*x)
    return V_0*np.sin(n*np.pi*x)
#To calculate the potential inside a box:
def v(x, y):
    n = 1
    sum = 0
    nmax = 20
    while n < nmax:
        [C_n, err] = quad(func1, 0, 1, args=(n,))
        sum = sum + 2*(C_n/np.sinh(np.pi*n)*np.sin(n*np.pi*x)*np.sinh(n*np.pi*y))
        n = n + 1
    return sum
def main(argv):
    x_axis = np.linspace(0,1,100)
    y_axis = np.linspace(0,1,100)
    V_0 = np.zeros(100)
    V_1 = np.zeros(100)
    n = 4
    # Plot for V_0 = V_c * sin(n*pi*x)
    # (V_0_1 and V_0_2 are boundary-condition helpers not shown in this excerpt)
    for i in range(100):
        V_0[i] = V_0_1(i/100, n)
    plt.plot(x_axis, V_0)
    plt.xlabel('x/L')
    plt.ylabel('V_0')
    plt.title('V_0(x) = sin(m*pi*x/L), n = 4')
    plt.show()
    # Plot for V_0 = V_c*(1-(x-1/2)^4)
    for i in range(100):
        V_1[i] = V_0_2(i/100)
    plt.figure()
    plt.plot(x_axis, V_1)
    plt.xlabel('x/L')
    plt.ylabel('V_0')
    plt.title('V_0(x) = 1- (x/L - 1/2)^4)')
    #plt.legend()
    plt.show()
    # Plot V(x/L, y/L) on the boundary:
    V_0_Y = np.zeros(100)
    V_1_Y = np.zeros(100)
    V_X_0 = np.zeros(100)
    V_X_1 = np.zeros(100)
    for i in range(100):
        V_0_Y[i] = v(0, i/100)
        V_1_Y[i] = v(1, i/100)
        V_X_0[i] = v(i/100, 0)
        V_X_1[i] = v(i/100, 1)
    # V(x/L = 0, y/L):
    plt.figure()
    plt.plot(x_axis, V_0_Y)
    plt.title('V(x/L = 0, y/L)')
    plt.show()
    # V(x/L = 1, y/L):
    plt.figure()
    plt.plot(x_axis, V_1_Y)
    plt.title('V(x/L = 1, y/L)')
    plt.show()
    # V(x/L, y/L = 0):
    plt.figure()
    plt.plot(x_axis, V_X_0)
    plt.title('V(x/L, y/L = 0)')
    plt.show()
    # V(x/L, y/L = 1):
    plt.figure()
    plt.plot(x_axis, V_X_1)
    plt.title('V(x/L, y/L = 1)')
    plt.show()
    # Plot V(x,y)
    #######
    # This is where the code is way too slow: it takes ~10 minutes when nmax in v(x,y) is 20.
    #######
    V = np.zeros(10000).reshape((100,100))
    for i in range(100):
        for j in range(100):
            V[i,j] = v(j/100, i/100)
    plt.figure()
    plt.contour(x_axis, y_axis, V, 50)
    plt.savefig('V_1')
    plt.show()

if __name__ == "__main__":
    main(sys.argv[1:])
You can find how to use the FFT/DFT in this document:
Discretized continuous Fourier transform with numpy
Also, regarding your V matrix, there are many ways to improve execution speed. One is to make sure you use Python 3, or xrange() instead of range() if you are still on Python 2. I usually put these lines in my Python code so that it runs the same whether I use Python 3 or Python 2:
# Don't want to generate huge lists in memory... use standard range for Python 3.*
range = xrange if isinstance(range(2), list) else range
Then, instead of re-computing j/100 and i/100 every time, you can precompute these values and put them in an array, knowing that a division is much more costly than a multiplication. Something like:
ratios = np.arange(100) / 100
V = np.zeros(10000).reshape((100,100))
j = 0
while j < 100:
    i = 0
    while i < 100:
        V[i,j] = v(ratios[j], ratios[i])
        i += 1
    j += 1
Well, anyway, this is rather cosmetic and will not save your life; you still need to call the function v()...
Then, you could look at weave:
http://docs.scipy.org/doc/scipy-0.14.0/reference/tutorial/weave.html
(note that weave has since been removed from SciPy, so on a modern setup Cython or a hand-written C extension serves the same purpose). Or write all your pure computation/loop code in C, compile it, and generate a module which you can call from Python.
You should look into numpy's broadcasting tricks and vectorization (there are several references; one of the first good links that pops up is from Matlab, but it is just as applicable to numpy; can anyone recommend a good numpy link in the comments that I might point other users to in the future?).
What I saw in your code (once you remove all the unnecessary bits like plots and unused functions) is that you are essentially doing this:
from __future__ import division
from scipy.integrate import quad
import numpy as np
import matplotlib.pyplot as plt

def func1(x, n):
    return 1*np.sin(n*np.pi*x)**2

def v(x, y):
    n = 1
    sum = 0
    nmax = 20
    while n < nmax:
        [C_n, err] = quad(func1, 0, 1, args=(n,))
        sum = sum + 2*(C_n/np.sinh(np.pi*n)*np.sin(n*np.pi*x)*np.sinh(n*np.pi*y))
        n = n + 1
    return sum

def main():
    x_axis = np.linspace(0,1,100)
    y_axis = np.linspace(0,1,100)
    #######
    # This is where the code is way too slow: ~10 minutes when nmax in v(x,y) is 20.
    #######
    V = np.zeros(10000).reshape((100,100))
    for i in range(100):
        for j in range(100):
            V[i,j] = v(j/100, i/100)
    plt.figure()
    plt.contour(x_axis, y_axis, V, 50)
    plt.show()

if __name__ == "__main__":
    main()
If you look carefully (you could use a profiler too), you'll see that you're integrating your function func1 (which I'll rename to integrand) about 20 times for each element in the 100x100 array V. However, the integrand doesn't change! So you can hoist it out of the loop entirely. If you do that and use broadcasting tricks, you end up with something like this:
import numpy as np
from scipy.integrate import quad
import matplotlib.pyplot as plt

def integrand(x, n):
    return 1*np.sin(n*np.pi*x)**2

sine_order = np.arange(1, 20).reshape(-1, 1, 1)  # stack the sine orders along a third dimension
integration_results = np.empty_like(sine_order, dtype=float)  # np.float is gone from modern numpy
for enu, order in enumerate(sine_order):
    integration_results[enu] = quad(integrand, 0, 1, args=(order.item(),))[0]  # quad wants a scalar

y, x = np.ogrid[0:1:.01, 0:1:.01]
term = integration_results / np.sinh(np.pi * sine_order) * np.sin(sine_order * np.pi * x) * np.sinh(sine_order * np.pi * y)
# This is the key: you have a 3D array here and with this summation,
# you're basically squashing the entire 3D structure into a flat, 2D
# representation. This 'squashing' is done by means of a sum.
V = 2*np.sum(term, axis=0)

x_axis = np.linspace(0, 1, 100)
y_axis = np.linspace(0, 1, 100)
plt.figure()
plt.contour(x_axis, y_axis, V, 50)
plt.show()
which runs in less than a second on my system.
Broadcasting becomes much more understandable if you take pen and paper and draw out the vectors that you are "broadcasting", as if you were constructing a building from basic Tetris blocks.
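As a tiny illustration of the idea (my own sketch, not from the code above): a column vector against a row vector broadcasts into a full 2D grid, which is exactly the trick np.ogrid exploits:
import numpy as np

col = np.arange(3).reshape(-1, 1)   # shape (3, 1)
row = np.arange(4).reshape(1, -1)   # shape (1, 4)
grid = col * row                    # broadcasts element-wise to shape (3, 4)
print(grid.shape)                   # (3, 4)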
These two versions are functionally the same, but one is completely vectorized while the other uses Python for-loops. Since you're new to Python and numpy, I definitely recommend reading through the broadcasting basics. Good luck!