I have a system of equations that I am trying to integrate with integrate.odeint.
Within that system are terms like XiXj[i,j]*XiXj[j,k]/X[j]. It is possible for the denominator to be zero (particularly in the initial condition). However, both terms in the numerator should go to zero there, and the whole term should be interpreted as zero (and values going into the equations are nonnegative).
What I have done to handle this is to replace the 1/X[j] with Xinv[j] which is set to be 1/X[j] when X[j]>eps for some smallish eps, and use 1 if it is smaller than eps.
However, when I integrate, I find that odeint will sit for a very long time calculating derivatives at what appears to be a single value of t. I think this is because it is trying to calculate things near the discontinuity I've introduced.
Larger values of eps, or assigning a minimum step size, reduce the probability of encountering this issue, but it can still happen.
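For reference, those two workarounds look something like this applied to the odeint call further down (the hmin value is just a placeholder):

eps = 0.01   # larger threshold below which 1/X[j] is replaced by the dummy value
V = integrate.odeint(_dSIR_pair_based_, V0, times,
                     args=(G, nodelist, index_of_node, tau, gamma),
                     hmin=1e-8)   # enforce a minimum internal step size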
I'm hoping to figure out how to prevent this from happening. Unfortunately, I haven't been able to force this to happen in a simple example. So I've stripped down my example as far as I can.
Some background on what's being solved: I'm thinking about nodes in a networkx network (so running this code will require networkx). Nodes are susceptible, infected or recovered. I have equations for the derivatives of the susceptible (X) and infected (Y) states of each node. As correlations form across edges, I am also tracking the probability each edge is between S and I (XY) or S and S (XX) nodes.
Here's a system of equations. I've set g_{ij} to be 1 and gamma_i is simply gamma in the code below.
It should be possible to just copy and paste the code and run it. As it runs, the derivative function prints the time and a simple measure of the derivatives. You'll notice that it pauses for a long time at particular times. Sometimes it starts up again. Sometimes I give up waiting.
(note, the network it solves for is random, so if by some unlikely chance you don't see the behavior, run it again. You may want to pickle the graph and reuse a graph to make things consistent between tests.)
from scipy import integrate
import scipy
import networkx as nx
def _dSIR_pair_based_(V, t, G, nodelist, index_of_node, tau, gamma):
    #V is a vector of states
    #t is time
    #G is a networkx Graph
    #nodelist is a list of nodes in G
    #index_of_node is a dict: index_of_node[u] is i such that nodelist[i]=u
    #    {u:i for i, u in enumerate(nodelist)}
    #tau is transmission rate
    #gamma is recovery rate
    N = G.order()
    X = V[0:N]  #probability node is susceptible
    eps = 0.1**(5)
    Xinv = scipy.array([1/v if v>eps else 1 for v in X])
    #there are places where we divide by X[i] which may == 0.
    #In those cases the numerator is (very) 0, so it's easier
    #to set this up as mult by inverse with a dummy value when
    #it is 1/0.
    Y = V[N:2*N]  #prob node is infected
    Yinv = scipy.array([1/v if v>eps else 1 for v in Y])
    XY = V[2*N: 2*N+N**2]  #prob edge exists and is from susceptible to infected
    XX = V[2*N+N**2:]      #prob edge exists and is between two susceptibles.
    #print X.shape, Y.shape, XY.shape, XX.shape, N
    XY.shape = (N,N)  #get them into the right shape
    XX.shape = (N,N)
    YX = XY.T  #not really needed, but helps keep consistent with equations as written in text

    dX = scipy.zeros(N)
    dY = scipy.zeros(N)
    dXY = scipy.zeros((N,N))
    dXX = scipy.zeros((N,N))

    #I could make the below more efficient, but I think this sequence of for loops is easier to read, or at least understand.
    #I expect it to run quickly regardless. Will avoid (premature) optimization for now.
    for u in nodelist:
        i = index_of_node[u]
        dY[i] += -gamma*Y[i]
        for v in G.neighbors(u):
            j = index_of_node[v]
            dX[i] += -tau*XY[i,j]
            dY[i] += tau*XY[i,j]
            dXY[i,j] += -(tau+gamma)*XY[i,j]
            for w in G.neighbors(u):
                if w == v:  #skip these
                    continue
                #so w != v.
                k = index_of_node[w]
                dXY[i,j] += tau * XX[i,j] * XY[j,k]*Xinv[j] - tau * YX[k,i] * XY[i,j]*Xinv[i]
                dXX[i,j] += -tau * XX[i,j] * XY[j,k]*Xinv[j] - tau * YX[k,i] * XX[i,j]*Xinv[i]

    dXY.shape = (N**2,1)
    dXX.shape = (N**2,1)

    dV = scipy.concatenate((dX[:, None], dY[:,None], dXY, dXX), axis=0).T[0]
    print t, sum(dV)
    return dV
def SIR_pair_based(G, nodelist, Y0, tau, gamma, tmin = 0, tmax = 100, tcount = 1001):
    times = scipy.linspace(tmin, tmax, tcount)
    X0 = 1-Y0  #if not initially infected, then initially susceptible
    N = len(Y0)
    XY0 = X0[:,None]*Y0[None,:]
    XX0 = X0[:,None]*X0[None,:]
    XY0.shape = (N**2,1)
    XX0.shape = (N**2,1)
    V0 = scipy.concatenate((X0[:,None], Y0[:,None], XY0, XX0), axis=0).T[0]
    index_of_node = {node:i for i, node in enumerate(nodelist)}
    V = integrate.odeint(_dSIR_pair_based_, V0, times, args = (G, nodelist, index_of_node, tau, gamma))#, mxstep=10)#(times[1]-times[0])/1000)
def test_SIR_pair_based():
    G = nx.fast_gnp_random_graph(100, 0.02)
    nodelist = G.nodes()
    Y0 = scipy.array([1 if node<10 else 0 for node in nodelist])
    SIR_pair_based(G, nodelist, Y0, 1, 0.5, tmax = 10)

test_SIR_pair_based()
If I plot sum(X), sum(Y) and 1-sum(X+Y) I get the following for a larger value of eps=0.01 (when the random graph is such that it does get through the process - sometimes it still gets stuck):
I am trying to implement a numerical solver for the ground state of a 1D harmonic well using the Metropolis algorithm and the Feynman path-integral technique in Python. When I run my program, I end up with a distribution of the different points that my particle has gone to; this distribution ought to match that of a particle trapped in a 1D harmonic well. It does not. I have gone through and rewritten my code; I have checked it against similar code used for the same purpose; it all looks like it should work, yet it doesn't.
In blue is the histogram of my results, with Density set to True; the orange line is the function describing the expected distribution
As can be seen in the image, what I have ended up with is a distribution that isn't dissimilar to what I was expecting, but it isn't the correct distribution. The code I used (see below) is based on Lepage's (2005) work on the same topic, although I used a slightly different formula to describe the same physical system.
import numpy as np
import random
import matplotlib.pyplot as plt

time = 4        #time over which we evolve our function
steps = 7       #number of steps we take
epsilon = 3     #the pos & neg bounds of our rand variable
N_cor = 100     #the number of times we need to thermalise our function before we take a path
N_cf = 20000    #the number of paths we take

def S(x, j, t, s):  #the action of our potential well
    e = t / s
    return (1/(2*e))*(x[j] - x[j - 1])**2 + ((x[j] + x[j-1])/2)**2/2

def update(x, t, s, eps):
    for j in range(0, s):
        old_x = x[j]                             #old x value
        old_Sj = S(x, j, t, s)                   #original action value
        x[j] = x[j] + random.uniform(-eps, eps)  #new x value
        dS = S(x, j, t, s) - old_Sj              #change in action
        if dS > 0 and np.exp(-dS) < random.uniform(0, 1):  #check for Metropolis alg
            x[j] = old_x
    return x

def gamma(t, s, eps, thermal_num, num_paths):
    zeros = np.zeros(s)          #our initial path with s steps
    gamma_arr = np.empty(0)      #our initial empty result
    for i in range(0, 10*thermal_num):   #thermalisation
        zeros = update(zeros, t, s, eps)
    for j in range(0, num_paths):
        for i in range(0, thermal_num):  #thermalising again
            zeros = update(zeros, t, s, eps)
        gamma_arr = np.append(gamma_arr, zeros)  #add new path post thermalising
        #print(zeros)
    #print(gamma_arr)
    return gamma_arr

test = gamma(time, steps, epsilon, N_cor, N_cf)

x = np.arange(-4, 4, 0.1)
y = 1/np.sqrt(np.pi)*np.exp(-(x**2))  #expected result

plt.hist(test, bins = 500, density = True)
plt.plot(x, y)
plt.show()
I am a newbie in Python and am writing code for my physics project, which requires generating a matrix in terms of a variable E; the first element of that matrix then has to be solved for E. Please help me. Thanks in advance.
Here is the part of code
import numpy as np
import pylab as pl
import math
import cmath
import sympy as sy
from scipy.optimize import fsolve

#Constants (values at temp 10K)
hbar = 1.055E-34
m0 = 9.1095E-31  #free mass of electron
q = 1.602E-19
v = [0.510, 0, 0.510]  #conduction band offset in eV
m1 = 0.043  #effective mass in In_0.53Ga_0.47As
m2 = 0.072  #effective mass in Al_0.48In_0.52As
d = [-math.inf, 100, math.inf]  #dimension of structure in nanometers

'''scaling factor so that E is in eV, mass is in units of the free electron mass,
and length is in nanometers'''
s = (2*q*m0*1E-18)/(hbar)**2
#print('scaling factor is ', s)

E = sy.symbols('E')  #Suppose energy of incoming particle is 0.3eV
m = [0.043, 0.072, 0.043]  #effective mass of electrons in layers
for i in range(3):
    print('Effective mass of e in layer', i, 'is', m[i])

k = []  #Defining an array for wavevectors in different layers
for i in range(3):
    k.append(sy.sqrt(s*m[i]*(E-v[i])))
    print('Wave vector in layer', i, 'is', k[i])

x = []
for i in range(2):
    x.append((k[i+1]*m[i])/(k[i]*m[i+1]))
    # print(x[i])

#Define boundary condition matrices for the two interfaces.
D0 = (1/2)*sy.Matrix([[1+x[0], 1-x[0]], [1-x[0], 1+x[0]]], dtype = complex)
#print(D0)
#A = sy.matrix2numpy(D0, dtype=complex)
D1 = (1/2)*sy.Matrix([[1+x[1], 1-x[1]], [1-x[1], 1+x[1]]], dtype = complex)
#print(D1)
#a = eye(3,3)
#print(a)

#Define propagation matrix for the 2nd layer (the quantum well)
#print(d[1])
#print(k[1])
P1 = 1*sy.Matrix([[sy.exp(-1j*k[1]*d[1]), 0], [0, sy.exp(1j*k[1]*d[1])]], dtype = complex)
#print(P1)
print("abs")

T = D0*P1*D1
#print('Transfer matrix is given by:', T)
#print('Dimension of transfer matrix T is', T.shape)
#print(T[0,0])

# I want to solve the equation T[0,0] = 0 for E
def f(x):
    return T[0,0]

x0 = 0.5  #initial guess
x = fsolve(f, x0)
print("E is", x)

'''
y = sy.Eq(T[0,0], 0)
z = sy.solve(y, E)
print('z', z)
'''
The main part, I guess, is the part of the code where I am trying to solve the equation. The steps I am following are:
Defining a symbol E using SymPy.
Generating three matrices, built from formulae involving the variable E.
Generating a matrix T by multiplying those 3 matrices; note that the elements are complex and involve square roots of negative numbers.
Solving the first element of this matrix, T[0,0] = 0, for the variable E and finding the value of E. I used fsolve for solving T[0,0] = 0.
Just a note for future questions: please leave out unused imports such as numpy, and leave out zombie code like # a = eye(3,3). This helps keep the code as clean and short as possible. Also, the sample code as posted would not run because of indentation problems, so when you copy and paste code, make sure it works before you do so. Always try to make your questions as short and modular as possible.
The expression of T[0,0] is too complex to solve analytically by SymPy so numerical approximation is needed. This leaves 2 options:
using SciPy's solvers which are advanced but require type casting to float values since SciPy does not deal with SymPy objects in any way.
using SymPy's root solvers which are less advanced but are probably simpler to use.
Both of these will only ever produce a single number as output since you can't expect numeric solvers to find every root. If you wanted to find more than one, then I advise that you use a list of points that you want to use as initial values, input each of them into the solvers and keep track of the distinct outputs. This will however never guarantee that you have obtained every root.
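As a rough sketch of that idea, with a stand-in function rather than the transfer-matrix element so it runs on its own:

import numpy as np
from scipy.optimize import newton

def f(e):
    return np.cos(3*e) - 0.2*e          # stand-in function with several real roots

roots = set()
for guess in np.linspace(-3, 3, 13):    # list of initial values to try
    try:
        root = newton(f, guess)
    except RuntimeError:                # raised when newton fails to converge
        continue
    roots.add(round(float(root), 6))    # rounding collapses near-identical results

print(sorted(roots))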
Only mix SciPy and SymPy if you are comfortable using both with no problems. SciPy doesn't play at all with SymPy and you should only have list, float, and complex instances when working with SciPy.
import math
import sympy as sy
from scipy.optimize import newton

# Constants (values at temp 10K)
hbar = 1.055E-34
m0 = 9.1095E-31  # free mass of electron
q = 1.602E-19
v = [0.510, 0, 0.510]  # conduction band offset in eV
m1 = 0.043  # effective mass in In_0.53Ga_0.47As
m2 = 0.072  # effective mass in Al_0.48In_0.52As
d = [-math.inf, 100, math.inf]  # dimension of structure in nanometers

'''scaling factor so that E is in eV, mass is in units of the free electron mass,
and length is in nanometers'''
s = (2 * q * m0 * 1E-18) / hbar ** 2

E = sy.symbols('E')  # Suppose energy of incoming particle is 0.3eV
m = [0.043, 0.072, 0.043]  # effective mass of electrons in layers
for i in range(3):
    print('Effective mass of e in layer', i, 'is', m[i])

k = []  # Defining an array for wavevectors in different layers
for i in range(3):
    k.append(sy.sqrt(s * m[i] * (E - v[i])))
    print('Wave vector in layer', i, 'is', k[i])

x = []
for i in range(2):
    x.append((k[i + 1] * m[i]) / (k[i] * m[i + 1]))

# Define boundary condition matrices for the two interfaces.
D0 = (1 / 2) * sy.Matrix([[1 + x[0], 1 - x[0]], [1 - x[0], 1 + x[0]]], dtype=complex)
D1 = (1 / 2) * sy.Matrix([[1 + x[1], 1 - x[1]], [1 - x[1], 1 + x[1]]], dtype=complex)

# Define propagation matrix for the 2nd layer (the quantum well)
P1 = 1 * sy.Matrix([[sy.exp(-1j * k[1] * d[1]), 0], [0, sy.exp(1j * k[1] * d[1])]], dtype=complex)
print("abs")

T = D0 * P1 * D1

# did not converge for 0.5
x0 = 0.75

# method 1:
def f(e):
    # evaluate T[0,0] at e and remove all sympy related things.
    result = complex(T[0, 0].replace(E, e))
    return result

solution1 = newton(f, x0)
print(solution1)

# method 2:
solution2 = sy.nsolve(T[0, 0], E, x0)
print(solution2)
This prints:
(0.7533104353644469-0.023775286117722193j)
1.00808496181754 - 0.0444042144405285*I
Note that the first line is a native Python complex instance while the second is an instance of SymPy's complex number. One can convert the second simply with print(complex(solution2)).
Now, you'll notice that they produce different numbers, but both are correct. This function seems to have a lot of zeros, as can be seen from the GeoGebra plot:
The red axis is Re(E), the green one is Im(E), and the blue one is |T[0,0]|. Each of those "spikes" is probably a zero.
I am currently trying to write some python code to solve an arbitrary system of first order ODEs, using a general explicit Runge-Kutta method defined by the values alpha, gamma (both vectors of dimension m) and beta (lower triangular matrix of dimension m x m) of the Butcher table which are passed in by the user. My code appears to work for single ODEs, having tested it on a few different examples, but I'm struggling to generalise my code to vector valued ODEs (i.e. systems).
In particular, I try to solve a Van der Pol oscillator ODE (reduced to a first order system) using Heun's method defined by the Butcher Tableau values given in my code, but I receive the errors
"RuntimeWarning: overflow encountered in double_scalars f = lambda t,u: np.array(... etc)" and
"RuntimeWarning: invalid value encountered in add kvec[i] = f(t+alpha[i]*h,y+h*sum)"
followed by my solution vector that is clearly blowing up. Note that the commented out code below is one of the examples of single ODEs that I tried and is solved correctly. Could anyone please help? Here is my code:
import numpy as np

def rk(t, y, h, f, alpha, beta, gamma):
    '''Runge-Kutta iteration'''
    return y + h*phi(t, y, h, f, alpha, beta, gamma)

def phi(t, y, h, f, alpha, beta, gamma):
    '''Phi function for the Runge-Kutta iteration'''
    m = len(alpha)
    count = np.zeros(len(f(t, y)))
    kvec = k(t, y, h, f, alpha, beta, gamma)
    for i in range(1, m+1):
        count = count + gamma[i-1]*kvec[i-1]
    return count

def k(t, y, h, f, alpha, beta, gamma):
    '''returns a vector containing each stage k_{i} in the m-stage Runge-Kutta method'''
    m = len(alpha)
    kvec = np.zeros((m, len(f(t, y))))
    kvec[0] = f(t, y)
    for i in range(1, m):
        sum = np.zeros(len(f(t, y)))
        for l in range(1, i+1):
            sum = sum + beta[i][l-1]*kvec[l-1]
        kvec[i] = f(t+alpha[i]*h, y+h*sum)
    return kvec

def timeLoop(y0, N, f, alpha, beta, gamma, h, rk):
    '''function that loops through time using the RK method'''
    t = np.zeros([N+1])
    y = np.zeros([N+1, len(y0)])
    y[0] = y0
    t[0] = 0
    for i in range(1, N+1):
        y[i] = rk(t[i-1], y[i-1], h, f, alpha, beta, gamma)
        t[i] = t[i-1]+h
    return t, y

#################################################################

'''f = lambda t,y: (c-y)**2
Y = lambda t: np.array([(1+t*c*(c-1))/(1+t*(c-1))])
h0 = 1
c = 1.5
T = 10
alpha = np.array([0,1])
gamma = np.array([0.5,0.5])
beta = np.array([[0,0],[1,0]])
eff_rk = compute(h0,Y(0),T,f,alpha,beta,gamma,rk, Y,11)'''

#constants
mu = 100
T = 1000
h = 0.01
N = int(T/h)

#initial conditions
y0 = 0.02
d0 = 0
init = np.array([y0, d0])

#Butcher tableau for Heun's method
alpha = np.array([0, 1])
gamma = np.array([0.5, 0.5])
beta = np.array([[0, 0], [1, 0]])

#rhs of the ode system
f = lambda t, u: np.array([u[1], mu*(1-u[0]**2)*u[1]-u[0]])

#solving the system
time, sol = timeLoop(init, N, f, alpha, beta, gamma, h, rk)
print(sol)
Your step size is not small enough. The Van der Pol oscillator with mu=100 is a fast-slow system with very sharp turns at the switching of the modes, so rather stiff. With explicit methods this requires small step sizes, the smallest sensible step size is 1e-5 to 1e-6. You get a solution on the limit cycle already for h=0.001, with resulting velocities up to 150.
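For example, reusing the definitions from the question and only changing the step size:

h = 0.001          # or smaller, 1e-5 to 1e-6, for a well resolved solution
N = int(T/h)
time, sol = timeLoop(init, N, f, alpha, beta, gamma, h, rk)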
You can reduce some of that stiffness by using a different velocity/impulse variable. In the equation
x'' - mu*(1-x^2)*x' + x = 0
you can combine the first two terms into a derivative,
mu*v = x' - mu*(1-x^2/3)*x
so that
x' = mu*(v+(1-x^2/3)*x)
v' = -x/mu
The second equation is now uniformly slow close to the limit cycle, while the first has long relatively straight jumps when v leaves the cubic v=x^3/3-x.
This integrates nicely with the original h=0.01, keeping the solution inside the box [-3,3]x[-2,2], even if it shows some strange oscillations that are not present for smaller step sizes and the exact solution.
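Below is a minimal, self-contained sketch of that transformed system, stepped with Heun's method (the same tableau as in the question) at h = 0.01; as noted above, the coarse step introduces some spurious oscillation but the solution stays bounded:

import numpy as np

mu = 100.0

def f_transformed(t, u):
    # u = (x, v) with x' = mu*(v + (1 - x^2/3)*x), v' = -x/mu
    x, v = u
    return np.array([mu*(v + (1 - x**2/3)*x), -x/mu])

h, T = 0.01, 1000.0
N = int(T/h)
x0, d0 = 0.02, 0.0                           # same initial state as in the question
u = np.array([x0, d0/mu - (x0 - x0**3/3)])   # v = x'/mu - (x - x^3/3)
path = np.empty((N+1, 2))
path[0] = u
for n in range(N):
    t = n*h
    k1 = f_transformed(t, u)
    k2 = f_transformed(t + h, u + h*k1)
    u = u + h*(k1 + k2)/2                    # Heun step
    path[n+1] = u
print(path[-1])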
I am creating a project on car simulations and I've run into a problem. When I run my code, the cars sometimes overtake one another.
I have spent a few days trying to figure out why this is happening and I still have no idea. The optimal velocity function should set the (de)acceleration so that an overtake would not happen, but for some reason it still allows cars to sometimes overtake each other and not decelerate fast enough.
Could you help me, or just push me in the direction where I should be looking?
Here is my optimal velocity function:
def optimal_velocity_function(dx, d_safe, v_max):
    vx_opt = v_max * (np.tanh(dx - d_safe) + np.tanh(d_safe))
    return vx_opt
Now I am using it within Euler's method to solve ODE:
def euler_method(x, v, n_cars, h, tau, d_safe, v_max):
    # Euler method used to solve ODE
    # returns new position of car and its new velocity
    dv = np.zeros(n_cars)
    for j in range(n_cars - 1):
        dv[j] = tau ** (-1) * (optimal_velocity_function(x[j+1] - x[j], d_safe, v_max) - v[j])
    dv[n_cars - 1] = tau ** (-1) * (v_max - v[n_cars - 1])  # Speed of first car
    v_new = v + h * dv
    x_new = x + h * v_new
    return [x_new, v_new]
And here is rest of the model, basically just generating starting values and then iterating using the functions above.
def optimal_velocity_model(n, n_cars, d_0, v_0, h, tau, d_safe, v_max):
    global x_limit, canvas, xx, vv
    car_positions = np.linspace(0, n_cars, n_cars)
    x = np.array(sorted(np.random.random(n_cars) + car_positions))  # Generation of cars with minimal distance
    x = x * d_0
    v = np.random.random(n_cars) + v_0  # Generating initial speeds around v_0
    xx = np.zeros([n_cars, n])  # Matrix of locations
    vv = np.zeros([n_cars, n])  # Matrix of velocities
    for i in range(n):
        xx[:, i] = x
        vv[:, i] = v
        [x, v] = euler_method(x, v, n_cars, h, tau, d_safe, v_max)
    x_limit = xx.max()  # Interval in which the cars will be
    return
Thanks a lot. J.
I think you are getting a bit lost in the maths here and not focusing on the physics of the problem. By using tanh over delta X as well as the safe distance, you are generating a soft SPEED transition that might not guarantee that overtaking is inhibited.
My train of thought would be something like:
Firstly assess the relative speed of the two vehicles approaching the safety distance
Define the minimum deceleration given the vehicle properties that can stop it within the distance
Use tanh or any other function to model the transition between aggressive and smooth braking, or vice versa.
Update your speed by integrating that acceleration.
To calculate the deceleration, I can think of a simple approach that keeps the use of tanh:
Deceleration=Max_Dec(relative_speed)*[1+tanh(d_safe-dx)]
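A rough sketch of that rule (Max_Dec here is a hypothetical placeholder, modelled simply as proportional to the closing speed):

import numpy as np

def max_dec(relative_speed):
    # hypothetical choice: maximum deceleration grows with the closing speed
    return 2.0 * abs(relative_speed)

def deceleration(dx, relative_speed, d_safe):
    # Deceleration = Max_Dec(relative_speed) * [1 + tanh(d_safe - dx)]
    return max_dec(relative_speed) * (1 + np.tanh(d_safe - dx))

# inside the Euler update, for follower car j (h is the time step):
# v[j] = v[j] - h * deceleration(x[j+1] - x[j], v[j] - v[j+1], d_safe)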
Hope this can guide you further!
I'm using Scipy 14.0 to solve a system of ordinary differential equations describing the dynamics of a gas bubble rising vertically (in the z direction) in a still fluid because of buoyancy forces. In particular, I have an equation expressing the rising velocity U as a function of bubble radius R, i.e. U=dz/dt=f(R), and one expressing the radius variation as a function of R and U, i.e. dR/dt=f(R,U). Everything else appearing in the code below is a material property.
I'd like to implement something to account for the physical constraint on z which, obviously, is limited by the liquid height H. I consequently implemented a sort of z<=H constraint in order to stop integration in advance if needed: I used set_solout in order to do so. The situation is that the code runs and gives good results, but set_solout is not working at all (it seems like z_constraint is never called actually...). Do you know why?
Does somebody have a more clever idea, maybe also one that interrupts exactly when z=H (i.e. a final value problem)? Is this the right way/tool, or should I reformulate the problem?
thanks in advance
Emi
from scipy.integrate import ode

Db0 = 0.001  # init bubble radius
y0, t0 = [ Db0/2 , 0. ], 0.  # init conditions
H = 1

def y_(t, y, g, p0, rho_g, mi_g, sig_g, H):
    R = y[0]
    z = y[1]
    z_ = ( R**2 * g * rho_g ) / ( 3*mi_g )  # velocity
    R_ = ( R/3 * g * rho_g * z_ ) / ( p0 + rho_g*g*(H-z) + 4/3*sig_g/R )  # R dynamics
    return [R_, z_]

def z_constraint(t, y):
    H = 1  # should rather be a variable..
    z = y[1]
    if z >= H:
        flag = -1
    else:
        flag = 0
    return flag

r = ode( y_ )
r.set_integrator('dopri5')
r.set_initial_value(y0, t0)
r.set_f_params(g, 5*1e5, 2000, 40, 0.31, H)
r.set_solout(z_constraint)

t1 = 6
dt = 0.1

while r.successful() and r.t < t1:
    r.integrate(r.t+dt)
You're running into this issue. For set_solout to work correctly, it must be called right after set_integrator, before set_initial_value. If you introduce this modification into your code (and set a value for g), integration will terminate when z >= H, as you want.
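Applied to the code in the question, that reordering looks like this (everything else stays the same):

r = ode(y_)
r.set_integrator('dopri5')
r.set_solout(z_constraint)      # must come before set_initial_value
r.set_initial_value(y0, t0)
r.set_f_params(g, 5*1e5, 2000, 40, 0.31, H)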
To find the exact time when the bubble reached the surface, you can make a change of variables after the integration is terminated by solout and integrate back with respect to z (rather than t) to z = H. A paper that describes the technique is M. Henon, Physica 5D, 412 (1982); you may also find this discussion helpful. Here's a very simple example in which the time t such that y(t) = 0.5 is found, given dy/dt = -y:
import numpy as np
from scipy.integrate import ode

def f(t, y):
    """Exponential decay: dy/dt = -y."""
    return -y

def solout(t, y):
    if y[0] < 0.5:
        return -1
    else:
        return 0

y_initial = 1
t_initial = 0

r = ode(f).set_integrator('dopri5')
r.set_solout(solout)
r.set_initial_value(y_initial, t_initial)

# Integrate until solout constraint violated
r.integrate(2)

# New system with t as independent variable: see Henon's paper for details.
def g(y, t):
    return -1.0/y

r2 = ode(g).set_integrator('dopri5')
r2.set_initial_value(r.t, r.y)
r2.integrate(0.5)

y_final = r2.t
t_final = r2.y

# Error: difference between found and analytical solution
print t_final - np.log(2)
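Translating the same trick back to the bubble system in the question would look roughly like the sketch below; it assumes the y_ function and the r object from the question's code, and that g has been given a value:

def bubble_wrt_z(z, w, *params):
    # w = [t, R]; reuse the right-hand side y_ from the question and divide by dz/dt
    t, R = w
    dR_dt, dz_dt = y_(t, [R, z], *params)
    return [1.0/dz_dt, dR_dt/dz_dt]

rz = ode(bubble_wrt_z).set_integrator('dopri5')
rz.set_initial_value([r.t, r.y[0]], r.y[1])     # state at the step where solout stopped
rz.set_f_params(g, 5*1e5, 2000, 40, 0.31, H)
rz.integrate(H)                                 # integrate in z up to the free surface
print rz.y[0]                                   # time at which z = H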