I'm pretty new to Python, but for a paper at university I need to apply some models, preferably using Python. I spent a couple of days with the code attached below, but I can't really tell what's wrong: it's not creating a random process that looks like standard Brownian motion with drift. My parameters mu and sigma (expected return or drift, and volatility) seem to change nothing but the slope of the noise process. That's my problem: it all looks like noise. I hope my problem is specific enough; here is my code:
import math
from matplotlib.pyplot import *
from numpy import *
from numpy.random import standard_normal
'''
Geometric Brownian motion with drift.
Specifications:
mu: drift factor [assumption of risk neutrality]
sigma: volatility in %
T: time span
dt: length of steps
S0: stock price at t=0
W: Brownian motion with drift, N[0,1]
'''
T=1
mu=0.025
sigma=0.1
S0=20
dt=0.01
Steps=round(T/dt)
t=(arange(0, Steps))
x=arange(0, Steps)
W=(standard_normal(size=Steps)+mu*t)### standard brownian motion###
X=(mu-0.5*sigma**2)*dt+(sigma*sqrt(dt)*W) ###geometric brownian motion####
y=S0*math.e**(X)
plot(t,y)
show()
According to Wikipedia, the solution of geometric Brownian motion is S(t) = S0 * exp((mu - 0.5*sigma**2)*t + sigma*W(t)), where W(t) is a standard Brownian motion.
So it appears that
X=(mu-0.5*sigma**2)*t+(sigma*W) ###geometric brownian motion####
rather than
X=(mu-0.5*sigma**2)*dt+(sigma*sqrt(dt)*W)
Since T represents the time horizon, I think t should be
t = np.linspace(0, T, N)
Now, according to these MATLAB examples (here and here), it appears the Brownian motion should be built as
W = np.random.standard_normal(size = N)
W = np.cumsum(W)*np.sqrt(dt) ### standard brownian motion ###
rather than
W=(standard_normal(size=Steps)+mu*t)
Please check the math, though; I could be wrong.
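As a quick sanity check on that construction (a sketch of mine, assuming step size dt and N = T/dt draws), the sample variance of the cumulative sum at the final time should come out close to T, since each scaled increment has variance dt:
import numpy as np
T, dt = 1.0, 0.01
N = round(T / dt)
paths = 10000                                              # independent sample paths
increments = np.random.standard_normal((paths, N)) * np.sqrt(dt)
W = np.cumsum(increments, axis=1)                          # each row is one Brownian path
print(W[:, -1].var())                                      # should be close to T = 1.0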
So, putting it all together:
import matplotlib.pyplot as plt
import numpy as np
T = 2
mu = 0.1
sigma = 0.01
S0 = 20
dt = 0.01
N = round(T/dt)
t = np.linspace(0, T, N)
W = np.random.standard_normal(size = N)
W = np.cumsum(W)*np.sqrt(dt) ### standard brownian motion ###
X = (mu-0.5*sigma**2)*t + sigma*W
S = S0*np.exp(X) ### geometric brownian motion ###
plt.plot(t, S)
plt.show()
yields a plot of a single simulated geometric Brownian motion path.
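To see the effect of mu and sigma more clearly, here is a short sketch of my own that overlays several independent paths generated with the same construction and parameters as above:
import matplotlib.pyplot as plt
import numpy as np

T = 2
mu = 0.1
sigma = 0.01
S0 = 20
dt = 0.01
N = round(T/dt)
t = np.linspace(0, T, N)
for _ in range(5):                                   # five independent sample paths
    W = np.cumsum(np.random.standard_normal(size=N)) * np.sqrt(dt)
    X = (mu - 0.5*sigma**2)*t + sigma*W
    plt.plot(t, S0*np.exp(X))
plt.xlabel('t')
plt.ylabel('S(t)')
plt.show()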
An additional implementation, a bit shorter, using the parametrization of the Gaussian law through the normal function (instead of standard_normal):
import numpy as np
T = 2
mu = 0.1
sigma = 0.01
S0 = 20
dt = 0.01
N = round(T/dt)
# conversely, you can specify N and then compute dt, which is more common in the financial literature
X = np.random.normal(mu * dt, sigma* np.sqrt(dt), N)
X = np.cumsum(X)
S = S0 * np.exp(X)
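To visualize this variant, a short follow-up sketch (it reuses the variables above and assumes matplotlib, which this snippet does not import itself):
import matplotlib.pyplot as plt
t = np.linspace(0, T, N)    # time grid matching the N cumulative steps
plt.plot(t, S)
plt.xlabel('t')
plt.ylabel('S(t)')
plt.show()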
I am trying to solve a system of two coupled ODEs using SciPy's solve_ivp function, namely the hydrostatic-equilibrium and enclosed-mass equations for a white dwarf, with full units.
My initial conditions are that the enclosed mass should be 0 at the core and that the pressure should be 1e24 at the center. The code works for central pressures up to 1e16, but past that critical point the solution for the pressure flatlines; the initial value does not change.
When I run the code, no errors or warnings are produced.
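For reference, my reading of the system being integrated is the standard Newtonian stellar-structure equations (the specific polytropic closure in terms of K and gamma is my interpretation of the constants in the code):
\frac{dP}{dr} = -\frac{G\,m(r)\,\rho(r)}{r^2}, \qquad \frac{dm}{dr} = 4\pi r^2 \rho(r)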
import numpy as np
import math as m
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
def dSdx(r, S):
    p, m = S
    mu = 2  # This is an approx.
    gamma = 4/3
    K = 1.2435e15/(mu**gamma)
    G = 6.67430e-8  # cgs [cm^3 g^-1 s^-2]
    if r == 0:
        return [0, 0]
    else:
        return [-m*G*K**gamma/(r**2 * p**gamma), 4*K**gamma*np.pi*r**2/p**gamma]
S_0 = [1e24,0]
dt=10000
sol = solve_ivp(fun=dSdx, t_span=(0,1e9), max_step=dt, y0=S_0, method='RK45',
atol=0.01, rtol=0.001)
p_sol = sol.y[0]
m_sol = sol.y[1]/(1.989*1e33) # Solar Masses
x = sol.t*1e-5 # km
# Plotting
plt.figure(1)
fig = plt.plot(x,p_sol)
plt.legend(['Pressure'])
plt.title('Pressure of a Newtonian White Dwarf')
plt.xlabel("R $[Km]$")
plt.ylabel("Pressure $[erg/cm^3] $")
plt.figure(2)
fig = plt.plot(x,m_sol, color='darkorange')
plt.legend(['Mass'])
plt.title('Mass of a Newtonian White Dwarf')
plt.xlabel("R $[Km]$")
plt.ylabel("Mass $[M_{\oplus}]$")
I am having trouble producing a graphical representation of my system, which happens to be a harmonically driven pendulum. The problem statement is shown below for reference.
[Problem statement image]
The source code I used, based on the Verlet scheme, is shown below.
#Import needed modules
import numpy as np
import matplotlib.pyplot as plt
#Initialize variables (Initial conditions)
g = 9.8 #Gravitational Acceleration
L = 2.0 #Length of the Pendulum
A0 = 3.0 #Initial amplitude of the driving acceleration
v0 = 0.0 #Initial velocity
theta0 = 90*np.pi/180 #Initial Angle
drivingPeriod = 20.0 #Driving Period
#Setting time array for graph visualization
tau = 0.1 #Time Step
tStop = 10.0 #Maximum time for graph visualization derived from Kinematics
t = np.arange(0., tStop+tau, tau) #Array of time
theta = np.zeros(len(t))
v = np.zeros(len(t))
#Verlet Method
theta[0] = theta0
v[0] = v0
for i in range(len(t)-1):
    accel = -((g + (A0*np.sin((2*np.pi*t) / drivingPeriod)))/L) * np.sin(theta[i])
    theta[i+1] = theta[i] + tau*v[i] + 0.5*tau**2*accel[i]
    v[i+1] = v[i] + 0.5*tau*(accel[i] + accel[i+1])
#Plotting and saving the resulting graph
fig, ax1 = plt.subplots(figsize=(7.5,4.5))
ax1.plot(t,theta*(180/np.pi))
ax1.set_xlabel("Time (t)")
ax1.set_ylabel("Theta")
plt.show()
A sample output is shown.
[Output plot]
The pendulum should just swing back to its initial angle. How can I solve this issue? Notice that as time evolves, my angle measure (in degrees) keeps increasing; I want it to stay within a domain of 0 to 360 degrees.
Please change the array computation
accel = -((g + (A0*np.sin((2*np.pi*t) / drivingPeriod)))/L) * np.sin(theta[i])
theta[i+1] = theta[i] + tau*v[i] + 0.5*tau**2*accel[i]
into the proper element computation at the correct place
theta[i+1] = theta[i] + tau*v[i] + 0.5*tau**2*accel[i]
accel[i+1] = -((g + (A0*np.sin((2*np.pi*t[i+1]) / drivingPeriod)))/L) * np.sin(theta[i+1])
Note that you need to compute accel[0] separately outside the loop.
It makes the code more readable if you separate out the specifics of the physical model and declare at the start
def acceleration(t, theta):
    return -((g + (A0*np.sin((2*np.pi*t) / drivingPeriod)))/L) * np.sin(theta)
so that later you just call
accel[i+1]=acceleration(t[i+1],theta[i+1])
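Putting these suggestions together, a minimal sketch of the corrected update loop (accel is now an array filled element by element; t, tau, theta and v are as in the question):
accel = np.zeros(len(t))
accel[0] = acceleration(t[0], theta[0])    # initial acceleration, computed outside the loop
for i in range(len(t)-1):
    theta[i+1] = theta[i] + tau*v[i] + 0.5*tau**2*accel[i]
    accel[i+1] = acceleration(t[i+1], theta[i+1])
    v[i+1] = v[i] + 0.5*tau*(accel[i] + accel[i+1])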
And even then: with a forced oscillation your system is open, so it is possible that the forcing pumps energy into the pendulum and causes it to start rotating. This is what your graph shows.
The Verlet method, like any symplectic method, only promises approximately constant energy if the system is closed and conservative, that is, in the most common cases, there is no outside influence and all forces are gradient forces.
I'm trying to simulate a simple stochastic process in Python, but with no success. The process is the following:
x(t + δt) = r(t) * x(t)
where r(t) is a Bernoulli random variable that can assume the values 1.5 or 0.6.
I've tried the following:
import numpy as np
from scipy.stats import bernoulli

n = 10
r = np.zeros((1, n))
for i in range(0, n, 1):
    if r[1, i] == r[1, 0]:
        r[1, i] = 1
    else:
        B = bernoulli.rvs(0.5, size=1)
        if B == 0:
            r[1, i] = r[1, i-1] * 0.6
        else:
            r[1, i] = r[1, i-1] * 1.5
Can you explain why this is wrong and suggest a possible solution?
So, the first thing is that the SDE should be viewed over time, so you also need to consider the discretization rather than just giving the number of steps n.
Essentially, what you are asking for is a simple multiplicative random walk in which a Bernoulli random variable taking the values 0.6 or 1.5 replaces the Gaussian (standard normal) random variable.
So I have created an answer here, using NumPy to generate the Bernoulli random variable for efficiency (NumPy is faster than SciPy), running the simulation with a step size of 0.01, and then plotting the solution using matplotlib.
One thing to note is that this SDE is one-dimensional, so we can just store the state and time in separate vectors and plot them at the end.
# Function generating a Bernoulli trial (your r(t))
def get_bernoulli(p=0.5):
    '''
    Function using numpy (faster than scipy.stats)
    to generate a Bernoulli random variable taking the value 0.6 or 1.5
    '''
    B = np.random.binomial(1, p, 1)
    if B == 0:
        return 0.6
    else:
        return 1.5
This is then used in the simulation as
import numpy as np
import matplotlib.pyplot as plt
dt = 0.01 #step size
x0 = 1# initialize
tfinal = 1
sqrtdt = np.sqrt(dt)
n = int(tfinal/dt)
# State and time vectors
xtraj = np.zeros(n+1, float)
trange = np.linspace(start=0,stop=tfinal ,num=n+1)
# initialized
xtraj[0] = x0
for i in range(n):
    xtraj[i+1] = xtraj[i] * get_bernoulli(p=0.5)
plt.plot(trange,xtraj,label=r'$x(t)$')
plt.xlabel("time")
plt.ylabel(r"$X$")
plt.legend()
plt.show()
Here we assumed the Bernoulli trial is fair, but p can be changed to add some more variation.
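An equivalent vectorized sketch (my variation, reusing n, x0 and trange from above; np.random.choice draws all multipliers at once, and its p argument makes the trial unfair if desired):
r = np.random.choice([0.6, 1.5], size=n, p=[0.5, 0.5])    # all multipliers r(t) at once
xtraj = x0 * np.cumprod(np.concatenate(([1.0], r)))       # running product gives x(t)
plt.plot(trange, xtraj)
plt.show()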
I am trying to achieve a plot like this, but with multiple circles (where each circle represents the solved equation for a different energy/position/velocity):
Using this code:
import matplotlib.pyplot as plt
import numpy as np
def harmonic_oscillator_energy_force(x, k=1, x0=0):
    # calculate the energy and force on the right-hand side of the equal signs
    energy = 0.5*k*(x-x0)**2
    force = -k*(x-x0)
    return energy, force

# phi changes the phase
def harmonic_oscillator_position_velocity(t, A=1, omega=1, phi=0):
    position = A * np.cos(omega * t + phi)
    velocity = -A * omega * np.sin(omega * t + phi)
    return position, velocity

# this function will plot position against velocity for a harmonic oscillator
def plot_harmonic_oscillator(t_max=10, dt=0.1):  # **kwargs):
    t_points = np.arange(0, t_max + dt, dt)
    for phi in range(-20, 20):
        for t in range(0, 100):
            position, velocity = harmonic_oscillator_position_velocity(t, 1, 1, phi)
            plt.plot(position, velocity)
plot_harmonic_oscillator(t_max=10, dt=0.1)
plt.show()
But it doesn't seem to work! I'm a beginner and would appreciate any help.
I am also a beginner, but I tried adding 'b.' to your plot call; please see the following:
plt.plot(position, velocity, 'b.')
then I got the figure:
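If the goal is several distinct circles (one per amplitude, i.e. per energy), here is a sketch that reuses the question's imports and harmonic_oscillator_position_velocity and plots one full period per amplitude (the amplitude values are my own choice):
t_points = np.arange(0, 2*np.pi + 0.1, 0.1)    # one full period for omega = 1
for A in [0.5, 1.0, 1.5, 2.0]:                 # each amplitude traces a different circle
    position, velocity = harmonic_oscillator_position_velocity(t_points, A=A)
    plt.plot(position, velocity)
plt.xlabel('position')
plt.ylabel('velocity')
plt.show()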
I'm interested in solving,
\frac{\partial \phi}{\partial t} - D \nabla^2 \phi - \alpha \phi - \gamma \phi = 0
The following is working, but I have a few questions:
1. Is it possible to increase performance with FiPy? The nx, ny, nz bin counts seem very small here, yet the computation takes a long time. I also don't understand why the arrays X, Y, and Z are so large.
2. Notice that in the first frame we are zoomed in. How can I force the extents to automatically be [0..nx, 0..ny, 0..nz] in all plots?
3. Data for the first frame is a sphere of points with value 1.0 surrounded by 0.0. Why does there appear to be a gradient? Is Mayavi interpolating? If so, how can I disable this?
Code:
from fipy import *
import mayavi.mlab as mlab
import numpy as np
import time
# Spatial parameters
nx = ny = nz = 30 # bins
dx = dy = dz = 1 # Must this be an integer?
L = nx * dx
# Diffusion and time step
D = 1.
dt = 10.0 * dx**2 / (2. * D)
steps = 4
# Initial value and radius of concentration
phi0 = 1.0
r = 3.0
# Rates
alpha = 1.0 # Source coefficient
gamma = .01 # Sink coefficient
mesh = Grid3D(nx=nx, ny=ny, nz=nz, dx=dx, dy=dy, dz=dz)
X, Y, Z = mesh.cellCenters # These are large arrays
phi = CellVariable(mesh=mesh, name=r"$\phi$", value=0.)
src = phi * alpha # Source term (zeroth order reaction)
degr = -gamma * phi # Sink term (degradation)
eq = TransientTerm() == DiffusionTerm(D) + src + degr
# Initial concentration is a sphere located in the center of a bounded cube
phi.setValue(1.0, where=( ((X-nx/2))**2 + (Y-ny/2)**2 + (Z-nz/2)**2 < r**2) )
# Solve
start_time = time.time()
results = [phi.getNumericValue().copy()]
for step in range(steps):
    eq.solve(var=phi, dt=dt)
    results.append(phi.getNumericValue().copy())
print('Time elapsed:', time.time() - start_time)
# Plot
for i, res in enumerate(results):
    fig = mlab.figure()
    res = res.reshape(nx, ny, nz)
    mlab.contour3d(res, opacity=.3, vmin=0, vmax=1, contours=100, transparent=True, extent=[0, 10, 0, 10, 0, 10])
    mlab.colorbar()
    mlab.savefig('diffusion3d_%i.png' % (i+1))
    mlab.close()
Time elapsed: 68.2 seconds
It's hard to tell from your question, but in the course of diagnosing things, I discovered that the LinearLUSolver scales very poorly as the dimension of the problem increases (see https://github.com/usnistgov/fipy/issues/474).
For this symmetric problem, PySparse should use the PCG solver and Trilinos should use GMRES. If you didn't install either of these, then you'll get the SciPy sparse solvers, which default to LU (I don't know why; something for us to look into), and things will be really slow in 3D. Try adding solver=LinearGMRESSolver() to your eq.solve(...) statement.
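A minimal sketch of that change, applied to the time loop above (it assumes LinearGMRESSolver is exposed by the installed solver suite via from fipy import *):
solver = LinearGMRESSolver()                 # reuse one solver instance across steps
for step in range(steps):
    eq.solve(var=phi, dt=dt, solver=solver)  # pass the solver explicitly instead of the LU default
    results.append(phi.getNumericValue().copy())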
As for the size of X, Y, and Z: you've declared a 30*30*30 cube of cells, so each of the cell-center coordinate vectors will be 27000 elements long. Did you have a different expectation for cellCenters?
I suggest you subclass our MayaviDaemon class, or at least look at how it sets up the display in Mayavi. In short, we set a data_set_clipper to the desired bounds.
I don't know.