Large angle pendulum plot did not show as expected - python

I am trying to plot the relationship between period and amplitude for an undamped, undriven pendulum when the small-angle approximation breaks down; however, my code did not do what I expected.
I think I should be expecting a strictly increasing graph, as shown in this video: https://www.youtube.com/watch?v=34zcw_nNFGU
Here is my code; I used the zero-crossing method to calculate the period:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
from itertools import chain
# Second order differential equation to be solved:
# d^2 theta/dt^2 = - (g/l)*sin(theta) - q* (d theta/dt) + F*sin(omega*t)
# set g = l and omega = 2/3 rad per second
# Let y[0] = theta, y[1] = d(theta)/dt
def derivatives(t, y, q, F):
    return [y[1], -np.sin(y[0]) - q*y[1] + F*np.sin((2/3)*t)]
t = np.linspace(0.0, 100, 10000)
#initial conditions:theta0, omega0
theta0 = np.linspace(0.0,np.pi,100)
q = 0.0 #alpha / (mass*g), resistive term
F = 0.0 #G*np.sin(2*t/3)
value = []
amp = []
period = []
for i in range(len(theta0)):
    sol = solve_ivp(derivatives, (0.0, 100.0), (theta0[i], 0.0), method='RK45',
                    t_eval=t, args=(q, F))
    velocity = sol.y[1]
    time = sol.t
    zero_cross = 0
    for k in range(len(velocity)-1):
        if (velocity[k+1]*velocity[k]) < 0:
            zero_cross += 1
            value.append(k)
        else:
            zero_cross += 0
    if zero_cross != 0:
        amp.append(theta0[i])
        # period calculated using the time evolved between the first and last zero-crossing detected
        period.append((2*(time[value[zero_cross - 1]] - time[value[0]])) / (zero_cross - 1))
plt.plot(amp,period)
plt.title('Period of oscillation of an undamped, undriven pendulum \nwith varying initial angular displacement')
plt.xlabel('Initial Displacement')
plt.ylabel('Period/s')
plt.show()
[resulting plot not shown]

You can use the event mechanism of solve_ivp for such tasks; it is designed for exactly these "simple" situations:
def halfperiod(t, y): return y[1]
halfperiod.terminal = True   # stop when a root is found
halfperiod.direction = 1     # detect sign changes from negative to positive

for i in range(1, len(theta0)):  # amp == 0 gives no useful result
    sol = solve_ivp(derivatives, (0.0, 100.0), (theta0[i], 0.0), method='RK45',
                    args=(q, F), events=(halfperiod,))
    if sol.success and len(sol.t_events[-1]) > 0:
        period.append(2*sol.t_events[-1][0])  # the full period is twice the event time
        amp.append(theta0[i])
This results in the expected monotonically increasing period-versus-amplitude plot.
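As a quick cross-check (my addition, not part of the original answer): for g = l = 1 the exact large-angle period is T(theta0) = 4*K(sin^2(theta0/2)), with K the complete elliptic integral of the first kind in SciPy's parameter convention, and this is strictly increasing in theta0. A minimal sketch to overlay it on the numerical result:
# Exact pendulum period for g = l = 1: T(theta0) = 4*K(m) with m = sin^2(theta0/2).
# scipy.special.ellipk takes the parameter m (= k^2), not the modulus k.
from scipy.special import ellipk

theta_check = np.linspace(0.01, np.pi - 0.01, 200)  # stay away from theta0 = pi, where T diverges
T_exact = 4 * ellipk(np.sin(theta_check / 2)**2)

plt.plot(amp, period, label='solve_ivp events')
plt.plot(theta_check, T_exact, 'k--', label='exact (elliptic integral)')
plt.legend()
plt.show()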

Related

How to include required time points in a list generated by a stochastic algorithm

I need to implement a stochastic algorithm that outputs the times and the states of a dynamic system at those time points. We include randomness in defining the time points by drawing a random number from the uniform distribution. What I want to do is find the state at the time points 0, 1, 2, ..., 24. Given the randomness of the algorithm, the time points 1, 2, 3, ..., 24 are not necessarily hit. We may include rounding at two decimal places, but even with rounding I cannot find/insert all of these time points. The question is: how can the code be changed so that the list of time points includes the numbers 1, 2, ..., 24 while preserving the stochasticity of the algorithm? Thanks for any suggestion.
import numpy as np
import random
import math as m
np.random.seed(seed = 5)
# Stoichiometric matrix
S = np.array([(-1, 0), (1, -1)])
# Reaction parameters
ke = 0.3; ka = 0.5
k = [ke, ka]
# Initial state vector at time t0
X1 = [200]; X2 = [0]
# We will update it for each time.
X = [X1, X2]
# Initial time is t0 = 0, which we will update.
t = [0]
# End time
tfinal = 24
# The propensity vector R concerning the last/updated value of time
def ReactionRates(k, X1, X2):
    R = np.zeros((2,1))
    R[0] = k[1] * X1[-1]
    R[1] = k[0] * X2[-1]
    return R
# We implement the Gillespie (SSA) algorithm
while True:
    # Reaction propensities/rates
    R = ReactionRates(k, X1, X2)
    propensities = R
    propensities_sum = sum(R)[0]
    if propensities_sum == 0:
        break
    # we include randomness
    u1 = np.random.uniform(0,1)
    delta_t = (1/propensities_sum) * m.log(1/u1)
    if t[-1] + delta_t > tfinal:
        break
    t.append(t[-1] + delta_t)
    b = [0, R[0], R[1]]
    u2 = np.random.uniform(0,1)
    # Choose j
    lambda_u2 = propensities_sum * u2
    for j in range(len(b)):
        if sum(b[0:j-1+1]) < lambda_u2 <= sum(b[1:j+1]):
            break  # out of for j
    # make j zero based
    j -= 1
    # We update the state vector
    X1.append(X1[-1] + S.T[j][0])
    X2.append(X2[-1] + S.T[j][1])
# round t values
t = [round(tt,2) for tt in t]
print("The time steps:", t)
print("The second component of the state vector:", X2)
After playing with your model, I conclude that interpolation works fine.
Basically, just append the following lines to your code:
ts = np.arange(tfinal+1)
xs = np.interp(ts, t, X2)
and if you have matplotlib installed, you can visualize using
import matplotlib.pyplot as plt
plt.plot(t, X2)
plt.plot(ts, xs)
plt.show()
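A small design note (my addition, not from the original answer): np.interp draws straight lines between reaction events, while the SSA trajectory is piecewise constant between events. If you want the values sampled at 0, 1, ..., 24 to be actual states of the process rather than interpolated ones, a zero-order-hold lookup is a small change:
# Zero-order hold: at each query time, take the state just after the most recent reaction.
# Relies on t being sorted (SSA times are increasing) and len(t) == len(X2).
idx = np.searchsorted(t, ts, side='right') - 1
xs_step = np.array(X2)[idx]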

Simulate a coupled ordinary differential equation

I want to write a program that turns a 2nd-order differential equation into two first-order differential equations, but I don't know how to do that in Python.
I am getting lots of errors; please help me write the code correctly.
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
import numpy as np
N = 30 # Number of coupled oscillators.
alpha=0.25
A = 1.0
# Initial positions.
y[0] = 0 # Fix the left-hand side at zero.
y[N+1] = 0 # Fix the right-hand side at zero.
# The range(1,N+1) command only prints out [1,2,3, ... N].
for p in range(1, N+1):  # p is particle number.
    y[p] = A * np.sin(3 * p * np.pi / (N+1.0))
####################################################
# Initial velocities.
####################################################
v[0] = 0 # The left and right boundaries are
v[N+1] = 0 # clamped and don't move.
# This version sets them all the particle velocities to zero.
for p in range(1, N+1):
    v[p] = 0
w0 = [v[p], y[p]]
def accel(t, w):
    v[p], y[p] = w
    global a
    a[0] = 0.0
    a[N+1] = 0.0
    # This version loops explicitly over all the particles.
    for p in range(1, N+1):
        a[p] = [v[p], y(p+1)+y(p-1)-2*y(p) + alpha * ((y[p+1] - y[p])**2 - (y[p] - y[p-1])**2)]
    return a
duration = 50
t = np.linspace(0, duration, 800)
abserr = 1.0e-8
relerr = 1.0e-6
solution = solve_ivp(accel, [0, duration], w0, method='RK45', t_eval=t,
                     vectorized=False, dense_output=True, args=(), atol=abserr, rtol=relerr)
Most general-purpose solvers do not handle structured state objects; they just work with a flat array representing points in the state space. From the construction of the initial point you seem to favor the state-space ordering
[ v[0], v[1], ... v[N+1], y[0], y[1], ..., y[N+1] ]
This allows you to simply split the state into its two halves and to assemble the derivative vector from the velocity and acceleration arrays.
Let's keep things simple and separate the functionality into small functions:
a = np.zeros(N+2)

def accel(y):
    global a  ## initialized to the correct length with zero, avoids repeated allocation
    a[1:-1] = y[2:] + y[:-2] - 2*y[1:-1] + alpha*((y[2:]-y[1:-1])**2 - (y[1:-1]-y[:-2])**2)
    return a

def derivs(t, w):
    v, y = w[:N+2], w[N+2:]
    return np.concatenate([accel(y), v])
Or, keeping with the theme of avoiding allocations:
dwdt = np.zeros(2*N+4)

def derivs(t, w):
    global dwdt
    v, y = w[:N+2], w[N+2:]
    dwdt[:N+2] = accel(y)
    dwdt[N+2:] = v
    return dwdt
Now you only need to set
w0=np.concatenate([v,y])
to rapidly get to a more interesting class of errors.
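Putting the pieces together, here is a minimal end-to-end sketch (my own assembly, reusing N, alpha, A and the accel/derivs functions defined above; the tolerances mirror the question):
# Initial state: clamped ends, sinusoidal initial displacement, zero velocities.
y0 = np.zeros(N + 2)
v0 = np.zeros(N + 2)
for p in range(1, N + 1):
    y0[p] = A * np.sin(3 * p * np.pi / (N + 1.0))
w0 = np.concatenate([v0, y0])        # state ordering [v..., y...] as above

duration = 50
t_eval = np.linspace(0, duration, 800)
solution = solve_ivp(derivs, [0, duration], w0, method='RK45',
                     t_eval=t_eval, atol=1.0e-8, rtol=1.0e-6)
v_sol = solution.y[:N + 2]           # velocities, shape (N+2, len(t_eval))
y_sol = solution.y[N + 2:]           # positions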

Spatio-temporal parameters in a pde - solution using method of lines in Python

I am looking to adapt this method-of-lines based solution of a PDE so that k is a function of both space and time, OR is equal to zero if a threshold criterion is not met. This is not the actual PDE I am solving, but it's a good enough example.
I can make k a function of t easily enough (see the MWE below); however, I cannot figure out how to make it a function of both space and time, with the added complexity of a threshold criterion that sends one term of the RHS of my ODE to zero at any nodes where the threshold is not exceeded.
As an illustration, consider
kSpace = X
kTime = 0.001*t if t < 2 and k = 0 otherwise
if kSpace*kTime > 0.002:
    return kSpace*kTime
else:
    return 0
MWE with incorrect k function
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
plt.interactive(False) # so I can see plots
N = 100 # number of points to discretise
L = 1.0 # length of the rod
X = np.linspace(0,L,N)
h = L/ (N - 1)
def k(t):
    if t < 2:
        return 0.001*t
    else:
        return 0.1

def odefunc(u, t):
    dudt = np.zeros(X.shape)
    dudt[0] = 0  # constant at boundary condition
    dudt[-1] = 0
    # for the internal nodes
    for i in range(1, N-1):
        dudt[i] = k(t)*(u[i+1] - 2*u[i] + u[i-1]) / h**2 - 1.0
    return dudt
init = 150.0 * np.ones(X.shape) # initial temperature
init[0] = 100.0 # boundary condition
init[-1] = 200.0 # boundary condition
tspan = np.linspace(0.0, 5.0, 100)
sol = odeint(odefunc, init, tspan)
for i in range(0, len(tspan), 5):
    plt.plot(X, sol[i], label='t={0:1.2f}'.format(tspan[i]))
# legend outside the figure
plt.legend(loc='center left', bbox_to_anchor=(1,0.5))
plt.xlabel('X position')
plt.ylabel('Temperature')
# adjust figure edges so the legend is in the figure
plt.subplots_adjust(top=0.89, right = 0.77)
plt.show()
I would do it like this, based on your existing code:
def k(x, t):
    return value_that_depends_on_x_and_t

def odefunc(u, t):
    dudt = np.zeros(X.shape)
    dudt[0] = 0  # constant at boundary condition
    dudt[-1] = 0
    # for the internal nodes
    for i in range(1, N-1):
        dudt[i] = k(X[i], t)*(u[i+1] - 2*u[i] + u[i-1]) / h**2 - 1.0
    return dudt
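For instance, one hypothetical k that follows the threshold illustration from the question (with kSpace taken to be the spatial coordinate itself and the 0.002 cutoff taken verbatim) could be:
def k(x, t):
    # Time factor: 0.001*t before t = 2; the question's illustration sets it to 0
    # afterwards, while the original MWE uses 0.1 -- adjust to whatever the model needs.
    kTime = 0.001 * t if t < 2 else 0.0
    kSpace = x                        # spatial factor: here just the coordinate
    if kSpace * kTime > 0.002:        # threshold criterion
        return kSpace * kTime
    return 0.0                        # below threshold: this term of the RHS vanishes
With this, k(X[i], t)*(u[i+1] - 2*u[i] + u[i-1])/h**2 is zero at any node where the criterion is not met.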

Scipy Minimize Not Working

I'm running the minimization below:
from scipy.optimize import minimize
import numpy as np
import math
import matplotlib.pyplot as plt
### objective function ###
def Rlzd_Vol1(w1, S):
    L = len(S) - 1
    m = len(S[0])
    # Compute log returns, size (L, m)
    LR = np.array([np.diff(np.log(S[:,j])) for j in xrange(m)]).T
    # Compute weighted returns
    w = np.array([w1, 1.0 - w1])
    R = np.array([np.sum(w*LR[i,:]) for i in xrange(L)])  # size L
    # Compute Realized Vol.
    vol = np.std(R) * math.sqrt(260)
    return vol
# stock prices
S = np.exp(np.random.normal(size=(50,2)))
### optimization ###
obj_fun = lambda w1: Rlzd_Vol1(w1, S)
w1_0 = 0.1
res = minimize(obj_fun, w1_0)
print res
### Plot objective function ###
fig_obj = plt.figure()
ax_obj = fig_obj.add_subplot(111)
n = 100
w1 = np.linspace(0.0, 1.0, n)
y_obj = np.zeros(n)
for i in xrange(n):
    y_obj[i] = obj_fun(w1[i])
ax_obj.plot(w1, y_obj)
plt.show()
The objective function shows an obvious minimum (it's quadratic), but the minimization output tells me the minimum is at 0.1, the initial point. I cannot figure out what's going wrong. Any thoughts?
w1 is passed in by the minimize routine as a (single-entry) array, not as a scalar. Try what happens if you define w1 = np.array([0.2]) and then calculate w = np.array([w1, 1.0 - w1]): you'll see you get a 2x1 matrix instead of a 2-entry vector.
To make your objective function able to handle w1 being an array, you can simply put an explicit conversion to float, w1 = float(w1), as the first line of Rlzd_Vol1. Doing so, I obtain the correct minimum.
Note that you might want to use scipy.optimize.minimize_scalar instead, especially if you can bracket where your minimum will be.
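A short sketch of both suggestions (assuming Rlzd_Vol1, obj_fun and S as defined in the question):
from scipy.optimize import minimize_scalar

def obj_fun_fixed(w1):
    return Rlzd_Vol1(float(w1), S)   # collapse the length-1 array that minimize passes to a scalar

res = minimize(obj_fun_fixed, 0.1)

# Or, since there is a single decision variable that lives in [0, 1]:
res_scalar = minimize_scalar(obj_fun, bounds=(0.0, 1.0), method='bounded')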

Solving ODE numerically with Python

I am numerically solving an ODE for a harmonic oscillator with Python. When I add a driving force it makes no difference, so I'm guessing something is wrong with the code. Can anyone see the problem? The (h/m)*f0*np.cos(wd*i) part is the driving force.
import numpy as np
import matplotlib.pyplot as plt
# This code solves the ODE mx'' + bx' + kx = F0*cos(Wd*t)
# m is the mass of the object in kg, b is the damping constant in Ns/m
# k is the spring constant in N/m, F0 is the driving force in N,
# Wd is the frequency of the driving force and x is the position
# Setting up
timeFinal= 16.0 # This is how far the graph will go in seconds
steps = 10000 # Number of steps
dT = timeFinal/steps # Step length
time = np.linspace(0, timeFinal, steps+1)
# Creates an array with steps+1 values from 0 to timeFinal
# Allocating arrays for velocity and position
vel = np.zeros(steps+1)
pos = np.zeros(steps+1)
# Setting constants and initial values for vel. and pos.
k = 0.1
m = 0.01
vel0 = 0.05
pos0 = 0.01
freqNatural = 10.0**0.5
b = 0.0
F0 = 0.01
Wd = 7.0
vel[0] = vel0 #Sets the initial velocity
pos[0] = pos0 #Sets the initial position
# Numerical solution using Euler's
# Splitting the ODE into two first order ones
# v'(t) = -(k/m)*x(t) - (b/m)*v(t) + (F0/m)*cos(Wd*t)
# x'(t) = v(t)
# Using the definition of the derivative we get
# (v(t+dT) - v(t))/dT on the left side of the first equation
# (x(t+dT) - x(t))/dT on the left side of the second
# In the for loop t and dT will be replaced by i and 1
for i in range(0, steps):
    vel[i+1] = (-k/m)*dT*pos[i] + vel[i]*(1 - dT*b/m) + (dT/m)*F0*np.cos(Wd*i)
    pos[i+1] = dT*vel[i] + pos[i]
# Ploting
#----------------
# With no damping
plt.plot(time, pos, 'g-', label='Undampened')
# Damping set to 10% of critical damping
b = (freqNatural/50)*0.1
# Using Euler's again to compute new values for new damping
for i in range(0, steps):
    vel[i+1] = (-k/m)*dT*pos[i] + vel[i]*(1 - (dT*(b/m))) + (F0*dT/m)*np.cos(Wd*i)
    pos[i+1] = dT*vel[i] + pos[i]
plt.plot(time, pos, 'b-', label = '10% of crit. damping')
plt.plot(time, 0*time, 'k-') # This plots the x-axis
plt.legend(loc = 'upper right')
#---------------
plt.show()
The problem here is with the term np.cos(Wd*i). It should be np.cos(Wd*i*dT); note that dT has been added so that the argument is the actual time, since t = i*dT.
If this correction is made, the simulation looks reasonable. Here's a version with F0=0.001. Note that the driving force is clear in the continued oscillations in the damped condition.
The problem with the original form is that np.cos(Wd*i) effectively jumps around the circle rather than moving smoothly around it, causing no net effect in the end. This is best seen by plotting it directly, but the easiest demonstration is to run the original form with F0 very large. Below is F0 = 10 (i.e., 10000x the value used in the correct equation), still using the incorrect form of the equation, and it's clear that the driving force just adds noise as it jumps around the circle.
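For concreteness, here is the corrected Euler loop (same variables as the question; only the cosine argument changes):
for i in range(0, steps):
    t_i = i * dT   # the actual time at step i
    vel[i+1] = (-k/m)*dT*pos[i] + vel[i]*(1 - dT*b/m) + (dT/m)*F0*np.cos(Wd*t_i)
    pos[i+1] = dT*vel[i] + pos[i]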
Note that your ODE is well behaved and has an analytical solution. So you could utilize sympy for an alternate approach:
import sympy as sy
sy.init_printing() # Pretty printer for IPython
t,k,m,b,F0,Wd = sy.symbols('t,k,m,b,F0,Wd', real=True) # constants
consts = {k: 0.1,   # values
          m: 0.01,
          b: 0.0,
          F0: 0.01,
          Wd: 7.0}
x = sy.Function('x')(t) # declare variables
dx = sy.Derivative(x, t)
d2x = sy.Derivative(x, t, 2)
# the ODE:
ode1 = sy.Eq(m*d2x + b*dx + k*x, F0*sy.cos(Wd*t))
sl1 = sy.dsolve(ode1, x) # solve ODE
xs1 = sy.simplify(sl1.subs(consts)).rhs # substitute constants
# Examining the solution, we note C3 and C4 are superfluous
xs2 = xs1.subs({'C3':0, 'C4':0})
dxs2 = xs2.diff(t)
print("Solution x(t) = ")
print(xs2)
print("Solution x'(t) = ")
print(dxs2)
gives
Solution x(t) =
C1*sin(3.16227766016838*t) + C2*cos(3.16227766016838*t) - 0.0256410256410256*cos(7.0*t)
Solution x'(t) =
3.16227766016838*C1*cos(3.16227766016838*t) - 3.16227766016838*C2*sin(3.16227766016838*t) + 0.179487179487179*sin(7.0*t)
The constants C1,C2 can be determined by evaluating x(0),x'(0) for the initial conditions.
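For example (my own continuation, not part of the original answer), with the initial conditions from the question, pos0 = 0.01 and vel0 = 0.05, the constants follow from solving x(0) = pos0 and x'(0) = vel0:
# Solve the two linear equations for the integration constants C1, C2.
C1, C2 = sy.symbols('C1 C2')
ics = sy.solve([sy.Eq(xs2.subs(t, 0), 0.01),
                sy.Eq(dxs2.subs(t, 0), 0.05)], [C1, C2])
x_particular = xs2.subs(ics)
print("Particular solution x(t) =")
print(x_particular)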
