A simple pendulum inside a big plane-parallel plate capacitor - python

The pendulum bob has mass m and a positive electric charge q. The capacitor plates are parallel to the surface of the Earth. The electric field inside the capacitor is directed vertically up, and its magnitude is modulated by the pendulum motion as E(t) = E_0*|sin(θ(t))|, where qE_0/(mg) = δ < 1 and θ is the angle between the pendulum and the vertical. The initial conditions are θ(0) = pi/2 rad and dθ/dt(0) = 0 rad/s.
Let L = 1.0 m and g = 9.8 m/s^2.
(a) First, taking δ = 0, estimate at which initial angle θ(0) the numerically obtained period equals the period predicted by the formula T = 2*pi*sqrt(L/g) to better than 1%.
(b) Find and plot the dependence of the period of oscillations of this pendulum on the parameter δ.
(c) What will happen to the pendulum if δ = 1?
So for (a), this is what I did:
#imports
%pylab nbagg
import numpy as np
from scipy.integrate import odeint
#Solve for T first
#let the length L be equal to 1
L = 1
g = 9.8
T = 2*np.pi*np.sqrt(L/g)
print(T)
#let delta = d
#if d = 0 then the ODE reduces to the plain pendulum
#Define the ODE
w_0 = np.sqrt(g/L)
def dy_dt(y, t):
    y1, y2 = y                       #y1 = theta, y2 = dtheta/dt
    return (y2, -w_0**2*np.sin(y1))  #(dtheta/dt, d2theta/dt2)
#Integration values and interval
t_0 = 0
t_f = T
nt = 10000
t = np.linspace(t_0, t_f, nt)
Now I'm not sure how to proceed, as I am trying to solve for dθ/dt but the problem states that dθ/dt(0) is just 0.
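For what it's worth, dθ/dt(0) = 0 isn't something to solve for: it is simply the second component of the initial-state vector handed to odeint. A minimal sketch of how the integration could proceed, reusing the names defined above (the period-estimation step is only hinted at):

y0 = [np.pi/2, 0.0]          #[theta(0), dtheta/dt(0)] from the problem statement
sol = odeint(dy_dt, y0, t)   #sol[:,0] is theta(t), sol[:,1] is dtheta/dt(t)
theta, omega = sol[:, 0], sol[:, 1]
#the period can then be estimated, e.g., from the zero crossings of omega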

So, I've looked this over a few times. It seems like a pretty fun problem, but I don't think that Stack Overflow is the right place for help just yet. I see that you're new to this site, so I'd like to encourage you to break this up into multiple questions and direct them to the appropriate groups.
Since your code is actually working at this point in time, Stack Overflow isn't quite the right venue for this question. I would recommend seeking help on the Physics Stack Exchange for questions about the physics behind this problem (such as how to set up the system of differential equations you need), and the Computational Science Stack Exchange for questions on how to solve the resulting differential equations numerically.
If you run into issues getting the code itself to work when you are further along in the problem, Stack Overflow is definitely the best place to ask for help.

Related

julia differential equations for a system of many equations isn't working

I'm switching my programming language to Julia because of its better performance for numerical calculations and differential equations compared to Python (I mostly used scipy's solve_ivp and odeint, but they turned out to be too slow for my problems).
For testing purposes I did a direct translation of my old Python code into new Julia code, and I'm pretty sure both versions should give the same numerical results. The problem seems to appear when I use the ODE solver from Julia, and I can't figure out why (maybe because there are too many equations?).
My physics problem is a simulation of a beam of charged planes. For the time integration I use a 2n-dimensional vector, with the [1:n] coordinates corresponding to the positions of the planes and the [n+1:2n] coordinates corresponding to the particles' speeds. The derivative function is therefore 2n-dimensional as well: the [1:n] time derivatives correspond to the [n+1:2n] coordinates of the state vector, and the [n+1:2n] derivatives consist of a positional term and an index term (see the code below).
Starting with my working code in Python:
def deriv(t, y):
    p = len(y)
    dydt = np.zeros(int(p))
    dydt[0:int(p/2)] = y[int(p/2):int(p)] #derivative of the positions
    k = index(y[0:int(p/2)])
    signal = abs(y[0:int(p/2)])/y[0:int(p/2)]
    dydt[int(p/2):int(p)] = -y[0:int(p/2)] + ((np.ones(int(p/2)) + 2*k)/(p))*signal
    #derivative of the speeds
    return dydt[0:int(p)]
With the following integration
sol = solve_ivp(deriv, (time,time+ delta_t), initial_phase, rtol = 0.00001)
and the resulting phase-space plot shows the expected short-time evolution (figure omitted).
Then we have my code in Julia:
function dydt(du, u, p, t)
    n = floor(Int, length(u)/2)
    k = sortperm(abs.(u[1:n]))
    k = k[k] #k ends up being the index equivalent
    du[1:n] = u[n+1:2n] #derivative of the positions
    du[n+1:2n] = (1/2n)*(-ones(n) + 2*k).*(abs.(u[1:n])./u[1:n]) - u[1:n]
    #derivative of the speeds
end
With the following integration:
using DifferentialEquations
p = 0 #there are no p parameters in my code but they are defined in the tutorials anyway
H = 1.5*rand(2000)
L = (2/3)*(rand(2000) - 0.5*ones(2000))
D = append!(H,L) #Initial state, similar to the python example
tspan = (0.0,1.0)
prob = ODEProblem(dydt,D,tspan,p,reltol=1e-4,abstol=1e-4)
sol = solve(prob)
And the result is asymmetrical, as the phase-space plot shows (figure omitted).
I hope somebody can help. I'll answer any major questions about the physics involved if necessary.

Estimate velocity on a spring by iterative approach

The problem:
Consider a system with a mass and a spring, as shown in the picture below (figure omitted). The stiffness of the spring and the mass of the object are known. Therefore, if the spring is stretched, the force it exerts can be calculated from Hooke's law, and the instantaneous acceleration can be estimated from Newton's laws of motion. Integrating the acceleration twice yields the distance the spring would move, and subtracting that from the initial length gives a new position from which to calculate the acceleration and start the loop again. As the acceleration decreases linearly, the speed levels off at a certain value (top right of the figure). Everything after that point (the spring compressing and decelerating) is neglected for this case.
My question is how one would go about coding that up in Python. So far I have written some pseudocode:
instantaneous_acceleration = lambda x: 5*x/10 # a = kx/m
delta_time = 0.01 #10 milliseconds
a[0] = instantaneous_acceleration(12) #initial acceleration when stretched to 12 m
v[0] = 0 #initial velocity 0 m/s
s[0] = 12 #initial length 12 m
i = 1
while a[i] > 12:
    v[i] = a[i-1]*delta_time + v[i-1] #calculate the next velocity
    s[i] = v[i]*delta_time + s[i-1] #calculate the next position
    a[i] = instantaneous_acceleration(s[i]) #use the position to derive the new acceleration
    i = i + 1
Any help or tips are greatly appreciated.
If you're going to integrate up front - which is a good idea and absolutely the way to go when you can - then you can just write down the equations as functions of t for everything:
x'' = -kx/m
x'' + (k/m)x = 0
r^2 + k/m = 0
r^2 = -(k/m)
r = i*sqrt(k/m)
x(t) = A*e^(i*(sqrt(k/m)t + B))
= A*cos(sqrt(k/m)t + B) + i*A*sin(sqrt(k/m)t + B)
Taking the real part: x(t) = A*cos(sqrt(k/m)t + B)
From initial conditions we know that
x(0) = 12 = A*cos(B)
v(0) = 0 = -sqrt(k/m)*A*sin(B)
The second of these equations is true only if we choose A = 0, B = 0, or B = Pi.
if A = 0, then the first equation has no solution.
if B = 0, the first equation has solution A = 12.
if B = Pi, the first equation has solution A = -12.
We probably prefer B = 0 and A = 12. This gives
x(t) = 12*cos(sqrt(k/m)t)
v(t) = -12*sqrt(k/m)*sin(sqrt(k/m)t)
a(t) = -12*(k/m)cos(sqrt(k/m)t)
Thus, at any incremental time t[n+1] = t[n] + dt, we can simply evaluate the precise position, velocity and acceleration at that time directly, without any drift or inaccuracy ever accumulating.
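For instance, a short sketch of this closed-form evaluation in Python, assuming the k = 5 N/m and m = 10 kg implied by the question's a = 5*x/10 (names and values are illustrative):

import numpy as np

k, m, A = 5.0, 10.0, 12.0    #assumed stiffness, mass, and amplitude
w = np.sqrt(k/m)

def exact_state(t):
    #position, velocity, and acceleration at time t, straight from the formulas
    x = A*np.cos(w*t)
    v = -A*w*np.sin(w*t)
    a = -A*w*w*np.cos(w*t)
    return x, v, a

t = np.arange(0, 10, 0.01)   #any time grid works; no error accumulates
x, v, a = exact_state(t)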
All that said, if you are interested in how to numerically find x(t) and v(t) and a(t) given an arbitrary ordinary differential equation, the answer is much harder. There are lots of good ways of doing what can be called numerical integration. Euler's method is the easiest:
// initial conditions
t[0] = 0
x[0] = …
x'[0] = …
…
x^(n-1)[0] = …
x^(n)[0] = 0
// iterative step
x^(n)[k+1] = f(x^(n-1)[k], …, x'[k], x[k], t[k])
x^(n-1)[k+1] = x^(n-1)[k] + dt * x^(n)[k]
…
x'[k+1] = x'[k] + dt * x''[k]
x[k+1] = x[k] + dt * x'[k]
t[k+1] = t[k] + dt
The smaller a value of dt you choose, the longer it takes to run for a fixed duration of time, but the more accurate the results you get. This is basically doing a Riemann sum of the function and all its derivatives up to the highest one involved in the ODE.
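To make this concrete, a minimal Python sketch of Euler's method for the spring, again assuming k = 5 and m = 10 (dt and the step count are arbitrary illustrative choices):

import numpy as np

k, m, dt, N = 5.0, 10.0, 0.01, 1000
x, v = 12.0, 0.0                 #initial position and velocity
xs = np.empty(N)
for i in range(N):
    a = -(k/m)*x                 #acceleration from the current position
    x, v = x + dt*v, v + dt*a    #advance both states with the old values
    xs[i] = x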
A more accurate version of this takes the average value of the derivative over the last time quantum rather than its value at one endpoint (the example above uses the beginning of the interval); averaging the two endpoints gives the trapezoidal rule. The average value over the interval is guaranteed to be closer to the true value over the interval than either endpoint alone (unless the function is constant over that interval, in which case the average is at least as good).
Probably the best standard numerical integration methods for ODEs (assuming you don't need something like leapfrog methods for greater stability) are the Runge-Kutta methods. An adaptive-timestep Runge-Kutta method of sufficient order should usually do the trick and give you accurate answers. Unfortunately, the mathematics needed to explain the Runge-Kutta methods is probably too advanced and time-consuming to cover here, but you can find information on these and other advanced techniques online, or in e.g. Numerical Recipes, a series of books on numerical methods that contains lots of very useful code samples.
Even the Runge-Kutta methods work basically by refining the guess at the function's value over the time quantum, though. They just do it in more sophisticated ways that provably reduce the error at each step.
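In practice you rarely implement Runge-Kutta by hand; here is a sketch using SciPy's adaptive RK45 solver for the same spring (again with the assumed k = 5 and m = 10):

import numpy as np
from scipy.integrate import solve_ivp

k, m = 5.0, 10.0

def rhs(t, y):                   #y = [x, v]
    return [y[1], -(k/m)*y[0]]

sol = solve_ivp(rhs, (0, 10), [12.0, 0.0], method='RK45', rtol=1e-8)
x, v = sol.y                     #positions and velocities at the times sol.t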
You have a sign error in the force: for a spring, or any other oscillation, it should always be opposite to the excitation direction. Correcting this immediately gives an oscillation. However, your loop condition will then never be satisfied, so you have to adapt that as well.
You can immediately increase the order of your method by upgrading it from the current symplectic Euler method to Leapfrog-Verlet. You only have to change the interpretation of v[i] to be the velocity at t[i]-dt/2. Then the first update uses the acceleration in the middle, at t[i-1], to compute the velocity at t[i-1]+dt/2 = t[i]-dt/2 from the velocity at t[i-1]-dt/2 with a midpoint formula. The position update in the next line is a similar midpoint formula, using the velocity at the time halfway between the position times. All you have to change in the code to get this advantage is to set the initial velocity to the one at time t[0]-dt/2, using the Taylor expansion at t[0].
import numpy as np
import matplotlib.pyplot as plt

instantaneous_acceleration = lambda x: -5*x/10 # a = -kx/m
delta_time = 0.01 #10 milliseconds
s0, v0 = 12, 0 #initial length 12 m, initial velocity 0 m/s
N = 1000
s = np.zeros(N+1); v = s.copy(); a = s.copy()
a[0] = instantaneous_acceleration(s0) #initial acceleration when stretched to 12 m
v[0] = v0 - a[0]*delta_time/2 #leapfrog: velocity shifted to t[0]-dt/2
s[0] = s0
for i in range(N):
    v[i+1] = a[i]*delta_time + v[i] #calculate the next velocity
    s[i+1] = v[i+1]*delta_time + s[i] #calculate the next position
    a[i+1] = instantaneous_acceleration(s[i+1]) #use the position to derive the new acceleration
#produce plots of all these functions
t = np.arange(0, N+1)*delta_time
fig, ax = plt.subplots(3, 1, figsize=(5, 3*1.5))
for g, y in zip(ax, (s, v, a)):
    g.plot(t, y); g.grid()
plt.tight_layout(); plt.show()
This is obviously, and correctly, an oscillation. The exact solution is 12*cos(sqrt(0.5)*t); using it and its derivatives to compute the errors in the numerical solution (remembering the leapfrogging of the velocities) gives, via
w = 0.5**0.5; dt = delta_time
fig, ax = plt.subplots(3, 1, figsize=(5, 3*1.5))
for g, y in zip(ax, (s - 12*np.cos(w*t), v + 12*w*np.sin(w*(t - dt/2)), a + 12*w**2*np.cos(w*t))):
    g.plot(t, y); g.grid()
plt.tight_layout(); plt.show()
the plot below (figure omitted), showing errors of the expected size delta_time**2.
An analytical approach is the simplest way to obtain the velocity of a simple system that obeys Hooke's law.
However, if you want a physically accurate numerical/iterative approach, I strongly advise against methods like the standard Euler or Runge-Kutta methods (suggested by Patrick87). [Correction: the OP's method is a symplectic 1st-order method, if the sign of the acceleration term is corrected.]
You probably want to use a Hamiltonian approach and a suitable symplectic integrator such as the second-order leapfrog (also suggested by Patrick87).
For Hooke's law, you can express the Hamiltonian as H = T(p) + V(q), where p is the momentum (associated with the velocity) and q is the position (how far the spring is from equilibrium).
You have the kinetic energy T and potential energy V
T(p) = 0.5*p^2/m
V(q) = 0.5*k*q^2
You simply need the derivatives of these two expressions to simulate the system:
dT/dp = p/m
dV/dq = k*q
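For illustration, a minimal kick-drift-kick leapfrog step driven only by these two derivatives; the k, m, and dt values are assumed for the example:

k, m, dt = 5.0, 10.0, 0.01
q, p = 12.0, 0.0                 #initial position and momentum

def leapfrog_step(q, p):
    p -= 0.5*dt*k*q              #half kick:  p -= dt/2 * dV/dq
    q += dt*p/m                  #full drift: q += dt * dT/dp
    p -= 0.5*dt*k*q              #half kick with the updated position
    return q, p

for _ in range(1000):
    q, p = leapfrog_step(q, p)
#the energy H = 0.5*p**2/m + 0.5*k*q**2 stays bounded instead of drifting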
I provided a detailed example (albeit for a different, 2-dimensional system), including an implementation of a 1st and a 4th order method, here:
https://zymplectic.com/case3.html under method 0 and method 1
These are symplectic integrators, whose energy-preserving property means you can run long simulations without dissipative errors.

python solving differential equations

I am attempting to solve four coupled differential equations. After googling and researching I was finally able to understand how the solver works, but I can't get this specific problem to run correctly. The code runs, but the graphs are incorrect.
I think the problem lies in the volume expression inside the function, which changes with how much time has passed. The volume at a specific time is then used to evaluate the right-hand side of the differential equations.
The interval, starting point and ending point for the time vector are correct, and so are the constants.
import numpy as np
import scipy.integrate as integrate
import matplotlib.pyplot as plt
#defining all constants and initial conditions
k=2.2
CB0_inlet=.025
V_flow_inlet=.05
V_reactor_initial=5
CA0_reactor=.05
FB0=CB0_inlet*V_flow_inlet
def dcdt(C,t):
    #expression of how volume in reactor varies with time
    V=V_flow_inlet*t+C[4] #C[4] is the initial reactor volume ###we don't need C to be C0, correct?
    #calculating right hand side of the four differential equations
    dadt=-k*C[0]*C[1]-((V_flow_inlet*C[0])/V)
    dbdt=((V_flow_inlet*(CB0_inlet-C[1]))/V)-k*C[0]*C[1]
    dcdt=k*C[0]*C[1]-((V_flow_inlet*C[2])/V)
    dddt=k*C[0]*C[1]-((V_flow_inlet*C[3])/V)
    return [dadt,dbdt,dcdt,dddt,V]
#creating time array, initial conditions array, and calling odeint
t=np.linspace(0,500,100)
initial_conditions=[.05,0,0,0,V_reactor_initial] # [CA0, CB0, CC0, CD0, V0_reactor]
C=integrate.odeint(dcdt,initial_conditions,t)
plt.plot(t,C)
Taking hints from the variable names and the equation structure, you are considering the chemical reaction
A + B -> C + D
There are two sources of change in the concentrations a, b, c, d of the reactants A, B, C, D:
- the reaction itself, with reaction speed k*a*b, and
- the inflow of reactant B in a solution with concentration b0_in at volume rate V_in, which results in a relative concentration change of V_in/V in all components and an addition of V_in*b0_in/V to B.
This is all correctly reflected in the first four equations of your system. In the treatment of the volume, however, you are mixing two approaches in an inconsistent way. Either V is a known function of t, and thus not a component of the state vector, in which case
V = V_reactor_initial + V_flow_inlet * t
or you treat it as a component of the state, in which case the current volume is
V = C[4]
and the rate of volume change is
dVdt = V_flow_inlet.
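For completeness, a sketch of the first approach (not part of the original answer): V is computed from t inside the derivative function, and the state holds only the four concentrations:

def dcdt_known_V(C, t):
    a, b, c, d = C
    V = V_reactor_initial + V_flow_inlet*t   #known function of time
    dadt = -k*a*b - (V_flow_inlet/V)*a
    dbdt = -k*a*b + (V_flow_inlet/V)*(CB0_inlet - b)
    dcdt =  k*a*b - (V_flow_inlet/V)*c
    dddt =  k*a*b - (V_flow_inlet/V)*d
    return [dadt, dbdt, dcdt, dddt]
#integrated with a 4-component initial state: [.05, 0, 0, 0]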
Modifying your code for the second approach looks like
import numpy as np
import scipy.integrate as integrate
import matplotlib.pyplot as plt

#defining all constants and initial conditions
k=2.2
CB0_inlet=.025
V_flow_inlet=.05
V_reactor_initial=5
CA0_reactor=.05
FB0=CB0_inlet*V_flow_inlet

def dcdt(C,t):
    #unpack the state: four concentrations plus the current volume
    a,b,c,d,V = C
    #calculating right hand side of the four differential equations
    dadt=-k*a*b-(V_flow_inlet/V)*a
    dbdt=-k*a*b+(V_flow_inlet/V)*(CB0_inlet-b)
    dcdt= k*a*b-(V_flow_inlet/V)*c
    dddt= k*a*b-(V_flow_inlet/V)*d
    return [dadt,dbdt,dcdt,dddt,V_flow_inlet]

#creating time array, initial conditions array, and calling odeint
t=np.linspace(0,500,100)
initial_conditions=[.05,0,0,0,V_reactor_initial] # [CA0, CB0, CC0, CD0, V0_reactor]
C=integrate.odeint(dcdt,initial_conditions,t)
plt.plot(t,C[:,0:4])
with the result shown in the plot (figure omitted).

High frequency noise when solving a differential equation

I'm trying to simulate a simple diffusion based on Fick's 2nd law.
from pylab import *
import numpy as np

gridpoints = 128

def profile(x):
    range = 2.
    straggle = .1576
    dose = 1
    return dose/(sqrt(2*pi)*straggle)*exp(-(x-range)**2/2/straggle**2)

x = linspace(0,4,gridpoints)
nx = profile(x)
dx = x[1] - x[0] # use np.diff(x) if x is not uniform
dxdx = dx**2

figure(figsize=(12,8))
plot(x,nx)

timestep = 0.5
steps = 21
diffusion_coefficient = 0.002
for i in range(steps):
    coefficients = [-1.785714e-3, 2.539683e-2, -0.2e0, 1.6e0,
                    -2.847222e0,
                    1.6e0, -0.2e0, 2.539683e-2, -1.785714e-3]
    ccf = (np.convolve(nx, coefficients) / dxdx)[4:-4] # second order derivative
    nx = timestep*diffusion_coefficient*ccf + nx
    plot(x,nx)
For the first few time steps everything looks fine, but then I start to get high frequency noise, due to build-up of numerical errors which are amplified through the second derivative. Since it seems hard to increase the float precision, I'm hoping there is something else I can do to suppress this. I already increased the number of points used to construct the 2nd derivative.
I don't have time to study your solution in detail, but it seems that you are solving the partial differential equation with a forward Euler scheme. This is pretty easy to implement, as you show, but it can become numerically unstable if your timestep is too large. Your only options then are to reduce the timestep or to coarsen the spatial resolution (increase dx).
The easiest way to explain this is the 1-D case: assume your concentration is a function of spatial coordinate x and timestep i. If you do all the math (write down your equations and substitute the partial derivatives with finite differences; it should be pretty easy), you will probably get something like this:
C(x, i+1) = [1 - 2 * k] * C(x, i) + k * [C(x - 1, i) + C(x + 1, i)]
so the concentration of a point on the next step depends on its previous value and the ones of its two neighbors. It is not too hard to see that when k = 0.5, every point gets replaced by the average of its two neighbors, so a concentration profile of [...,0,1,0,1,0,...] will become [...,1,0,1,0,1,...] on the next step. If k > 0.5, such a profile will blow up exponentially. You calculate your second order derivative with a longer convolution (I effectively use [1,-2,1]), but I guess that does not change anything for the instability problem.
I don't know about normal diffusion, but from experience with thermal diffusion I would guess that k scales as dt * diffusion_coeff / dx^2. You thus have to choose your timestep small enough that your simulation does not become unstable. To keep the simulation stable but still as fast as possible, choose your parameters so that k is a bit smaller than 0.5. Something similar can be derived for the 2-D and 3-D cases. The easiest way to achieve this is to increase dx, since your total calculation time scales as 1/dx^3 for a 1-D problem, 1/dx^4 for 2-D problems, and even 1/dx^5 for 3-D problems.
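Plugging in the numbers from the question gives a quick check of this criterion (under the assumption k = dt * D / dx^2 for the simple [1,-2,1] stencil; the 9-point stencil will have a similar bound):

D = 0.002
dx = 4.0/127                 #grid spacing of linspace(0, 4, 128)
dt = 0.5                     #the timestep used in the question
k_stab = dt*D/dx**2          #about 1.0, i.e. well above 0.5: unstable
dt_max = 0.5*dx**2/D         #about 0.25, the largest stable timestep by this estimate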
There are better methods for solving diffusion equations; I believe Crank-Nicolson is at least standard for solving heat equations (which are also diffusion problems). The 'problem' is that it is an implicit method, which means you have to solve a set of equations to calculate your 'concentration' at the next timestep, which is a bit of a pain to implement. But this method is guaranteed to be numerically stable, even for big timesteps.
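As a sketch of what one Crank-Nicolson step involves (uniform grid, fixed endpoint values; the D, dx, dt numbers are the illustrative ones from above):

import numpy as np
from scipy.linalg import solve_banded

D, dx, dt = 0.002, 4.0/127, 0.5
r = D*dt/(2*dx**2)

def crank_nicolson_step(u):
    m = len(u) - 2                   #number of interior points
    #banded form of (I - r*A), A being the [1,-2,1] second-difference matrix
    ab = np.zeros((3, m))
    ab[0, 1:] = -r                   #superdiagonal
    ab[1, :] = 1 + 2*r               #diagonal
    ab[2, :-1] = -r                  #subdiagonal
    #explicit half: (I + r*A) applied to the interior values
    rhs = u[1:-1] + r*(u[:-2] - 2*u[1:-1] + u[2:])
    rhs[0] += r*u[0]                 #fixed-boundary terms of the implicit half
    rhs[-1] += r*u[-1]
    u_new = u.copy()
    u_new[1:-1] = solve_banded((1, 1), ab, rhs)  #implicit half
    return u_new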

How to combine an ODE system with a FEM system

I have a dynamic model set up as a (stiff) system of ODEs. I currently solve this with CVODE (from the SUNDIALS package in the Assimulo python package) and all is good.
I now want to add a new 3D heat sink (with temperature-dependent thermal parameters) to the problem. Instead of writing out all the equations for the 3D heat equation from scratch, my idea is to use an existing FEM or FVM framework that gives me an interface allowing me to pass the (t, y) for the 3D block to a routine and get back the residuals y'. The principle is to use the equations from the FEM system, but not its solver. CVODE can exploit sparsity, but the combined system is expected to solve more slowly than the FEM system would on its own, since the FEM solver is tailored for it.
# pseudocode of a residuals function for CVODE
def residual(t, y):
    # ODE system of n equations
    res[0] = <function of t,y>
    res[1] = <function of t,y>
    ...
    res[n] = <function of t,y>
    # Here we add the FEM/FVM residuals
    for i in range(FEMcount):
        res[n+1+i] = FEMequations[i](t,y)
    return res
My question is whether (a) this approach is sane, and (b) is there a FEM or FVM library that will easily let me treat it as a system of equations, such that I can "tack it on" to my existing set of ODE equations.
If I can't let the two systems share the same time axis, then I will have to run them in a stepping mode, where I run one model for a short time, update the boundary conditions for the other, run that one, update the first model's BCs, and so on.
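In that stepping mode, the coupling loop might look like the following sketch (in the same pseudocode spirit as above; all the function names are placeholders, not a real API):

# pseudocode of a co-simulation loop with exchanged boundary conditions
t = 0
while t < t_final:
    ode_state = step_ode_model(ode_state, t, t + dt_couple)
    fem_bcs = extract_boundary_conditions(ode_state)    # hand BCs over
    fem_state = step_fem_model(fem_state, t, t + dt_couple, fem_bcs)
    ode_bcs = extract_boundary_conditions(fem_state)    # and back again
    apply_boundary_conditions(ode_state, ode_bcs)
    t = t + dt_couple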
I have some experience with the wonderful FiPy library, and I expect to eventually end up using it in the manner described above. But I want to know about experience with other systems for problems of this nature, and about other approaches that I have missed.
Edit: I now have some example code that appears to be working, showing how the FiPy mesh diffusion residuals can be solved with CVODE. However, this is only one approach (using FiPy) and the remainder of my other questions and concerns still stand. Any suggestions welcome.
from fipy import *
from fipy.solvers.scipy import DefaultSolver
solverFIPY = DefaultSolver()
from assimulo.solvers import CVode as solverASSIMULO
from assimulo.problem import Explicit_Problem as Problem
# FiPy Setup - Using params from the Mesh1D example
###################################################
nx = 50; dx = 1.; D = 1.
mesh = Grid1D(nx = nx, dx = dx)
phi = CellVariable(name="solution variable", mesh=mesh, value=0.)
valueLeft, valueRight = 1., 0.
phi.constrain(valueRight, mesh.facesRight)
phi.constrain(valueLeft, mesh.facesLeft)
# Instead of eqX = TransientTerm() == ExplicitDiffusionTerm(coeff=D),
# Rather just operate on the diffusion term. CVODE will calculate the
# Transient side
edt = ExplicitDiffusionTerm(coeff=D)
timeStepDuration = 0.9 * dx**2 / (2 * D)
steps = 100
# For comparison with an analytical solution - again,
# taken from the Mesh1D.py example
phiAnalytical = CellVariable(name="analytical value", mesh=mesh)
x = mesh.cellCenters[0]
t = timeStepDuration * steps
from scipy.special import erf
phiAnalytical.setValue(1 - erf(x / (2 * numerix.sqrt(D * t))))
if __name__ == '__main__':
    viewer = Viewer(vars=(phi, phiAnalytical))#, datamin=0., datamax=1.)
    viewer.plot()
    raw_input('Press a key...')
# Now for the Assimulo/Sundials solver setup
############################################
def residual(t, X):
    # Pretty straightforward, phi is the unknown
    phi.value = X # This is a vector, 50 elements
    # Can immediately return the residuals, CVODE sees this vector
    # of 50 elements as X'(t), which is like TransientTerm() from FiPy
    return edt.justResidualVector(var=phi, solver=solverFIPY)
x0 = phi.value
t0 = 0.
model = Problem(residual, x0, t0)
simulation = solverASSIMULO(model)
tfinal = steps * timeStepDuration # s,
cell_tol = [1.0e-8]*50
simulation.atol = cell_tol
simulation.rtol = 1e-6
simulation.iter = 'Newton'
t, x = simulation.simulate(tfinal, 0)
print x[-1]
# Write back the answer to compare
phi.value = x[-1]
viewer.plot()
raw_input('Press a key...')
This will produce a graph showing a perfect match between the numerical and analytical solutions (figure omitted).
An ODE is a differential equation in one independent variable.
An FEM model is for problems that are spatial, i.e., problems in higher dimensions. You may instead want a finite difference method: it's easier to solve and to understand from the perspective of someone coming from the ODE world.
The question I think you should really be asking is how to take your ODE and transfer it to a 3D problem space.
Multidimensional partial differential equations are difficult to solve, but I'll refer you to a CFD method for doing just that, although only in 2D: http://lorenabarba.com/blog/cfd-python-12-steps-to-navier-stokes/
It should take you a solid afternoon to get through that! Good luck!
