I'm trying to plot the exact solution to a differential equation (a radioactive leak model) in Python 2.7 with matplotlib. When plotting the graph with the Euler method or SciPy I get the expected results, but with the exact solution the output is a straight-line graph rather than the expected saturating curve.
Here is my code:
import math
import numpy as np
import matplotlib.pyplot as plt
#define parameters
r = 1
beta = 0.0864
x0 = 0
maxt = 100.0
tstep = 1.0
#Make arrays for time and radioactivity
t = np.zeros(1)
#Implementing model with Exact solution where Xact = Exact Solution
Xact = np.zeros(1)
e = math.exp(-(beta/t))
while (t[-1] < maxt):
    t = np.append(t, t[-1] + tstep)
    Xact = np.append(Xact, Xact[-1] + ((r/beta) + (x0 - r/beta)*e))
#plot results
plt.plot(t, Xact,color="green")
I realise that my problem may be due to an incorrect equation, but I'd be very grateful if someone could point out an error in my code. Cheers.
You probably want e to depend on t, as in
def e(t): return np.exp(-beta*t)
and then use
Xact = np.append(Xact, (r/beta) + (x0 - r/beta)*e(t[-1]))
But you can write all of that more compactly as
t = np.arange(0, maxt+tstep/2, tstep)
plt.plot(t, (r/beta)+(x0-r/beta)*np.exp(-beta*t), color="green")
I'm trying to solve a 2D delay differential equation with some parameters. The problem is that I can't reproduce the correct solution (which I know), and I suspect the integration step is at fault, but I'm not sure and I don't really understand how JiTCDDE works.
This is the DDE (written out; it matches the model below):
dy0/dt = (1/T) * (p*y0(t) + alpha*y1(t - tau))
dy1/dt = (1/T) * (r*y0(t) + q*y1(t))
This is my model:
import numpy as np
from jitcdde import jitcdde, y, t

def model(p, q, r, alpha, T, tau, tmax, ci):
    f = [1/T * (p*y(0) + alpha*y(1, t-tau)), 1/T * (r*y(0) + q*y(1))]
    DDE = jitcdde(f)
    DDE.constant_past(ci)
    DDE.step_on_discontinuities()
    data = []
    for time in np.arange(DDE.t, DDE.t+tmax, 0.09):
        data.append(DDE.integrate(time)[1])
    return data
I'm only interested in the y(1) component of the solution.
And the parameters:
T=32 #time scale
p=-2.4/T
q=-1.12/T
r=1.5/T
alpha=.6/T
tau=T*2.4 #delay
tmax=400
ci = np.array([4080, 0])
This is the plot I get with that model and those parameters:
And this (the blue line) is the correct solution (someone gave me the plot, not the data):
The following code works for me and produces a result similar to your control:
import numpy as np
from jitcdde import y,t,jitcdde
T = 1
p = -2.4/T
q = -1.12/T
r = 1.5/T
alpha = .6/T
tau = T*2.4
tmax = 10
ci = np.array([4080, 0])
f = [
    1/T * (p*y(0) + alpha*y(1, t-tau)),
    1/T * (r*y(0) + q*y(1)),
]
DDE = jitcdde(f)
DDE.constant_past(ci)
DDE.adjust_diff()
times = np.linspace( DDE.t, DDE.t+tmax, 1000 )
values = [DDE.integrate(time)[1] for time in times]
from matplotlib.pyplot import subplots
fig,axes = subplots()
axes.plot(times,values)
fig.show()
Note the following:
I set T=1 (and adjusted tmax accordingly). I presume that there still is a mistake here.
I used adjust_diff instead of step_on_discontinuities. The problem with your model is that it has a heavy discontinuity in the derivative at t=0. (Some discontinuity is normal, but not one this severe.) This causes problems with the adaptive step-size control at the very beginning of the integration. Such a discontinuity suggests that there is something wrong with either your model or your initial past. The latter doesn't matter if you only care about long-term behaviour, but that doesn't seem to be the case here. I added a passage to the documentation about this kind of issue.
I have modeled Brownian motion in both the x and y directions as random walks. I have plotted the data on a 2-D plot, but while it is not so difficult to trace the simulated particle's path from the origin, I want to see the time evolution of the particle's path visually represented on the plot, whether by changing the color of the line over time, by adding a third dimension to the plot to represent time, or by using some sort of dynamic graph type.
I haven't tried implementing anything, but I have tried to look at what options are available to me. I want to avoid using a 3d plot if possible. That said, I am open to using something other than matplotlib if it makes sense for this situation (like pyqtgraph).
Here is my code:
import random
import numpy as np
import matplotlib.pyplot as plt
#n is how many trajectory evaluations
n = 1000
t= np.linspace(0,10000,num=n)
def brownianMotion(time):
    B = [0]
    for t in range(len(time)-1):
        nrand = random.gauss(0, (time[t+1] - time[t])**.5)
        B.append(B[t] + nrand)
    return B
xpath = brownianMotion(t)
ypath = brownianMotion(t)
def plot(x, y):
    plt.figure()
    xplot = np.insert(x, 0, 0)
    yplot = np.insert(y, 0, 0)
    plt.plot(xplot, yplot, 'go-', lw=1, ms=.1)
    #np.arange(0,n+1),'go-', lw=1, ms = .1)
    plt.xlim([-150, 150])
    plt.ylim([-150, 150])
    plt.title('Brownian Motion')
    plt.xlabel('xDisplacement')
    plt.ylabel('yDisplacement')
    plt.show()
plot(xpath,ypath)
All in all, this is just for fun and something I did while bored at work. All suggestions are welcome! Thank you for your time!
Please let me know if I should post a picture of my code's output.
Edit: Additionally, if I wanted to represent multiple particles in the same graph, how could I do that so that the multiple paths are distinguishable? I have modified my code for this purpose as shown below, but currently it outputs a messy green mixture of particles.
import random
import numpy as np
import matplotlib.pyplot as plt
nparticles = 20
#n is how many trajectory evaluations
n = 100
t= np.linspace(0,1000,num=n)
def brownianMotion(time):
    B = [0]
    for t in range(len(time)-1):
        nrand = random.gauss(0, (time[t+1] - time[t])**.5)
        B.append(B[t] + nrand)
    return B
xs = []
ys = []
for i in range(nparticles):
    xs.append(brownianMotion(t))
    ys.append(brownianMotion(t))
#xpath = brownianMotion(t)
#ypath = brownianMotion(t)
def plot(x, y):
    plt.figure()
    for xpath, ypath in zip(x, y):
        xplot = np.insert(xpath, 0, 0)
        yplot = np.insert(ypath, 0, 0)
        plt.plot(xplot, yplot, 'go-', lw=1, ms=.1)
        #np.arange(0,n+1),'go-', lw=1, ms = .1)
    plt.xlim([np.amin(x), np.amax(x)])
    plt.ylim([np.amin(y), np.amax(y)])
    plt.title('Brownian Motion')
    plt.xlabel('xDisplacement')
    plt.ylabel('yDisplacement')
    plt.show()
plot(xs,ys)
I am trying to solve a dynamical system with three state variables V1, V2, I3 and then plot them in a 3D plot. My code so far looks as follows:
from scipy.integrate import ode
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import math
def ID(V, a, b):
    return a*(math.exp(b*V) - math.exp(-b*V))

def dynamical_system(t, z, C1, C2, L, R1, R2, R3, RN, a, b):
    V1, V2, I3 = z
    f = [(1/C1)*(V1*(1/RN - 1/R1) - ID(V1-V2, a, b) - (V1-V2)/R2),
         (1/C2)*(ID(V1-V2, a, b) + (V1-V2)/R2 - I3),
         (1/L)*(-I3*R3 + V2)]
    return f
# Create an `ode` instance to solve the system of differential
# equations defined by `dynamical_system`, and set the solver method to 'dopri5'.
solver = ode(dynamical_system)
solver.set_integrator('dopri5')
# Set the initial value z(0) = z0.
C1=10
C2=100
L=0.32
R1=22
R2=14.5
R3=100
RN=6.9
a=2.295*10**(-5)
b=3.0038
solver.set_f_params(C1,C2,L,R1,R2,R3,RN,a,b)
t0 = 0.0
z0 = [-3, 0.5, 0.25] #here you can set the inital values V1,V2,I3
solver.set_initial_value(z0, t0)
# Create the array `t` of time values at which to compute
# the solution, and create an array to hold the solution.
# Put the initial value in the solution array.
t1 = 25
N = 200 #number of iterations
t = np.linspace(t0, t1, N)
sol = np.empty((N, 3))
sol[0] = z0
# Repeatedly call the `integrate` method to advance the
# solution to time t[k], and save the solution in sol[k].
k = 1
while solver.successful() and solver.t < t1:
    solver.integrate(t[k])
    sol[k] = solver.y
    k += 1
xlim = (-4,1)
ylim= (-1,1)
zlim=(-1,1)
fig=plt.figure()
ax=fig.gca(projection='3d')
#ax.view_init(35,-28)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.set_zlim(zlim)
print sol[:,0]
print sol[:,1]
print sol[:,2]
ax.plot3D(sol[:,0], sol[:,1], sol[:,2], 'gray')
plt.show()
Printing the arrays that should hold the solutions, sol[:,0] etc., shows that they are filled with the initial value throughout. Can anyone help? Thanks!
Use from __future__ import division.
I can't reproduce your problem: I see a gradual change from -3 to -2.46838127, from 0.5 to 0.38022886 and from 0.25 to 0.00380239 (with a sharp change from 0.25 to 0.00498674 in the first step). This is with Python 3.7.0, NumPy version 1.15.3 and SciPy version 1.1.0.
Given that you are using Python 2.7, integer division may be the culprit here. Quite a number of your constants are integers, and you have a bunch of 1/<constant> integer divisions in your equation.
Indeed, if I replace / with // in my version (for Python 3), I can reproduce your problem.
Simply add from __future__ import division at the top of your script to solve your problem.
Also add from __future__ import print_function at the top and replace print <something> with print(<something>), and your script is fully Python 2 and 3 compatible.
I am using the standard differential equation for SHM, a = -w^2*x, for this simulation, with Python's odeint as the solver. Despite editing it several times, I keep getting the output as a straight line instead of a sinusoidal curve. The code is:
from scipy.integrate import odeint
from pylab import *
k = 80 #Spring Constant
m = 8 #mass of block
omega = sqrt(k/m) #angular frequency
def deriv(x, t):
    return array([x[1], (-1)*(k/m)*x[0]])
t = linspace(0,3.62,100)
xinit = array([0,0])
x = odeint(deriv,xinit,t)
acc_mass = zeros(t.shape[0])
for q in range(0, t.shape[0]):
    acc_mass[q] = (-1)*(omega**2)*x[q][0]
f, springer = subplots(3, sharex = True)
springer[0].plot(t,x[:,0],'r')
springer[0].set_title('Position Variation')
springer[1].plot(t,x[:,1],'b')
springer[1].set_title('Velocity Variation')
springer[2].plot(t,acc_mass,'g')
springer[2].set_title('Acceleration Variation')
As pointed out by Warren Weckesser, the code is correct, but since the initial conditions are given as 0 for both displacement and velocity, the output is also 0. Hence, on his advice, I changed the initial conditions and got the required output, a sinusoidal curve.
Here is a complete example of SHM using odeint:
http://nbviewer.ipython.org/gist/dpsanders/d417c1ffbb76f13f678c#2D-equations
I'm trying to perform a simple time series prediction using support vector regression.
I am trying to understand the answer provided here.
I adapted Tom's code to reflect the answer provided:
import numpy as np
from matplotlib import pyplot as plt
from sklearn.svm import SVR
X = np.arange(0,100)
Y = np.sin(X)
a = 0
b = 10
x = []
y = []
while b <= 100:
    x.append(Y[a:b])
    a += 1
    b += 1
b = 10
while b <= 90:
    y.append(Y[b])
    b += 1
svr_rbf = SVR(kernel='rbf', C=1e5, gamma=1e5)
y_rbf = svr_rbf.fit(x[:81], y).predict(x)
figure = plt.figure()
tick_plot = figure.add_subplot(1, 1, 1)
tick_plot.plot(X, Y, label='data', color='green', linestyle='-')
tick_plot.axvline(x=X[-10], alpha=0.2, color='gray')
tick_plot.plot(X[10:], y_rbf[:-1], label='data', color='blue', linestyle='--')
plt.show()
However, I still get the same behavior -- the prediction just returns the value from the last known step. Strangely, if I set the kernel to linear the result is much better. Why doesn't the rbf kernel prediction work as intended?
Thank you.
I understand this is an old question, but I will answer it as other people might benefit from the answer.
The values you are using for C and gamma are most likely the issue, given that your example works with a linear kernel but not with rbf.
C and gamma are SVM parameters used for nonlinear kernels. For a good intuitive explanation of what C and gamma do, have a look here: http://scikit-learn.org/stable/auto_examples/svm/plot_rbf_parameters.html
In order to predict the values of a sinusoid, try C = 1 and gamma = 0.1. It performs much better than with the values you have.