I have modeled Brownian motion in both the x and y directions as random walks. I have plotted the data on a 2-D plot and, while it is not hard to trace the simulated particle's path from the origin, I want the time evolution of the path to be visually represented on the plot itself, whether by changing the color of the line over time, by adding a third dimension to represent time, or by using some sort of dynamic graph type.
I haven't tried implementing anything, but I have tried to look at what options are available to me. I want to avoid using a 3d plot if possible. That said, I am open to using something other than matplotlib if it makes sense for this situation (like pyqtgraph).
Here is my code:
import random
import numpy as np
import matplotlib.pyplot as plt
#n is how many trajectory evaluations
n = 1000
t = np.linspace(0, 10000, num=n)

def brownianMotion(time):
    B = [0]
    for t in range(len(time) - 1):
        nrand = random.gauss(0, (time[t+1] - time[t])**.5)
        B.append(B[t] + nrand)
    return B
xpath = brownianMotion(t)
ypath = brownianMotion(t)
def plot(x, y):
    plt.figure()
    xplot = np.insert(x, 0, 0)
    yplot = np.insert(y, 0, 0)
    plt.plot(xplot, yplot, 'go-', lw=1, ms=.1)
    #np.arange(0,n+1),'go-', lw=1, ms = .1)
    plt.xlim([-150, 150])
    plt.ylim([-150, 150])
    plt.title('Brownian Motion')
    plt.xlabel('xDisplacement')
    plt.ylabel('yDisplacement')
    plt.show()
plot(xpath,ypath)
All in all, this is just for fun and something I did while bored at work. All suggestions are welcome! Thank you for your time!
Please let me know if I should post a picture of my code's output.
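For concreteness, the color-over-time option is roughly what I have in mind; here is a minimal sketch of it (it reuses the xpath, ypath, and t arrays defined above; the scatter-plus-colorbar approach and the helper name are just illustrative, not necessarily the best choice):
import numpy as np
import matplotlib.pyplot as plt

def plot_colored_by_time(x, y, time):
    plt.figure()
    # faint line underneath so the path is still traceable
    plt.plot(x, y, color='0.8', lw=0.5, zorder=1)
    # points colored by time; the colorbar then acts as the "time axis"
    sc = plt.scatter(x, y, c=time, cmap='viridis', s=4, zorder=2)
    plt.colorbar(sc, label='time')
    plt.title('Brownian Motion')
    plt.xlabel('xDisplacement')
    plt.ylabel('yDisplacement')
    plt.show()

# plot_colored_by_time(xpath, ypath, t)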
Edit: Additionally, if I wanted to represent multiple particles in the same graph, how could I do that so that the individual paths are distinguishable? I have modified my code for this purpose (shown below), but currently it outputs a messy green mixture of particles.
import random
import numpy as np
import matplotlib.pyplot as plt
nparticles = 20
#n is how many trajectory evaluations
n = 100
t = np.linspace(0, 1000, num=n)

def brownianMotion(time):
    B = [0]
    for t in range(len(time) - 1):
        nrand = random.gauss(0, (time[t+1] - time[t])**.5)
        B.append(B[t] + nrand)
    return B
xs = []
ys = []
for i in range(nparticles):
    xs.append(brownianMotion(t))
    ys.append(brownianMotion(t))
#xpath = brownianMotion(t)
#ypath = brownianMotion(t)
def plot(x, y):
    plt.figure()
    for xpath, ypath in zip(x, y):
        xplot = np.insert(xpath, 0, 0)
        yplot = np.insert(ypath, 0, 0)
        plt.plot(xplot, yplot, 'go-', lw=1, ms=.1)
        #np.arange(0,n+1),'go-', lw=1, ms = .1)
    plt.xlim([np.amin(x), np.amax(x)])
    plt.ylim([np.amin(y), np.amax(y)])
    plt.title('Brownian Motion')
    plt.xlabel('xDisplacement')
    plt.ylabel('yDisplacement')
    plt.show()
plot(xs,ys)
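One idea I am considering for making the paths distinguishable, as a minimal sketch (it reuses the xs and ys lists built above; plot_particles and the 'tab20' colormap are just illustrative choices): drop the hard-coded 'g' and give each particle its own color.
import numpy as np
import matplotlib.pyplot as plt

def plot_particles(x, y):
    plt.figure()
    cmap = plt.get_cmap('tab20')
    for k, (xpath, ypath) in enumerate(zip(x, y)):
        # one color per particle instead of green for everything
        plt.plot(xpath, ypath, '-', lw=1, color=cmap(k % 20), label='particle %d' % k)
    plt.xlim([np.amin(x), np.amax(x)])
    plt.ylim([np.amin(y), np.amax(y)])
    plt.title('Brownian Motion')
    plt.xlabel('xDisplacement')
    plt.ylabel('yDisplacement')
    plt.legend(fontsize='small', ncol=2)
    plt.show()

# plot_particles(xs, ys)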
Related
I would like to write code in Python that evaluates the time evolution of a density distribution, p(x,y). The initial condition is p(t=0,x,y) = exp[-((x-500)^2)/500], and the update formula is in the code below, where t is the time index, i the space index in the x-direction, j the space index in the y-direction, and v = 0.8.
My goal is to run the scheme for 10 iterations and plot the results at the final time step (t=9). What I'm getting is a big array just filled with zeros. I think it's because I am not using the 3D arrays correctly. Does anyone have any suggestions? Thank you.
My attempt:
import numpy as np
import matplotlib.pyplot as plt
#Input Parameters
Nx = 1000 #number of grid points in x-direction
Ny = 500 #number of grid points in y-direction
T = 10 #number of time steps
v = 0.8
p = np.zeros((T,Nx,Ny))
P = np.zeros((T,Nx,Ny))
for t in range(0, T-1):
    for i in range(0, Nx-1):
        for j in range(0, Ny-1):
            P[t, i, j] = p[t, i, j] - ((v/2)*(p[t, i+1, j] - p[t, i, j]))
            p[0, i, j] = np.exp(((-1*(i-500))**2)/500)

x = P[9, i]
y = P[9, j]
print(x)
plt.plot(x,y)
plt.xlim([0,1000])
plt.ylim([0,500])
plt.xlabel('x-direction')
plt.ylabel('y-direction')
plt.title("Density Distribution After 10 Iterations")
Looks like you only fill the values for t in range(0, T-1), which stops at t = 8, yet you are trying to read x = P[9,i]. Those entries never get filled, so of course they are all 0.
Try range(0, T) instead; it loops over 0, 1, 2, ..., T-1. The same goes for the spatial loops, range(0, Nx) and range(0, Ny), though note that the p[t,i+1,j] term limits how far the i loop can go (see the sketch below).
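A minimal sketch of the loop fix (I kept the i and j loops at Nx-1 and Ny-1 here because p[t, i+1, j] would otherwise index past the last grid point; how you handle that boundary is up to your scheme):
for t in range(0, T):              # runs t = 0, 1, ..., T-1, so P[9] gets filled
    for i in range(0, Nx - 1):     # i+1 is accessed below, so stop one short
        for j in range(0, Ny - 1):
            P[t, i, j] = p[t, i, j] - (v/2)*(p[t, i+1, j] - p[t, i, j])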
How can I validate whether the downsampled output is correct? I have made an example below, but I am not sure whether the output is right or not.
Any ideas on how to validate it?
Code
import numpy as np
import matplotlib.pyplot as plt # For plotting
from scipy import signal
import mne
fs = 100 # sample rate
rsample=50 # downsample frequency
fTwo=400 # frequency of the signal
x = np.arange(fs)
y = [ np.sin(2*np.pi*fTwo * (i/fs)) for i in x]
f_res = signal.resample(y, rsample)
xnew = np.linspace(0, 100, f_res.size, endpoint=False)
#
# ##############################
#
plt.figure(1)
plt.subplot(211)
plt.stem(x, y)
plt.subplot(212)
plt.stem(xnew, f_res, 'r')
plt.show()
Plotting the data is a good first take at a verification. Here I made a regular plot with the points connected by lines. The lines are useful since they give a guide for where you expect the down-sampled data to lie, and also emphasize what the down-sampled data is missing. (It would also work to only show lines for the original data, but lines, as in a stem plot, are too confusing, imho.)
import numpy as np
import matplotlib.pyplot as plt # For plotting
from scipy import signal
fs = 100 # sample rate
rsample=43 # downsample frequency
fTwo=13 # frequency of the signal
x = np.arange(fs, dtype=float)
y = np.sin(2*np.pi*fTwo * (x/fs))
print(y)
f_res = signal.resample(y, rsample)
xnew = np.linspace(0, 100, f_res.size, endpoint=False)
#
# ##############################
#
plt.figure()
plt.plot(x, y, 'o')
plt.plot(xnew, f_res, 'or')
plt.show()
A few notes:
If you're trying to make a general algorithm, use numbers that aren't neat multiples of each other; otherwise you can easily introduce bugs that only stay hidden when things are even multiples. Similarly, if you need to zoom in to verify, look at a few random places, not, for example, only the start.
Note that I changed fTwo to be significantly less than the number of samples: you need more than two samples per cycle (the Nyquist criterion) for the sampled signal to make sense.
I also removed the loop for calculating y; in general, you should vectorize calculations when using numpy.
The spectrum of the resampled signal should show a tone at the same frequency as the input signal, just within a smaller Nyquist bandwidth.
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
import scipy.fftpack as fft
fs = 100 # sample rate
rsample=50 # downsample frequency
fTwo=10 # frequency of the signal
n = np.arange(1024)
y = np.sin(2*np.pi*fTwo/fs*n)
y_res = signal.resample(y, len(n)//2)
Y = fft.fftshift(fft.fft(y))
f = -fs*np.arange(-512, 512)/1024
Y_res = fft.fftshift(fft.fft(y_res, 1024))
f_res = -fs/2*np.arange(-512, 512)/1024
plt.figure(1)
plt.subplot(211)
plt.stem(f, abs(Y))
plt.subplot(212)
plt.stem(f_res, abs(Y_res))
plt.show()
The tone is still at 10.
If you downsample a signal, both signals still have exactly the same value at a given time, so just loop through time and check that the values agree. In your case you go from a sample rate of 100 to 50. Assuming you have 1 second worth of data from building your x from fs, loop from t = 0 to t = 1 in 1/50th increments and make sure that Yd(t) = Ys(t), where Yd is the downsampled signal and Ys is the originally sampled signal. Or, to say it simply, Yd(n) = Ys(2n) for n = 0, 1, 2, ..., total_samples - 1, as in the sketch below.
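A minimal sketch of that check (assuming 1 second of a 13 Hz tone sampled at 100 Hz and a factor-of-2 downsample; since scipy.signal.resample is FFT based, the comparison uses a small tolerance rather than exact equality):
import numpy as np
from scipy import signal

fs = 100                                   # original sample rate
y = np.sin(2*np.pi*13*np.arange(fs)/fs)    # 1 second of a 13 Hz tone
y_down = signal.resample(y, fs // 2)       # downsample by a factor of 2

# Yd(n) should match Ys(2n); allow a tolerance because FFT resampling is
# only exact for signals that are periodic over the analysis window.
print(np.allclose(y_down, y[::2], atol=0.1))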
I am trying to plot a grid of vectors. However, when I load my files the vectors point 45 degrees in the wrong direction, while still following the pattern in my data. The quiver how-to says the arrows point at 45 degrees when the U and V components are equal; can this be changed?
Also, when I tried using some random numbers, the quiver function behaved quite unpredictably (whether I passed raw numbers or generated an angle grid with arctan(y/x)*180/3.1415). My grid of vectors should look like it is rotating, a vortex around the centre, but instead it looks like an antivortex blowing out of the centre.
from pylab import *
from numpy import ma
import scipy.io as c
import math
X,Y = meshgrid( arange(0,100,1),arange(0,100,1) )
ufile = np.genfromtxt(r'x.txt')
vfile = np.genfromtxt(r'y.txt')
U = ufile
V = vfile
angle = (((abs(U)/U+1)/2)*((abs(V)/V+1)/2)*arctan(V/U)+((abs(V)/V+1)/2)*((abs(U)/U-1)/2)*(-arctan(V/U)+math.pi)+((abs(V)/V-1)/2)*((abs(U)/U-1)/2)*(arctan(V/U)+math.pi)+((abs(U)/U+1)/2)*((abs(V)/V-1)/2)*(-arctan(V/U)+2*math.pi))*180/math.pi+90
scale = 10
figure()
Q = quiver( X[::scale, ::scale], Y[::scale, ::scale], U[::scale, ::scale], V[::scale, ::scale],
pivot='mid', color='k', units='xy', headaxislength=20, angles=angle[::scale, ::scale] )
axis([0, 100, 0, 100])
show()
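For reference, this is roughly the behaviour I am after, as a minimal synthetic sketch (a vortex built from the grid itself rather than from my x.txt/y.txt files; angles='xy' is the quiver option I believe addresses the 45-degree convention mentioned in the how-to):
import numpy as np
import matplotlib.pyplot as plt

# Synthetic vortex: u = -(y - y0), v = (x - x0) rotates counter-clockwise
# about the centre (50, 50).
X, Y = np.meshgrid(np.arange(0, 100, 1), np.arange(0, 100, 1))
U = -(Y - 50.0)
V = (X - 50.0)

scale = 10
plt.figure()
# angles='xy' makes each arrow point from (x, y) toward (x+u, y+v) in data
# coordinates, instead of the default 'uv' convention that draws 45-degree
# arrows whenever U == V.
plt.quiver(X[::scale, ::scale], Y[::scale, ::scale],
           U[::scale, ::scale], V[::scale, ::scale],
           pivot='mid', color='k', angles='xy')
plt.axis([0, 100, 0, 100])
plt.show()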
I use matplotlib's method hexbin to compute 2d histograms on my data.
But I would like to get the coordinates of the centers of the hexagons in order to further process the results.
I got the values using the get_array() method on the result, but I cannot figure out how to get the coordinates of the bins.
I tried to compute them from the number of bins and the extent of my data, but I don't know the exact number of bins in each direction. gridsize=(10,2) should do the trick, but it does not seem to work.
Any idea?
I think this works.
from __future__ import division
import numpy as np
import math
import matplotlib.pyplot as plt
def generate_data(n):
    """Make random, correlated x & y arrays"""
    points = np.random.multivariate_normal(mean=(0, 0),
                                           cov=[[0.4, 9], [9, 10]], size=int(n))
    return points

if __name__ == '__main__':
    color_map = plt.cm.Spectral_r
    n = 1e4
    points = generate_data(n)

    xbnds = np.array([-20.0, 20.0])
    ybnds = np.array([-20.0, 20.0])
    extent = [xbnds[0], xbnds[1], ybnds[0], ybnds[1]]

    fig = plt.figure(figsize=(10, 9))
    ax = fig.add_subplot(111)
    x, y = points.T

    # Set gridsize just to make them visually large
    image = plt.hexbin(x, y, cmap=color_map, gridsize=20, extent=extent,
                       mincnt=1, bins='log')
    # Note that mincnt=1 adds 1 to each count
    counts = image.get_array()
    ncnts = np.count_nonzero(np.power(10, counts))
    verts = image.get_offsets()
    for offc in xrange(verts.shape[0]):
        binx, biny = verts[offc][0], verts[offc][1]
        if counts[offc]:
            plt.plot(binx, biny, 'k.', zorder=100)

    ax.set_xlim(xbnds)
    ax.set_ylim(ybnds)
    plt.grid(True)
    cb = plt.colorbar(image, spacing='uniform', extend='max')
    plt.show()
I would love to confirm that the code by Hooked using get_offsets() works, but I tried several iterations of the code mentioned above to retrieve center positions and, as Dave mentioned, get_offsets() remains empty. The workaround I found is to use the non-empty image.get_paths() option. My code takes the mean of the vertices to find the centers, which makes it a smidge longer, but it does work.
The get_paths() option returns a set of embedded x,y vertex coordinates that can be looped over and averaged to give the center position of each hexagon.
The code that I have is as follows:
counts = image.get_array()   #counts in each hexagon, works great
verts = image.get_offsets()  #empty, don't use this
b = image.get_paths()        #this does work, gives Path([[]][]) which can be plotted
for x in xrange(len(b)):
    xav = np.mean(b[x].vertices[0:6, 0])  #center in x (RA)
    yav = np.mean(b[x].vertices[0:6, 1])  #center in y (DEC)
    plt.plot(xav, yav, 'k.', zorder=100)
I had this same problem. I think what needs to be developed is a framework to have a HexagonalGrid object which can then be applied to many different data sets (and it would be awesome to do it for N dimensions). This is possible, and it surprises me that neither SciPy nor NumPy has anything for it (furthermore, there seems to be nothing else like it, except perhaps binify).
That said, I assume you want to use hexbinning to compare multiple binned data sets. This requires some common base. I got this to work using matplotlib's hexbin the following way:
import numpy as np
import matplotlib.pyplot as plt
def get_data(mean, cov, n=1e3):
    """
    Quick fake data builder
    """
    np.random.seed(101)
    points = np.random.multivariate_normal(mean=mean, cov=cov, size=int(n))
    x, y = points.T
    return x, y

def get_centers(hexbin_output):
    """
    about 40% faster than previous post only cause you're not calculating the
    min/max every time
    """
    paths = hexbin_output.get_paths()

    v = paths[0].vertices[:-1]  # adds a value [0,0] to the end
    vx, vy = v.T

    idx = [3, 0, 5, 2]  # index for [xmin, xmax, ymin, ymax]
    xmin, xmax, ymin, ymax = vx[idx[0]], vx[idx[1]], vy[idx[2]], vy[idx[3]]

    half_width_x = abs(xmax - xmin)/2.0
    half_width_y = abs(ymax - ymin)/2.0

    centers = []
    for i in xrange(len(paths)):
        cx = paths[i].vertices[idx[0], 0] + half_width_x
        cy = paths[i].vertices[idx[2], 1] + half_width_y
        centers.append((cx, cy))
    return np.asarray(centers)

# important parts ==>

class Hexagonal2DGrid(object):
    """
    Used to fix the gridsize, extent, and bins
    """
    def __init__(self, gridsize, extent, bins=None):
        self.gridsize = gridsize
        self.extent = extent
        self.bins = bins

def hexbin(x, y, hexgrid):
    """
    To hexagonally bin the data in 2 dimensions
    """
    fig = plt.figure()
    ax = fig.add_subplot(111)
    # Note mincnt=0 so that it will return a value for every point in the
    # hexgrid, not just those with count>mincnt
    # Basically you fix the gridsize, extent, and bins to keep them the same
    # then the resulting count array is the same
    hexbin = plt.hexbin(x, y, mincnt=0,
                        gridsize=hexgrid.gridsize,
                        extent=hexgrid.extent,
                        bins=hexgrid.bins)
    # you could close the figure if you don't want it
    # plt.close(fig.number)
    counts = hexbin.get_array().copy()
    return counts, hexbin

# Example ===>
if __name__ == "__main__":
    hexgrid = Hexagonal2DGrid((21, 5), [-70, 70, -20, 20])
    x_data, y_data = get_data((0, 0), [[-40, 95], [90, 10]])
    x_model, y_model = get_data((0, 10), [[100, 30], [3, 30]])

    counts_data, hexbin_data = hexbin(x_data, y_data, hexgrid)
    counts_model, hexbin_model = hexbin(x_model, y_model, hexgrid)

    # if you want the centers, they will be the same for both
    centers = get_centers(hexbin_data)

    # if you want to ignore the cells with zeros then use the following mask.
    # But if want zeros for some bins and not others I'm not sure an elegant way
    # to do this without using the centers
    nonzero = counts_data != 0

    # now you can compare the two data sets
    variance_data = counts_data[nonzero]
    square_diffs = (counts_data[nonzero] - counts_model[nonzero])**2
    chi2 = np.sum(square_diffs/variance_data)
    print(" chi2={}".format(chi2))
I'm using matplotlib at the moment to try and visualise some data I am working on. I'm trying to plot around 6500 points and the line y = x on the same graph, but am having some trouble doing so. I can only seem to get the points to render and not the line itself. I know matplotlib doesn't plot equations as such, rather just a set of points, so I'm trying to use an identical set of points for the x and y coordinates to produce the line.
The following is my code
from matplotlib import pyplot
import numpy
from pymongo import *
class Store(object):
    """docstring for Store"""
    def __init__(self):
        super(Store, self).__init__()
        c = Connection()
        ucd = c.ucd
        self.tweets = ucd.tweets

    def fetch(self):
        x = []
        y = []
        for t in self.tweets.find():
            x.append(t['positive'])
            y.append(t['negative'])
        return [x, y]

if __name__ == '__main__':
    c = Store()
    array = c.fetch()
    t = numpy.arange(0., 0.03, 1)
    pyplot.plot(array[0], array[1], 'ro', t, t, 'b--')
    pyplot.show()
Any suggestions would be appreciated,
Patrick
Correct me if I'm wrong (I'm not a pro at matplotlib), but 't' will simply get the value [0.].
t = numpy.arange(0.,0.03,1)
That means start at 0 and go up to 0.03 (exclusive) with a step size of 1, resulting in an array containing just [0.].
In that case you are simply plotting one point. It takes two to make a line.
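A minimal sketch of a fix (assuming the array returned by c.fetch() above; spanning the line over the range of the data is just one reasonable choice):
import numpy
from matplotlib import pyplot

# Two endpoints are enough for a straight dashed line; span it over the
# actual range of the data instead of a single point at 0.
x, y = array[0], array[1]
t = numpy.linspace(0., max(max(x), max(y)), 2)
pyplot.plot(x, y, 'ro', t, t, 'b--')
pyplot.show()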