scipy fft returns null imaginary part - python

First of all, I apologize for being an absolute beginner in both python and signal processing.
I am trying to simulate an impulse signal (a delta function) propagating along the spatial x-axis over time. Then I would like to perform a Fourier transform on amplitude vs. x for each time, and then on amplitude vs. t for each point in space. The problem I'm facing is that the Fourier coefficients are all real valued. If I imshow the imaginary part over the spatial and temporal axes, all of the values are shown to be zero. However, my understanding was that the impulse signal at t = 0, x = 0 should have a zero imaginary coefficient, but for all other t and/or x there should be a non-zero imaginary part.
Please refer to this site http://madebyevan.com/dft/ where one can interactively make waveforms and observe the Fourier transform. In the f(x) box, put "spike(x-0)", "spike(x-1)", etc. to reproduce my problem and the expected result.
I have tried the following code using scipy.fftpack. There are some extra lines to analyze the impulse signal travelling along the x-axis and in the x-t plane.
import numpy as np
from numpy import pi
import matplotlib.pyplot as plt
from scipy import signal
import math
import scipy.fftpack
from scipy import ndimage
L = 10
k = np.pi/L
w = np.pi*2
n = 5
# Number of samplepoints
Nx = 1000
Nt = 500
# sample spacing
l = 1.0/Nx
T = 1.0/Nt
x = np.linspace(0, Nx*l*L, Nx)
t = np.linspace(0, Nt*T*L, Nt)
x = np.round(x,2)
t = np.round(t,2)
# function to produce impulse
def gw(xx, tt):
    if xx == tt:
        kk = 1
    else:
        kk = 0
    return (kk)
fig = plt.figure()
yg = np.array([gw(i, j) for j in t for i in x])
YG = yg.reshape(Nt, Nx)
# how impulse propagate in x-t plane
plt.imshow(YG, interpolation='bilinear',aspect='auto')
plt.colorbar();
# how impulse propagate in x-axis for t = 2 and t = 100
fig, ax = plt.subplots()
ax.plot(x, YG[2,:], x, YG[100,:])
plt.show()
# FFT in x-axis at each point in time
yxf = np.zeros((Nt, Nx))
for i in range(Nt):
    yx = YG[i,:]
    yxf[i,:] = scipy.fftpack.fft(yx)
plt.imshow(np.imag(yxf[:,:Nx]), interpolation='bilinear',aspect='auto')
plt.colorbar();
plt.show()
# FFT in t-axis at each point in space
ytf = np.zeros((Nt, Nx))
for i in range(Nx):
    yt = YG[:,i]
    ytf[:,i] = scipy.fftpack.fft(yt)
plt.imshow(np.imag(ytf[:Nt,:]), interpolation='bilinear',aspect='auto')
plt.colorbar();
plt.show()
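One detail worth flagging in the loops above (an observation about NumPy casting, not something stated in the original post): yxf and ytf are created with np.zeros((Nt, Nx)), which is a real float64 array, so assigning the complex output of scipy.fftpack.fft into it discards the imaginary part (typically with a ComplexWarning). A minimal sketch of the same loop with complex storage, using a toy impulse placed away from x = 0 so the phase is non-trivial:

import numpy as np
import scipy.fftpack

Nt, Nx = 500, 1000
YG = np.zeros((Nt, Nx))
YG[0, 5] = 1.0                      # toy impulse away from x = 0

# Complex storage: a plain np.zeros((Nt, Nx)) array is float64, and assigning
# the complex FFT output into it drops the imaginary part.
yxf = np.zeros((Nt, Nx), dtype=complex)
for i in range(Nt):
    yxf[i, :] = scipy.fftpack.fft(YG[i, :])

print(np.abs(yxf.imag).max())       # non-zero, unlike the float-array version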

Related

scipy Fast fourier transform doesn't recognize the signal

I'm trying to get the frequency of a signal via the Fourier transform, but it's not able to recognize it (it sets the peak to f = 0). Maybe something is wrong in my code (full reproducible code at the end of the post):
PF = fft.fft(Y[0,:])/Npoints #/Npoints to get the true amplitudes
ZF = fft.fft(Y[1,:])/Npoints
freq = fft.fftfreq(Npoints,deltaT)
PF = fft.fftshift(PF) #change of ordering so that the frequencies are increasing
ZF = fft.fftshift(ZF)
freq = fft.fftshift(freq)
plt.plot(freq, np.abs(PF))
plt.show()
plt.plot(T,Y[0,:])
plt.show()
where Npoints is the number of intervals (points) and deltaT is the time spacing of the intervals. You can see that the peak is at f = 0.
I also show a plot of Y[0,:] (my signal) over time, where it's clear that the signal has a characteristic frequency.
FULL REPRODUCIBLE CODE
import numpy as np
import matplotlib.pyplot as plt
#numerical integration
from scipy.integrate import solve_ivp
import scipy.fft as fft
r=0.5
g=0.4
e=0.6
H=0.6
m=0.15
#define a vector of K between 0 and 4 with 50 components
K=np.arange(0.1,4,0.4)
tsteps=np.arange(7200,10000,5)
Npoints=len(tsteps)
deltaT=2800/Npoints #sample spacing
for k in K:
    i=0
    def RmAmodel(t,y):
        return [r*y[0]*(1-y[0]/k)-g*y[0]/(y[0]+H)*y[1], e*g*y[0]/(y[1]+H)*y[1]-m*y[1]]
    sol = solve_ivp(RmAmodel, [0,10000], [3,3], t_eval=tsteps) #t_eval specify the points where the solution is desired
    T=sol.t
    Y=sol.y
    vk=[]
    for i in range(Npoints):
        vk.append(k)
    XYZ=[vk,Y[0,:],Y[1,:]]
#check periodicity over P and Z with fourier transform
#try Fourier analysis just for the last value of K
PF = fft.fft(Y[0,:])/Npoints #/Npoints to get the true amplitudes
ZF = fft.fft(Y[1,:])/Npoints
freq = fft.fftfreq(Npoints,deltaT)
PF = fft.fftshift(PF) #change of ordering so that the frequencies are increasing
ZF = fft.fftshift(ZF)
freq = fft.fftshift(freq)
plt.plot(T,Y[0,:])
plt.show()
plt.plot(freq, np.abs(PF))
plt.show()
I can't pinpoint where the problem is. It looks like there is some problem in the FFT code. Anyway, I have little time, so I will just put up some sample code I made before. You can use it as a reference or copy-paste it. It should work.
import numpy as np
import matplotlib.pyplot as plt
from scipy.fft import fft, fftfreq
fs = 1000 #sampling frequency
T = 1/fs #sampling period
N = int((1 / T) + 1) #number of sample points for 1 second
t = np.linspace(0, 1, N) #time array
pi = np.pi
sig1 = 1 * np.sin(2*pi*10*t)
sig2 = 2 * np.sin(2*pi*30*t)
sig3 = 3 * np.sin(2*pi*50*t)
#generate signal
signal = sig1 + sig2 + sig3
#plot signal
plt.plot(t, signal)
plt.show()
signal_fft = fft(signal) #getting fft
f2 = np.abs(signal_fft / N) #full spectrum
f1 = f2[:N//2] #half spectrum
f1[1:] = 2*f1[1:] #actual amplitude
freq = fs * np.linspace(0,N/2,int(N/2)) / N #frequency array
#plot fft result
plt.plot(freq, f1)
plt.xlim(0,100)
plt.show()
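One more thing that often produces a peak at f = 0 in this kind of plot (a hedged guess about the original problem, not something verified against the solver output above): a signal that oscillates around a non-zero level puts most of its energy into the zero-frequency bin, which can dwarf every other peak. Subtracting the mean before transforming makes the oscillation's frequency visible; a small sketch with made-up numbers:

import numpy as np
import matplotlib.pyplot as plt
from scipy.fft import fft, fftfreq, fftshift

N, dt = 560, 5.0                                  # 5 s spacing, as in the question
t = np.arange(N) * dt
y = 3.0 + 0.2 * np.sin(2 * np.pi * t / 200.0)     # large offset + slow oscillation

spec = fftshift(np.abs(fft(y - y.mean()) / N))    # remove the DC offset first
freq = fftshift(fftfreq(N, dt))
plt.plot(freq, spec)                              # peak now sits at +/- 1/200 Hz
plt.show()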

How to rotate a cylinder without causing a 'sheared' appearance

I have plotted a 'tear drop' shaped cylinder in matplotlib. To obtain the tear drop shape I plotted a normal cylinder from theta = 0 to theta = pi and an ellipse from theta = pi to theta = 2pi. However, I am now trying to 'spin' the cylinder around its axis, which here is conveniently given by the z-axis.
I tried using the rotation matrix for rotating around the z-axis, which Wikipedia gives as [[cos(a), -sin(a), 0], [sin(a), cos(a), 0], [0, 0, 1]].
However, when I try to rotate through -pi/3 radians, the cylinder becomes very disfigured.
Is there any way to prevent this from happening?
Here is my code:
import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from math import sin, cos, pi
import math
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
theta = np.linspace(0,2*pi, 1200)
Z = np.linspace(0,5,1000+600)
Z,theta = np.meshgrid(Z, theta)
X = []
Y = []
R = 0.003
#calculate the x and y values
for i in theta:
    cnt = 0
    tempX = []
    tempY = []
    for j in i:
        #circle
        if(i[0]<=pi):
            tempX.append(R*cos(j))
            tempY.append(R*sin(j))
            cnt+=1
        #ellipse
        else:
            tempX.append(R*cos(j))
            tempY.append(0.006*sin(j))
    X.append(tempX)
    Y.append(tempY)
X1 = np.array(X)
Y1 = np.array(Y)
#rotate around the Z axis
a = -pi/3
for i in range(len(X)):
    X1[i] = cos(a)*X1[i]-sin(a)*Y1[i]
    Y1[i] = sin(a)*X1[i]+cos(a)*Y1[i]
#plot
ax.plot_surface(X1,Y1,Z,linewidth = 0, shade = True, alpha = 0.3)
ax.set_xlim(-0.01,0.01)
ax.set_ylim(-0.01, 0.01)
azimuth = 173
elevation = 52
ax.view_init(elevation, azimuth)
plt.show()
Your rotation is flawed: to calculate Y1[i] you need the old value of X1[i], but you have already updated it. You can try something like
X1[i], Y1[i] = cos(a)*X1[i]-sin(a)*Y1[i], sin(a)*X1[i]+cos(a)*Y1[i]
If you want to make the matrix multiplication a bit more obvious (and fix the bug) you could also do the following (please double-check that the matrix is correct and that the multiplication is in the right order, I did not test this):
rotation_matrix = np.array([[cos(a), -sin(a)], [sin(a), cos(a)]])
x, y = zip(*[(x, y) @ rotation_matrix for x, y in zip(x, y)])
The @ operator is new in Python 3.5, and for numpy arrays it's defined to be matrix multiplication. If you are on a version below 3.5 you can use np.dot instead.
The zip(*...) is necessary to get a pair of lists instead of a list of pairs. See also this answer
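Since X1 and Y1 are already NumPy arrays, the rotation can also be applied to the whole grid at once without a Python-level loop. A sketch (the grids below are simplified stand-ins for the ones built in the question, so treat it as an illustration rather than a drop-in replacement):

import numpy as np

a = -np.pi / 3
rot = np.array([[np.cos(a), -np.sin(a)],
                [np.sin(a),  np.cos(a)]])

# Stand-in coordinate grids shaped like the question's X1, Y1.
theta = np.linspace(0, 2 * np.pi, 1200)
z = np.linspace(0, 5, 1600)
zz, tt = np.meshgrid(z, theta)
X1 = 0.003 * np.cos(tt)
Y1 = 0.003 * np.sin(tt)

# 'ij,j...->i...' applies the 2x2 matrix to every (x, y) pair simultaneously,
# using the old x and y values for both output components.
X1r, Y1r = np.einsum('ij,j...->i...', rot, np.stack([X1, Y1]))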

Using set_array with pyplot.pcolormesh ruins figure

I have an xx and yy matrix created with np.meshgrid and a grid matrix of values created by operating on xx and yy. Then I plot the results using graph = plt.pcolormesh(... and get this:
Then, when I try to update the grid matrix in the plot using graph.set_array(grid.ravel()), the figure gets ruined.
Does anyone know how to avoid this?
Here is my full code if it helps:
from pylab import *
import numpy as np
import matplotlib.pyplot as plt
from obspy import read
dx = 5 # km
dt = 5 # sec
nx = 500
ny = 500
v = 3.5 # km/s
p = 1/v
t_min = np.int_(np.sqrt((nx*dx)**2 + (ny*dx)**2))/v
nt = np.int_(t_min/dt)
# Receiver position
rx = 40 * dx
ry = 40 * dx
# Signal with ones
signal = np.zeros(2*nt)
signal[0:len(signal):len(signal)/10] = 1
# Create grid:
x_vector = np.arange(0, nx)*dx
y_vector = np.arange(0, ny)*dx
xx, yy = np.meshgrid(x_vector, y_vector)
# Distance from source to grid point
rr = np.int_(np.sqrt((xx - rx)**2 + (yy - ry)**2))
# travel time grid
tt_int = np.int_(rr/v)
tt = rr/v
# Read window of signal
wlen = np.int_(t_min/dt)
signal_window = signal[0:wlen]
grid = signal_window[tt_int/dt]
ax = plt.subplot(111)
graph = plt.pcolormesh(xx, yy, grid, cmap=mpl.cm.Reds)
plt.colorbar()
plt.plot(rx, ry, 'rv', markersize=10)
plt.xlabel('km')
plt.ylabel('km')
# plt.savefig('anitestnormal.png', bbox_inches='tight')
signal_window = signal[wlen:wlen * 2]
grid = signal_window[tt_int/dt]
graph.set_array(grid.ravel())
# plt.ion()
plt.show()
This is rather tricky ... but I believe that your assertion about the dimensions is correct. It has to do with how pcolormesh creates the QuadMesh object.
The documentation states that:
A quadrilateral mesh is represented by a (2 x ((meshWidth + 1) * (meshHeight + 1))) numpy array coordinates
In this context, meshWidth is your xx, meshHeight is your yy. When you set the array explicitly using set_array, pcolormesh wants to interpret it directly as the (meshWidth x meshHeight) quadrilaterals, and therefore requires one less point in each dimension.
When I tested it, I got the following behavior - if you change
graph.set_array(grid.ravel())
to
graph.set_array(grid[:-1,:-1].ravel())
your plot will look like it should.
Looking at the code, it seems that in the initial call to pcolormesh, if xx and yy are given, they should actually have one more point in each dimension than the value array; if they don't (i.e. they are off by one), the value array is truncated by one automatically. So you should get the same result even if you use grid[:-1,:-1] in the first call as well.
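A minimal, self-contained version of that behaviour (a toy grid, not the seismic example from the question), showing the size set_array expects:

import numpy as np
import matplotlib.pyplot as plt

# 11 x 11 cell corners enclose a 10 x 10 grid of quadrilaterals.
x = np.linspace(0, 1, 11)
y = np.linspace(0, 1, 11)
xx, yy = np.meshgrid(x, y)
c = np.random.rand(10, 10)                 # one value per quadrilateral

quad = plt.pcolormesh(xx, yy, c)
plt.colorbar()

# Updating the plot: set_array wants the flattened cell values, i.e. exactly
# (len(y) - 1) * (len(x) - 1) numbers -- hence grid[:-1, :-1].ravel() when the
# value array has the same shape as the coordinate arrays.
quad.set_array(np.random.rand(10, 10).ravel())
plt.show()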

Method to uniformly randomly populate a disk with points in python

I have an application that requires a disk populated with 'n' points in a quasi-random fashion. I want the points to be somewhat random, but still have a more or less regular density over the disk.
My current method is to place a point, check if it's inside the disk, and then check if it is also far enough away from all other points already kept. My code is below:
import os
import random
import math
# ------------------------------------------------ #
# geometric constants
center_x = -1188.2
center_y = -576.9
center_z = -3638.3
disk_distance = 2.0*5465.6
disk_diam = 5465.6
# ------------------------------------------------ #
pts_per_disk = 256
closeness_criteria = 200.0
min_closeness_criteria = disk_diam/closeness_criteria
disk_center = [(center_x-disk_distance),center_y,center_z]
pts_in_disk = []
while len(pts_in_disk) < (pts_per_disk):
    potential_pt_x = disk_center[0]
    potential_pt_dy = random.uniform(-disk_diam/2.0, disk_diam/2.0)
    potential_pt_y = disk_center[1]+potential_pt_dy
    potential_pt_dz = random.uniform(-disk_diam/2.0, disk_diam/2.0)
    potential_pt_z = disk_center[2]+potential_pt_dz
    potential_pt_rad = math.sqrt((potential_pt_dy)**2+(potential_pt_dz)**2)
    if potential_pt_rad < (disk_diam/2.0):
        far_enough_away = True
        for pt in pts_in_disk:
            if math.sqrt((potential_pt_x - pt[0])**2+(potential_pt_y - pt[1])**2+(potential_pt_z - pt[2])**2) > min_closeness_criteria:
                pass
            else:
                far_enough_away = False
                break
        if far_enough_away:
            pts_in_disk.append([potential_pt_x,potential_pt_y,potential_pt_z])
outfile_name = "pt_locs_x_lo_"+str(pts_per_disk)+"_pts.txt"
outfile = open(outfile_name,'w')
for pt in pts_in_disk:
    outfile.write(" ".join([("%.5f" % (pt[0]/1000.0)),("%.5f" % (pt[1]/1000.0)),("%.5f" % (pt[2]/1000.0))])+'\n')
outfile.close()
In order to get the most even point density, what I do is basically run this script iteratively from another script, with the 'closeness' criterion reduced for each successive iteration. At some point, the script cannot finish, and I just use the points of the last successful iteration.
So my question is rather broad: is there a better way to do this? My method is OK for now, but my gut says that there is a better way to generate such a field of points.
An illustration of the output is graphed below, one with a high closeness criterion, and another with the 'lowest found' closeness criterion (what I want).
A simple solution based on Disk Point Picking from MathWorld:
import numpy as np
import matplotlib.pyplot as plt
n = 1000
r = np.random.uniform(low=0, high=1, size=n) # radius
theta = np.random.uniform(low=0, high=2*np.pi, size=n) # angle
x = np.sqrt(r) * np.cos(theta)
y = np.sqrt(r) * np.sin(theta)
# for plotting circle line:
a = np.linspace(0, 2*np.pi, 500)
cx,cy = np.cos(a), np.sin(a)
fg, ax = plt.subplots(1, 1)
ax.plot(cx, cy,'-', alpha=.5) # draw unit circle line
ax.plot(x, y, '.') # plot random points
ax.axis('equal')
ax.grid(True)
fg.canvas.draw()
plt.show()
It gives.
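The square root is what makes the density uniform over the disk's area; drawing the radius uniformly instead would crowd points toward the centre, because a thin ring at radius r has area proportional to r. A quick side-by-side sketch of the two choices (an aside, not part of the answer above):

import numpy as np
import matplotlib.pyplot as plt

n = 2000
r = np.random.uniform(size=n)
theta = np.random.uniform(0, 2 * np.pi, size=n)

fg, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.plot(r * np.cos(theta), r * np.sin(theta), '.', ms=2)
ax1.set_title('radius = r (crowded centre)')
ax2.plot(np.sqrt(r) * np.cos(theta), np.sqrt(r) * np.sin(theta), '.', ms=2)
ax2.set_title('radius = sqrt(r) (uniform)')
for ax in (ax1, ax2):
    ax.axis('equal')
plt.show()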
Alternatively, you also could create a regular grid and distort it randomly:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.tri as tri
n = 20
tt = np.linspace(-1, 1, n)
xx, yy = np.meshgrid(tt, tt) # create unit square grid
s_x, s_y = xx.ravel(), yy.ravel()
ii = np.argwhere(s_x**2 + s_y**2 <= 1).ravel() # mask off unwanted points
x, y = s_x[ii], s_y[ii]
triang = tri.Triangulation(x, y) # create triangular grid
# distort the grid
g = .5 # distortion factor
rx = x + np.random.uniform(low=-g/n, high=g/n, size=x.shape)
ry = y + np.random.uniform(low=-g/n, high=g/n, size=y.shape)
rtri = tri.Triangulation(rx, ry, triang.triangles) # distorted grid
# for circle:
a = np.linspace(0, 2*np.pi, 500)
cx,cy = np.cos(a), np.sin(a)
fg, ax = plt.subplots(1, 1)
ax.plot(cx, cy,'k-', alpha=.2) # circle line
ax.triplot(triang, "g-", alpha=.4)
ax.triplot(rtri, 'b-', alpha=.5)
ax.axis('equal')
ax.grid(True)
fg.canvas.draw()
plt.show()
It gives
The triangles are just there for visualization. The obvious disadvantage is that, depending on your choice of grid, there will be larger or smaller "holes" either in the middle or at the borders (as shown here), due to the grid discretization.
If you have a defined area like a disc (circle) that you wish to generate random points within, you are better off using the equation for a circle and limiting on the radius:
x^2 + y^2 = r^2 (0 < r < R)
or parametrized to two variables
cos(a) = x/r
sin(a) = y/r
sin^2(a) + cos^2(a) = 1
To generate something like the pseudo-random distribution with low density you should take the following approach:
For randomly distributed ranges of r and a choose n points.
This allows you to generate your distribution to roughly meet your density criteria.
To understand why this works, imagine your circle first divided into small rings of thickness dr, and now imagine your circle divided into pie slices of angle da. Your randomness now has equal probability over the whole boxed area around the circle. If you divide the areas of allowed randomness throughout your circle, you will get a more even distribution around the overall circle and small random variation for the individual areas, giving you the pseudo-random look and feel you are after.
Now your job is just to generate n points for each given area. You will want n to be dependent on r, as the area of each division changes as you move outward from the center. You can proportion this to the exact change in area each division brings:
for the n-th to n+1-th ring:
d(Area,n,n-1) = Area(n) - Area(n-1)
The area of any given ring is:
Area = pi*(dr*n)^2 - pi*(dr*(n-1))^2
So the difference becomes:
d(Area,n,n-1) = [pi*(dr*n)^2 - pi*(dr*(n-1))^2] - [pi*(dr*(n-1))^2 - pi*(dr*(n-2))^2]
d(Area,n,n-1) = pi*[(dr*n)^2 - 2*(dr*(n-1))^2 + (dr*(n-2))^2]
You could expand on this to gain some insight into how much n should increase, but it may be faster to just guess at some percentage increase (30%) or so.
The example I have provided is a small subset and decreasing da and dr will dramatically improve your results.
Here is some rough code for generating such points:
import random
import math
R = 10.
n_rings = 10.
n_angles = 10.
dr = 10./n_rings
da = 2*math.pi/n_angles
base_points_per_division = 3
increase_per_level = 1.1
points = []
ring = 0
while ring < n_rings:
    angle = 0
    while angle < n_angles:
        for i in xrange(int(base_points_per_division)):
            ra = angle*da + da*random.random()
            rr = ring*dr + dr*random.random()
            x = rr*math.cos(ra)
            y = rr*math.sin(ra)
            points.append((x,y))
        angle += 1
    base_points_per_division = base_points_per_division*increase_per_level
    ring += 1
I tested it with the parameters:
n_rings = 20
n_angles = 20
base_points = .9
increase_per_level = 1.1
And got the following results:
It looks more dense than your provided image, but I imagine further tweaking of those variables could be beneficial.
You can add an additional part to scale the density properly by calculating the number of points per ring.
points_per_ring = density*math.pi*(dr**2)*(2*n+1)
points_per_division = points_per_ring/n_angles
This will provide an even better scaled distribution.
density = .03
points = []
ring = 0
while ring < n_rings:
    angle = 0
    base_points_per_division = density*math.pi*(dr**2)*(2*ring+1)/n_angles
    while angle < n_angles:
        for i in xrange(int(base_points_per_division)):
            ra = angle*da + min(da,da*random.random())
            rr = ring*dr + dr*random.random()
            x = rr*math.cos(ra)
            y = rr*math.sin(ra)
            points.append((x,y))
        angle += 1
    ring += 1
Giving better results using the following parameters
R = 1.
n_rings = 10.
n_angles = 10.
density = 10/(dr*da) # ~ ten points per unit area
With a graph...
and for fun you can graph the divisions to see how well it is matching your distribution and adjust.
Depending on how random the points need to be, it may be simple enough to just make a grid of points within the disk, and then displace each point by some small but random amount.
It may be that you want more randomness, but if you just want to fill your disc with an even-looking distribution of points that aren't on an obvious grid, you could try a spiral with a random phase.
import math
import random
import pylab
n = 300
alpha = math.pi * (3 - math.sqrt(5)) # the "golden angle"
phase = random.random() * 2 * math.pi
points = []
for k in xrange(n):
    theta = k * alpha + phase
    r = math.sqrt(float(k)/n)
    points.append((r * math.cos(theta), r * math.sin(theta)))
pylab.scatter(*zip(*points))
pylab.show()
Probability theory ensures that the rejection method is an appropriate method
to generate uniformly distributed points within the disk, D(0,r), centered at origin and of radius r. Namely, one generates points within the square [-r,r] x [-r,r], until a point falls within the disk:
do {
    generate P in [-r,r]x[-r,r];
} while (P[0]**2 + P[1]**2 > r**2);
return P;
unif_rnd_disk is a generator function implementing this rejection method:
import matplotlib.pyplot as plt
import numpy as np
import itertools
def unif_rnd_disk(r=1.0):
    pt = np.zeros(2)
    while True:
        yield pt
        while True:
            pt = -r + 2*r*np.random.random(2)
            if pt[0]**2 + pt[1]**2 <= r**2:
                break
G=unif_rnd_disk()# generator of points in disk D(0,r=1)
X,Y=zip(*[pt for pt in itertools.islice(G, 1, 1000)])
plt.scatter(X, Y, color='r', s=3)
plt.axis('equal')
If we want to generate points in a disk centered at C(a,b), we have to apply a translation to the points in the disk D(0,r):
C=[2.0, -3.5]
plt.scatter(C[0]+np.array(X), C[1]+np.array(Y), color='r', s=3)
plt.axis('equal')
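If the point-by-point generator is too slow for large n, the same rejection idea can be vectorized with NumPy. A sketch (the function name unif_rnd_disk_vec is made up here, and it returns an (n, 2) array rather than yielding points one at a time):

import numpy as np

def unif_rnd_disk_vec(n, r=1.0, rng=None):
    """Rejection sampling in batches: draw candidates in the square
    [-r, r] x [-r, r] and keep those inside the disk until n are collected."""
    rng = np.random.default_rng() if rng is None else rng
    pts = np.empty((0, 2))
    while len(pts) < n:
        cand = rng.uniform(-r, r, size=(2 * n, 2))
        cand = cand[(cand ** 2).sum(axis=1) <= r ** 2]
        pts = np.vstack([pts, cand])
    return pts[:n]

X, Y = unif_rnd_disk_vec(1000).T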

Using FFT to find the center of mass under periodic boundary conditions

I would like to use the Fourier transform to find the center of a simulated entity under periodic boundary conditions; periodic boundary conditions mean that whenever something exits through one side of the box, it is wrapped around to appear on the opposite side, just like in the classic game Asteroids.
So what I have, for each time frame, is a matrix (N x 3) with N the number of points in xyz. What I want to do is determine the center of that cloud even if it has moved over the periodic boundary and is, so to say, stuck in between.
My idea for a solution would now be to do a (mass-weighted) histogram of these points, then perform an FFT on that and use the phase of the first Fourier coefficient to determine where in the box the maximum is.
As a test case I have used:
import numpy as np
Points_x = np.random.randn(10000)
Box_min = -10
Box_max = 10
X = np.linspace( Box_min, Box_max, 100 )
### make a Histogram of the points
Histogram_Points = np.bincount( np.digitize( Points_x, X ), minlength=100 )
### make an artifical shift over the periodic boundary
Histogram_Points = np.r_[ Histogram_Points[45:], Histogram_Points[:45] ]
So now I can use the FFT, since it expects a periodic function anyway.
## doing fft
F = np.fft.fft(Histogram_Points)
## getting rid of everything but first harmonic
F[2:] = 0.
## back transforming
First_harmonic = np.fft.ifft(F)
That way I get a sine wave with its maximum exactly where the maximum of the histogram is.
Now I'd like to extract the position of the maximum not by taking the max function on the sine vector, but somehow it should be retrievable from the first (not the 0th) Fourier coefficient, since that should somehow contain the phase shift of the sine to have its maximum exactly at the maximum of the histogram.
Indeed, plotting
Cos_approx = cos( linspace(0,2*pi,100) * angle(F[1]) )
will give
But I can't figure out how to get the position of the peak from this angle.
Using the FFT is overkill when all you need is one Fourier coefficient. Instead, you can simply compute the dot product of your data with
w = np.exp(-2j*np.pi*np.arange(N) / N)
where N is the number of points. (The time to compute all the Fourier coefficients with the FFT is O(N*log(N)). Computing just one coefficient is O(N).)
Here's a script similar to yours. The data is put in y; the coordinates of the data points are in x.
import numpy as np
N = 100
# x coordinates of the data
xmin = -10
xmax = 10
x = np.linspace(xmin, xmax, N, endpoint=False)
# Generate data in y.
n = 35
y = np.zeros(N)
y[:n] = 1 - np.cos(np.linspace(0, 2*np.pi, n))
y[:n] /= 0.7 + 0.3*np.random.rand(n)
m = 10
y = np.r_[y[m:], y[:m]]
# Compute coefficient 1 of the discrete Fourier transform.
w = np.exp(-2j*np.pi*np.arange(N) / N)
F1 = y.dot(w)
print "F1 =", F1
# Get the angle of F1 (in the interval [0,2*pi]).
angle = np.angle(F1.conj())
if angle < 0:
    angle += 2*np.pi
center_x = xmin + (xmax - xmin) * angle / (2*np.pi)
print "center_x = ", center_x
# Create the first sinusoidal mode for the plot.
mode1 = (F1.real * np.cos(2*np.pi*np.arange(N)/N) -
         F1.imag * np.sin(2*np.pi*np.arange(N)/N)) / np.abs(F1)
import matplotlib.pyplot as plt
plt.clf()
plt.plot(x, y)
plt.plot(x, mode1)
plt.axvline(center_x, color='r', linewidth=1)
plt.show()
This generates the plot:
To answer the question "Why F1.conj()?":
The complex conjugate of F1 is used because of the minus sign in w = np.exp(-2j*np.pi*np.arange(N) / N) (which I used because it is a common convention). Since w can be written
w = np.exp(-2j*np.pi*np.arange(N) / N)
  = cos(-2*pi*arange(N)/N) + 1j*sin(-2*pi*arange(N)/N)
  = cos(2*pi*arange(N)/N) - 1j*sin(2*pi*arange(N)/N)
the dot product y.dot(w) is basically a projection of y onto cos(2*pi*arange(N)/N) (the real part of F1) and -sin(2*pi*arange(N)/N) (the imaginary part of F1). But when we figure out the phase of the maximum, it is based on the functions cos(...) and sin(...). Taking the complex conjugate accounts for the opposite sign of the sin() function. If w = np.exp(2j*np.pi*np.arange(N) / N) were used instead, the complex conjugate of F1 would not be needed.
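A quick sanity check (an aside, not part of the answer above): the dot product really is coefficient 1 of the usual FFT, since numpy's fft uses the same exp(-2j*pi*k*n/N) convention:

import numpy as np

N = 100
y = np.random.rand(N)
w = np.exp(-2j * np.pi * np.arange(N) / N)

print(np.allclose(y.dot(w), np.fft.fft(y)[1]))   # True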
You could calculate the circular mean directly on your data.
When calculating the circular mean, your data is mapped to -pi..pi. The mapped data is interpreted as angles of points on the unit circle. Then the mean of the x and y components is calculated. The next step is to calculate the resulting angle and map it back to the defined "box".
import numpy as np
import matplotlib.pyplot as plt
Points_x = np.random.randn(10000)+1
Box_min = -10
Box_max = 10
Box_width = Box_max - Box_min
#Maps Points to Box_min ... Box_max with periodic boundaries
Points_x = (Points_x%Box_width + Box_min)
#Map Points to -pi..pi
Points_map = (Points_x - Box_min)/Box_width*2*np.pi-np.pi
#Calc circular mean
Pmean_map = np.arctan2(np.sin(Points_map).mean() , np.cos(Points_map).mean())
#Map back
Pmean = (Pmean_map+np.pi)/(2*np.pi) * Box_width + Box_min
#Plotting the result
plt.figure(figsize=(10,3))
plt.subplot(121)
plt.hist(Points_x, 100);
plt.plot([Pmean, Pmean], [0, 1000], c='r', lw=3, alpha=0.5);
plt.subplot(122,aspect='equal')
plt.plot(np.cos(Points_map), np.sin(Points_map), '.');
plt.ylim([-1, 1])
plt.xlim([-1, 1])
plt.grid()
plt.plot([0, np.cos(Pmean_map)], [0, np.sin(Pmean_map)], c='r', lw=3, alpha=0.5);
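Equivalently, the circular mean can be written as the argument of a complex average, which folds the sin/cos pair into one line. A short sketch of the same computation (same mapping as above, just rewritten):

import numpy as np

Box_min, Box_max = -10, 10
Box_width = Box_max - Box_min
Points_x = (np.random.randn(10000) + 1) % Box_width + Box_min

# Map to -pi..pi, average on the unit circle, and map the angle back.
phi = (Points_x - Box_min) / Box_width * 2 * np.pi - np.pi
Pmean = (np.angle(np.exp(1j * phi).mean()) + np.pi) / (2 * np.pi) * Box_width + Box_min
print(Pmean)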
