Phase discontinuity during ifft using pyfftw and scipy.fft - python

When taking the ifft of a 2D ndarray using pyfftw, I found that the resulting phase is discontinuous at many positions. My code is as follows:
import numpy as np
import pyfftw
from scipy.fft import ifftshift, fftshift
import matplotlib.pyplot as plt

N = 256
kx = np.linspace(-np.floor(N/2), np.ceil(N/2)-1, N)
kX, kY = np.meshgrid(kx, kx)
kR = np.sqrt(kX**2 + kY**2)
mask = np.where(kR <= 15, 1, 0)
ifft_obj = pyfftw.builders.ifft2(ifftshift(mask))
wave = fftshift(ifft_obj())
plt.imshow(np.angle(wave), cmap='jet')
plt.colorbar()
plt.show()
The phase image is as follows:
The minimum phase value is -3.141592653589793 and the maximum is 3.141592653589793, so the values span the full 2π range. Using scipy.fft gives the same result.
However, when I try the same thing in MATLAB, the result looks more reasonable. My code is:
N = 256;
kx = linspace(-floor(N/2),ceil(N/2)-1,N);
[kX,kY] = meshgrid(kx,kx);
kR = sqrt(kX.^2 + kY.^2);
mask = single(kR<=15);
wave = fftshift(ifft2(ifftshift(mask)));
imshow(angle(wave));
caxis([min(angle(wave),[],'all') max(angle(wave),[],'all')]);
axis image; colormap jet; colorbar;
The phase image is:
I wonder what leads to the phase discontinuity in the Python code and how I can correct it.

A phase is best shown as an angle on a unit circle, and a circle does not have a beginning and an end. Going around the circle does not create a discontinuity. Adding exactly 2*pi (one trip around the circle) does not change the phase, so +pi and -pi are the same phase. The absolute difference of those two phases is thus not 2*pi, but zero. If you take tiny rounding errors into account, it is almost zero.
My suggestion is to use a cyclic colormap, where approaching +pi on one end and -pi on the other colors the graph with the same color.
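For example, matplotlib ships cyclic colormaps such as 'twilight' and 'hsv'. A minimal sketch, reusing the wave array from the question:
import numpy as np
import matplotlib.pyplot as plt

plt.imshow(np.angle(wave), cmap='twilight')  # cyclic: -pi and +pi share a color
plt.colorbar()
plt.show()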

Your input is symmetric, which leads to a purely real transform (phase is 0 for positive and π for negative values). But because of numerical inaccuracies of the FFT algorithm, the result has very small imaginary values. Thus, the phase deviates a little bit from 0 and π. In your colormapped image, small deviations from 0 are not seen, but small deviations from π can lead to values close to -π (as already discussed by VPfB).
MATLAB does not show this issue because MATLAB's ifft recognizes that the input is (conjugate) symmetric and outputs a purely real image. It simply ignores those small imaginary values.
You can do the same in Python with
wave = np.real_if_close(wave, tol=1000)
The tolerance here is np.finfo(wave.dtype).eps * tol (about 2.22 × 10⁻¹³ for double-precision floats). Adjust as necessary.
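A quick check, reusing the wave array after the np.real_if_close call above: once the negligible imaginary part is discarded, the array is real and the only remaining phases are exactly 0 and pi.
print(wave.dtype)                 # float64 once the imaginary part is dropped
print(np.unique(np.angle(wave)))  # only 0.0 and pi remain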

Related

Normalized smoothing with a kernel

I am trying to smooth a noisy one-dimensional physical signal, y, while retaining the correspondence between the signal's amplitude and its units. I'm applying a Gaussian kernel and normalizing the Gaussian by its own integral.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad

# Gaussian kernel (not normalized here)
def gaussian(x, sigma):
    return np.exp(-(x/sigma)**2/2)

# convolution
def smooth(y, box_pts):
    x = np.linspace(-box_pts/2., box_pts/2., box_pts + 1)  # Gaussian centred on 0
    std_norm = 3.  # 3. is an arbitrary value for normalizing the sigma
    sigma = box_pts/std_norm
    integral = quad(gaussian, x[0], x[-1], args=(sigma))[0]
    box = gaussian(x, sigma)/integral
    y_smooth = np.convolve(y, box, mode='same')
    return y_smooth

box_size = 10
length = 100
y = np.random.randn(length)
y_smooth = smooth(y, box_size)
plt.plot(y)
plt.show()
plt.plot(y_smooth)
plt.show()
In this example, y is the signal I want to smooth and box_pts is the width of the region over which the Gaussian kernel is applied as it is translated. To normalize my convolution, I've simply divided by the integral of the Gaussian itself. Since the Gaussian has a certain width (given by box_pts) and is effectively zero outside this interval relative to the physical axis of y, I am normalizing the Gaussian over the region where it is non-zero, as opposed to $-\infty$ to $\infty$. This method appears simple enough and the normalization appears to work, but is dividing by the kernel's own integral the "right" or suggested approach when normalizing convolutions/kernels?
Based on a simple example, it appears to keep the integral about the same, but the amplitudes are no longer consistent, as visualized below. This is problematic because I want to retain a smoothed signal while removing the noise:
Original:
Smoothed:
I get improved amplitudes when I decrease box_size relative to length, but this keeps the signal noisy. Decreasing std_norm also appears to reduce the noise, but it influences the amplitude. My question boils down to how one chooses optimal std_norm and box_size for a given noisy signal y of size length, such that the noise is reduced while the output stays physically sound (i.e. the smoothed signal's amplitude is roughly consistent with the original and the integral under the smoothed curve is fairly similar too). I have applied a Gaussian kernel here, but am interested in a general approach for any arbitrary kernel; one alternative normalization is sketched below.
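One common convention (a suggestion on my part, not from the original post) is to normalize the sampled kernel by its discrete sum rather than by the continuous integral: with box.sum() == 1, convolving a constant signal returns the same constant, so local amplitudes are preserved up to the smoothing itself. A minimal sketch:
import numpy as np

def smooth_sum_normalized(y, box_pts, std_norm=3.0):
    # Sample the Gaussian on the same grid as in the question
    x = np.linspace(-box_pts/2., box_pts/2., box_pts + 1)
    sigma = box_pts/std_norm
    box = np.exp(-(x/sigma)**2/2)
    box /= box.sum()  # discrete normalization: the weights sum to 1
    return np.convolve(y, box, mode='same')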

How to represent a square wave in python and how to convolve it?

I am trying to convolve a square wave with itself more than once and see the resulting graph. I know how to do convolution by hand, but I am not experienced in signal processing with Python. So my questions are:
How can I represent a signal in Python? For example:
x(t) = 1, 0 ≤ t ≤ 1
x(t) = 0, otherwise
How can I convolve this square wave with itself?
What I have figured out so far is that I should use numpy's built-in convolve function, but I am stuck on representing the square wave itself.
One way to create a suitable 0-1 array is np.fromfunction, passing it a function that returns True within the wave. Converting to float results in a 0-1 array.
For this illustration it's better to position the wave in the middle of the array, avoiding boundary effects associated with convolution. Using mode='same' allows for all curves to be plotted together. Also, don't forget to divide the output of convolve by sample_rate, otherwise it will grow proportionally to it with each convolution.
import numpy as np
import matplotlib.pyplot as plt
sample_rate = 100
num_samples = 500
wave = np.fromfunction(lambda i: (2*sample_rate < i) & (i < 3*sample_rate), (num_samples,)).astype(float)
wave1 = np.convolve(wave, wave, mode='same')/sample_rate
wave2 = np.convolve(wave1, wave, mode='same')/sample_rate
wave3 = np.convolve(wave2, wave, mode='same')/sample_rate
plt.plot(np.stack((wave, wave1, wave2, wave3), axis=1))
plt.show()
Mathematically, these repeated self-convolutions are known as cardinal B-splines, and they coincide with the density of the Irwin–Hall distribution.

extracting phase information using numpy fft

I am trying to use a fast Fourier transform to extract the phase shift of a single sinusoidal function. I know that on paper, if we denote the transform of our function by $T$, the amplitude at a frequency bin is $|T| = \sqrt{\mathrm{Re}(T)^2 + \mathrm{Im}(T)^2}$ and the phase is $\arg T = \arctan\left(\mathrm{Im}(T)/\mathrm{Re}(T)\right)$.
However, I am finding that while I am able to accurately capture the frequency of my cosine wave, the phase is inaccurate unless I sample at an extremely high rate. For example:
import numpy as np
import pylab as pl

num_t = 100000
t = np.linspace(0, 1, num_t)
dt = 1.0/num_t
w = 2.0*np.pi*30.0
phase = np.pi/2.0
amp = np.fft.rfft(np.cos(w*t + phase))
freqs = np.fft.rfftfreq(t.shape[-1], dt)
print(np.arctan2(amp.imag, amp.real)[30])
pl.subplot(211)
pl.plot(freqs[:60], np.sqrt(amp.real**2 + amp.imag**2)[:60])
pl.subplot(212)
pl.plot(freqs[:60], np.arctan2(amp.imag, amp.real)[:60])
pl.show()
Using num_t=100000 points I get a phase of 1.57173880459.
Using num_t=10000 points I get a phase of 1.58022110476.
Using num_t=1000 points I get a phase of 1.6650441064.
What's going wrong? Even with 1000 points I have 33 points per cycle, which should be enough to resolve it. Is there maybe a way to increase the number of computed frequency points? Is there any way to do this with a "low" number of points?
EDIT: from further experimentation it seems that I need ~1000 points per cycle in order to accurately extract a phase. Why?!
EDIT 2: further experiments indicate that accuracy is related to number of points per cycle, rather than absolute numbers. Increasing the number of sampled points per cycle makes phase more accurate, but if both signal frequency and number of sampled points are increased by the same factor, the accuracy stays the same.
Your points are not distributed equally over the interval: the endpoint is effectively doubled, since t = 0 and t = 1 are the same point of the periodic signal. This matters less the more points you take, obviously, but it still introduces an error. You can avoid it entirely: np.linspace has a flag for this (endpoint=False), and another flag (retstep=True) to return dt along with the array.
Do
t, dt = np.linspace(0, 1, num_t, endpoint=False, retstep=True)
instead of
t = np.linspace(0,1,num_t)
dt = 1.0/num_t
then it works :)
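A quick check of the fix with the question's parameters and num_t = 1000:
import numpy as np

num_t = 1000
t, dt = np.linspace(0, 1, num_t, endpoint=False, retstep=True)
amp = np.fft.rfft(np.cos(2.0*np.pi*30.0*t + np.pi/2.0))
print(np.angle(amp[30]))  # ~1.5707963..., i.e. pi/2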
The phase value in the result bin of an unrotated FFT is only correct if the input signal is exactly integer periodic within the FFT length. Your test signal is not, thus the FFT measures something partially related to the phase difference of the signal discontinuity between end-points of the test sinusoid. A higher sample rate will create a slightly different last end-point from the sinusoid, and thus a possibly smaller discontinuity.
If you want to decrease this FFT phase-measurement error, create your test signal so that the test phase is referenced to the exact center (sample N/2) of the test vector rather than the first sample, and then do an fftshift operation (a rotation by N/2) so that there is no signal discontinuity between the first and last points of your length-N FFT input vector.
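A minimal sketch of that centering trick; the frequency 30.25 and the length are my own choices, picked so the signal is not integer-periodic in the window:
import numpy as np

num_t = 1000
n = np.arange(num_t)
f = 30.25                  # deliberately not integer-periodic in the window
phase = np.pi/2.0
# Reference the phase to the center sample, then rotate by N/2 so that
# the center sample becomes sample 0 and the ends join without a jump.
sig = np.cos(2*np.pi*f*(n - num_t//2)/num_t + phase)
amp = np.fft.rfft(np.fft.fftshift(sig))
print(np.angle(amp[30]))   # close to pi/2 despite the non-integer frequency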
This snippet of code might help:
import numpy as np
from scipy.fft import rfft, irfft

THRESHOLD = 0.01  # fraction of the peak amplitude to retain

def reconstruct_ifft(data):
    """
    Take in a signal, find its fft, retain the dominant modes
    and reconstruct the signal from those.

    Parameters
    ----------
    data : signal to transform and reconstruct

    Returns
    -------
    reconstructed_signal : the reconstructed signal
    """
    yf = rfft(data)
    amp_yf = np.abs(yf)  # amplitude
    yf = yf*(amp_yf > (THRESHOLD*np.amax(amp_yf)))
    reconstructed_signal = irfft(yf)
    return reconstructed_signal
The 0.01 is the fraction of the peak FFT amplitude below which bins are discarded. Making THRESHOLD larger (a value above 1 makes no sense) retains fewer modes, causing a higher RMS error but giving higher frequency selectivity.
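Hypothetical usage of the function above (the 5 Hz test signal is my own):
import numpy as np

t = np.linspace(0, 1, 500, endpoint=False)
noisy = np.cos(2*np.pi*5*t + np.pi/3) + 0.2*np.random.randn(500)
clean = reconstruct_ifft(noisy)  # keeps only bins above 1% of the peak amplitude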

When should I use fftshift(fft(fftshift(x))) and when fft(x)?

I am trying to implement an algorithm in python, but I am not sure when I should use fftshift(fft(fftshift(x))) and when only fft(x) (from numpy). Is there a rule of thumb based on the shape of input data?
I am using fftshift instead of ifftshift due to the even number of values in the vector x.
It really just depends on what you want. The DFT (and hence the FFT) is periodic in the frequency domain with period equal to 2pi.
The fft() function will return the approximation of the DFT with omega (radians/sample) from 0 to 2pi (i.e. 0 to fs, where fs is the sampling frequency). All fftshift() does is swap the two halves of the fft() output vector around the middle, so the output of fftshift(fft()) runs from -pi to pi (i.e. -fs/2 to fs/2).
Usually, people like to plot a good approximation of the DTFT (or maybe even the CTFT) using the FFT, so they zero-pad the input with a large number of zeros (fft() will do this for you if you pass it an n larger than the input length) and then use fftshift() to plot between -pi and pi.
In other words, use fftshift(fft()) for plotting, and fft() for the math!
fft(fftshift(x)) rotates the input vector so that the phase of the complex FFT result is relative to the center of the original data window. If the input waveform is not exactly integer-periodic in the FFT width, phase relative to the center of the original data window may make more sense than phase relative to some averaging of the discontinuous beginning and end. fft(fftshift(x)) also has the property that the imaginary component of a result bin will always be positive for a positive-going zero crossing at the center of the window of any antisymmetric waveform component.
fftshift(fft(y)) rotates the FFT results so that the DC bin is in the center of the result, halfway between -Fs/2 and Fs/2, which is a common spectrum display format.
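A short illustration of the display convention (the 10 Hz test tone is my own choice):
import numpy as np
import matplotlib.pyplot as plt

fs = 100.0
t = np.arange(256)/fs
y = np.cos(2*np.pi*10.0*t)

Y = np.fft.fftshift(np.fft.fft(y))                 # DC bin moved to the center
f = np.fft.fftshift(np.fft.fftfreq(len(y), 1/fs))  # axis runs from -fs/2 to fs/2
plt.plot(f, np.abs(Y))
plt.show()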

Discrete Fourier Transform: How to use fftshift correctly with fft

I want to numerically compute the FFT of a numpy array Y. For testing, I'm using the Gaussian function Y = exp(-x^2). The (symbolic) Fourier transform is Y' = constant * exp(-k^2/4).
import numpy
X = numpy.arange(-100,100)
Y = numpy.exp(-(X/5.0)**2)
The naive approach fails:
from numpy.fft import *
from matplotlib import pyplot

def plotReIm(x, y):
    f = pyplot.figure()
    ax = f.add_subplot(111)
    ax.plot(x, numpy.real(y), 'b', label='R()')
    ax.plot(x, numpy.imag(y), 'r:', label='I()')
    ax.plot(x, numpy.abs(y), 'k--', label='abs()')
    ax.legend()

Y_k = fftshift(fft(Y))
k = fftshift(fftfreq(len(Y)))
plotReIm(k, Y_k)
real(Y_k) jumps between positive and negative values, which corresponds to a jumping phase that is not present in the symbolic result. This is certainly not desirable. (The result is technically correct in the sense that abs(Y_k) gives the expected amplitudes and the inverse transform recovers Y.)
Here, the function fftshift() renders the array k monotonically increasing and changes Y_k accordingly. The pairs zip(k, Y_k) are not changed by applying this operation to both vectors.
This change appears to fix the issue:
Y_k = fftshift(fft(ifftshift(Y)))
k = fftshift(fftfreq(len(Y)))
plotReIm(k,Y_k)
Is this the correct way to employ the fft() function if monotonic Y and Y_k are required?
The reverse operation of the above is:
Yx = fftshift(ifft(ifftshift(Y_k)))
x = fftshift(fftfreq(len(Y_k), k[1] - k[0]))
plotReIm(x,Yx)
For this case, the documentation clearly states that Y_k must be sorted compatible with the output of fft() and fftfreq(), which we can achieve by applying ifftshift().
This question has been bothering me for a long time: is it always the case that, for both fft() and ifft(), a[0] contains the zero-frequency term, a[1:n/2+1] the positive-frequency terms, and a[n/2+1:] the negative-frequency terms in order of decreasingly negative frequency [numpy reference], where 'frequency' is the independent variable?
The answer to "Fourier Transform of a Gaussian is not a Gaussian" does not answer my question.
The FFT can be thought of as producing a set of vectors, each with an amplitude and phase. The fft_shift operation changes the reference point for a phase angle of zero from the edge of the FFT aperture to the center of the original input data vector.
The phase (and thus the real component of the complex vector) of the result is sometimes less "jumpy" when this is done, especially if some input function is windowed such that it is discontinuous around the edges of the FFT aperture. Or if the input is symmetric around the center of the FFT aperture, the phase of the FFT result will always be zero after an fft_shift.
An fft_shift can be done by a vector rotate of N/2, or by simply flipping alternating sign bits in the FFT result, which may be more CPU dcache friendly.
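A quick numerical check of that equivalence: for an even length N, rotating the input by N/2 is the same as flipping the sign of every other output bin.
import numpy as np

x = np.random.randn(256)                    # even length
a = np.fft.fft(np.fft.ifftshift(x))         # rotate the input by N/2
b = np.fft.fft(x) * (-1.0)**np.arange(256)  # flip alternating signs of the output
print(np.allclose(a, b))                    # True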
The definition for the output of fft (and ifft) is here: http://docs.scipy.org/doc/numpy/reference/routines.fft.html#background-information
This is what the routines compute, no more and no less. Observe that the discrete Fourier transform is rather different from the continuous Fourier transform: for a densely sampled function there is a relation between the two, but that relation also involves phase factors and scaling in addition to fftshift. This is the cause of the oscillations you see in your plot. You can work out the necessary phase factor yourself from the mathematical formula for the DFT linked above.
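As a concrete sketch of those phase factors and scalings, using the question's grid: undoing the grid offset with ifftshift, reordering the output with fftshift, and scaling by the sample spacing reproduces the symbolic result $\sqrt{\pi}\,\sigma\,e^{-(k\sigma)^2/4}$ for $\sigma = 5$.
import numpy as np
from numpy.fft import fft, fftshift, ifftshift, fftfreq

dx = 1.0
X = np.arange(-100, 100)
Y = np.exp(-(X/5.0)**2)

# ifftshift undoes the offset of the sample grid, fftshift reorders the
# output, and dx scales the sum into an approximation of the integral.
Y_k = fftshift(fft(ifftshift(Y))) * dx
k = 2*np.pi*fftshift(fftfreq(len(Y), dx))  # angular frequency axis

analytic = np.sqrt(np.pi)*5.0*np.exp(-(5.0*k)**2/4)
print(np.max(np.abs(Y_k.real - analytic)))  # tiny; imaginary part is ~0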
