I want to generate a random time series with a prescribed spectral shape. To do this I will draw random complex Fourier coefficients from the appropriate spectral distribution and then transform from the frequency domain to the time domain.
To generate a real time series, the Fourier spectrum must have real DC and Nyquist coefficients, and its negative-frequency bins must be the complex conjugates of the positive-frequency bins.
When I do this, I get different behavior from numpy's ifft versus its irfft.
As an example, here's a 32 sample white spectrum:
import numpy as np
Nsamp = 2**5
Nfreq = (Nsamp-1)//2 # num pos freq bins not including DC or Nyquist
DC = 0.
f_pos = np.random.randn(Nfreq) + 1j*np.random.randn(Nfreq)
Nyquist = np.random.randn() # this is real
f_neg = f_pos[::-1] # mirror pos freqs
f_tot = np.hstack((DC, f_pos, Nyquist, f_neg))
f_rep = np.hstack((DC, f_pos, Nyquist))
t1 = np.fft.ifft(f_tot)
t2 = np.fft.irfft(f_rep)
print(t1)
print(t2)
I would expect t1 to be real, and t1 and t2 to agree (to within machine precision). Neither is true.
Am I using ifft correctly? Looking at the frequencies output by np.fft.fftfreq(Nsamp) makes me think I'm building f_tot in the right order for input.
irfft gives the correct result, so I'll use that... but I'd like to know how to use ifft for the future.
from the numpy.fft docs:
A[0] contains the zero-frequency term (the sum of the signal), which is always purely real for real inputs. Then A[1:n/2] contains the positive-frequency terms, and A[n/2+1:] contains the negative-frequency terms, in order of decreasingly negative frequency. For an even number of input points, A[n/2] represents both positive and negative Nyquist frequency, and is also purely real for real input.
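For completeness, here is one way to build the full spectrum so that ifft returns a (numerically) real result matching irfft. This is a sketch reusing the variables from the question; the key detail, in addition to the ordering quoted above, is that the negative-frequency bins must be the complex conjugates of the positive-frequency bins:

import numpy as np

Nsamp = 2**5
Nfreq = (Nsamp - 1) // 2                # positive bins, excluding DC and Nyquist

DC = 0.0
f_pos = np.random.randn(Nfreq) + 1j * np.random.randn(Nfreq)
Nyquist = np.random.randn()             # real

# Negative frequencies: conjugates of the positive bins, reversed
# ("in order of decreasingly negative frequency", as the docs put it).
f_neg = np.conj(f_pos[::-1])

f_tot = np.hstack((DC, f_pos, Nyquist, f_neg))
f_rep = np.hstack((DC, f_pos, Nyquist))

t1 = np.fft.ifft(f_tot)
t2 = np.fft.irfft(f_rep)

print(np.max(np.abs(t1.imag)))   # ~1e-16: t1 is real to machine precision
print(np.allclose(t1.real, t2))  # True: ifft and irfft now agree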
I tried to create a spectrogram of magnitudes using scipy.signal.spectrogram.
Unfortunately I didn't get it working.
My test signal is a sine with a frequency of 400 Hz and an amplitude of 1. The resulting magnitude in the spectrogram seems to be 0.5 instead of 1.0. I have no idea what the problem could be.
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
# 2s time range with 44kHz
t = np.arange(0, 2, 1/44000)
# test signal: sine with 400 Hz, amplitude 1
x = np.sin(t*2*np.pi*400)
# spectogram for spectrum of magnitudes
f, t, Sxx = signal.spectrogram(x,
                               44000,
                               "hann",
                               nperseg=1000,
                               noverlap=0,
                               scaling="spectrum",
                               return_onesided=True,
                               mode="magnitude")
# plot the spectrum of the last time segment
plt.plot(f, Sxx[:, -1])
print("highest magnitude is: %f" % np.max(Sxx))
plt.show()
A strictly real time-domain signal is conjugate symmetric in the frequency domain, i.e. each frequency component will appear in both the positive- and negative-frequency halves of a complex FFT result.
Thus you need to add the two "halves" of an FFT result together to get the total energy (Parseval's theorem), or just double one side, since complex conjugates have equal magnitudes.
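To illustrate the doubling, here is a small sketch using a plain np.fft.rfft instead of scipy's spectrogram, with an assumed 1 s of the 400 Hz test tone: the one-sided magnitude of a unit-amplitude sine comes out at 0.5, and doubling every bin except DC and Nyquist folds the negative-frequency energy back in:

import numpy as np

fs = 44000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 400 * t)     # unit-amplitude sine at 400 Hz

# One-sided spectrum, normalized by the number of samples
mag = np.abs(np.fft.rfft(x)) / len(x)
print(mag.max())                    # ~0.5: half the energy sits in the dropped negative bin

# Double everything except DC and Nyquist to fold that energy back in
mag[1:-1] *= 2
print(mag.max())                    # ~1.0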
I am looking for a way to obtain the frequency of a signal. Here's an example:
import numpy
signal = [numpy.sin(numpy.pi * x / 2) for x in range(1000)]
This array represents the samples of a recorded sound (x = milliseconds).
sin(pi*x/2) => 250 Hz
How can we go from the signal (a list of points) to obtaining the frequencies from this array?
Note:
I have read many Stack Overflow threads and watched many YouTube videos. I am yet to find an answer. Please use simple words.
(I am thankful for every answer.)
What you're looking for is known as the Fourier Transform
A bit of background
Let's start with the formal definition:
The Fourier transform (FT) decomposes a function (often a function of time, or a signal) into its constituent frequencies
This is in essence a mathematical operation that, when applied to a signal, gives you an idea of how present each frequency is in the time series. In order to get some intuition behind this, it might be helpful to look at the mathematical definition of the DFT:

X(k) = Σ_{n=0..N-1} x(n) · e^(-j·2π·k·n/N)

where k is swept from 0 all the way up to N-1 to calculate all the DFT coefficients.
The first thing to notice is that this definition resembles the correlation of two functions, in this case x(n) and the negative exponential. While this may seem a bit abstract, by using Euler's formula and playing around with the definition, the DFT can be expressed as the correlation with both a sine wave and a cosine wave, which account for the imaginary and the real parts of the DFT, respectively.
So, keeping in mind that this is in essence computing a correlation, whenever a sine or cosine from the decomposition of the complex exponential matches a component of x(n), there will be a peak in X(k), meaning that that frequency is present in the signal.
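As a small illustration of this correlation view (this is just the definition written out, not how np.fft.fft computes it internally; the 64-sample cosine and the bin k = 5 are arbitrary choices):

import numpy as np

N = 64
n = np.arange(N)
x = np.cos(2 * np.pi * 5 * n / N)   # test signal: 5 cycles over N samples

k = 5
# Correlation with a cosine gives the real part,
# correlation with a sine gives (minus) the imaginary part.
real_part = np.sum(x * np.cos(2 * np.pi * k * n / N))
imag_part = -np.sum(x * np.sin(2 * np.pi * k * n / N))

print(real_part, imag_part)         # ~32, ~0
print(np.fft.fft(x)[k])             # matches (32+0j) up to floating point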
How can we do the same with numpy?
So, having given a very brief theoretical background, let's see how this can be implemented in Python. Consider the following signal:
import numpy as np
import matplotlib.pyplot as plt
Fs = 150.0                 # sampling rate
Ts = 1.0 / Fs              # sampling interval
t = np.arange(0, 1, Ts)    # time vector
ff = 50                    # frequency of the signal
y = np.sin(2*np.pi*ff*t)
plt.plot(t, y)
plt.xlabel('Time')
plt.ylabel('Amplitude')
plt.show()
Now the DFT can be computed by using np.fft.fft, which, as mentioned, tells you the contribution of each frequency to the signal, now in the transformed domain:
n = len(y) # length of the signal
k = np.arange(n)
T = n/Fs
frq = k/T # two sides frequency range
frq = frq[:len(frq)//2] # one side frequency range
Y = np.fft.fft(y)/n # dft and normalization
Y = Y[:n//2]
Now, if we plot the actual spectrum, you will see that we get a peak at the frequency of 50 Hz, which in mathematical terms is a delta function centred at the fundamental frequency of 50 Hz. This can be checked against a table of Fourier transform pairs.
So for the above signal, we would get:
plt.plot(frq,abs(Y)) # plotting the spectrum
plt.xlabel('Freq (Hz)')
plt.ylabel('|Y(freq)|')
plt.show()
Doing a simple FFT run to learn the operation, I create a NumPy array with 100 elements containing a sine wave with a single period across the array. This code is used:
...
n = 100
x = np.fromfunction(lambda a: np.sin(2 * np.pi * a / n), (n,), dtype=float)
res = np.fft.fft(x)
...
The result in res shows a non-zero amplitude at 2 different index values:
idx        real        imag         abs
---  ----------  ----------  ----------
...
 1:           0     -50.000      50.000
...
99:           0      50.000      50.000
I had only expected to see a single non-zero amplitude at index 1.
Why is amplitude non-zero for both index 1 and 99, and how can I understand this mathematically?
ADDITION: Maybe the high frequency actually represents an aliased frequency, where the sample rate is below the Nyquist rate.
The np.fft.fft() function returns the two-sided DFT spectrum.
What you are seeing are the peaks for frequencies w and -w, where w is the frequency of the sine wave.
You can check this yourself by running np.fft.fftfreq and plotting the results:
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2)
y = np.sin(2*np.pi*x)
Y = np.fft.fft(y)
freqs = np.fft.fftfreq(len(x), d=x[1]-x[0])
# Plot the results
fig, (ax1, ax2) = plt.subplots(2, 1)
ax1.plot(x, y)
ax2.plot(freqs, np.abs(Y))
The Fourier transform is defined as

X[k] = Σ_{n=0..N-1} x[n] · e^(-i·2π·k·n/N)

where the X[k] are complex numbers. Because your x[n] are real numbers, you get X[N-m] = X[m]*. In your case, N=100 and m=1, therefore X[1] = X[99]*.
The link below explains everything,
Why is the FFT “mirrored”?
When dealing with real numbers, numpy provides another function numpy.fft.rfft
When the DFT is computed for purely real input, the output is Hermitian-symmetric, i.e. the negative frequency terms are just the complex conjugates of the corresponding positive-frequency terms, and the negative-frequency terms are therefore redundant. This function does not compute the negative frequency terms, and the length of the transformed axis of the output is therefore n//2 + 1.
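A short sketch of what that means in practice (an arbitrary real signal of even length n = 8 is assumed): the rfft output has n//2 + 1 bins and is identical to the non-negative-frequency half of the full fft, whose negative-frequency half is redundant:

import numpy as np

n = 8
x = np.random.randn(n)                           # real input

X_full = np.fft.fft(x)
X_half = np.fft.rfft(x)

print(len(X_half))                               # n//2 + 1 = 5
print(np.allclose(X_half, X_full[:n // 2 + 1]))  # True: same as the positive half
print(np.allclose(X_full[n // 2 + 1:],
                  np.conj(X_full[1:n // 2][::-1])))  # True: negative half is redundant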
A standard full DFT or FFT is an NxN complex-to-complex linear basis transform that returns its result as an N-element vector of complex values, each consisting of a real and an imaginary component. A complex result is required to represent both the magnitude and phase of each frequency component (and thus not be lossy). The arctangent of the imaginary over the real component (atan2) gives the phase of each frequency component.
If you feed an FFT a strictly real input (with no non-zero imaginary components), then you want the FFT result to represent a strictly real signal. How is this possible when the FFT returns a complex result with non-zero imaginary components (required if the phase is non-zero)? By returning two result elements for each frequency component, equal in magnitude but opposite in their imaginary parts, so the imaginary parts cancel out. You still need the imaginary component of each result element to measure the phase, but over the entire FFT result the imaginary components of the two conjugate values sum to zero, thus representing a strictly real input signal.
Thus a full FFT has to be complex-conjugate mirror symmetric when given strictly real input.
Thus you see (at least) two equal-magnitude values in an FFT result for each frequency component in the input. This is not true when feeding an FFT complex input with non-zero imaginary components, as is common in many physics equations and signal processing algorithms.
Added: Why does an FFT have to return a complex result instead of just a magnitude and a phase angle? FFT stands for Fast Fourier Transform. One of the things that makes an FFT fast is that it is a linear transform that can be computed with just multiplies and adds (plus a bit of clever data shuffling along the way). The real and imaginary components can be computed with just linear arithmetic, whereas computing the phase requires an arctangent (or atan2()), which is a much slower, non-linear transcendental operation.
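A small sketch of both points, the conjugate-symmetric mirroring for real input and the phase recovered with an arctangent (np.angle wraps atan2); the 64-sample cosine with a 0.7 rad phase is an arbitrary example:

import numpy as np

N = 64
n = np.arange(N)
x = np.cos(2 * np.pi * 4 * n / N + 0.7)       # real signal: 4 cycles, phase 0.7 rad

X = np.fft.fft(x)

# Mirror symmetry: X[N-k] is the complex conjugate of X[k]
print(np.allclose(X[N - 4], np.conj(X[4])))   # True

# Phase at the signal bin, recovered via the arctangent
print(np.angle(X[4]))                         # ~0.7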
I am trying to perform Fourier transform using numpy's fft as follows:
import numpy as np
import matplotlib.pyplot as plt
t = np.linspace(0,1, 128)
x = np.cos(2*np.pi*t)
s_fft = np.fft.fft(x)
s_fft_freq = np.fft.fftshift(np.fft.fftfreq(t.shape[-1], t[1]-t[0]))
plt.plot(s_fft_freq, np.abs(s_fft))
The result I get is wrong: the peak is not at f = 1, even though I know the FT should peak there, since the frequency of the cos is 1.
What am I doing wrong?
You are only applying fftshift to the x-axis labels (the frequencies), not to the actual FFT magnitudes; you just need to apply s_fft = np.fft.fftshift(np.fft.fft(x)) too.
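A sketch of the corrected snippet, with the shift applied to both the spectrum and the frequency axis so they stay aligned:

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 1, 128)
x = np.cos(2 * np.pi * t)

s_fft = np.fft.fftshift(np.fft.fft(x))
s_fft_freq = np.fft.fftshift(np.fft.fftfreq(t.shape[-1], t[1] - t[0]))

plt.plot(s_fft_freq, np.abs(s_fft))
plt.show()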
There are 2 or 3 things you have gotten wrong:
The FFT will peak at two positions for a purely real-valued sinusoid. These are the positive and negative frequencies. The only way to get a single peak in the Fourier domain is with a complex-valued signal (or the trivial DC component).
(If by f you mean the frequency index:) When using the DFT, the number of samples determines how many frequency components you have. At the highest frequency index, you are always close to the per-sample oscillation (-1)^t.
(If by f you mean the amplitude:) There are many definitions of the DFT, affecting both the forward and backward transforms. This affects how the values are interpreted when reading the spectrum; see the sketch below.
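For that last point, a quick sketch of how NumPy's norm argument (available in recent NumPy versions) rescales the same peak; the 128-sample unit cosine is an arbitrary example:

import numpy as np

N = 128
n = np.arange(N)
x = np.cos(2 * np.pi * 4 * n / N)   # unit-amplitude cosine, 4 cycles

for norm in ("backward", "ortho", "forward"):   # these strings need a recent NumPy
    print(norm, np.abs(np.fft.fft(x, norm=norm)).max())
# backward: N/2 = 64, ortho: 64/sqrt(128) ~ 5.66, forward: 0.5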
I am trying to use a fast Fourier transform to extract the phase shift of a single sinusoidal function. I know that on paper, if we denote the transform of our function as T, then the phase of a signal A·cos(ωt + φ) can be read off from the transform at the signal frequency, φ = arctan2(Im T(ω), Re T(ω)).
However, I am finding that while I am able to accurately capture the frequency of my cosine wave, the phase is inaccurate unless I sample at an extremely high rate. For example:
import numpy as np
import pylab as pl
num_t = 100000
t = np.linspace(0,1,num_t)
dt = 1.0/num_t
w = 2.0*np.pi*30.0
phase = np.pi/2.0
amp = np.fft.rfft(np.cos(w*t+phase))
freqs = np.fft.rfftfreq(t.shape[-1],dt)
print(np.arctan2(amp.imag, amp.real)[30])
pl.subplot(211)
pl.plot(freqs[:60],np.sqrt(amp.real**2+amp.imag**2)[:60])
pl.subplot(212)
pl.plot(freqs[:60],(np.arctan2(amp.imag,amp.real))[:60])
pl.show()
Using num_t=100000 points I get a phase of 1.57173880459.
Using num_t=10000 points I get a phase of 1.58022110476.
Using num_t=1000 points I get a phase of 1.6650441064.
What's going wrong? Even with 1000 points I have 33 points per cycle, which should be enough to resolve it. Is there maybe a way to increase the number of computed frequency points? Is there any way to do this with a "low" number of points?
EDIT: from further experimentation it seems that I need ~1000 points per cycle in order to accurately extract a phase. Why?!
EDIT 2: further experiments indicate that accuracy is related to number of points per cycle, rather than absolute numbers. Increasing the number of sampled points per cycle makes phase more accurate, but if both signal frequency and number of sampled points are increased by the same factor, the accuracy stays the same.
Your points are not distributed equally over the interval: the end point is effectively doubled, since 0 represents the same phase as 1. This becomes less important the more points you take, obviously, but it still introduces some error. You can avoid it entirely: linspace has a flag for this, and another flag to return dt directly along with the array.
Do
t, dt = np.linspace(0, 1, num_t, endpoint=False, retstep=True)
instead of
t = np.linspace(0,1,num_t)
dt = 1.0/num_t
then it works :)
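A quick check (a sketch with the question's parameters but only 1000 points) that the phase now comes out at pi/2:

import numpy as np

num_t = 1000
t, dt = np.linspace(0, 1, num_t, endpoint=False, retstep=True)

w = 2.0 * np.pi * 30.0
phase = np.pi / 2.0

amp = np.fft.rfft(np.cos(w * t + phase))
print(np.arctan2(amp.imag, amp.real)[30])   # ~1.5707963 = pi/2, even with 1000 points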
The phase value in a result bin of an unrotated FFT is only correct if the input signal is exactly integer-periodic within the FFT length. Your test signal is not, so the FFT partly measures the phase of the discontinuity between the end points of the test sinusoid. A higher sample rate creates a slightly different last end point of the sinusoid, and thus a possibly smaller discontinuity.
If you want to decrease this FFT phase measurement error, create your test signal so that the test phase is referenced to the exact centre (sample N/2) of the test vector (not the 1st sample), and then do an fftshift operation (rotate by N/2) so that there is no signal discontinuity between the 1st and last points of your resulting FFT input vector of length N.
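A sketch of that idea; the 30.3 Hz frequency is deliberately chosen to be non-integer-periodic over the record, and the exact parameters are just for illustration:

import numpy as np

N = 1000
t = np.arange(N) / N                  # 1 s of data
f0 = 30.3                             # deliberately NOT an integer number of cycles
phase = np.pi / 2.0

# Reference the phase to the centre sample (t = 0.5), not to sample 0
x = np.cos(2 * np.pi * f0 * (t - 0.5) + phase)

# Rotate by N/2 so the centre sample becomes sample 0 before the FFT
X = np.fft.rfft(np.fft.fftshift(x))

k = int(round(f0))                    # nearest bin to the signal frequency
print(np.angle(X[k]))                 # close to pi/2 despite the non-integer frequency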
This snippet of code might help:
import numpy as np
from numpy.fft import rfft, irfft

THRESHOLD = 0.01   # keep only modes above 1% of the largest amplitude

def reconstruct_ifft(data):
    """
    Take in a signal, compute its FFT, retain the dominant modes and
    reconstruct the signal from them.

    Parameters
    ----------
    data : signal to do the fft, ifft on

    Returns
    -------
    reconstructed_signal : the reconstructed signal
    """
    N = data.size
    yf = rfft(data)
    amp_yf = np.abs(yf)                                  # amplitude
    yf = yf * (amp_yf > (THRESHOLD * np.amax(amp_yf)))   # zero out the weak modes
    reconstructed_signal = irfft(yf, n=N)
    return reconstructed_signal
The 0.01 is the threshold on the FFT amplitudes that you want to retain. Making THRESHOLD greater (more than 1 does not make any sense) will give fewer modes and cause a higher RMS error, but ensures higher frequency selectivity.
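For completeness, a hypothetical usage sketch; the two-tone test signal and noise level are made-up illustration values, and the function and THRESHOLD defined above are assumed to be in scope:

import numpy as np

t = np.linspace(0, 1, 512, endpoint=False)
data = (np.sin(2 * np.pi * 5 * t)            # strong 5 Hz tone
        + 0.3 * np.sin(2 * np.pi * 40 * t)   # weaker 40 Hz tone
        + 0.02 * np.random.randn(t.size))    # low-level noise

clean = reconstruct_ifft(data)
print(np.max(np.abs(clean - data)))   # small: both tones survive, only the weak noise modes are dropped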