Matplotlib - Wrong number of Frequency bins with Specgram - python

From what I understand, in an FFT the number of frequency bins is exactly the number of samples / 2.
However, matplotlib.specgram gives me one bin too much.
fig, [ax1, ax2, ax3] = plt.subplots(nrows=3, ncols=1)
Pxx, freqs, times, im = ax3.specgram(raw_normalized, NFFT=1024, Fs=sampleRate, noverlap=overlap, window=matplotlib.mlab.window_none, scale='linear')
Pxx contains the array of the spectrogram, which should have 512 bins (since NFFT is set to 1024); however, it has 513.
Is there something off with my understanding of FFTs, or is there something wrong/quirky in the matplotlib library?

Just a wild guess, but if you set NFFT to 1024, the FFT has 1024 bins. However, if your input is real-valued, the values are symmetrical: the 0th value is the DC value, and the 512th value is the one with the highest frequency. The 511th and 513th are identical, so the spectrogram might filter out the symmetric values, as it knows the input is real-valued. So you get 513 values (because the 513th to 1023rd values are hidden; counting starts at #0, of course).
The reasoning behind that is that the FFT folds a 'rotating' value on top of your data. It starts rotating slowly: #0 is the DC value, followed by #1, which is one full rotation across the entire data; #2 is two rotations, and so on.
#512 is 512 rotations over your 1024 data points, meaning one full rotation every 2 samples. This is the Nyquist frequency for your data; everything above it is subject to aliasing. Therefore #513 looks identical to #511, just rotating in reverse, and #1023 is identical to #1: a single rotation, but in the opposite direction.
For complex-valued data, folding with a clockwise rotation versus a counterclockwise rotation makes a difference, but for real-valued data the two are the same.
Therefore values #513 to #1023 can be discarded, leaving you with 513 meaningful buckets.
Another detail: technically, the output values of the FFT are always complex, even for real-valued inputs, and contain both magnitude and phase information; your library probably discards the phase and gives you just the magnitude, converting the output back to real values.
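A quick way to confirm the bin count (a minimal sketch): NumPy's real-input FFT keeps exactly the bins #0 through #512 described above.
import numpy as np

NFFT = 1024
spectrum = np.fft.rfft(np.random.randn(NFFT))  # real-input FFT drops the mirrored half
print(len(spectrum))  # 513 == NFFT // 2 + 1, matching the Pxx bin count above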

Related

python Spectrogram by using value in timeseries

I am new to spectrograms and am trying to plot one using relative velocity variation values of ambient seismic noise.
So the format of the data I have is 'time', 'station pair', 'velocity variation value', as below. (If error values are needed, I can add them to the data.)
2013-11-24,05_PK01_05_SS01,0.057039371136200
2013-11-25,05_PK01_05_SS01,-0.003328071661900
2013-11-26,05_PK01_05_SS01,0.137221779659000
2013-11-27,05_PK01_05_SS01,0.068823721831000
2013-11-28,05_PK01_05_SS01,-0.006876687060810
2013-11-29,05_PK01_05_SS01,-0.023895268916200
2013-11-30,05_PK01_05_SS01,-0.105762098404000
2013-12-01,05_PK01_05_SS01,-0.028069540807700
2013-12-02,05_PK01_05_SS01,0.015091601414300
2013-12-03,05_PK01_05_SS01,0.016353885353700
2013-12-04,05_PK01_05_SS01,-0.056654092859700
2013-12-05,05_PK01_05_SS01,-0.044520608528500
2013-12-06,05_PK01_05_SS01,0.020226437197700
...
But when I searched, I could only find people using network, station, location, and channel data, or wav data.
Therefore, I have no idea where to start, because my data format is different.
If you know a way to get a spectrogram from the 'value' column of a time series, please let me know.
P.S. I intend to compute the cross-correlation of the velocity variation values with other environmental data such as air temperature, air pressure, etc.
Edit: (I added two pictures, but a notice popped up saying I cannot post images yet, only links.)
I will use groundwater level or other environmental data, because variations are easier to see in those.
The plot I want to reproduce is from David et al., 2021, shown below.
[image: reference spectrogram from David et al., 2021]
The x axis shows the time series and the y axis shows cycles/day.
So when the light color is located at 1, it indicates a diurnal cycle (at 2, a semidiurnal cycle).
Now I plot the spectrogram with the frequency expressed in cycles per day.
[image: my current spectrogram]
There are two things I need to fix.
First, the reference is normalized on a log scale, so I need to find a way to put my plot on a log scale as well.
Second, in the reference the x axis goes up to about 1*10^7, but my data has only 755 points in the time series (dates in 2013-2015). What do I have to do to map the x axis to the time series?
P.S. Here is the code I wrote:
import pandas as pd
import matplotlib.pyplot as plt
import pylab

fil = pd.read_csv('myfile.csv')
cf = fil.iloc[:, 1]
cf = cf / max(abs(cf))   # normalize to [-1, 1]
nfft = 128               # FFT window length (samples per segment)
fs = 1 / 86400           # sampling frequency in Hz; spectrum covers [0, fs/2]
n = len(cf)
fr = fs / n              # frequency resolution
spec, freq, tt, pplot = pylab.specgram(cf, NFFT=nfft, Fs=fs, detrend=pylab.detrend,
                                       window=pylab.window_hanning, noverlap=100,
                                       mode='psd')
pylab.title('Velocity variation spectrogram')   # placeholder; the original used an undefined e_n
plt.colorbar()
plt.ylabel("Frequency (cycles / %s day)" % str(1 / fs / 86400))
plt.xlabel("days")
plt.show()
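One possible way to address the two fixes asked about above (log color scale, calendar dates on the x axis); this is only a sketch, assuming fil, spec, freq, and tt from the code above, with the dates in the file's first column:
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors

dates = pd.to_datetime(fil.iloc[:, 0])            # dates from the first column
t_days = tt / 86400                               # segment centers: seconds -> days
x = dates.iloc[0] + pd.to_timedelta(t_days, unit='D')
plt.pcolormesh(x, freq * 86400, spec,             # Hz -> cycles/day on the y axis
               norm=mcolors.LogNorm(), shading='auto')
plt.colorbar(label='PSD (log color scale)')
plt.ylabel('Frequency (cycles/day)')
plt.show()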
If you look closely at it, wav data is basically just an array of numbers (sound amplitude), recorded at a certain interval.
Note: You have an array of equally spaced samples, but they are for velocity difference, not amplitude. So while the following is technically valid, I don't think that the resulting frequencies represent seismic sound frequencies?
So the discrete Fourier transform (in the form of np.fft.rfft) would normally be the right thing to use.
If you give the function np.fft.rfft() n numbers, it will return n/2+1 frequencies. This is because of the inherent symmetry in the transform.
However, one thing to keep in mind is the frequency resolution of FFT. For example if you take n=44100 samples from a wav file sampled at Fs=44100 Hz, you get a convenient frequency resolution of Fs/n = 1 Hz. Which means that the first number in the FFT result is 0 Hz, the second number is 1 Hz et cetera.
It seems that the sampling frequency in your dataset is once per day, i.e. Fs = 1/(24x3600) ≈ 0.000012 Hz. Suppose you have n = 10000 samples; then the FFT will return 5001 numbers, with a frequency resolution of Fs/n ≈ 0.0000000012 Hz. That means that the highest frequency you will be able to detect from data sampled at this rate is 0.0000000012 x 5000 ≈ 0.000006 Hz.
So the highest frequency you can detect is approximately Fs/2!
I'm no domain expert, but that value seems to be a bit low for seismic noise?
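To make the numbers concrete, here is a minimal sketch (assuming a NumPy array values holding the 755 daily samples mentioned in the question): working in cycles/day instead of Hz keeps the axis readable, and the Nyquist limit comes out as 0.5 cycles/day.
import numpy as np

values = np.random.randn(755)            # placeholder for the 755 daily samples
fs = 1.0                                 # one sample per day -> frequencies in cycles/day
spectrum = np.fft.rfft(values)           # len(values)//2 + 1 complex bins
freqs = np.fft.rfftfreq(len(values), d=1/fs)
print(len(spectrum), freqs.max())        # 378 bins, max frequency ~0.5 cycles/day (Nyquist)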

Fourier Transform Time Series in Python

I've got a time series of sunspot numbers, where the mean number of sunspots is counted per month, and I'm trying to use a Fourier Transform to convert from the time domain to the frequency domain. The data used is from https://wwwbis.sidc.be/silso/infosnmtot.
The first thing I'm confused about is how to express the sampling frequency as once per month. Do I need to convert it to seconds, e.g. 1/(seconds in 30 days)? Here's what I've got so far:
import numpy as np
import matplotlib.pyplot as plt

fs = 1/2592000
# the sampling frequency is 1/(seconds in a month)
fourier = np.fft.fft(sn_value)
# sn_value is the mean number of sunspots measured each month
freqs = np.fft.fftfreq(sn_value.size, d=fs)
power_spectrum = np.abs(fourier)
plt.plot(freqs, power_spectrum)
plt.xlim(0, max(freqs))
plt.title("Power Spectral Density of the Sunspot Number Time Series")
plt.grid(True)
plt.show()
I don't think this is correct - namely because I don't know what the scale of the x-axis is. However I do know that there should be a peak at (11years)^-1.
The second thing I'm wondering from this graph is why there seem to be two lines, one of them a horizontal line just above y=0. It's clearer when I change the x-axis bounds to plt.xlim(0,1).
Am I using the fourier transform functions incorrectly?
You can use any units you want. Feel free to express your sampling frequency as fs=12 (samples/year), the x-axis will then be 1/year units. Or use fs=1 (sample/month), the units will then be 1/month.
The extra line you spotted comes from the way you plot your data. Look at the output of the np.fft.fftfreq call: the first half of that array contains positive values from 0 to about 1.2e6, and the other half contains negative values from -1.2e6 up to almost 0. By plotting all your data, you get a data line from 0 to the right, then a straight line from the rightmost point to the leftmost point, then the rest of the data line back to zero. Your xlim call makes it so you don't see half of what is plotted.
Typically you’d plot only the first half of your data, just crop the freqs and power_spectrum arrays.
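A minimal sketch of that suggestion (with a synthetic 11-year cycle standing in for the real sunspot series, so it runs stand-alone): note that fftfreq's d argument is the sample spacing, so with fs = 12 samples/year the spacing is 1/12 year.
import numpy as np
import matplotlib.pyplot as plt

months = np.arange(12 * 100)                                  # 100 years of monthly samples
sn_value = 80 + 60 * np.sin(2 * np.pi * months / (12 * 11))   # synthetic 11-year cycle

fs = 12                                            # samples per year
fourier = np.fft.fft(sn_value)
freqs = np.fft.fftfreq(sn_value.size, d=1/fs)      # d = spacing between samples, in years
power_spectrum = np.abs(fourier)

half = sn_value.size // 2                          # keep only the non-negative frequencies
plt.plot(freqs[:half], power_spectrum[:half])
plt.xlabel("Frequency (1/year)")                   # expect a peak near 1/11 ~ 0.09
plt.show()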

Understanding the output of fftfreq function and the fft plot for a single row in an image

I am trying to understand the function fftfreq and the resulting plot generated by adding real and imaginary components for one row in the image. Here is what I did:
import numpy as np
import cv2
import matplotlib.pyplot as plt
image = cv2.imread("images/construction_150_200_background.png", 0)
image_fft = np.fft.fft(image)
real = image_fft.real
imag = image_fft.imag
real_row_bw = image_fft[np.ceil(image.shape[0] / 2).astype(int), 0:image.shape[1]]
imag_row_bw = image_fft[np.ceil(image.shape[0] / 2).astype(int), 0:image.shape[1]]
row_sum = real_row_bw + imag_row_bw
plt.plot(np.fft.fftfreq(image.shape[1]), row_sum)
plt.show()
Here is the image of the plot generated:
I read the image from the disk, calculate the Fourier transform and extract the real and imaginary parts. Then I sum the sine and cosine components and plot using the pyplot library.
Could someone please help me understand the fftfreq function? Also what does the peak represent in the plot for the following image:
I understand that Fourier transform maps the image from spatial domain to the frequency domain but I cannot make much sense from the graph.
Note: I am unable to upload the images directly here, as at the moment of asking the question, I am getting an upload error.
I don't think that you really need fftfreq to look for frequency-domain information in images, but I'll try to explain it anyway.
fftfreq is used to calculate the frequencies that correspond to each bin in an FFT that you calculate. You are using fftfreq to define the x coordinates on your graph.
fftfreq takes two arguments: one mandatory, one optional. The mandatory first argument is an integer, the window length you used to calculate the FFT; you get the same number of frequency bins in the FFT as you had samples in the window. The optional second argument is the sample spacing, i.e. the time between samples (the inverse of the sample rate). If you don't specify it, the default spacing is 1. I don't know whether a sample rate is a meaningful quantity for an image, so I can understand you not specifying it. Maybe you want to give the spacing in pixels? It's up to you.
Your FFT's frequency bins start at the negative Nyquist frequency, which is minus half the sample rate (default -0.5), or a little above it, and end at the positive Nyquist frequency (+0.5), or a little below it.
The fftfreq function returns the frequencies in a funny order, though. The zero frequency is always the zeroth element. The frequencies count up to the maximum positive frequency, then flip to the most negative frequency and count upwards towards zero. The reason for this strange ordering is that if you're doing FFTs with real-valued data (you are; image pixels do not have complex values), the negative-frequency data is exactly equal to the corresponding positive-frequency data and is redundant. This ordering makes it easy to throw the negative frequencies away: just take the first half of the array. Since you aren't doing that, you're plotting the negative frequencies too. If you choose to ignore the second half of the array, the negative frequencies will be removed.
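A tiny demonstration of that ordering (a minimal sketch):
import numpy as np
print(np.fft.fftfreq(8))
# [ 0.     0.125  0.25   0.375 -0.5   -0.375 -0.25  -0.125]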
As for the strong spike that you see at zero frequency, this is probably because your image data consists of pixel values ranging from 0 to 255: there's a huge "DC offset" in your data. It looks like you're using Matplotlib, and if you are plotting in an interactive window, you can use the zoom rectangle to look at that horizontal line. If you push the DC offset off scale, setting the y-axis range to perhaps ±500, I bet you will start to see that the horizontal line isn't exactly horizontal after all.
Once you know which bin contains your DC offset, if you don't want to see it, you can just assign the value of the fft in that bin to zero. Then the graph will scale automatically.
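For instance, a minimal sketch using the names from the question: np.fft.fft on a 2-D image transforms each row, so the DC component of every row sits in column 0.
image_fft[:, 0] = 0   # zero the DC bin of every row before plotting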
By the way, these two lines of code perform identical calculations, so you aren't actually taking the sine and cosine components as your text says:
real_row_bw = image_fft[np.ceil(image.shape[0] / 2).astype(int), 0:image.shape[1]]
imag_row_bw = image_fft[np.ceil(image.shape[0] / 2).astype(int), 0:image.shape[1]]
And one last thing: to sum the sine and cosine components properly (once you have them), since they're at right angles, you need to use a vector sum rather than a scalar sum. Look at the function numpy.linalg.norm.
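A minimal sketch of that vector sum, reusing the real and imag arrays from the question's code (np.abs(image_fft) gives the same result directly):
import numpy as np
magnitude = np.linalg.norm(np.stack([real, imag]), axis=0)  # sqrt(real**2 + imag**2)
# equivalently: magnitude = np.abs(image_fft)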

Numpy Correlate is not providing an offset

I am trying to look at astronomical spectra using Python, and I'm using numpy.correlate to try and find a radial velocity shift. I'm comparing each spectrum I have to one template spectrum. The problem that I'm encountering is that, no matter which spectra I use, numpy.correlate states that the maximal value of the correlation function occurs with a shift of zero pixels, i.e. the spectra already line up, which is very clearly not true. Here is some of the relevant code:
corr = np.correlate(temp_data, imag_data, mode='same')
ax1.plot(delta_data, corr, c='g')
ax1.plot(delta_data, 100*temp_data, c='b')
ax1.plot(delta_data, 100*imag_data, c='r')
The output of this code is shown here:
[image: What I Have]
Note that the cross-correlation function peaks at an offset of zero pixels despite the template (blue) and observed (red) spectra clearly showing an offset. What I would expect to see would be something a bit like (albeit not exactly like; this is merely the closest representation I could produce):
[image: What I Want]
Here I have introduced an artificial offset of 50 pixels in the template data, and they more or less line up now. What I would like is, for a case like this, for a peak to appear at an offset of 50 pixels rather than at zero (I don't care if the spectra at the bottom appear lined up; that is merely for visual representation). However, despite several hours of work and research online, I can't find anyone who even describes this problem, let alone a solution. I've attempted to use SciPy's correlate and Matplotlib's xcorr, and both show the same thing (although I'm led to believe that they are essentially the same function).
Why is the cross-correlation not acting the way I expect, and how do I get it to act in a useful way?
The issue you're experiencing is probably that your spectra are not zero-centered: their RMS value looks to be about 100 in whatever units you're plotting, instead of 0. The reason this matters is that numpy.correlate works by "sliding" imag_data over temp_data and taking their dot product at each possible offset between the two series. (See the Wikipedia article on cross-correlation to understand the operation itself.) When using mode='same' to produce an output the same length as your first input (temp_data), NumPy has to "pad" a bunch of dummy values, zeroes, onto the ends of imag_data in order to calculate the dot products of all the shifted versions.
At any non-zero offset between the spectra, some of the values in temp_data are multiplied by those dummy zero-padding values instead of by values from imag_data. If the spectra were centered around zero (RMS = 0), this zero-padding would not bias the dot product; but because these spectra sit around 100 units, the dot product (your correlation) is largest when the two spectra lie right on top of one another with no offset.
Notice that your cross-correlation result looks like a triangular pulse, which is what you'd expect from the cross-correlation of two square pulses (cf. the convolution of a rectangular pulse with itself). That's because your spectra, once padded, look like a step function from zero up to a pulse of slightly noisy values around 100. You can try correlating with mode='full' to see the entire response of the two spectra, or notice that with mode='valid' you get only one value in return, since your two spectra are exactly the same length, so there is only one offset (zero!) at which you can entirely line them up.
To sidestep this issue, you can either subtract the RMS value from each spectrum so that they are zero-centered, or manually pad the beginning and end of imag_data with (len(temp_data)/2 - 1) dummy values equal to np.sqrt(np.mean(imag_data**2)).
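For example, a minimal sketch of the zero-centering option, reusing temp_data and imag_data from the question (the sign of the recovered shift depends on the argument order and mode convention):
import numpy as np

temp_centered = temp_data - temp_data.mean()   # remove the ~100-unit offset
imag_centered = imag_data - imag_data.mean()
corr = np.correlate(temp_centered, imag_centered, mode='same')
shift = np.argmax(corr) - len(corr) // 2       # best-aligning offset, in pixels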
Edit:
In response to your questions in the comments, I thought I'd include a graphic to make the point I'm trying to describe a little clearer.
Say we have two vectors of values, not entirely unlike your spectra, each with some large non-zero mean.
# Generate two noisy, but correlated series
import numpy as np

t = np.linspace(0, 250, 250)   # time domain: 250 steps from 0 to 250
# signal_model = narrow_peak + gaussian_noise + constant
f = 10 * np.exp(-((t - 90) ** 2) / 8) + np.random.randn(250) + 40
g = 10 * np.exp(-((t - 180) ** 2) / 8) + np.random.randn(250) + 40
f has a spike around t=90, and g has a spike around t=180. So we expect the correlation of g and f to have a spike around a lag of 90 timesteps (in the case of spectra, frequency bins instead of timesteps.)
But in order to get an output that is the same shape as our inputs, as in np.correlate(g,f,mode='same'), we have to "pad" f on either side with half its length in dummy values: np.correlate pads with zeroes. If we don't pad f (as in np.correlate(g,f,mode='valid')), we will only get one value in return (the correlation with zero offset), because f and g are the same length, and there is no room to shift one of the signals relative to the other.
When you calculate the correlation of g and f after that padding, you find that it peaks when the non-zero portions of the signals align completely, that is, when there is no offset between the original f and g. This is because the RMS value of the signals is so much higher than zero: the size of the overlap of f and g depends much more strongly on the number of elements overlapping at this high RMS level than on the relatively small fluctuations each function has around it. We can remove this large contribution to the correlation by subtracting the RMS level from each series. In the graph below, the gray line on the right shows the cross-correlation of the two series before zero-centering, and the teal line shows it after. The gray line is, like your first attempt, triangular with the overlap of the two non-zero signals. The teal line better reflects the correlation between the fluctuations of the two signals, as desired.
import numpy as np
import matplotlib.pyplot as plt

xcorr = np.correlate(g, f, 'same')
xcorr_rms = np.correlate(g - 40, f - 40, 'same')
fig, axes = plt.subplots(5, 2, figsize=(18, 18), gridspec_kw={'width_ratios': [5, 2]})
for n, axis in enumerate(axes):
    offset = (0, 75, 125, 215, 250)[n]
    fp = np.pad(f, [offset, 250 - offset], mode='constant', constant_values=0.)
    gp = np.pad(g, [125, 125], mode='constant', constant_values=0.)
    axis[0].plot(fp, color='purple', lw=1.65)
    axis[0].plot(gp, color='orange', lw=1.65)
    axis[0].axvspan(max(125, offset), min(375, offset + 250), color='blue', alpha=0.06)
    axis[0].axvspan(0, max(125, offset), color='brown', alpha=0.03)
    axis[0].axvspan(min(375, offset + 250), 500, color='brown', alpha=0.03)
    if n == 0:
        axis[0].legend(['f', 'g'])
    axis[0].set_title('offset={}'.format(offset - 125))
    axis[1].plot(xcorr / (40 * 40), color='gray')
    axis[1].plot(xcorr_rms, color='teal')
    axis[1].axvline(offset, color='maroon', lw=5, alpha=0.5)
    if n == 0:
        axis[1].legend([r"$g \star f$", r"$g' \star f'$", "offset"], loc='upper left')
plt.show()

In a dataset with multiple peaks, how do I find FWHM of each peak (Python)?

Here's my denoised data:
I've calculated my peaks, but if I define the peak width as full width at half maximum (FWHM) (while assuming zero is defined as the smallest point in the data between ~25 and ~375 on the x axis), is there a NumPy/SciPy way to calculate their widths? I'm okay with coding my own function, but I'm not sure how to begin an implementation since there are multiple peaks in my data.
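One library route worth trying (a sketch on synthetic data, since the original data isn't shown): scipy.signal.peak_widths measures each detected peak's width at a fraction of its height, and with a flat zero baseline rel_height=0.5 corresponds to FWHM.
import numpy as np
from scipy.signal import find_peaks, peak_widths

# Synthetic stand-in for the denoised data: two Gaussian peaks on a flat baseline.
x = np.arange(400)
y = np.exp(-((x - 120) ** 2) / 200) + 0.8 * np.exp(-((x - 280) ** 2) / 450)

peaks, _ = find_peaks(y, height=0.5)                       # indices of the peaks
widths, heights, left_ips, right_ips = peak_widths(y, peaks, rel_height=0.5)
print(widths)                                              # FWHM of each peak, in samples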
