The original data is on Google Drive. It is two-column data, t and x. I did the following discrete FFT. I don't quite understand why the main peak (the sharp one) has a lower height than the side one. The second subplot shows that the sharp peak (closest to 2.0) is indeed the main frequency. The code and the figure are as follows:
import numpy as np
import math
import matplotlib.pyplot as plt
from scipy.fftpack import fft,fftfreq
freqUnit=0.012/(2*np.pi)
data = np.loadtxt(fname='data.txt')
t = data[:,0]
x = data[:,1]
n=len(t)
d=t[1]-t[0]
fig=plt.figure()
ax1=fig.add_subplot(3,1,1)
ax2=fig.add_subplot(3,1,2)
ax3=fig.add_subplot(3,1,3)
y = abs(fft(x))
freqs = fftfreq(n, d)/freqUnit
ax1.plot(t, x)
ax2.plot(t, x)
ax2.set_xlim(40000,60000)
ax2.set_ylim(0.995,1.005)
ax3.plot(freqs,y,'-.')
ax3.set_xlim(0,4)
ax3.set_ylim(0,1000)
plt.show()
You need to apply a window function prior to your FFT, otherwise you will see artefacts such as the above, particularly where a peak in the spectrum does not correspond directly with an FFT bin centre.
See "Why do I need to apply a window function to samples when building a power spectrum of an audio signal?" question for further details.
My problem is allegedly simple: I have scatter data in X and Y, and want to get a nice, well-fitting trendline with a known equation so that I can go on to convert LDR voltages into power readings. However, I'm having trouble generating a trendline in Matplotlib or SciPy that fits well, which I believe is because there's a logarithmic relationship.
I'm using Spyder and Matplotlib, and first tried plotting the X (Thorlabs) and Y (LDR) data as a log-log scatter plot. Because the data didn't seem to show a linear relationship after doing this, I then used NumPy's Polynomial.fit with degree 5 to 6. This looked good, but when inverting the axes so I could get something of the form [LDR] = f[Thorlabs], I noticed the fit was suddenly not very good at all at the extremes of my data.
Based on this question, curve_fit seems to be the way to go, but when I tried using curve_fit as described here and adjusted it to increase the maximum number of curve-fit iterations, I stumbled on the error "TypeError: can't multiply sequence by non-int of type 'numpy.float64'", which I assume is because my data contains decimal points. I'm not sure how to account for this.
I have several mini-questions, then:
Am I misunderstanding the above examples?
Is there a better way I could go about trying to find the ideal trendline for this data? Is it possible that it's some sort of logarithmic relationship on top of a log-log plot?
Once I get a trendline, how can I make sure it fits well and can be displayed?
#import libraries
import matplotlib.pyplot as plt
import csv
import numpy as np
from numpy.polynomial import Polynomial
import scipy.optimize as opt
#initialise arrays - I create log arrays too so I can plot directly
deg = 6 #degree of polynomial fitting for Polynomial.fit()
thorlabs = []
logthorlabs = []
ldr = []
logldr = []
#read in LDR/Thorlabs datasets from file
with open('16ldr561nm.txt','r') as csvfile:
    plots = csv.reader(csvfile, delimiter='\t')
    for row in plots:
        thorlabs.append(float(row[0]))
        ldr.append(float(row[1]))
        logthorlabs.append(np.log(float(row[0])))
        logldr.append(np.log(float(row[1])))
#This seems to work just fine, I now have arrays containing data in float
#fit and plot log polynomials
p = Polynomial.fit(logthorlabs, logldr, deg)
plt.plot(*p.linspace()) #plot lines
#plot scatter graphs on log-log axis - either using log arrays or on loglog plot
#plt.loglog()
plt.scatter(logthorlabs, logldr, label='16bit ADC LDR1')
plt.xlabel('log Thorlabs laser power (microW)')
plt.ylabel('log LDR voltage (mV)')
plt.title('LDR voltage against laser power at 561nm')
plt.legend()
plt.show()
#attempt at using curve_fit - when using, comment out the above block
"""
# This is the function we are trying to fit to the data.
def func(x, a, b, c):
    return a * np.exp(-b * x) + c
#freaks out here as I get a type error which I am not sure how to account for
# Plot the actual data
plt.plot(thorlabs, ldr, ".", label="Data");
#Adjusted maxfev to 5000. I know you can make "guesses" here but I am not sure how to do so
# The actual curve fitting happens here
optimizedParameters, pcov = opt.curve_fit(func, thorlabs, ldr, maxfev=5000);
# Use the optimized parameters to plot the best fit
plt.plot(thorlabs, func(ldr, *optimizedParameters), label="fit");
# Show the graph
plt.legend();
plt.show();
"""
When using curve_fit, I get a "TypeError: can't multiply sequence by non-int of type 'numpy.float64'".
As I don't have enough reputation to post images, my raw dataset can be found here. (Otherwise I'd include the graphs!)
(Note that I actually have two datasets, but as I only want to know the principle for calculating a trendline for one, I've left out the other dataset above.)
Refactoring your code a bit, most importantly to use native NumPy arrays once things have been parsed out of the file, makes it not crash, but the CurveFit line still doesn't look good at all.
The code prints out the parameters fitted by curve_fit, which don't look very good either, along with a warning: "Covariance of the parameters could not be estimated". I'm no mathematician/statistician, so I don't know what to do about that.
from numpy.polynomial import Polynomial
import csv
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
def read_dataset(filename):
    x = []
    y = []
    with open(filename, "r") as csvfile:
        plots = csv.reader(csvfile, delimiter="\t")
        for row in plots:
            x.append(float(row[0]))
            y.append(float(row[1]))
    # cast to native numpy arrays
    x = np.array(x)
    y = np.array(y)
    return (x, y)
ldr, thorlabs = read_dataset("16ldr561nm.txt")
plt.scatter(thorlabs, ldr, label="Data")
plt.xlabel("Thorlabs laser power (microW)")
plt.ylabel("LDR voltage (mV)")
plt.title("LDR voltage against laser power at 561nm")
# Generate and plot polynomial
p = Polynomial.fit(thorlabs, ldr, 6)
plt.plot(*p.linspace(), label="Polynomial")
# Generate and plot curvefit
def func(x, a, b, c):
    return a * np.exp(-b * x) + c
optimizedParameters, pcov = opt.curve_fit(func, thorlabs, ldr)
print(optimizedParameters, pcov)
plt.plot(thorlabs, func(ldr, *optimizedParameters), label="CurveFit")
# Show everything
plt.legend()
plt.show()
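One thing that often helps curve_fit converge (and addresses the "guesses" comment in the question code) is an explicit starting estimate via p0. A minimal sketch against the refactored code above; the guess values are placeholders you would tune for your data:
initial_guess = (1.0, 1e-3, 0.0)   # hypothetical starting values for a, b, c
optimizedParameters, pcov = opt.curve_fit(func, thorlabs, ldr,
                                          p0=initial_guess, maxfev=5000)
# Evaluate the fitted function on the x data when plotting the fit.
plt.plot(thorlabs, func(thorlabs, *optimizedParameters), label="CurveFit with p0")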
If you really need to log() the data, it's easily done with
x = np.log(x)
y = np.log(y)
which will keep the arrays as NumPy arrays and be plenty faster than doing it "by hand".
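If the underlying relationship really is a power law (a straight line on a log-log plot), a degree-1 fit in log space is often a more robust alternative to a degree-6 polynomial or an exponential model. A minimal sketch, assuming the thorlabs and ldr arrays from the code above and strictly positive values:
logx = np.log(thorlabs)
logy = np.log(ldr)

# Fit log(y) = slope * log(x) + intercept, i.e. y = exp(intercept) * x**slope
slope, intercept = np.polyfit(logx, logy, 1)

plt.scatter(logx, logy, label="log-log data")
plt.plot(logx, slope * logx + intercept, label="linear fit in log space")
plt.legend()
plt.show()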
I need to detect singular points (extrema, trend changes, sharp changes) on a curve plotted from a dataset. The first thing that comes to mind is inflexion-point detection via derivatives (but I don't have the mathematical expression of the plotted curve); the second is how to detect angular points. So, if possible, can I build (using Python) a sliding window which detects these kinds of SPs (singular points)? If so, what libraries and functions should I use?
Thanks
Singular point detection
I just scraped some of your data to show you that you can find points on the whole dataset, without using a sliding window (but you could, in theory):
Local extrema (find peaks in raw data)
Max Steepness (find peaks in 1st derivative)
Inflexion points (find peaks in 2nd derivative)
First, let's have a look at calculating the derivatives:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv("Default Dataset.csv",
                 sep=';',
                 decimal=",",
                 header=None)
### Interpolate linearily ###
x_new = np.linspace(0, df[0].iloc[-1], 2000)
y_new = np.interp(x_new, df[0], df[1])
### First and second derivative ###
diff1 = np.insert(np.diff(y_new), 0, 0)
diff2 = np.insert(np.diff(diff1), 0, 0)
### Plot everything ###
plt.figure(figsize=(12,3))
plt.subplot(131)
plt.plot(x_new, y_new)
plt.subplot(132)
plt.plot(x_new, diff1)
plt.subplot(133)
plt.plot(x_new, diff2)
plt.tight_layout()
Here, I also interpolate to have an equal spacing between the datapoints.
Further, I insert a 0 at position 0 using the np.insert function after the differentiation, to ensure the same shape as the raw data.
Next, we will find the peaks:
import peakutils as pu
ix_abs = pu.indexes(y_new, thres=0.5, min_dist=15)
ix_diff1 = pu.indexes(diff1, thres=0.5, min_dist=15)
ix_diff2 = pu.indexes(diff2, thres=0.5, min_dist=15)
plt.scatter(x_new[ix_abs], y_new[ix_abs], color='g', label='abs')
plt.scatter(x_new[ix_diff1], y_new[ix_diff1], color='r', label='first deriv')
plt.scatter(x_new[ix_diff2], y_new[ix_diff2], color='purple', label='second deriv')
plt.plot(x_new, y_new)
plt.legend(loc='best')
I am using the peakutils package because it works nicely in almost all cases. You can see that not all of the points indicated in your example were found; you can play around with different values for the threshold and minimum distance to find a better solution. But this should be a good starting point for further research. Indeed, the minimum-distance parameter would give you the desired sliding window.
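If you would rather stay within SciPy, scipy.signal.find_peaks offers similar controls; a minimal sketch reusing x_new, y_new, diff1 and diff2 from above (the distance and prominence values are only starting points to tune for your data):
from scipy.signal import find_peaks

# 'distance' plays the same role as peakutils' min_dist (minimum spacing between
# detected peaks); 'prominence' roughly replaces the relative threshold.
ix_abs, _ = find_peaks(y_new, distance=15, prominence=0.1)
ix_diff1, _ = find_peaks(diff1, distance=15, prominence=0.1)
ix_diff2, _ = find_peaks(diff2, distance=15, prominence=0.1)

plt.plot(x_new, y_new)
plt.scatter(x_new[ix_abs], y_new[ix_abs], color='g', label='abs')
plt.scatter(x_new[ix_diff1], y_new[ix_diff1], color='r', label='first deriv')
plt.scatter(x_new[ix_diff2], y_new[ix_diff2], color='purple', label='second deriv')
plt.legend(loc='best')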
I'm trying to make sense of the output produced by the python FFT library.
I have a sqlite database where I have logged several series of ADC values. Each series consist of 1024 samples taken with a frequency of 1 ms.
After importing a data series, I normalize it and run it through the FFT method. I've included a few plots of the original signal compared to the FFT output.
import sqlite3
import struct
import numpy as np
from matplotlib import pyplot as plt
import time
import math
conn = sqlite3.connect(r"C:\my_test_data.sqlite")
c = conn.cursor()
c.execute('SELECT ID, time, data_blob FROM log_tbl')
for row in c:
    data_raw = bytes(row[2])
    data_raw_floats = struct.unpack('f' * 1024, data_raw)
    data_np = np.asarray(data_raw_floats)
    data_normalized = (data_np - data_np.mean()) / (data_np.max() - data_np.min())
    fft = np.fft.fft(data_normalized)
    N = data_normalized.size
    plt.figure(1)
    plt.subplot(211)
    plt.plot(data_normalized)
    plt.subplot(212)
    plt.plot(np.abs(fft)[:N // 2] * 1 / N)
    plt.show()
    plt.clf()
The signal clearly contains some frequencies, and I was expecting them to be visible from the FFT output.
What am I doing wrong?
You need to make sure that your data is evenly spaced when using np.fft.fft, otherwise the output will not be accurate. If the samples are not evenly spaced, you can use Lomb-Scargle (LS) periodograms, for example: http://docs.astropy.org/en/stable/stats/lombscargle.html.
Or look up the non-uniform FFT.
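For completeness, a minimal Lomb-Scargle sketch with Astropy (newer versions expose it as astropy.timeseries.LombScargle, older ones under astropy.stats as in the link above). The time series here is synthetic, just to show the call:
import numpy as np
import matplotlib.pyplot as plt
from astropy.timeseries import LombScargle

t = np.sort(np.random.uniform(0, 1.024, 1024))                   # unevenly spaced sample times [s]
y = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(len(t))   # 50 Hz tone plus noise

frequency, power = LombScargle(t, y).autopower()
plt.plot(frequency, power)
plt.xlabel('Frequency [Hz]')
plt.ylabel('Lomb-Scargle power')
plt.show()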
About the plots:
I don't think that you are doing anything obviously wrong. Your signal contains a component with a period on the order of 100 samples, so you can expect a strong frequency peak around 1/period = 0.01. This is what is visible in your graphs. The time-domain signals are not that sinusoidal, so your peak in the frequency domain will be blurry, as seen in your graphs.
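One contributing effect worth knowing about is spectral leakage: a component whose frequency does not land exactly on an FFT bin gets smeared over neighbouring bins. A small synthetic demonstration (unrelated to your ADC logs):
import numpy as np
import matplotlib.pyplot as plt

n = 1024
t = np.arange(n)
on_bin = np.sin(2 * np.pi * (10 / n) * t)      # exactly 10 cycles in the window
off_bin = np.sin(2 * np.pi * (10.5 / n) * t)   # 10.5 cycles: falls between bins

for sig, label in [(on_bin, 'on a bin'), (off_bin, 'between bins')]:
    spec = np.abs(np.fft.fft(sig))[:n // 2] / n
    plt.plot(spec, label=label)                # x-axis is the bin index

plt.xlim(0, 40)
plt.legend()
plt.show()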
I'm analyzing what is essentially a respiratory waveform, constructed in 3 different shapes (the data originates from MRI, so multiple echo times were used; see here if you'd like some quick background). Here are a couple of segments of two of the plotted waveforms for some context:
For each waveform, I'm trying to perform a DFT in order to discover the dominant frequency or frequencies of respiration.
My issue is that when I plot the DFTs that I perform, I get one of two things, depending on the FFT library that I use. Furthermore, neither of them is representative of what I am expecting. I understand that data doesn't always look the way we want, but I clearly have waveforms present in my data, so I would expect a discrete Fourier transform to produce a frequency peak somewhere reasonable. For reference, I would expect about 80 to 130 Hz.
My data is stored in a pandas data frame, with each echo time's data in a separate series. I'm applying the FFT function of choice (see the code below) and receiving different results for each of them.
SciPy (fftpack)
import pandas as pd
import scipy.fftpack
# temporary copy to maintain data structure
lead_pts_fft_df = lead_pts_df.copy()
# apply a discrete fast Fourier transform to each data series in the data frame
lead_pts_fft_df.magnitude = lead_pts_df.magnitude.apply(scipy.fftpack.fft)
lead_pts_fft_df
NumPy:
import pandas as pd
import numpy as np
# temporary copy to maintain data structure
lead_pts_fft_df = lead_pts_df.copy()
# apply a discrete fast Fourier transform to each data series in the data frame
lead_pts_fft_df.magnitude = lead_pts_df.magnitude.apply(np.fft.fft)
lead_pts_fft_df
The rest of the relevant code:
ECHO_TIMES = [0.080, 0.200, 0.400] # milliseconds
f_s = 1 / (0.006) # 0.006 = time between samples
freq = np.linspace(0, 29556, 29556) * (f_s / 29556) # (29556 = length of data)
# generate subplots
fig, axes = plt.subplots(3, 1, figsize=(18, 16))
for idx in range(len(ECHO_TIMES)):
    axes[idx].plot(freq, lead_pts_fft_df.magnitude[idx].real,  # real part
                   freq, lead_pts_fft_df.magnitude[idx].imag)  # imaginary part
    axes[idx].legend()  # apply legend to each set of axes
# show the plot
plt.show()
Post-DFT (SciPy fftpack):
Post-DFT (NumPy)
Here is a link to the dataset (on Dropbox) used to create these plots, and here is a link to the Jupyter Notebook.
EDIT:
I used the posted advice and took the magnitude (absolute value) of the data, and also plotted with a logarithmic y-axis. The new results are posted below. It appears that I have some wraparound in my signal. Am I not using the correct frequency scale? The updated code and plots are below.
# generate subplots
fig, axes = plt.subplots(3, 1, figsize=(18, 16))
for idx in range(len(ECHO_TIMES)):
axes[idx].plot(freq[1::], np.log(np.abs(lead_pts_fft_df.magnitude[idx][1::])),
label=lead_pts_df.index[idx], # apply labels
color=plot_colors[idx]) # colors
axes[idx].legend() # apply legend to each set of axes
# show the plot
plt.show()
I've figured out my issues.
Cris Luengo was very helpful with this link, which helped me determine how to correctly scale my frequency bins and plot the DFT appropriately.
ADDITIONALLY: I had forgotten to take into account the offset present in the signal. Not only does it cause the huge peak at 0 Hz in the DFT, but it is also responsible for most of the noise in the transformed signal. I made use of scipy.signal.detrend() to eliminate this and got a very appropriate-looking DFT.
# import DFT and signal packages from SciPy
import scipy.fftpack
import scipy.signal
# temporary copy to maintain data structure; original data frame is NOT changed due to ".copy()"
lead_pts_fft_df = lead_pts_df.copy()
# apply a discrete fast Fourier transform to each data series in the data frame AND detrend the signal
lead_pts_fft_df.magnitude = lead_pts_fft_df.magnitude.apply(scipy.signal.detrend)
lead_pts_fft_df.magnitude = lead_pts_fft_df.magnitude.apply(np.fft.fft)
lead_pts_fft_df
Arrange frequency bins accordingly:
num_projections = 29556
N = num_projections
T = (6 * 3) / 1000 # 6*3 b/c of the nature of my signal: 1 pt for each waveform collected every third acquisition
xf = np.linspace(0.0, 1.0 / (2.0 * T), num_projections // 2)
Then plot:
# generate subplots
fig, axes = plt.subplots(3, 1, figsize=(18, 16))
for idx in range(len(ECHO_TIMES)):
    axes[idx].plot(xf, 2.0 / num_projections * np.abs(lead_pts_fft_df.magnitude[idx][:num_projections // 2]),
                   label=lead_pts_df.index[idx],  # apply labels
                   color=plot_colors[idx])  # colors
    axes[idx].legend()  # apply legend to each set of axes
# show the plot
plt.show()
I'm using FFT to extract the amplitude of each frequency component from an audio file. Actually, there is already a function called Plot Spectrum in Audacity that can help to solve the problem. Taking this example audio file, which is composed of a 3 kHz sine and a 6 kHz sine, the spectrum result is like the following picture. You can see peaks at 3 kHz and 6 kHz, with no extra frequencies.
Now I need to implement the same function and plot the similar result in Python. I'm close to the Audacity result with the help of rfft but I still have problems to solve after getting this result.
What's the physical meaning of the amplitude in the second picture?
How to normalize the amplitude to 0dB like the one in Audacity?
Why do the frequencies above 6 kHz have such high amplitudes (≥ 90)? Can I scale those frequencies to a relatively low level?
Related code:
import numpy as np
from pylab import plot, show
from scipy.io import wavfile
sample_rate, x = wavfile.read('sine3k6k.wav')
fs = 44100.0
rfft = np.abs(np.fft.rfft(x))
p = 20*np.log10(rfft)
f = np.linspace(0, fs/2, len(p))
plot(f, p)
show()
Update
I multiplied a Hanning window with the whole-length signal (is that correct?) and got this. Most of the amplitudes of the skirts are below 40.
And I scaled the y-axis to decibels, as @Mateen Ulhaq said. The result is closer to the Audacity one. Can I treat amplitudes below -90 dB as so low that they can be ignored?
Updated code:
fs, x = wavfile.read('input/sine3k6k.wav')
x = x * np.hanning(len(x))
rfft = np.abs(np.fft.rfft(x))
rfft_max = max(rfft)
p = 20*np.log10(rfft/rfft_max)
f = np.linspace(0, fs/2, len(p))
About the bounty
With the code in the update above, I can measure the frequency components in decibels. The highest possible value will be 0 dB. But the method only works for a specific audio file because it uses the rfft_max of that audio. I want to measure the frequency components of multiple audio files against one standard reference, just like Audacity does.
I also started a discussion on the Audacity forum, but it was still not clear to me how to implement my purpose.
After doing some reverse engineering on the Audacity source code, here are some answers. First, they use Welch's method for estimating the PSD. In short, it splits the signal into overlapping segments, applies a window function, applies the FFT, and averages the results. This helps to get better results when noise is present. Anyway, after extracting the necessary parameters, here is a solution that approximates Audacity's spectrogram:
import numpy as np
from scipy.io import wavfile
from scipy import signal
from matplotlib import pyplot as plt
segment_size = 512
fs, x = wavfile.read('sine3k6k.wav')
x = x / 32768.0 # scale signal to [-1.0 .. 1.0]
noverlap = segment_size // 2
f, Pxx = signal.welch(x,                     # signal
                      fs=fs,                 # sample rate
                      nperseg=segment_size,  # segment size
                      window='hann',         # window type to use
                      nfft=segment_size,     # num. of samples in FFT
                      detrend=False,         # remove DC part
                      scaling='spectrum',    # return power spectrum [V^2]
                      noverlap=noverlap)     # overlap between segments
# set 0 dB to energy of sine wave with maximum amplitude
ref = (1/np.sqrt(2)**2) # simply 0.5 ;)
p = 10 * np.log10(Pxx/ref)
fill_to = -150 * (np.ones_like(p)) # anything below -150dB is irrelevant
plt.fill_between(f, p, fill_to )
plt.xlim([f[2], f[-1]])
plt.ylim([-90, 6])
# plt.xscale('log') # uncomment if you want log scale on x-axis
plt.xlabel('f, Hz')
plt.ylabel('Power spectrum, dB')
plt.grid(True)
plt.show()
Some necessary explanations on parameters:
The wave file is read as 16-bit PCM; to be compatible with Audacity, it should be scaled so that |A| < 1.0.
segment_size corresponds to Size in Audacity's GUI.
The default window type is Hann ('hann'); you can change it if you want.
The overlap is segment_size/2, as in the Audacity code.
The output plot is framed to follow the Audacity style: the first low-frequency bins are thrown away and everything below -90 dB is cut.
What's the physical meaning of the amplitude in the second picture?
It is basically the amount of energy in the frequency bin.
How to normalize the amplitude to 0dB like the one in Audacity?
You need to choose some reference point; graphs in decibels are always relative to something. When you select the maximum-energy bin as the reference, your 0 dB point is the maximum energy (obviously). It is acceptable to set the reference to the energy of a sine wave with maximum amplitude. See the ref variable. The power of a sinusoidal signal is simply its squared RMS, and to get the RMS you just divide the amplitude by sqrt(2), so the scaling factor is simply 0.5. Please note that the factor before log10 is 10 and not 20; this is because we are dealing with the power of the signal and not its amplitude.
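A quick self-contained check (not part of the original answer) that this choice of reference puts a full-scale sine at 0 dB; the test frequency is chosen to sit exactly on an FFT bin so window scalloping does not bias the peak:
import numpy as np
from scipy import signal

fs = 44100
nperseg = 512
f0 = 40 * fs / nperseg                    # a frequency exactly on an FFT bin
t = np.arange(fs) / fs                    # one second of samples
x = np.sin(2 * np.pi * f0 * t)            # full-scale sine, |A| = 1.0

f, Pxx = signal.welch(x, fs=fs, nperseg=nperseg, window='hann',
                      scaling='spectrum', noverlap=nperseg // 2)
print(10 * np.log10(Pxx.max() / 0.5))     # ~0.0 dB for a full-scale sine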
Can I treat amplitudes below -90 dB as so low that they can be ignored?
Yes, anything below -40 dB is usually considered negligible.