I am new to Python so please pardon me if this question is very basic.
I have an Accelerometer Vector Magnitude (acc_VM) signal with a sampling frequency of 100 Hz. I have to take the Fourier transform of this signal and find the fundamental frequency within the range Df.
Df is the family of frequencies corresponding to walking; here we use Df = [1.2, 4] Hz. How can I select the frequency range Df = [1.2, 4] Hz in Python? Should I implement filters, or is combFunction() the correct approach?
def combFunction(n):
    combSignal = []
    for element in n:
        if element > 1.2 and element < 4:
            combSignal.append(element)
        else:
            combSignal.append(0)
    return np.maximum(combSignal)

def hann(total_data):
    hann_array = np.zeros(total_data)
    for i in range(total_data):
        hann_array[i] = 0.5 - 0.5 * np.cos((2 * np.pi * i) / (total_data - 1))
    return hann_array

def calculate_FT(x):
    hann_weight = hann(len(x))
    x_multiplied_hann = x * hann_weight
    X = np.abs(np.fft.rfft(x_multiplied_hann))
    combSignal = combFunction(X)

calculate_FT(acc_VM)
The FFT does not return frequencies, but rather an array of amplitudes for a fixed set of evenly spaced frequencies.
As a result, your combFunction, as implemented, picks the components whose spectrum amplitude lies between 1.2 and 4, not the components whose frequency does.
To be able to select frequencies, you would need the corresponding array of those evenly spaced frequencies, which you can get
from np.fft.rfftfreq.
Note that you will need the sampling rate (and if your data isn't uniformly sampled, you will need to resample it).
In the code that follows I'll use the variable sampling_rate for that. Then the frequencies will be given by:
freqs = np.fft.rfftfreq(len(data), d=1.0/sampling_rate)
Now let's extract the array indices corresponding to those frequencies that are within the frequency band of interest:
in_band = np.where((freqs >= 1.2) & (freqs <= 4))[0]
Then you may get the location within this band where the original spectrum X has a peak:
peak_location = np.argmax(X[in_band])
which gives you a peak spectrum amplitude X[in_band[peak_location]] at a frequency freqs[in_band[peak_location]].
Putting it all together should give you something like the following:
def find_peak_in_frequency_range(X, freqs, fmin, fmax):
    in_band = np.where((freqs >= fmin) & (freqs <= fmax))[0]
    peak_location = np.argmax(X[in_band])
    return freqs[in_band[peak_location]], X[in_band[peak_location]]

def calculate_FT(x, sampling_rate):
    hann_weight = hann(len(x))
    x_multiplied_hann = x * hann_weight
    X = np.abs(np.fft.rfft(x_multiplied_hann))
    freqs = np.fft.rfftfreq(len(x), d=1.0/sampling_rate)
    peakFreq, peakAmp = find_peak_in_frequency_range(X, freqs, 1.2, 4)
    return peakFreq, peakAmp
Note that you may get better results by using a spectrum estimation method such as scipy.signal.welch instead of simply taking the FFT.
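If you go that route, a minimal sketch of a Welch-based variant could look like the following (my own illustration, not part of the original answer; the function name and nperseg value are assumptions you would tune to your record length, and the 100 Hz rate comes from the question):

import numpy as np
from scipy import signal

def find_walking_frequency_welch(acc_VM, sampling_rate=100.0, fmin=1.2, fmax=4.0):
    # Welch's method averages periodograms over overlapping segments,
    # which lowers the variance of the spectrum estimate compared to a single FFT.
    freqs, Pxx = signal.welch(acc_VM, fs=sampling_rate, nperseg=1024)
    in_band = np.where((freqs >= fmin) & (freqs <= fmax))[0]
    peak = in_band[np.argmax(Pxx[in_band])]
    return freqs[peak], Pxx[peak]

# usage: peak_freq, peak_power = find_walking_frequency_welch(acc_VM)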
For the sake of illustration, I've run the FFT-based code above on a sample data set (file 1.csv, with some resampling):
I have a 3D data matrix of sea level data (time, y, x), and I found the power spectrum by taking the square of the FFT, but there are low frequencies that are really dominant. I want to get rid of those low frequencies by applying a high-pass filter. How would I go about doing that?
An example of the data set and structure/code is below.
This is the data set and the creation of the arrays:
Yearmin = 2018
Yearmax = 2019
year_len = Yearmax - Yearmin + 1.0  # number of years

direcInput = "filepath"
a = s.Dataset(direcInput + "test.nc", mode='r')

# creating arrays
lat = a.variables["latitude"][:]
lon = a.variables["longitude"][:]
time1 = a.variables["time"][:]  # DAYS SINCE JAN 1ST 1950
sla = a.variables["sla"][:, :, :]  # t, y, x
time = Yearmin + (year_len * (time1 - np.min(time1)) / (np.max(time1) - np.min(time1)))

# detrending and normalizing data
def standardize(y, detrend=True, normalize=True):
    if detrend == True:
        y = signal.detrend(y, axis=0)
    y = (y - np.mean(y, axis=0))
    if normalize == True:
        y = y / np.std(y, axis=0)
    return y

sla_standard = standardize(sla)
print(sla_standard.shape)  # (710, 81, 320)

# fft
fft = np.fft.rfft(sla_standard, axis=0)
spec = np.square(abs(fft))
frequencies = np.arange(0, nyquist, df)

# PLOTTING THE FREQUENCIES VS SPECTRUM FOR A FEW DIFFERENT SPATIAL LOCATIONS
plt.plot(frequencies, spec[:, 68, 85])
plt.plot(frequencies, spec[:, 23, 235])
plt.plot(frequencies, spec[:, 39, 178])
plt.plot(frequencies, spec[:, 30, 149])
plt.xlim(0, .05)
plt.show()
My goal is to apply a high-pass filter to the ORIGINAL time series (sla_standard) to remove the two really big peaks. Which type of filter should I use? Thank you!
Use Axes.set_ylim to set the y-axis limits.
Axes.set_ylim(self, bottom=None, top=None, emit=True, auto=False, *, ymin=None, ymax=None)
So in your case ymin=None, and you set ymax, for example to ymax=60000, before you start plotting.
Thus plt.ylim(ymin=None, ymax=60000).
Taking out data should not be done here, because it amounts to "falsifying results". What you actually want is to zoom in on the chart. Someone who reads the chart independently of you would interpret the data incorrectly if not made aware of the removal in advance. Peaks that go off the chart are fine, because everybody understands that.
Or:
Direct replacement of certain values in an array (arr):
arr[arr > ori] = dest
For example, in your case ori=60000 and dest=1.
All values larger (">") than 60000 are replaced by 1.
The different filters: as you state, a filter acts on the frequencies of your signal. Different filter shapes exist, and some of them have complex expressions because they need to be implementable in real-time processing (causal filters). In your case, however, you seem to be post-processing the data, so you can use the Fourier transform, which requires all the data (non-causal).
The filter to choose: consequently, you can perform your filtering operation directly in the Fourier domain by applying a mask to your frequencies. If you want to remove frequencies, I recommend a binary mask made of 0s and 1s. Why? Because it is the simplest filter you can think of, and it is scientifically defensible to state that you completely removed some frequencies (say so and justify it). It is much harder to justify keeping some frequencies, attenuating others a little, and choosing the attenuation factor arbitrarily...
Python implementation
signal_fft = np.fft.rfft(sla_standard, axis=0)
# the mask must match the shape of the one-sided spectrum, not of the original signal
mask = np.ones(signal_fft.shape)
mask[freq_to_filter, ...] = 0.0  # define here the frequencies to filter
filtered_signal = np.fft.irfft(mask * signal_fft, n=sla_standard.shape[0], axis=0)
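As a concrete illustration of how freq_to_filter could be chosen (my own sketch, not part of the answer above; dt and the cutoff value are assumptions you would adapt to your time axis):

import numpy as np

dt = 1.0       # spacing of the time axis between samples (assumption)
cutoff = 0.01  # high-pass cutoff frequency, in cycles per unit of dt (assumption)

# frequency associated with each row of signal_fft
freqs = np.fft.rfftfreq(sla_standard.shape[0], d=dt)
freq_to_filter = np.where(freqs < cutoff)[0]  # low-frequency rows to zero out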
I am working on a small project in the lab with an Arduino Mega 2560 board. I want to average the signal (voltage) of the positive-slope portion (rise) of a triangle wave to try to remove as much noise as possible. My frequency is 20Hz and I am working with a data rate of 115200 bits/second (fastest recommended by Arduino for data transfer to a computer).
The raw signal looks like this:
My data is stored in a text file, with each line corresponding to a data point. Since I have thousands of data points, I expect that some averaging would smooth the way my signal looks and produce a close-to-perfect straight line in this case. However, other experimental conditions might lead to a signal with features along the positive-slope portion of the triangle wave, such as a negative peak, and I absolutely need to be able to see such a feature in my averaged signal.
I am a Python beginner, so I might not have the ideal approach and my code might look bad to most of you, but I would still like your hints/ideas on how to improve my signal-processing code to achieve better noise removal by averaging the signal.
#!/usr/bin/python
import matplotlib.pyplot as plt
import math

# *** OPEN AND PLOT THE RAW DATA ***
data_filename = "My_File_Name"
filepath = "My_File_Path" + data_filename + ".txt"

# Open the Raw Data
with open(filepath, "r") as f:
    rawdata = f.readlines()

# Remove the \n
rawdata = map(lambda s: s.strip(), rawdata)

# Plot the Raw Data
plt.plot(rawdata, 'r-')
plt.ylabel('Lightpower (V)')
plt.show()

# *** FIND THE LOCAL MAXIMUM AND MINIMUM ***
# Number of data points for each range
datarange = 15  # This number can be changed for better processing
max_i_range = int(math.floor(len(rawdata)/datarange)) - 3

# Declare empty lists for the max and min
min_list = []
max_list = []
min_list_index = []
max_list_index = []

i = 0
for i in range(0, max_i_range):
    delimiter0 = i * datarange
    delimiter1 = (i+1) * datarange
    delimiter2 = (i+2) * datarange
    delimiter3 = (i+3) * datarange
    sumrange1 = sum(float(rawdata[i]) for i in range(delimiter0, delimiter1 + 1))
    averagerange1 = sumrange1 / len(rawdata[delimiter0:delimiter1])
    sumrange2 = sum(float(rawdata[i]) for i in range(delimiter1, delimiter2 + 1))
    averagerange2 = sumrange2 / len(rawdata[delimiter1:delimiter2])
    sumrange3 = sum(float(rawdata[i]) for i in range(delimiter2, delimiter3 + 1))
    averagerange3 = sumrange3 / len(rawdata[delimiter2:delimiter3])
    # Find if there is a minimum in range 2
    if (averagerange1 > averagerange2) and (averagerange2 < averagerange3):
        min_list.append(min(rawdata[delimiter1:delimiter2]))  # Value of the minimum
        # Find the index of the minimum; [0] to use the first index among possible values
        min_index = delimiter1 + [k for k, j in enumerate(rawdata[delimiter1:delimiter2]) if j == min(rawdata[delimiter1:delimiter2])][0]
        min_list_index.append(min_index)
    # Find if there is a maximum in range 2
    if (averagerange1 < averagerange2) and (averagerange2 > averagerange3):
        max_list.append(max(rawdata[delimiter1:delimiter2]))  # Value of the maximum
        # Find the index of the maximum; [0] to use the first index among possible values
        max_index = delimiter1 + [k for k, j in enumerate(rawdata[delimiter1:delimiter2]) if j == max(rawdata[delimiter1:delimiter2])][0]
        max_list_index.append(max_index)

# *** PROCESS EACH RISE PATTERN ***
# One rise pattern goes from a min to a max
numb_of_rise_pattern = 50  # This number can be increased or lowered. This will average 50 rise patterns
max_min_diff_total = 0
for i in range(0, numb_of_rise_pattern):
    max_min_diff_total = max_min_diff_total + (max_list_index[i] - min_list_index[i])

# Find the average number of points for each rise pattern
max_min_diff_avg = abs(max_min_diff_total / numb_of_rise_pattern)

# Find the average values for each of the rise patterns
avg_position_value_list = []
for i in range(0, max_min_diff_avg):
    sum_position_value = 0
    for j in range(0, numb_of_rise_pattern):
        sum_position_value = sum_position_value + float(rawdata[min_list_index[j] + i])
    avg_position_value = sum_position_value / numb_of_rise_pattern
    avg_position_value_list.append(avg_position_value)

# Plot the Processed Signal
plt.plot(avg_position_value_list, 'r-')
plt.title(data_filename)
plt.ylabel('Lightpower (V)')
plt.show()
At the end, the processed signal looks like this:
I would expect a straighter line, but I could be wrong. I believe that there are probably a lot of flaws in my code and there would certainly be better ways to achieve what I want. I have included a link to a text file with some raw data if any of you guys want to have fun with it.
http://www108.zippyshare.com/v/2iba0XMD/file.html
A simpler approach might be to use a smoothing function, such as a moving-window average. This is pretty simple to implement using the rolling function from pandas.Series. (Only the first 500 points are shown.) Tweak the numerical argument (the window size) to get different amounts of smoothing.
import pandas as pd
import matplotlib.pyplot as plt
# Plot the Raw Data
ts = rawdata[0:500]
plt.plot(ts, 'r-')
plt.ylabel('Lightpower (V)')
# previous version
# smooth_data = pd.rolling_mean(rawdata[0:500],5).plot(style='k')
# changes to pandas require a change to the code as follows:
smooth_data = pd.Series(ts).rolling(window=7).mean().plot(style='k')
plt.show()
(Figure: Moving Average)
A moving average is, basically, a low-pass filter. So, we could also implement a low-pass filter with functions from SciPy as follows:
import scipy.signal as signal
# First, design the Butterworth filter
N = 3 # Filter order
Wn = 0.1 # Cutoff frequency
B, A = signal.butter(N, Wn, output='ba')
smooth_data = signal.filtfilt(B,A, rawdata[0:500])
plt.plot(ts,'r-')
plt.plot(smooth_data[0:500],'b-')
plt.show()
(Figure: Low-Pass Filter)
The Butterworth filter method is from OceanPython.org, BTW.
I have a pair of 1D arrays (of different lengths) like the following:
data1 = [0,0,0,1,1,1,0,1,0,0,1]
data2 = [0,1,1,0,1,0,0,1]
I would like to get the maximum cross-correlation of the 2 series in Python. In MATLAB, the xcorr() function returns it fine.
I have tried the following 2 methods:
numpy.correlate(data1, data2)
signal.fftconvolve(data2, data1[::-1], mode='full')
Both methods give me the same values, but the values I get from Python are different from what comes out of MATLAB. Python gives me integer values > 1, whereas MATLAB gives actual correlation values between 0 and 1.
I have tried normalizing the 2 arrays first ((value - mean)/SD), but the cross-correlation values I get are in the thousands, which doesn't seem correct.
MATLAB will also give you the lag at which the cross-correlation is greatest. I assume it is easy to do this using indices, but what's the most appropriate way of doing this if my arrays contain tens of thousands of values?
I would like to mimic MATLAB's xcorr() function; any thoughts on how I would do that in Python?
numpy.correlate(arr1, arr2, "full")
gave me the same output as
xcorr(arr1, arr2)
gives in MATLAB.
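If you also want values scaled like MATLAB's xcorr(x, y, 'coeff'), i.e. between -1 and 1, one common normalization (my own sketch, not part of the answer above) is to divide by the geometric mean of the two signals' energies:

import numpy as np

x = np.asarray(data1, dtype=float)
y = np.asarray(data2, dtype=float)

c = np.correlate(x, y, "full")
# scale so that each signal's zero-lag autocorrelation would equal 1
c_coeff = c / np.sqrt(np.sum(x**2) * np.sum(y**2))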
Implementation of MATLAB's xcorr(x, y) and comparison of the result with an example.
import numpy as np
import scipy.signal as signal
import matplotlib.pyplot as plt

def xcorr(x, y):
    """
    Perform cross-correlation on x and y
    x    : 1st signal
    y    : 2nd signal

    returns
    lags : lags of correlation
    corr : coefficients of correlation
    """
    corr = signal.correlate(x, y, mode="full")
    lags = signal.correlation_lags(len(x), len(y), mode="full")
    return lags, corr

n = np.array([i for i in range(0, 15)])
x = 0.84**n
y = np.roll(x, 5)

lags, c = xcorr(x, y)

plt.figure()
plt.stem(lags, c)
plt.show()
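As a small follow-up (not in the original answer), the lag at which the correlation peaks, which MATLAB also reports, can be read off directly:

lag_at_peak = lags[np.argmax(c)]  # lag (in samples) where the cross-correlation is largest;
print(lag_at_peak)                # the sign convention follows scipy.signal.correlation_lags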
This code will help in finding the delay between two channels in an audio file:

import numpy as np
import soundfile as sf

xin, fs = sf.read('recording1.wav')
frame_len = int(fs * 5 * 1e-3)
dim_x = xin.shape
M = dim_x[0]  # No. of rows (samples)
N = dim_x[1]  # No. of columns (channels)
sample_lim = frame_len * 100
tau = [0]
M_lim = 20000  # for testing, as processing takes time

for i in range(1, N):
    c = np.correlate(xin[0:M_lim, 0], xin[0:M_lim, i], "full")
    maxlags = M_lim - 1
    c = c[M_lim - 1 - maxlags: M_lim + maxlags]
    Rmax_pos = np.argmax(c)
    pos = Rmax_pos - M_lim + 1
    tau.append(pos)

print(tau)
I have a .wav file; when I load it I get the following spectrogram, showing the spectrum in dB:
http://i.stack.imgur.com/22TjY.png
Now I would like to know these values exactly, because I want to compare them with another wav file to recognize whether these 4 values are present.
http://i.stack.imgur.com/Jun25.png
The source used to generate those pictures (taken from another Stack Overflow example):
## some stuff here

for i in range(0, int(RATE / CHUNK_SIZE * RECORD_SECONDS)):
    # little endian, signed short
    data_chunk = array('h', stream.read(CHUNK_SIZE))
    if byteorder == 'big':
        data_chunk.byteswap()
    data_all.extend(data_chunk)

## some stuff here

Fs = 16000
f = np.arange(1, 9) * 2000
t = np.arange(RECORD_SECONDS * Fs) / Fs
x = np.empty(t.shape)
for i in range(8):
    x[i*Fs:(i+1)*Fs] = np.cos(2*np.pi * f[i] * t[i*Fs:(i+1)*Fs])
w = np.hamming(512)
Pxx, freqs, bins = mlab.specgram(data_all, NFFT=512, Fs=Fs, window=w,
                                 noverlap=464)

# plot the spectrogram in dB
Pxx_dB = np.log10(Pxx)
pyplot.subplots_adjust(hspace=0.4)
pyplot.subplot(211)
ex1 = bins[0], bins[-1], freqs[0], freqs[-1]
pyplot.imshow(np.flipud(Pxx_dB), extent=ex1)
pyplot.axis('auto')
pyplot.axis(ex1)
pyplot.xlabel('time (s)')
pyplot.ylabel('freq (Hz)')
I "think" that the information is in Pxx but I don't know how to get it.
From the documentation, I gather that Pxx is a simple 2D numpy array.
You're interested in periodograms around 1s. Considering Pxx should have 512 columns and your sample is about 5s long, I'd take a slice somewhere around column 100:
periodogram_of_interest = Pxx[:, 100]
Then find the 4 maxima. Unfortunately, each of those 4 frequencies has a finite width, so simply looking for the top 4 maxima will not be as easy. However, assuming your signal is quite clean, there's a function in scipy.signal that will list all local extrema: argrelmax. You could play with the order argument of that function to reduce your search space.
With the values returned from that function, you could get the frequencies like this: freqs[those_4_indices].
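A minimal sketch of that approach (my own illustration; the column index 100 and order=10 are assumptions you would tune to your data):

import numpy as np
from scipy.signal import argrelmax

periodogram_of_interest = Pxx[:, 100]  # one time slice of the spectrogram

# indices of local maxima; `order` sets how many neighbours each peak must exceed
peak_indices = argrelmax(periodogram_of_interest, order=10)[0]

# keep the 4 strongest local maxima and report their frequencies
those_4_indices = peak_indices[np.argsort(periodogram_of_interest[peak_indices])[-4:]]
print(freqs[those_4_indices])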
For a series of angle values in the (-pi, pi) range, I make a histogram. Is there an efficient way to calculate the mean and modal (most probable) value? Consider the following examples:
import numpy as N, cmath
deg = N.pi/180.
d = N.array([-175., 170, 175, 179, -179])*deg
i = N.sum(N.exp(1j*d))
ave = cmath.phase(i)
i /= float(d.size)
stdev = -2. * N.log(N.sqrt(i.real**2 + i.imag**2))
print ave/deg, stdev/deg
Now, let's have a histogram:
counts, bins = N.histogram(d, N.linspace(-N.pi, N.pi, 360))
Is it possible to calculate the mean and mode having only counts and bins? For non-periodic data, the calculation of a mean is straightforward:
ave = sum(counts*bins[:-1])
Calculating a modal value requires more effort, and actually I'm not sure my code below is correct: first I identify the bins that occur most frequently, and then I calculate their arithmetic mean:
cmax = bins[N.argmax(counts)]
mode = N.mean(N.take(bins, N.nonzero(counts == cmax)[0]))
I have no idea how to calculate the standard deviation from such data, though. One obvious solution to all my problems (at least those described above) is to convert the histogram data back to a data series and then use that in the calculations. That is neither elegant nor efficient, however.
Any hints will be very appreciated.
This is the partial solution I wrote.
import numpy as N, cmath
import scipy.stats as ST
d = [-175, 170.2, 175.57, 179, -179, 170.2, 175.57, 170.2]
deg = N.pi/180.
data = N.array(d)*deg
i = N.sum(N.exp(1j*data))
ave = cmath.phase(i) # correct and exact mean for periodic data
wrong_ave = N.mean(d)
i /= float(data.size)
stdev = -2. * N.log(N.sqrt(i.real**2 + i.imag**2))
wrong_stdev = N.std(d)
bins = N.linspace(-N.pi, N.pi, 360)
counts, bins = N.histogram(data, bins, normed=False)
# consider it weighted vector addition
nz = N.nonzero(counts)[0]
weight = counts[nz]
i = N.sum(weight * N.exp(1j*bins[nz])/len(nz))
pave = cmath.phase(i) # correct and approximated mean for periodic data
i /= sum(weight)/float(len(nz))
pstdev = -2. * N.log(N.sqrt(i.real**2 + i.imag**2))
print
print 'scipy: %12.3f (mean) %12.3f (stdev)' % (ST.circmean(data)/deg, \
ST.circstd(data)/deg)
When run, it gives the following results:
mean: 175.840 85.843 175.360
stdev: 0.472 151.785 0.430
scipy: 175.840 (mean) 3.673 (stdev)
A few comments now: the first column gives the mean/stdev calculated from the raw data. As can be seen, the mean agrees well with scipy.stats.circmean (thanks JoeKington for pointing it out). Unfortunately the stdev differs; I will look at it later. The second column gives completely wrong results (the non-periodic mean/std from numpy obviously does not work here). The third column gives what I wanted to obtain from the histogram data (@JoeKington: my raw data won't fit in my computer's memory; @dmytro: thanks for your input: of course, the bin size will influence the result, but in my application I don't have much choice, i.e. I have to reduce the data somehow). As can be seen, the mean (third column) is calculated properly; the stdev needs further attention :)
Have a look at scipy.stats.circmean and scipy.stats.circstd.
Or do you only have the histogram counts, and not the "raw" data? If so, you could fit a Von Mises distribution to your histogram counts and approximate the mean and stddev in that way.
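A rough sketch of that idea (my own illustration, not the answerer's code; it assumes counts holds raw integer counts): rebuild pseudo-samples by repeating each bin centre according to its count, then fit scipy.stats.vonmises with the scale fixed to 1. The fitted loc is the circular mean, and for reasonably concentrated data 1/sqrt(kappa) approximates the circular standard deviation:

import numpy as np
from scipy.stats import vonmises

# counts, bins as returned by np.histogram on the angle data (raw, unnormalized counts)
centers = 0.5 * (bins[:-1] + bins[1:])
pseudo_samples = np.repeat(centers, counts)  # crude reconstruction from the histogram

kappa, loc, scale = vonmises.fit(pseudo_samples, fscale=1)
approx_mean = loc                      # circular mean estimate
approx_std = 1.0 / np.sqrt(kappa)      # reasonable approximation when kappa is large
print(np.degrees(approx_mean), np.degrees(approx_std))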
Here's how to get an approximation.
Since Var(x) = <x^2> - <x>^2, we have:
meanX = N.sum(counts * bins[:-1]) / N.sum(counts)
meanX2 = N.sum(counts * bins[:-1]**2) / N.sum(counts)
std = N.sqrt(meanX2 - meanX**2)