Algorithm for smoothing a curve strictly above the original - python

I have an arbitrary input curve, given as numpy array. I want to create a smoothed version of it, similar to a rolling mean, but which is strictly greater than the original and strictly smooth. I could use the rolling mean value but if the input curve has a negative peak, the smoothed version will drop below the original around that peak. I could then simply use the maximum of this and the original but that would introduce non-smooth spots where the transition occurs.
Furthermore, I would like to be able to parameterize the algorithm with a look-ahead and a look-behind for this resulting curve, so that given a large look-ahead and a small look-behind the resulting curve would rather stick to the falling edges, and with a large look-behind and a small look-ahead it would rather be close to rising edges.
I tried using the pandas.Series(a).rolling() facility to get rolling means, rolling maxima, etc., but so far I have found no way to generate a smoothed version of my input which in all cases stays above the input.
I guess there is a way to combine rolling maxima and rolling means somehow to achieve what I want, so here is some code for computing these:
import pandas as pd
import numpy as np
my input curve:
original = np.array([ 5, 5, 5, 8, 8, 8, 2, 2, 2, 2, 2, 3, 3, 7 ])
This can be padded left (pre) and right (post) with the edge values as a preparation for any rolling function:
pre = 2
post = 3
padded = np.pad(original, (pre, post), 'edge')
Now we can apply a rolling mean:
smoothed = pd.Series(padded).rolling(
    pre + post + 1).mean().to_numpy()[pre+post:]
But now the smoothed version dips below the original, e.g. at index 4:
print(original[4], smoothed[4]) # 8 and 5.5
To compute a rolling maximum, you can use this:
maximum = pd.Series(padded).rolling(
    pre + post + 1).max().to_numpy()[pre+post:]
But a rolling maximum alone would of course not be smooth in many cases and would display a lot of flat tops around the peaks of the original. I would prefer a smooth approach to these peaks.
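For instance, with the arrays above (pre = 2, post = 3), the rolling maximum stays above the original but shows exactly those flat tops:
print(maximum)
# [8. 8. 8. 8. 8. 8. 8. 8. 3. 3. 7. 7. 7. 7.]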
If you also have pyqtgraph installed, you can easily plot such curves:
import pyqtgraph as pg
p = pg.plot(original)
p.plotItem.plot(smoothed, pen=(255,0,0))
(Of course, other plot libraries would do as well.)
What I would like to have as a result is a curve which is, e.g., like the one formed by these values:
goal = np.array([ 5, 7, 7.8, 8, 8, 8, 7, 5, 3.5, 3, 4, 5.5, 6.5, 7 ])
Here is an image of the curves. The white line is the original (input), the red the rolling mean, the green is about what I would like to have:
EDIT: I just found the functions baseline() and envelope() of a module named peakutils. These two functions can compute polynomials of a given degree fitting the lower and upper peaks of the input, respectively. For small samples this can be a good solution. I'm looking for something which can also be applied to very large samples with millions of values; the degree would then need to be very high and the computation would take a considerable amount of time. Doing it piecewise (section by section) opens up a bunch of new questions and problems (like how to stitch properly while staying smooth and guaranteed above the input, performance when processing a massive number of pieces, etc.), so I'd like to avoid that if possible.
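For reference, a minimal sketch of that peakutils approach on the small sample above (the exact envelope() signature is an assumption based on its docs, and the result is not guaranteed to stay above the input):
import numpy as np
import peakutils

y = np.array([ 5, 5, 5, 8, 8, 8, 2, 2, 2, 2, 2, 3, 3, 7 ], dtype=float)
upper = peakutils.envelope(y, deg=5)  # polynomial through the upper peaks (assumed signature)
print(np.all(upper >= y))             # check before relying on it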
EDIT 2: I have a promising approach: repeatedly applying a filter which creates a rolling mean, shifts it slightly to the left and the right, and then takes the maximum of these two and the original sample. After applying this several times, it smooths out the curve in the way I wanted. Some unsmooth spots can remain, though, in deep valleys. Here is the code for this:
pre = 30
post = 30
margin = 10
s = [ np.array(sum([ [ x ] * 100 for x in
        [ 5, 5, 5, 8, 8, 8, 2, 2, 2, 2, 2, 3, 3, 7 ] ], [])) ]
for _ in range(30):
    s.append(np.max([
        pd.Series(np.pad(s[-1], (margin+pre, post), 'edge')).rolling(
            1 + pre + post).mean().to_numpy()[pre+post:-margin],
        pd.Series(np.pad(s[-1], (pre, post+margin), 'edge')).rolling(
            1 + pre + post).mean().to_numpy()[pre+post+margin:],
        s[-1]], 0))
This applies the filter for 30 iterations; plotting all of them can be done with pyqtgraph like so:
p = pg.plot(s[0])
for q in s[1:]:
    p.plotItem.plot(q, pen=(255, 100, 100))
The resulting image looks like this:
There are two aspects I don't like about this approach: ① it needs a lot of iterations (which slows things down), and ② it still has unsmooth parts in the valleys (although in my use case this might be acceptable).

I have now played around quite a bit and I think I found two main answers which solve my direct need. I will give them below.
import numpy as np
import pandas as pd
from scipy import signal
import pyqtgraph as pg
These are just the necessary imports, used in all code below. pyqtgraph is only used for displaying, of course, so you do not really need it.
Symmetrical Smoothing
This can be used to create a smooth line which is always above the signal, but it cannot distinguish between rising and falling edges, so the curve around a single peak will look symmetrical. In many cases this might be quite okay, and it is far less complex than the asymmetrical solution below (it also has no quirks that I know about).
s = np.repeat([5, 5, 5, 8, 8, 8, 2, 2, 2, 2, 2, 3, 3, 7], 400) + 0.1
s *= np.random.random(len(s))
pre = post = 400
x = pd.Series(np.pad(s, (pre, post), 'edge')).rolling(
    pre + 1 + post).max().to_numpy()[pre+post:]
y = pd.Series(np.pad(x, (pre, post), 'edge')).rolling(
    pre + 1 + post, win_type='blackman').mean().to_numpy()[pre+post:]
p = pg.plot(s, pen=(100,100,100))
for c, pen in ((x, (0, 200, 200)),
               (y, pg.mkPen((255, 255, 255), width=3, style=3))):
    p.plotItem.plot(c, pen=pen)
Create a rolling maximum (x, cyan), and
create a windowed rolling mean of this (y, white dotted).
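A quick empirical sanity check: with pre == post, every point of y is a weighted mean of rolling-maximum values whose windows all contain the corresponding input sample, so y should never dip below s:
n = min(len(s), len(y))
print(np.all(y[:n] >= s[:n]))  # True for this sample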
Asymmetrical Smoothing
My use case called for a version which can distinguish between rising and falling edges: the speed of the output should be different when falling than when rising.
Comment: Used as an envelope for a compressor/expander, a quickly rising curve would mean dampening the effect of a sudden loud noise almost completely, while a slowly rising curve would mean slowly compressing the signal for a long time before the loud noise, keeping the dynamics when the bang appears. On the other hand, if the curve falls quickly after a loud noise, this would make quiet material shortly after a bang audible, while a slowly falling curve would keep the dynamics there as well and only slowly expand the signal back to normal levels.
s = np.repeat([5, 5, 5, 8, 8, 8, 2, 2, 2, 2, 2, 3, 3, 7], 400) + 0.1
s *= np.random.random(len(s))
pre, post = 100, 1000
t = pd.Series(np.pad(s, (post, pre), 'edge')).rolling(
    pre + 1 + post).max().to_numpy()[pre+post:]
g = signal.get_window('boxcar', pre*2)[pre:]
g /= g.sum()
u = np.convolve(np.pad(t, (pre, 0), 'edge'), g)[pre:]
g = signal.get_window('boxcar', post*2)[:post]
g /= g.sum()
v = np.convolve(np.pad(t, (0, post), 'edge'), g)[post:]
u, v = u[:len(v)], v[:len(u)]
w = np.min(np.array([ u, v ]), 0)
pre = post = max(100, min(pre, post)*3)
x = pd.Series(np.pad(w, (pre, post), 'edge')).rolling(
    pre + 1 + post).max().to_numpy()[pre+post:]
y = pd.Series(np.pad(x, (pre, post), 'edge')).rolling(
    pre + 1 + post, win_type='blackman').mean().to_numpy()[pre+post:]
p = pg.plot(s, pen=(100,100,100))
for c, pen in ((t, (200, 0, 0)),
               (u, (200, 200, 0)),
               (v, (0, 200, 0)),
               (w, (200, 0, 200)),
               (x, (0, 200, 200)),
               (y, pg.mkPen((255, 255, 255), width=3))):
    p.plotItem.plot(c, pen=pen)
This sequence ruthlessly combines several methods of signal processing:
The input signal is shown in grey. It is a noisy version of the input mentioned above.
A rolling maximum is applied to this (t, red).
Then a specially designed convolution curve for the falling edges is created (g) and the convolution is computed (u, yellow).
This is repeated for the rising edges with a different convolution curve (g again) and the convolution is computed (v, green).
The minimum of u and v is a curve that has the desired slopes but is not very smooth yet; in particular it has ugly spikes where the falling and the rising slopes reach into each other (w, purple).
On this the symmetrical method above is applied:
Create a rolling maximum of this curve (x, cyan).
Create a windowed rolling mean of this curve (y, white dotted).
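Unlike in the symmetrical case, I see no formal guarantee here, so it is worth measuring how far the final curve can dip below the input:
n = min(len(s), len(y))
print((y[:n] - s[:n]).min())  # should be >= 0; verify for your own data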

As an initial stab at part of the problem, I've produced a function which fits a polynomial to the data by minimising its integral, subject to the constraint that the polynomial stays strictly above the points. I suspect that if you apply this piecewise over your data, it may work for you.
import numpy as np
import scipy.optimize

def upperpoly(xdata, ydata, order):
    def objective(p):
        """Minimize the integral of the polynomial over the data range"""
        pint = np.polyint(p)
        integral = np.polyval(pint, xdata[-1]) - np.polyval(pint, xdata[0])
        return integral

    def constraints(p):
        """Polynomial values must be >= data at every point"""
        return np.polyval(p, xdata) - ydata

    # Start from a least-squares fit, shifted upwards until it clears the data
    p0 = np.polyfit(xdata, ydata, order)
    y0 = np.polyval(p0, xdata)
    shift = (ydata - y0).max()
    p0[-1] += shift
    result = scipy.optimize.minimize(objective, p0,
                                     constraints={'type': 'ineq',
                                                  'fun': constraints})
    return result.x
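A hypothetical usage sketch on the sample curve from the question (order=6 is an arbitrary choice):
y = np.array([ 5, 5, 5, 8, 8, 8, 2, 2, 2, 2, 2, 3, 3, 7 ], dtype=float)
x = np.arange(len(y), dtype=float)
fit = np.polyval(upperpoly(x, y, order=6), x)
print(np.all(fit >= y - 1e-6))  # constraint holds up to solver tolerance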

As pointed out in my note, the behaviour of your green line is inconsistent in the regions before and after the eight-high plateau. If the left region behavior is what you want, you could do something like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
from scipy.spatial import ConvexHull
# %matplotlib inline # for interactive notebooks
y=np.array([ 5, 5, 5, 8, 8, 8, 2, 2, 2, 2, 2, 3, 3, 7])
x=np.array(range(len(y)))
#######
# This essentially selects the vertices that you'd touch stretching a
# rubber band over the top of the function
vs = ConvexHull(np.asarray([x,y]).transpose()).vertices
indices_of_upper_hull_verts = list(reversed(np.concatenate([vs[np.where(vs == len(x)-1)[0][0]: ],vs[0:1]])))
newX = x[indices_of_upper_hull_verts]
newY = y[indices_of_upper_hull_verts]
#########
x_smooth = np.linspace(newX.min(), newX.max(),500)
f = interp1d(newX, newY, kind='quadratic')
y_smooth=f(x_smooth)
plt.plot(x, y)
plt.plot(x_smooth, y_smooth)
plt.scatter(x, y)
which yields:
UPDATE:
Here's an alternative that might suit you better. If instead of a rolling average you use a simple convolution kernel with a central weight of 1, the resulting curve will never be smaller than the input. The wings of the convolution kernel can be adjusted for look-ahead/look-behind.
Like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import convolve
## For interactive notebooks
# %matplotlib inline
y = np.array([ 5, 5, 5, 8, 8, 8, 2, 2, 2, 2, 2, 3, 3, 7 ]).astype(float)
preLength = 1
postLength = 1
preWeight = 0.2
postWeight = 0.2
kernel = ([preWeight/preLength] * preLength
          + [1]
          + [postWeight/postLength] * postLength)
output = convolve(y, kernel)
x = np.array(range(len(y)))
plt.plot(x, y)
plt.plot(x, output)
plt.scatter(x, y)
A drawback is that, because the integrated kernel will typically be larger than one (which is what ensures that the output curve is smooth and never below the input), the output curve will always sit somewhat above the input, e.g. floating above the large plateau rather than sitting right on top of it as you drew.
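To make that concrete, a quick check on the example above: the kernel's total weight is 1.4, and since the input is non-negative and the central weight is 1, the output can never dip below the input:
print(sum(kernel))          # 1.4
print(np.all(output >= y))  # True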

Related


I am looking to find the peaks in some Gaussian-smoothed data that I have. I have looked at some of the peak detection methods available, but they require an input range over which to search, and I want this to be more automated than that. These methods are also designed for non-smoothed data. As my data is already smoothed, I require a much simpler way of retrieving the peaks. My raw and smoothed data is in the graph below.
Essentially, is there a pythonic way of retrieving the max values from the array of smoothed data such that an array like
a = [1,2,3,4,5,4,3,2,1,2,3,2,1,2,3,4,5,6,5,4,3,2,1]
would return:
r = [5,3,6]
There exists a built-in function argrelextrema that gets this task done:
import numpy as np
from scipy.signal import argrelextrema
a = np.array([1,2,3,4,5,4,3,2,1,2,3,2,1,2,3,4,5,6,5,4,3,2,1])
# determine the indices of the local maxima
max_ind = argrelextrema(a, np.greater)
# get the actual values using these indices
r = a[max_ind] # array([5, 3, 6])
That gives you the desired output for r.
As of SciPy version 1.1, you can also use find_peaks. Below are two examples taken from the documentation itself.
Using the height argument, one can select all maxima above a certain threshold (in this example, all non-negative maxima; this can be very useful if one has to deal with a noisy baseline; if you want to find minima, just multiply your input by -1):
import matplotlib.pyplot as plt
from scipy.misc import electrocardiogram
from scipy.signal import find_peaks
import numpy as np
x = electrocardiogram()[2000:4000]
peaks, _ = find_peaks(x, height=0)
plt.plot(x)
plt.plot(peaks, x[peaks], "x")
plt.plot(np.zeros_like(x), "--", color="gray")
plt.show()
Another extremely helpful argument is distance, which defines the minimum distance between two peaks:
peaks, _ = find_peaks(x, distance=150)
# difference between peaks is >= 150
print(np.diff(peaks))
# prints [186 180 177 171 177 169 167 164 158 162 172]
plt.plot(x)
plt.plot(peaks, x[peaks], "x")
plt.show()
If your original data is noisy, then using statistical methods is preferable, as not all peaks are going to be significant. For your a array, a possible solution is to use double differentials:
peaks = a[1:-1][np.diff(np.diff(a)) < 0]
# peaks = array([5, 3, 6])
>>> import numpy as np
>>> from scipy.signal import argrelextrema
>>> a = np.array([1,2,3,4,5,4,3,2,1,2,3,2,1,2,3,4,5,6,5,4,3,2,1])
>>> argrelextrema(a, np.greater)
(array([ 4, 10, 17]),)
>>> a[argrelextrema(a, np.greater)]
array([5, 3, 6])
If your input represents a noisy distribution, you can try smoothing it with the NumPy convolve function.
If you can exclude maxima at the edges of the arrays, you can always check if an element is bigger than each of its neighbors like this:
import numpy as np

array = np.array([1,2,3,4,5,4,3,2,1,2,3,2,1,2,3,4,5,6,5,4,3,2,1])
# Check that each element is bigger than both of its neighbors (excluding edges):
mask = (array[1:-1] > array[:-2]) & (array[1:-1] > array[2:])
# Print these values
print(array[1:-1][mask])
# Locations of the maxima
print(np.arange(1, array.size-1)[mask])

Create a Cauchy distribution histogram with a lower and an upper limit

I would like to simulate the following Lorentzian distribution with a histogram
L = (𝛤/2π) / ((E − E₀)² + 0.25 𝛤²)
I found the scipy.stats.cauchy and would like to truncate the distribution at a lower and an upper limit like so:
L = cauchy.rvs(size=300, loc = 5, scale =2.5, limits = [0,15] )
Is it possible?
You cannot add limits to the rvs method. As far as I know, only truncnorm can do that. What you can do is either clip the values using scipy.clip (or numpy.clip), or filter out the values outside your limits using a mask.
The first method will create a lot of 0s and 15s:
import numpy as np
from scipy.stats import cauchy

L = np.clip(cauchy.rvs(size=300, loc=5, scale=2.5), 0, 15)
The second will be randomly distributed in your interval:
import numpy as np
from scipy.stats import cauchy

L = cauchy.rvs(size=10000, loc=5, scale=2.5)  # create a larger set to filter it out
L = L[np.logical_and(L < 15, L > 0)][:300]
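A third option, not mentioned above: inverse-CDF (inverse transform) sampling. Draw uniform values between cdf(0) and cdf(15) and map them back through the quantile function ppf, so every sample lands inside the limits by construction:
import numpy as np
from scipy.stats import cauchy

lo, hi = cauchy.cdf([0, 15], loc=5, scale=2.5)
L = cauchy.ppf(np.random.uniform(lo, hi, size=300), loc=5, scale=2.5)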

Python: Kernel Density Estimation for positive values

I want to get kernel density estimation for positive data points. Using Python Scipy Stats package, I came up with the following code.
import numpy as np
import scipy.stats as st

def get_pdf(data):
    a = np.array(data)
    ag = st.gaussian_kde(a)
    x = np.linspace(0, max(data), int(max(data)))
    y = ag(x)
    return x, y
This works perfectly for most data sets, but it gives an erroneous result for "all positive" data points. To make sure this works correctly, I use numerical integration to compute the area under this curve.
def trapezoidal_2(ag, a, b, n):
    h = float(b - a) / n
    s = 0.0
    s += ag(a)[0]/2.0
    for i in range(1, n):
        s += ag(a + i*h)[0]
    s += ag(b)[0]/2.0
    return s * h
Since the data is spread in the region (0, int(max(data))), we should get a value close to 1 when executing the following lines.
b = 1
data = st.pareto.rvs(b, size=10000)
data = list(data)
a = np.array(data)
ag = st.gaussian_kde(a)
trapezoidal_2(ag, 0, int(max(data)), int(max(data))*2)
But it gives a value close to 0.5 when I test.
But when I integrate from -100 to max(data), it provides a value close to 1.
trapezoidal_2(ag, -100, int(max(data)), int(max(data))*2+200)
The reason is, ag (KDE) is defined for values less than 0, even though the original data set contains only positive values.
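You can quantify that leakage directly with the integrate_box_1d method of gaussian_kde (a small check, reusing the ag estimate from the code above):
print(ag.integrate_box_1d(-np.inf, 0))  # roughly 0.5 for this Pareto sample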
So how can I get a kernel density estimation that considers only positive values, such that the area under the curve in the region (0, max(data)) is close to 1?
The choice of bandwidth is quite important when performing kernel density estimation. I think Scott's Rule and Silverman's Rule work well for distributions similar to a Gaussian. However, they do not work well for the Pareto distribution.
Quote from the doc:
Bandwidth selection strongly influences the estimate obtained from the KDE (much more so than the actual shape of the kernel). Bandwidth selection can be done by a "rule of thumb", by cross-validation, by "plug-in methods" or by other means; see [3], [4] for reviews. gaussian_kde uses a rule of thumb, the default is Scott's Rule.
Try with different bandwidth values, for example:
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
b = 1
sample = stats.pareto.rvs(b, size=3000)
kde_sample_scott = stats.gaussian_kde(sample, bw_method='scott')
kde_sample_scalar = stats.gaussian_kde(sample, bw_method=1e-3)
# Compute the integral:
print('integral scott:', kde_sample_scott.integrate_box_1d(0, np.inf))
print('integral scalar:', kde_sample_scalar.integrate_box_1d(0, np.inf))
# Graph:
x_span = np.logspace(-2, 1, 550)
plt.plot(x_span, stats.pareto.pdf(x_span, b), label='theoretical pdf')
plt.plot(x_span, kde_sample_scott(x_span), label="estimated pdf 'scott'")
plt.plot(x_span, kde_sample_scalar(x_span), label="estimated pdf 'scalar'")
plt.xlabel('X'); plt.legend();
gives:
integral scott: 0.5572130540733236
integral scalar: 0.9999999999968957
We see that the KDE using Scott's rule is wrong.
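Besides tuning the bandwidth, a common complementary trick (not part of the answer above) is boundary reflection: fit the KDE on the data mirrored about zero and double the density on the positive side, which forces all probability mass onto x >= 0. A minimal sketch:
import numpy as np
from scipy import stats

b = 1
sample = stats.pareto.rvs(b, size=3000)
kde = stats.gaussian_kde(np.concatenate([sample, -sample]))
pdf_pos = lambda x: 2 * kde(x)  # valid estimate for x >= 0 only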

Using PyKalman on Raw Acceleration Data to Calculate Position

This is my first question on Stack Overflow, so I apologize if I word it poorly. I am writing code to take raw acceleration data from an IMU and then integrate it to update the position of an object. Currently this code takes a new accelerometer reading every millisecond and uses that to update the position. My system has a lot of noise, which results in crazy readings due to compounding error, even with the ZUPT scheme I implemented. I know that a Kalman filter is theoretically ideal for this scenario, and I would like to use the pykalman module instead of building one myself.
My first question is: can pykalman be used in real time like this? From the documentation it looks to me like you have to have a record of all measurements and then perform the smooth operation, which would not be practical, as I want to filter recursively every millisecond.
My second question is: for the transition matrix, can I only apply pykalman to the acceleration data by itself, or can I somehow include the double integration up to the position? What would that matrix look like?
If pykalman is not practical for this situation, is there another way I can implement a Kalman Filter? Thank you in advance!
You can use a Kalman Filter in this case, but your position estimation will strongly depend on the precision of your acceleration signal. The Kalman Filter is actually most useful for a fusion of several signals, so that the error of one signal can be compensated by another. Ideally you use sensors based on different physical effects (for example an IMU for acceleration, GPS for position, odometry for velocity).
In this answer I'm going to use readings from two acceleration sensors (both in X direction). One of these sensors is expensive and precise; the second one is much cheaper. So you will see the influence of the sensor precision on the position and velocity estimations.
You already mentioned the ZUPT scheme. I just want to add some notes: it is very important to have a good estimation of the pitch angle, to get rid of the gravitation component in your X-acceleration. If you use Y- and Z-acceleration you need both pitch and roll angles.
Let's start with modelling. Assume you have only acceleration readings in X-direction, so your observation vector is

z = [accX]

Now you need to define the smallest data set which completely describes your system at each point of time. It will be the system state:

x = [posX, velX, accX]ᵀ

The mapping between the measurement and state domains is defined by the observation matrix:

H = [0, 0, 1]

Now you need to describe the system dynamics; according to this information the Filter will predict a new state based on the previous one. The transition matrix is

F = [[1, dt, 0.5*dt²],
     [0,  1,      dt],
     [0,  0,       1]]

In my case dt=0.01s. Using this matrix the Filter will integrate the acceleration signal to estimate the velocity and position.
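For intuition, here is a single prediction step with this transition matrix (a minimal sketch with made-up state values):
import numpy as np

dt = 0.01
F = np.array([[1, dt, 0.5*dt**2],
              [0,  1,        dt],
              [0,  0,         1]])
x = np.array([0.0, 1.0, 2.0])  # position, velocity, acceleration
print(F @ x)  # one step ahead: position 0.0101, velocity 1.02, acceleration 2.0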
The observation covariance R can be described by the variance of your sensor readings. In my case I have only one signal in my observation, so the observation covariance is equal to the variance of the X-acceleration (the value can be calculated based on your sensors datasheet).
Through the transition covariance Q you describe the system noise. The smaller the matrix values, the smaller the assumed system noise: the Filter becomes stiffer, the estimation is delayed, and the weight of the system's past grows relative to new measurements. With larger values the filter is more flexible and reacts strongly to each new measurement.
Now everything is ready to configure the Pykalman. In order to use it in real time, you have to use the filter_update function.
from pykalman import KalmanFilter
import numpy as np
import matplotlib.pyplot as plt
load_data()  # placeholder: loads the signals described in the comment below
# Data description
# Time
# AccX_HP - high precision acceleration signal
# AccX_LP - low precision acceleration signal
# RefPosX - real position (ground truth)
# RefVelX - real velocity (ground truth)
# switch between two acceleration signals
use_HP_signal = 1
if use_HP_signal:
    AccX_Value = AccX_HP
    AccX_Variance = 0.0007
else:
    AccX_Value = AccX_LP
    AccX_Variance = 0.0020
# time step
dt = 0.01
# transition_matrix
F = [[1, dt, 0.5*dt**2],
     [0,  1,        dt],
     [0,  0,         1]]
# observation_matrix
H = [0, 0, 1]
# transition_covariance
Q = [[0.2,   0,     0],
     [  0, 0.1,     0],
     [  0,   0, 10e-4]]
# observation_covariance
R = AccX_Variance
# initial_state_mean
X0 = [0,
      0,
      AccX_Value[0, 0]]
# initial_state_covariance
P0 = [[0, 0, 0],
      [0, 0, 0],
      [0, 0, AccX_Variance]]
n_timesteps = AccX_Value.shape[0]
n_dim_state = 3
filtered_state_means = np.zeros((n_timesteps, n_dim_state))
filtered_state_covariances = np.zeros((n_timesteps, n_dim_state, n_dim_state))
kf = KalmanFilter(transition_matrices=F,
                  observation_matrices=H,
                  transition_covariance=Q,
                  observation_covariance=R,
                  initial_state_mean=X0,
                  initial_state_covariance=P0)
# iterative estimation for each new measurement
for t in range(n_timesteps):
    if t == 0:
        filtered_state_means[t] = X0
        filtered_state_covariances[t] = P0
    else:
        filtered_state_means[t], filtered_state_covariances[t] = (
            kf.filter_update(
                filtered_state_means[t-1],
                filtered_state_covariances[t-1],
                AccX_Value[t, 0]
            )
        )
f, axarr = plt.subplots(3, sharex=True)
axarr[0].plot(Time, AccX_Value, label="Input AccX")
axarr[0].plot(Time, filtered_state_means[:, 2], "r-", label="Estimated AccX")
axarr[0].set_title('Acceleration X')
axarr[0].grid()
axarr[0].legend()
axarr[0].set_ylim([-4, 4])
axarr[1].plot(Time, RefVelX, label="Reference VelX")
axarr[1].plot(Time, filtered_state_means[:, 1], "r-", label="Estimated VelX")
axarr[1].set_title('Velocity X')
axarr[1].grid()
axarr[1].legend()
axarr[1].set_ylim([-1, 20])
axarr[2].plot(Time, RefPosX, label="Reference PosX")
axarr[2].plot(Time, filtered_state_means[:, 0], "r-", label="Estimated PosX")
axarr[2].set_title('Position X')
axarr[2].grid()
axarr[2].legend()
axarr[2].set_ylim([-10, 1000])
plt.show()
When using the better IMU-sensor, the estimated position is exactly the same as the ground truth:
The cheaper sensor gives significantly worse results:
I hope I could help you. If you have some questions, I will try to answer them.
UPDATE
If you want to experiment with different data you can generate them easily (unfortunately I don't have the original data any more).
Here is a simple MATLAB script to generate the reference, good, and poor sensor sets.
clear;
dt = 0.01;
t=0:dt:70;
accX_var_best = 0.0005; % (m/s^2)^2
accX_var_good = 0.0007; % (m/s^2)^2
accX_var_worst = 0.001; % (m/s^2)^2
accX_ref_noise = randn(size(t))*sqrt(accX_var_best);
accX_good_noise = randn(size(t))*sqrt(accX_var_good);
accX_worst_noise = randn(size(t))*sqrt(accX_var_worst);
accX_basesignal = sin(0.3*t) + 0.5*sin(0.04*t);
accX_ref = accX_basesignal + accX_ref_noise;
velX_ref = cumsum(accX_ref)*dt;
distX_ref = cumsum(velX_ref)*dt;
accX_good_offset = 0.001 + 0.0004*sin(0.05*t);
accX_good = accX_basesignal + accX_good_noise + accX_good_offset;
velX_good = cumsum(accX_good)*dt;
distX_good = cumsum(velX_good)*dt;
accX_worst_offset = -0.08 + 0.004*sin(0.07*t);
accX_worst = accX_basesignal + accX_worst_noise + accX_worst_offset;
velX_worst = cumsum(accX_worst)*dt;
distX_worst = cumsum(velX_worst)*dt;
subplot(3,1,1);
plot(t, accX_ref);
hold on;
plot(t, accX_good);
plot(t, accX_worst);
hold off;
grid minor;
legend('ref', 'good', 'worst');
title('AccX');
subplot(3,1,2);
plot(t, velX_ref);
hold on;
plot(t, velX_good);
plot(t, velX_worst);
hold off;
grid minor;
legend('ref', 'good', 'worst');
title('VelX');
subplot(3,1,3);
plot(t, distX_ref);
hold on;
plot(t, distX_good);
plot(t, distX_worst);
hold off;
grid minor;
legend('ref', 'good', 'worst');
title('DistX');
The simulated data looks much the same as the data above.

Autocorrelation code in Python produces errors (guitar pitch detection)

This link provides code for an autocorrelation-based pitch detection algorithm. I am using it to detect pitches in simple guitar melodies.
In general, it produces very good results. For example, for the melody C4, C#4, D4, D#4, E4 it outputs:
262.743653536
272.144441273
290.826273006
310.431336809
327.094621169
Which correlates to the correct notes.
However, in some cases like this audio file (E4, F4, F#4, G4, G#4, A4, A#4, B4) it produces errors:
325.861452246
13381.6439242
367.518651703
391.479384923
414.604661221
218.345286173
466.503751322
244.994090035
More specifically, there are three errors here: 13381Hz is wrongly detected instead of F4 (~350Hz) (weird error), and also 218Hz instead of A4 (440Hz) and 244Hz instead of B4 (~493Hz), which are octave errors.
I assume the two kinds of errors are caused by different things? Here is the code:
slices = segment_signal(y, sr)
for segment in slices:
    pitch = freq_from_autocorr(segment, sr)
    print(pitch)

def segment_signal(y, sr, onset_frames=None, offset=0.1):
    if onset_frames is None:
        onset_frames = remove_dense_onsets(librosa.onset.onset_detect(y=y, sr=sr))
    offset_samples = int(librosa.time_to_samples(offset, sr))
    print(onset_frames)
    slices = np.array([y[i : i + offset_samples] for i
                       in librosa.frames_to_samples(onset_frames)])
    return slices
You can see the freq_from_autocorr function in the first link above.
The only thing that I have changed is this line:
corr = corr[len(corr)/2:]
Which I have replaced with:
corr = corr[int(len(corr)/2):]
UPDATE:
I noticed that the smaller the offset I use (the smaller the signal segment I use to detect each pitch), the more high-frequency (10000+ Hz) errors I get.
Specifically, I noticed that the part that behaves differently in those (10000+ Hz) cases is the calculation of the i_peak value. In cases with no error it is in the range of 50-150, whereas in the error cases it is 3-5.
The autocorrelation function in the code snippet that you linked is not particularly robust. In order to get the correct result, it needs to locate the first peak on the left hand side of the autocorrelation curve. The method that the other developer used (calling the numpy.argmax() function) does not always find the correct value.
I've implemented a slightly more robust version, using the peakutils package. I don't promise that it's perfectly robust either, but in any case it achieves a better result than the version of the freq_from_autocorr() function that you were previously using.
My example solution is listed below:
import librosa
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import fftconvolve
from pprint import pprint
import peakutils
def freq_from_autocorr(signal, fs):
    # Calculate autocorrelation (same thing as convolution, but with one input
    # reversed in time), and throw away the negative lags
    signal -= np.mean(signal)  # Remove DC offset
    corr = fftconvolve(signal, signal[::-1], mode='full')
    corr = corr[len(corr)//2:]
    # Find the first peak on the left
    i_peak = peakutils.indexes(corr, thres=0.8, min_dist=5)[0]
    i_interp = parabolic(corr, i_peak)[0]
    return fs / i_interp, corr, i_interp
def parabolic(f, x):
    """
    Quadratic interpolation for estimating the true position of an
    inter-sample maximum when nearby samples are known.

    f is a vector and x is an index for that vector.

    Returns (vx, vy), the coordinates of the vertex of a parabola that goes
    through point x and its two neighbors.

    Example:
    Defining a vector f with a local maximum at index 3 (= 6), find local
    maximum if points 2, 3, and 4 actually defined a parabola.
    In [3]: f = [2, 3, 1, 6, 4, 2, 3, 1]
    In [4]: parabolic(f, argmax(f))
    Out[4]: (3.2142857142857144, 6.1607142857142856)
    """
    xv = 1/2. * (f[x-1] - f[x+1]) / (f[x-1] - 2 * f[x] + f[x+1]) + x
    yv = f[x] - 1/4. * (f[x-1] - f[x+1]) * (xv - x)
    return (xv, yv)
# Time window after initial onset (in units of seconds)
window = 0.1
# Open the file and obtain the sampling rate
y, sr = librosa.core.load("./Vocaroo_s1A26VqpKgT0.mp3")
idx = np.arange(len(y))
# Set the window size in terms of number of samples
winsamp = int(window * sr)
# Calculate the onset frames in the usual way
onset_frames = librosa.onset.onset_detect(y=y, sr=sr)
onstm = librosa.frames_to_time(onset_frames, sr=sr)
fqlist = [] # List of estimated frequencies, one per note
crlist = [] # List of autocorrelation arrays, one array per note
iplist = [] # List of peak interpolated peak indices, one per note
for tm in onstm:
    startidx = int(tm * sr)
    freq, corr, ip = freq_from_autocorr(y[startidx:startidx+winsamp], sr)
    fqlist.append(freq)
    crlist.append(corr)
    iplist.append(ip)
pprint(fqlist)
# Choose which notes to plot (it's set to show all 8 notes in this case)
plidx = [0, 1, 2, 3, 4, 5, 6, 7]
# Plot amplitude curves of all notes in the plidx list
fgwin = plt.figure(figsize=[8, 10])
fgwin.subplots_adjust(bottom=0.0, top=0.98, hspace=0.3)
axwin = []
ii = 1
for tm in onstm[plidx]:
    axwin.append(fgwin.add_subplot(len(plidx)+1, 1, ii))
    startidx = int(tm * sr)
    axwin[-1].plot(np.arange(startidx, startidx+winsamp), y[startidx:startidx+winsamp])
    ii += 1
axwin[-1].set_xlabel('Sample ID Number', fontsize=18)
fgwin.show()
# Plot autocorrelation function of all notes in the plidx list
fgcorr = plt.figure(figsize=[8,10])
fgcorr.subplots_adjust(bottom=0.0, top=0.98, hspace=0.3)
axcorr = []
ii = 1
for cr, ip in zip([crlist[ii] for ii in plidx], [iplist[ij] for ij in plidx]):
    if ii == 1:
        shax = None
    else:
        shax = axcorr[0]
    axcorr.append(fgcorr.add_subplot(len(plidx)+1, 1, ii, sharex=shax))
    axcorr[-1].plot(np.arange(500), cr[0:500])
    # Plot the location of the leftmost peak
    axcorr[-1].axvline(ip, color='r')
    ii += 1
axcorr[-1].set_xlabel('Time Lag Index (Zoomed)', fontsize=18)
fgcorr.show()
The printed output looks like:
In [1]: %run autocorr.py
[325.81996740236065,
346.43374761017725,
367.12435233192753,
390.17291696559079,
412.9358117076161,
436.04054933498134,
465.38986619237039,
490.34120132405866]
The first figure produced by my code sample depicts the amplitude curves for the next 0.1 seconds following each detected onset time:
The second figure produced by the code shows the autocorrelation curves, as computed inside of the freq_from_autocorr() function. The vertical red lines depict the location of the first peak on the left for each curve, as estimated by the peakutils package. The method used by the other developer was getting incorrect results for some of these red lines; that's why his version of that function was occasionally returning the wrong frequencies.
My suggestion would be to test the revised version of the freq_from_autocorr() function on other recordings, see if you can find more challenging examples where even the improved version still gives incorrect results, and then get creative and try to develop an even more robust peak finding algorithm that never, ever mis-fires.
The autocorrelation method is not always right. You may want to implement a more sophisticated method like YIN:
http://audition.ens.fr/adc/pdf/2002_JASA_YIN.pdf
or MPM:
http://www.cs.otago.ac.nz/tartini/papers/A_Smarter_Way_to_Find_Pitch.pdf
Both of the above papers are good reads.
