How to envelope a non-periodic signal in Python?

I want to analyse a .trc oscilloscope file, find impulses, and envelope them. In the end I want to plot the envelope.
data file (trc): https://ufile.io/z4m4d
Code:
import matplotlib.pyplot as plt
import pandas as pd
import readTrc
import numpy as np
from scipy.signal import hilbert
#Read trc file
datX, datY, m = readTrc.readTrc('C220180104_ch2_UHF00014.trc')
srx, sry = pd.Series(datX * 1000), pd.Series(datY * 1000)
df = pd.concat([srx, sry], axis = 1)
df.set_index(0, inplace = True)
#Impulse location
x1 = df[1].idxmax() - 0.0005 #numeric used to show area before impulse
x2 = df[1].idxmax() + 0.003 #numeric used to show area after impulse
df2 = df.loc[x1:x2]
#Locate Maximum
print('Maximum at:', round(df[1].idxmax(), 6), 'ms')
#Plot Impulse (abs)
df3 = df2.abs().interpolate()
df3.plot.area(grid = 1, linewidth = 0.5)
#Envelope
signal = hilbert(df2)
envelope = np.abs(signal)
df4 = pd.DataFrame(envelope)
df4.plot(color = 'red')
plt.xlabel('Time / ms')
plt.ylabel('UHF signal / mV')
##plt.savefig('UHF_plot.png', dpi = 600)
plt.show()
print('done')
The output does not look like an envelope.
(plot and envelope screenshots omitted)
Edit: this is an approximation of what I want (image omitted).

Envelope demodulation requires bandpassing before the Hilbert transform. In your case, I believe that low-passing your "envelope signal" will bring you to the required result.
You can also create a semi-envelope using find_peaks, but I assume that was not your intention.
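A minimal sketch of that approach, reusing df2 from the question (the filter order and the normalized cutoff 0.01 are placeholder choices, not tuned values):
import numpy as np
from scipy.signal import hilbert, butter, filtfilt
# hilbert() transforms along the last axis by default, so hilbert(df2) on an
# (N, 1) DataFrame transforms each length-1 row; extract a 1-D array instead.
y = df2[1].to_numpy()
envelope = np.abs(hilbert(y))       # raw envelope from the analytic signal
b, a = butter(4, 0.01)              # placeholder low-pass; tune the cutoff
smooth_envelope = filtfilt(b, a, envelope)
For the find_peaks route, scipy.signal.find_peaks returns the peak indices, which can then be interpolated into a semi-envelope.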

Related

Using scipy.optimize.curve_fit with more than one set of input data and two variables in p0

I am using scipy.optimize.curve_fit to fit measured (test in the code) data to theoretical (run in the code) data. Attached are two codes. In the first one I have one measured and one theoretical data set. When I use scipy.optimize.curve_fit I get approximately the correct temperature. The problem comes when I need to extend scipy.optimize.curve_fit to more than one measured and theoretical data set. The second code is my progress so far. How do I deal with two sets of input data, i.e. what do I replace the x- and y-data with? For example, do I need to combine the data in some manner? I have tried a few ways without success. Any help would be appreciated.
import pandas as pd
import numpy as np
import scipy.optimize
from scipy import interpolate
# run (theoretical) data: wave, counts, temp
run_1 = pd.read_excel("run_1.xlsx")
run_1_temp = np.array(run_1['temp'])
run_1_counts = np.array(run_1['count'])
# test data: wave, counts, temp = 30
test_1 = pd.read_excel("test_1.xlsx")
xdata = test_1['wave']
ydata = test_1['counts']
# Interpolate
inter_run_1 = interpolate.interp1d(run_1_temp, run_1_counts, kind='linear', fill_value='extrapolate')
run_1_temp_new = np.arange(20, 50, 0.1)
run_1_count_new = inter_run_1(run_1_temp_new)
# Curve fit
def f(wave, temp):
    signal = inter_run_1(temp)
    return signal
popt, pcov = scipy.optimize.curve_fit(f, xdata, ydata, p0=[30])
print(popt, pcov)
import pandas as pd
import numpy as np
import scipy.optimize
from scipy import interpolate
# run (theoretical) data: wave, counts, temp
run_1 = pd.read_excel("run_1.xlsx")
run_2 = pd.read_excel("run_2.xlsx")
# test data 1: wave, counts, temp = 30
test_1 = pd.read_excel("test_1.xlsx")
xdata = test_1['wave']
ydata = test_1['counts']
# test data 2: wave, counts, temp = 40
test_2 = pd.read_excel("test_2.xlsx")
x1data = test_2['wave']
y1data = test_2['counts']
run_1_temp = np.array(run_1['temp'])
run_1_counts = np.array(run_1['count'])
run_2_temp = np.array(run_2['temp'])
run_2_counts = np.array(run_2['count'])
# Interpolate
inter_run_1 = interpolate.interp1d(run_1_temp, run_1_counts, kind='linear', fill_value='extrapolate')
run_1_temp_new = np.arange(20, 50, 0.1)
run_1_count_new = inter_run_1(run_1_temp_new)
inter_run_2 = interpolate.interp1d(run_2_temp, run_2_counts, kind='linear', fill_value='extrapolate')
run_2_temp_new = np.arange(20, 50, 0.1)
run_2_count_new = inter_run_2(run_2_temp_new)
def f(wave, temp1, temp2):
    signal_1 = inter_run_1(temp1)
    signal_2 = inter_run_2(temp2)
    signal = signal_1 + signal_2
    return signal
popt, pcov = scipy.optimize.curve_fit(f, xdata, ydata, p0=[30, 50])
print(popt, pcov)
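One common way to make curve_fit handle both measured datasets at once is to stack them and have the model return the matching stacked prediction. A minimal sketch under that assumption, reusing the names defined above (f_joint is a hypothetical helper, not from the original post):
import numpy as np
import scipy.optimize
# Stack both measured datasets end to end.
x_all = np.hstack([xdata, x1data])
y_all = np.hstack([ydata, y1data])
def f_joint(wave_all, temp1, temp2):
    # The first block of points belongs to test_1, the second to test_2;
    # each interpolator returns the model count for its temperature.
    sig1 = np.full(len(xdata), float(inter_run_1(temp1)))
    sig2 = np.full(len(x1data), float(inter_run_2(temp2)))
    return np.hstack([sig1, sig2])
popt, pcov = scipy.optimize.curve_fit(f_joint, x_all, y_all, p0=[30, 50])
print(popt, pcov)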

Creating a 2D Gaussian random field from a given 2D variance

I've been trying to create a 2D map of blobs of matter (a Gaussian random field) using a variance I have calculated. This variance is a 2D array. I have tried using numpy.random.normal, since it allows a 2D input for the variance, but it doesn't really create a map with the trend I expect from the input parameters. One of the important input constants, lambda_c, should manifest itself as the physical size (diameter) of the blobs. However, when I change lambda_c, the size of the blobs barely changes, if at all. For example, if I set lambda_c = 40 parsecs, the map should have blobs that are 40 parsecs in diameter. A MWE to produce the map using my variance:
import numpy as np
import random
import matplotlib.pyplot as plt
from matplotlib.pyplot import show, plot
import scipy.integrate as integrate
from scipy.interpolate import RectBivariateSpline
n = 300
c = 3e8
G = 6.67e-11
M_sun = 1.989e30
pc = 3.086e16 # parsec
Dds = 1097.07889283e6*pc
Ds = 1726.62069147e6*pc
Dd = 1259e6*pc
FOV_arcsec_original = 5.
FOV_arcmin = FOV_arcsec_original/60.
pix2rad = ((FOV_arcmin/60.)/float(n))*np.pi/180.
rad2pix = 1./pix2rad
x_pix = np.linspace(-FOV_arcsec_original/2/pix2rad/180.*np.pi/3600.,FOV_arcsec_original/2/pix2rad/180.*np.pi/3600.,n)
y_pix = np.linspace(-FOV_arcsec_original/2/pix2rad/180.*np.pi/3600.,FOV_arcsec_original/2/pix2rad/180.*np.pi/3600.,n)
X_pix,Y_pix = np.meshgrid(x_pix,y_pix)
conc = 10.
M = 1e13*M_sun
r_s = 18*1e3*pc
lambda_c = 40*pc ### The important parameter that doesn't seem to manifest itself in the map when changed
rho_s = M/((4*np.pi*r_s**3)*(np.log(1+conc) - (conc/(1+conc))))
sigma_crit = (c**2*Ds)/(4*np.pi*G*Dd*Dds)
k_s = rho_s*r_s/sigma_crit
theta_s = r_s/Dd
Renorm = (4*G/c**2)*(Dds/(Dd*Ds))
#### Here I just interpolate and zoom into my field of view to get better resolutions
A = np.sqrt(X_pix**2 + Y_pix**2)*pix2rad/theta_s
A_1 = A[100:200,0:100]
n_x = n_y = 100
FOV_arcsec_x = FOV_arcsec_original*(100./300)
FOV_arcmin_x = FOV_arcsec_x/60.
pix2rad_x = ((FOV_arcmin_x/60.)/float(n_x))*np.pi/180.
rad2pix_x = 1./pix2rad_x
FOV_arcsec_y = FOV_arcsec_original*(100./300)
FOV_arcmin_y = FOV_arcsec_y/60.
pix2rad_y = ((FOV_arcmin_y/60.)/float(n_y))*np.pi/180.
rad2pix_y = 1./pix2rad_y
x1 = np.linspace(-FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,n_x)
y1 = np.linspace(-FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,n_y)
X1,Y1 = np.meshgrid(x1,y1)
n_x_2 = 500
n_y_2 = 500
x2 = np.linspace(-FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,n_x_2)
y2 = np.linspace(-FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,n_y_2)
X2,Y2 = np.meshgrid(x2,y2)
interp_spline = RectBivariateSpline(y1,x1,A_1)
A_2 = interp_spline(y2,x2)
A_3 = A_2[50:450,0:400]
n_x_3 = n_y_3 = 400
FOV_arcsec_x = FOV_arcsec_original*(100./300)*400./500.
FOV_arcmin_x = FOV_arcsec_x/60.
pix2rad_x = ((FOV_arcmin_x/60.)/float(n_x_3))*np.pi/180.
rad2pix_x = 1./pix2rad_x
FOV_arcsec_y = FOV_arcsec_original*(100./300)*400./500.
FOV_arcmin_y = FOV_arcsec_y/60.
pix2rad_y = ((FOV_arcmin_y/60.)/float(n_y_3))*np.pi/180.
rad2pix_y = 1./pix2rad_y
x3 = np.linspace(-FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,n_x_3)
y3 = np.linspace(-FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,n_y_3)
X3,Y3 = np.meshgrid(x3,y3)
n_x_4 = 1000
n_y_4 = 1000
x4 = np.linspace(-FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,n_x_4)
y4 = np.linspace(-FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,n_y_4)
X4,Y4 = np.meshgrid(x4,y4)
interp_spline = RectBivariateSpline(y3,x3,A_3)
A_4 = interp_spline(y4,x4)
############### Function to calculate variance
variance = np.zeros((len(A_4),len(A_4)))
def variance_fluctuations(x):
    for i in range(len(x)):
        for j in range(len(x)):
            if x[j][i] < 1.:
                variance[j][i] = (k_s**2)*(lambda_c/r_s)*((np.pi/x[j][i]) - (1./(x[j][i]**2 -1)**3.)*(((6.*x[j][i]**4. - 17.*x[j][i]**2. + 26)/3.)+ (((2.*x[j][i]**6. - 7.*x[j][i]**4. + 8.*x[j][i]**2. - 8)*np.arccosh(1./x[j][i]))/(np.sqrt(1-x[j][i]**2.)))))
            elif x[j][i] > 1.:
                variance[j][i] = (k_s**2)*(lambda_c/r_s)*((np.pi/x[j][i]) - (1./(x[j][i]**2 -1)**3.)*(((6.*x[j][i]**4. - 17.*x[j][i]**2. + 26)/3.)+ (((2.*x[j][i]**6. - 7.*x[j][i]**4. + 8.*x[j][i]**2. - 8)*np.arccos(1./x[j][i]))/(np.sqrt(x[j][i]**2.-1)))))
variance_fluctuations(A_4)
#### Creating the map
mean = 0
delta_kappa = np.random.normal(0,variance,A_4.shape)
xfinal = np.linspace(-FOV_arcsec_x*np.pi/180./3600.*Dd/pc/2,FOV_arcsec_x*np.pi/180./3600.*Dd/pc/2,1000)
yfinal = np.linspace(-FOV_arcsec_x*np.pi/180./3600.*Dd/pc/2,FOV_arcsec_x*np.pi/180./3600.*Dd/pc/2,1000)
Xfinal, Yfinal = np.meshgrid(xfinal,yfinal)
plt.contourf(Xfinal,Yfinal,delta_kappa,100)
plt.show()
The map looks like this, with the density of blobs increasing towards the right (image omitted). However, the size of the blobs doesn't change, and the map looks virtually the same whether I use lambda_c = 40*pc or lambda_c = 400*pc.
I'm wondering if the np.random.normal function isn't really doing what I expect it to do? I feel like the pixel scale of the map and the way samples are drawn have no link to the size of the blobs. Maybe there is a better way to create the map using the variance; I would appreciate any insight.
I expect the map to look something like this (image omitted), with the blob sizes changing based on the input parameters of my variance.
This is quite a well-visited problem in (surprise, surprise) astronomy and cosmology.
You could use lenstools: https://lenstools.readthedocs.io/en/latest/examples/gaussian_random_field.html
You could also try here:
https://andrewwalker.github.io/statefultransitions/post/gaussian-fields
Not to mention:
https://github.com/bsciolla/gaussian-random-fields
I am not reproducing code here because all credit goes to the above authors; they all came straight out of a Google search :/
Easiest of all is probably the Python module FyeldGenerator, apparently designed for this exact purpose:
https://github.com/cphyc/FyeldGenerator
So (adapted from the GitHub example):
pip install FyeldGenerator
from FyeldGenerator import generate_field
from matplotlib import use
use('Agg')
import matplotlib.pyplot as plt
import numpy as np
plt.figure()
# Helper that generates a power-law power spectrum
def Pkgen(n):
    def Pk(k):
        return np.power(k, -n)
    return Pk
# Draw samples from a normal distribution
def distrib(shape):
    a = np.random.normal(loc=0, scale=1, size=shape)
    b = np.random.normal(loc=0, scale=1, size=shape)
    return a + 1j * b
shape = (512, 512)
field = generate_field(distrib, Pkgen(2), shape)
plt.imshow(field, cmap='jet')
plt.savefig('field.png', dpi=400)
plt.close()
This gives the following field (image omitted).
Looks pretty straightforward to me :)
PS: the FoV implied a telescope observation of the Gaussian random field :)
A completely different and much quicker way may be to simply blur the delta_kappa array with a Gaussian filter. Try adjusting the sigma parameter to alter the blob size.
from scipy.ndimage.filters import gaussian_filter
dk_gf = gaussian_filter(delta_kappa, sigma=20)
Xfinal, Yfinal = np.meshgrid(xfinal, yfinal)
plt.contourf(Xfinal, Yfinal, dk_gf, 100, cmap='jet')
plt.show()
this is the image with sigma=20 (omitted)
this is the image with sigma=2.5 (omitted)
ThunderFlash, try this code to draw the map:
# function to produce blobs:
from scipy.stats import multivariate_normal
def blob(positions, mean=(0, 0), var=1):
    cov = [[var, 0], [0, var]]
    return multivariate_normal(mean, cov).pdf(positions)
"""
now prepare for blob generation.
note that I use a less dense grid to pick the blob centres (regulated by `step`);
this makes the blobs more pronounced and saves calculation time.
use this part instead of your code section below the comment #### Creating the map
"""
delta_kappa = np.random.normal(0, variance, A_4.shape)  # same
step = 10
dk2 = delta_kappa[::step, ::step]  # taking every 10th element
x2, y2 = xfinal[::step], yfinal[::step]
field = np.dstack((Xfinal, Yfinal))
print(field.shape, dk2.shape, x2.shape, y2.shape)
# >> (1000, 1000, 2), (100, 100), (100,), (100,)
result = np.zeros(field.shape[:2])
for x in range(len(x2)):
    for y in range(len(y2)):
        res2 = blob(field, mean=(x2[x], y2[y]), var=10000) * dk2[x, y]
        result += res2
# the cycle above took over 20 minutes on a Ryzen 2700X; it could presumably be
# accelerated by vectorization (see the sketch below)
plt.contourf(Xfinal, Yfinal, result, 100)
plt.show()
You may want to play with the var parameter in blob() to smooth the image, and with step to make it more compressed.
Here is the image that I got using your code (somehow the axes are flipped and the denser areas are at the top; image omitted).
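As a rough sketch of the vectorization mentioned in the code comment above (reusing delta_kappa, step, Xfinal, and Yfinal; sigma=100 is an arbitrary pixel-space stand-in for sqrt(var) in blob(), so the normalization differs):
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter
# Scatter the subsampled amplitudes onto an otherwise empty full-size grid,
# then apply one Gaussian convolution instead of summing blobs in a Python loop.
sparse = np.zeros(delta_kappa.shape)
sparse[::step, ::step] = delta_kappa[::step, ::step]
result_fast = gaussian_filter(sparse, sigma=100, mode='constant')
plt.contourf(Xfinal, Yfinal, result_fast, 100)
plt.show()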

Multiple regression with pykalman?

I'm looking for a way to generalize regression using pykalman from 1 to N regressors. We will not bother with online regression initially; I just want a toy example that sets up the Kalman filter for 2 regressors instead of 1, i.e. Y = c1 * x1 + c2 * x2 + const.
For the single regressor case, the following code works. My question is how to change the filter setup so it works for two regressors:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from pykalman import KalmanFilter

if __name__ == "__main__":
    file_name = '<path>\KalmanExample.txt'
    df = pd.read_csv(file_name, index_col=0)
    prices = df[['ETF', 'ASSET_1']]  #, 'ASSET_2']]
    delta = 1e-5
    trans_cov = delta / (1 - delta) * np.eye(2)
    obs_mat = np.vstack([prices['ETF'],
                         np.ones(prices['ETF'].shape)]).T[:, np.newaxis]
    kf = KalmanFilter(
        n_dim_obs=1,
        n_dim_state=2,
        initial_state_mean=np.zeros(2),
        initial_state_covariance=np.ones((2, 2)),
        transition_matrices=np.eye(2),
        observation_matrices=obs_mat,
        observation_covariance=1.0,
        transition_covariance=trans_cov
    )
    state_means, state_covs = kf.filter(prices['ASSET_1'].values)
    # Draw slope and intercept...
    pd.DataFrame(
        dict(
            slope=state_means[:, 0],
            intercept=state_means[:, 1]
        ), index=prices.index
    ).plot(subplots=True)
    plt.show()
The example file KalmanExample.txt contains the following data:
Date,ETF,ASSET_1,ASSET_2
2007-01-02,176.5,136.5,141.0
2007-01-03,169.5,115.5,143.25
2007-01-04,160.5,111.75,143.5
2007-01-05,160.5,112.25,143.25
2007-01-08,161.0,112.0,142.5
2007-01-09,155.5,110.5,141.25
2007-01-10,156.5,112.75,141.25
2007-01-11,162.0,118.5,142.75
2007-01-12,161.5,117.0,142.5
2007-01-15,160.0,118.75,146.75
2007-01-16,156.5,119.5,146.75
2007-01-17,155.0,120.5,145.75
2007-01-18,154.5,124.5,144.0
2007-01-19,155.5,126.0,142.75
2007-01-22,157.5,124.5,142.5
2007-01-23,161.5,124.25,141.75
2007-01-24,164.5,125.25,142.75
2007-01-25,164.0,126.5,143.0
2007-01-26,161.5,128.5,143.0
2007-01-29,161.5,128.5,140.0
2007-01-30,161.5,129.75,139.25
2007-01-31,161.5,131.5,137.5
2007-02-01,164.0,130.0,137.0
2007-02-02,156.5,132.0,128.75
2007-02-05,156.0,131.5,132.0
2007-02-06,159.0,131.25,130.25
2007-02-07,159.5,136.25,131.5
2007-02-08,153.5,136.0,129.5
2007-02-09,154.5,138.75,128.5
2007-02-12,151.0,136.75,126.0
2007-02-13,151.5,139.5,126.75
2007-02-14,155.0,169.0,129.75
2007-02-15,153.0,169.5,129.75
2007-02-16,149.75,166.5,128.0
2007-02-19,150.0,168.5,130.0
The single-regressor case provides the following output (image omitted); for the two-regressor case I want a second "slope" plot representing c2.
Answer edited to reflect my revised understanding of the question.
If I understand correctly, you wish to model an observable output variable Y = ETF as a linear combination of two observable values, ASSET_1 and ASSET_2.
The coefficients of this regression are to be treated as the system states, i.e. ETF = x1*ASSET_1 + x2*ASSET_2 + x3, where x1 and x2 are the coefficients for assets 1 and 2 respectively, and x3 is the intercept. These coefficients are assumed to evolve slowly.
Code implementing this is given below, note that this is just extending the existing example to have one more regressor.
Note also that you can get quite different results by playing with the delta parameter. If it is set large (far from zero), the coefficients will change more rapidly and the reconstruction of the regressand will be near-perfect. If it is set small (very close to zero), the coefficients will evolve more slowly and the reconstruction of the regressand will be less perfect. You might want to look into the Expectation Maximisation algorithm, which is supported by pykalman.
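For instance, a minimal sketch of the EM route, using kf and prices as constructed in the code below (n_iter=5 and the choice of em_vars are arbitrary assumptions):
# Let EM estimate the noise covariances from the data, then filter
# with the fitted model instead of the hand-tuned trans_cov:
kf = kf.em(prices['ETF'].values, n_iter=5,
           em_vars=['transition_covariance', 'observation_covariance'])
state_means, state_covs = kf.filter(prices['ETF'].values)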
CODE:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from pykalman import KalmanFilter

if __name__ == "__main__":
    file_name = 'KalmanExample.txt'
    df = pd.read_csv(file_name, index_col=0)
    prices = df[['ETF', 'ASSET_1', 'ASSET_2']]
    delta = 1e-3
    trans_cov = delta / (1 - delta) * np.eye(3)
    obs_mat = np.vstack([prices['ASSET_1'], prices['ASSET_2'],
                         np.ones(prices['ASSET_1'].shape)]).T[:, np.newaxis]
    kf = KalmanFilter(
        n_dim_obs=1,
        n_dim_state=3,
        initial_state_mean=np.zeros(3),
        initial_state_covariance=np.ones((3, 3)),
        transition_matrices=np.eye(3),
        observation_matrices=obs_mat,
        observation_covariance=1.0,
        transition_covariance=trans_cov
    )
    # state_means, state_covs = kf.em(prices['ETF'].values).smooth(prices['ETF'].values)
    state_means, state_covs = kf.filter(prices['ETF'].values)
    # Reconstruct ETF from the coefficients and the ASSET_1 and ASSET_2 values:
    ETF_est = np.array([a.dot(b) for a, b in zip(np.squeeze(obs_mat), state_means)])
    # Draw slopes and intercept...
    pd.DataFrame(
        dict(
            slope1=state_means[:, 0],
            slope2=state_means[:, 1],
            intercept=state_means[:, 2],
        ), index=prices.index
    ).plot(subplots=True)
    plt.show()
    # Draw actual y and estimated y:
    pd.DataFrame(
        dict(
            ETF_est=ETF_est,
            ETF_act=prices['ETF'].values
        ), index=prices.index
    ).plot()
    plt.show()
PLOTS: (images omitted)

From Amplitude or FFT to dB

I have Python code which performs an FFT on a wav file and plots the amplitude vs time and amplitude vs frequency graphs. I want to calculate dB from these graphs (they are long arrays). I do not want to calculate exact dBA; I just want to see a linear relationship after my calculations. I have a dB meter and will compare against it. Here is my code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import print_function
import scipy.io.wavfile as wavfile
import scipy
import scipy.fftpack
import numpy as np
from matplotlib import pyplot as plt
fs_rate, signal = wavfile.read("output.wav")
print ("Frequency sampling", fs_rate)
l_audio = len(signal.shape)
print ("Channels", l_audio)
if l_audio == 2:
    signal = signal.sum(axis=1) / 2
N = signal.shape[0]
print ("Complete Samplings N", N)
secs = N / float(fs_rate)
print ("secs", secs)
Ts = 1.0/fs_rate # sampling interval in time
print ("Timestep between samples Ts", Ts)
t = np.arange(0, secs, Ts) # time vector as numpy.ndarray
FFT = abs(scipy.fftpack.fft(signal))
FFT_side = FFT[range(N//4)] # one side FFT range
freqs = scipy.fftpack.fftfreq(signal.size, t[1]-t[0])
fft_freqs = np.array(freqs)
freqs_side = freqs[range(N//4)] # one side frequency range
fft_freqs_side = np.array(freqs_side)
makespositive = signal[44100:]*(-1)
logal = np.log10(makespositive)
sn1 = np.mean(logal[1:44100])
sn2 = np.mean(logal[44100:88200])
sn3 = np.mean(logal[88200:132300])
sn4 = np.mean(logal[132300:176400])
print(sn1)
print(sn2)
print(sn3)
print(sn4)
for a in range(500):  # zero out the first 500 bins
    FFT_side[a] = 0
plt.subplot(311)
p1 = plt.plot(t[44100:], signal[44100:], "g") # plotting the signal
plt.xlabel('Time')
plt.ylabel('Amplitude')
plt.subplot(312)
p1 = plt.plot(t[44100:], logal, "r") # plotting the signal
plt.xlabel('Time')
plt.ylabel('Amplitude')
plt.subplot(313)
p3 = plt.plot(freqs_side, abs(FFT_side), "b") # plotting the positive fft spectrum
plt.xlabel('Frequency (Hz)')
plt.ylabel('Count single-sided')
plt.show()
The first plot is amplitude vs time, the second one is the logarithm of the previous graph, and the last one is the FFT.
In the sn1, sn2 part I tried to calculate dB from the signal. First I took the log and then calculated the mean value for each second. It did not give me a clear relationship. I also tried the following, and it did not work either.
import numpy as np
import matplotlib.pyplot as plt
import scipy.io.wavfile as wf
fs, signal = wf.read('output.wav') # Load the file
ref = 32768 # 0 dBFS is 32768 with an int16 signal
N = 8192
win = np.hamming(N)
x = signal[0:N] * win # Take a slice and multiply by a window
sp = np.fft.rfft(x) # Calculate real FFT
s_mag = np.abs(sp) * 2 / np.sum(win) # Scale the magnitude of FFT by window and factor of 2,
# because we are using half of FFT spectrum
s_dbfs = 20 * np.log10(s_mag / ref) # Convert to dBFS
freq = np.arange((N / 2) + 1) / (float(N) / fs) # Frequency axis
plt.plot(freq, s_dbfs)
plt.grid(True)
So which steps should I perform? (Sum/average all frequency amplitudes and then take the log, or the reverse, or do it on the time signal, etc.)
import numpy as np
import matplotlib.pyplot as plt
import scipy.io.wavfile as wf
fs, signal = wf.read('db1.wav')
signal2 = signal[44100:]
chunk_size = 44100
num_chunk = len(signal2) // chunk_size
sn = []
for chunk in range(0, num_chunk):
    sn.append(np.mean(signal2[chunk*chunk_size:(chunk+1)*chunk_size].astype(float)**2))
print(sn)
logsn = 20*np.log10(sn)
print(logsn)
Output:
[4.6057844427695475e+17, 5.0025315250895744e+17, 5.028593412665193e+17, 4.910948397471887e+17]
[353.26607217 353.98379668 354.02893044 353.82330741]
A decibel meter measures a signal's mean power. So, from your time-domain recording, you can calculate the mean signal power with:
chunk_size = 44100
num_chunk = len(signal) // chunk_size
sn = []
for chunk in range(0, num_chunk):
    sn.append(np.mean(signal[chunk*chunk_size:(chunk+1)*chunk_size]**2))
Then the corresponding mean signal power in decibels is simply given by:
logsn = 10*np.log10(sn)
An equivalent relationship could also be obtained for a frequency-domain signal by using Parseval's theorem, but in your case it would require unnecessary FFT computations (this relationship is mostly useful when you already have to compute the FFT for other purposes).
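For reference, a minimal sketch of that equivalence (x is a random stand-in for one chunk of samples, not your recording):
import numpy as np
x = np.random.randn(44100)                  # stand-in for one second of samples
p_time = np.mean(x**2)                      # mean power in the time domain
X = np.fft.fft(x)
p_freq = np.sum(np.abs(X)**2) / len(x)**2   # same quantity via Parseval's theorem
print(np.allclose(p_time, p_freq))          # True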
Note, however, that depending on what you compare there may be some (hopefully small) discrepancies. For example, the use of non-linear amplifiers and speakers would affect the relationship. Similarly, ambient noise would add to the power measured by the decibel meter.

Python: High Pass FIR Filter by Windowing

I'd like to create a basic High Pass FIR Filter by Windowing within Python.
My code is below and is intentionally explicit rather than idiomatic; I'm aware you can (most likely) do this with a single line of code in Python, but I'm learning. I have used a basic sinc function with a rectangular window. My output works for signals that are additive (f1+f2) but not multiplicative (f1*f2), where f1 = 25 kHz and f2 = 1 MHz.
My question is: have I misunderstood something fundamental, or is my code wrong?
In summary, I'd like to extract just the high-pass signal (f2 = 1 MHz) and filter everything else out. I've also included screenshots (omitted) of what is generated for (f1+f2) and (f1*f2):
import numpy as np
import matplotlib.pyplot as plt
# create an array of 1024 points sampled at 40MHz
# [each sample is 25ns apart]
Fs = 40e6
T = 1/Fs
t = np.arange(0,(1024*T),T)
# create an ip signal sampled at Fs, using two frequencies
F_low = 25e3 # 25kHz
F_high = 1e6 # 1MHz
ip = np.sin(2*np.pi*F_low*t) + np.sin(2*np.pi*F_high*t)
#ip = np.sin(2*np.pi*F_low*t) * np.sin(2*np.pi*F_high*t)
op = [0]*len(ip)
# Define -
# Fsample = 40MHz
# Fcutoff = 900kHz,
# this gives the normalised transition freq, Ft
Fc = 0.9e6
Ft = Fc/Fs
Length = 101
M = Length - 1
Weight = []
for n in range(0, Length):
    if n != (M/2):
        Weight.append( -np.sin(2*np.pi*Ft*(n-(M/2))) / (np.pi*(n-(M/2))) )
    else:
        Weight.append( 1-2*Ft )
for n in range(len(Weight), len(ip)):
    y = 0
    for i in range(0, len(Weight)):
        y += Weight[i]*ip[n-i]
    op[n] = y
plt.subplot(311)
plt.plot(Weight,'ro', linewidth=3)
plt.xlabel( 'weight number' )
plt.ylabel( 'weight value' )
plt.grid()
plt.subplot(312)
plt.plot( ip,'r-', linewidth=2)
plt.xlabel( 'sample length' )
plt.ylabel( 'ip value' )
plt.grid()
plt.subplot(313)
plt.plot( op,'k-', linewidth=2)
plt.xlabel( 'sample length' )
plt.ylabel( 'op value' )
plt.grid()
plt.show()
You've misunderstood something fundamental. The windowed-sinc filter is designed to separate linearly combined frequencies, i.e. frequencies combined through addition, not frequencies combined through multiplication. (Multiplying the two sines produces components at f2-f1 and f2+f1, i.e. 975 kHz and 1.025 MHz, both above the 900 kHz cutoff, so the high-pass filter passes essentially the whole product signal.) See chapter 5 of The Scientist and Engineer's Guide to Digital Signal Processing for more details.
Code based on scipy.signal will provide similar results to your code:
from pylab import *
import scipy.signal as signal
# create an array of 1024 points sampled at 40MHz
# [each sample is 25ns apart]
Fs = 40e6
nyq = Fs / 2
T = 1/Fs
t = np.arange(0,(1024*T),T)
# create an ip signal sampled at Fs, using two frequencies
F_low = 25e3 # 25kHz
F_high = 1e6 # 1MHz
ip_1 = np.sin(2*np.pi*F_low*t) + np.sin(2*np.pi*F_high*t)
ip_2 = np.sin(2*np.pi*F_low*t) * np.sin(2*np.pi*F_high*t)
Fc = 0.9e6
Length = 101
# create a low pass digital filter
a = signal.firwin(Length, cutoff = F_high / nyq, window="hann")
# create a high pass filter via signal inversion
a = -a
a[Length // 2] = a[Length // 2] + 1
figure()
plot(a, 'ro')
# apply the high pass filter to the two input signals
op_1 = signal.lfilter(a, 1, ip_1)
op_2 = signal.lfilter(a, 1, ip_2)
figure()
plot(ip_1)
figure()
plot(op_1)
figure()
plot(ip_2)
figure()
plot(op_2)
(figures omitted: impulse response; linearly combined input and its filtered output; non-linearly combined input and its filtered output)
