How to set scale and parameter w in wavelet transform (scipy)?

I am trying to do a wavelet transform on a signal.
The signal is recorded at 500 Hz (i.e. 500 data points per second). I only want to do a wavelet transform on 0.2 seconds of the signal (100 data points).
I am really not sure how to define the parameter "w", and whether the y-axis is scaled correctly; I cannot find anything about "w" in the scipy documentation.
My code and output looks as follows:
import matplotlib.pyplot as plt
import numpy as np
from scipy import signal
t, dt = np.linspace(0, 0.2, 100, retstep=True)
fs = 1/dt
w = 6.
sig = sig[0:100] # 0.2 seconds of the signal
freq = np.linspace(1, fs/2, 100)
widths = w*fs / (2*freq*np.pi)
cwtm = signal.cwt(sig, signal.morlet2, widths, w=w)
plt.pcolormesh(t, freq, np.abs(cwtm), cmap='jet', shading='gouraud')
plt.xlabel("Time")
plt.ylabel("Scale")
plt.show()
Output:
Bonus question: How would you interpret the output in two sentences?
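For reference, the scipy documentation for signal.morlet2 relates the wavelet width s and its center frequency f by f = w*fs/(2*pi*s), which is exactly what the line widths = w*fs/(2*freq*np.pi) uses; under that mapping each row of cwtm corresponds to one entry of freq, so the y-axis of the plot is frequency in Hz rather than a raw scale. A one-line sanity check (an addition, not part of the original post):
freq_check = w * fs / (2 * np.pi * widths)  # should reproduce the freq array defined above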

Related

Recover the time shift from numpy.correlate result in Python

This is not a duplicate question since other answers only explain how to plot the cross-correlation function and do not explain how you can get the time difference.
Given a sine signal and a shifted version of it, we should be able to recover the time delay between them.
I have created a sine signal and shifted it by t_shift=0.05. The following is my code and its output:
import numpy as np
import matplotlib.pyplot as plt
fs = 1000
x = np.linspace(0, 1, fs)
f = 5
t_shift = 0.05
y = np.sin(2*np.pi*f*x)
y_shifted = np.sin(2*np.pi*f*(x-t_shift))
fig, ax = plt.subplots()
ax.plot(x, y, x, y_shifted)
plt.show()
By normalizing the signals and applying numpy.correlate we get the following:
y_norm = (y-y.mean())/y.std()
y_shifted_norm = (y_shifted - y_shifted.mean())/y_shifted.std()
cc = np.correlate(y_norm, y_shifted_norm, 'full')
fig, ax = plt.subplots()
ax.plot(range(len(cc)), cc)
plt.show()
Question
From the indices of the cross-correlation function, how can I get t_shift=0.05?
@Sepide: It seems to me as if you are trying to maximise the correlation between the signal y and a shifted version y_shifted. This might be accomplished using np.correlate(), but it is not trivial to recover the time shift from its output directly. In the solution below I manually shift the time series and compute the correlation coefficient using np.corrcoef. As soon as this Pearson correlation coefficient equals 1, the two signals are aligned.
import numpy as np
import matplotlib.pyplot as plt
# Setting
fs = 1000
x = np.linspace(0, 1, fs)
f = 5
t_shift = 0.05
t_step = 1/fs
# Data
y = np.sin(2*np.pi*f*x)
y_shifted = np.sin(2*np.pi*f*(x-t_shift))
# Compute correlation
MaxTimeShift = 200
CorrelationList = np.empty((MaxTimeShift, 1))
CorrelationList[:] = np.nan
# Compute correlation for various shifts
for shift in range(MaxTimeShift):
    CorrelationList[shift] = np.corrcoef(y[0:801].T, y_shifted[shift:(801 + shift)].T)[0, 1]
# Plot 1
plt.figure(1)
plt.plot(x, y, x, y_shifted)
plt.show()
# Plot 2
plt.figure(2)
ShiftList = t_step*np.arange(MaxTimeShift)
plt.plot(ShiftList, CorrelationList)
plt.title("Correlation coefficient")
plt.show()
print("The time shift between the signals is: ", ShiftList[np.argmax(CorrelationList)])

librosa melspectrogram y-axis scale wrong?

I'm trying to figure out why the Mel-scale spectrogram seems to have the wrong frequency scale. I generate a 4096 Hz tone and plot it using librosa's display library, and the tone does not align with the known frequency. I'm obviously doing something wrong; can someone help? Thanks!
import numpy as np
import librosa.display
import matplotlib.pyplot as plt
sr = 44100
t = np.linspace(0, 1, sr)
y = 0.1 * np.sin(2 * np.pi * 4096 * t)
M = librosa.feature.melspectrogram(y=y, sr=sr)
M_db = librosa.power_to_db(M, ref=np.max)
librosa.display.specshow(M_db, y_axis='mel', x_axis='time')
plt.show()
When you compute the mel spectrogram using librosa.feature.melspectrogram(y=y, sr=sr), you implicitly create a mel filter bank with the parameters fmin=0 and fmax=sr/2 (see docs here). To plot the spectrogram correctly, librosa.display.specshow needs to know how it was created, i.e. which sample rate sr was used (to get the time axis right) and which frequency range was covered (to get the frequency axis right). While librosa.feature.melspectrogram defaults to 0 - sr/2, librosa.display.specshow unfortunately defaults to 0 - 11025 Hz (see here). This describes librosa 0.8; it may change in future versions.
To get this to work correctly, explicitly pass the fmax parameter to both calls. To also get the time axis right, pass the sr parameter to librosa.display.specshow:
import numpy as np
import librosa.display
import matplotlib.pyplot as plt
sr = 44100
t = np.linspace(0, 1, sr)
y = 0.1 * np.sin(2 * np.pi * 4096 * t)
M = librosa.feature.melspectrogram(y=y, sr=sr, fmax=sr/2)
M_db = librosa.power_to_db(M, ref=np.max)
librosa.display.specshow(M_db, sr=sr, y_axis='mel', x_axis='time', fmax=sr/2)
plt.show()
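As an optional sanity check (this goes beyond the original answer), librosa.mel_frequencies lists the mel-scale frequencies of the default 128-band filter bank, which shows roughly where a 4096 Hz tone should land on the mel axis:
import numpy as np
import librosa
mel_f = librosa.mel_frequencies(n_mels=128, fmin=0.0, fmax=44100 / 2)
print("Mel band closest to 4096 Hz:", int(np.argmin(np.abs(mel_f - 4096))))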

Denoise a Signal using wavelets in python

I want to denoise a signal using wavelets.
I have generated an ideal sine wave of two different frequencies and added noise to it.
Please let me know how to denoise the signal I_as_fft_array_noise, generated in the code below, using wavelets.
Import Libraries
from scipy import signal
import matplotlib.pyplot as plt
import numpy as np
import pywt
import sys
Define the sine signal: 100 Hz followed by 50 Hz
Fs_fft = 1e4 # Sampling Frequency
Pi=np.pi
N1 = (1e5)/2
time1 = np.arange(N1) / float(Fs_fft)
f1 = 100
x1 = 1000*np.sin(2*Pi*f1*time1)
time2 = np.arange(N1,2*N1) / float(Fs_fft)
f2 = 50
x2 = 500*np.sin(2*Pi*f2*time2)
T_fft_array =np.concatenate((time1,time2))
I_as_fft_array = np.concatenate((x1,x2))
Adding noise using target SNR
Reference: adding noise to a signal in python
I_as_fft_array_watts = I_as_fft_array ** 2
I_as_fft_array_db = 10 * np.log10(I_as_fft_array_watts)
target_snr_db = 20
sig_avg_watts = np.mean(I_as_fft_array_watts)
sig_avg_db = 10 * np.log10(sig_avg_watts)
noise_avg_db = sig_avg_db - target_snr_db
noise_avg_watts = 10 ** (noise_avg_db / 10)
mean_noise = 0
noise_volts = np.random.normal(mean_noise, np.sqrt(noise_avg_watts), len(I_as_fft_array_watts))
I_as_fft_array_noise = I_as_fft_array + noise_volts
Plot ideal signal
plt.plot(T_fft_array , I_as_fft_array)
plt.title('Ideal Signal')
plt.ylabel('Current [A]')
plt.xlabel('Time [sec]')
plt.show()
Plot signal with noise
plt.plot(T_fft_array, I_as_fft_array_noise)
plt.title('Signal with noise')
plt.ylabel('Current [A]')
plt.xlabel('Time (s)')
plt.show()
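One common way to denoise I_as_fft_array_noise with pywt is wavelet thresholding: decompose with pywt.wavedec, soft-threshold the detail coefficients, and reconstruct with pywt.waverec. The sketch below is illustrative only; the wavelet ('db4'), decomposition level, and universal threshold are assumptions, not choices from the original post:
# Wavelet thresholding sketch (illustrative parameter choices)
coeffs = pywt.wavedec(I_as_fft_array_noise, 'db4', level=6)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745                      # noise estimate from the finest detail coefficients
uthresh = sigma * np.sqrt(2 * np.log(len(I_as_fft_array_noise)))    # universal threshold
coeffs[1:] = [pywt.threshold(c, value=uthresh, mode='soft') for c in coeffs[1:]]
I_denoised = pywt.waverec(coeffs, 'db4')[:len(I_as_fft_array_noise)]
plt.plot(T_fft_array, I_denoised)
plt.title('Denoised signal (wavelet thresholding)')
plt.xlabel('Time [sec]')
plt.ylabel('Current [A]')
plt.show()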

Erasing noise from fft chart

Do you know how to remove this much noise from the FFT?
Here is my code of FFT:
import numpy as np
import matplotlib.pyplot as plt
fft1 = Bx[51:-14]  # Bx and By are the measurement arrays (defined elsewhere)
fft2 = By[1:-14]
# Loop for FFT data
for dataset in [fft1]:
    dataset = np.asarray(dataset)
    psd = np.abs(np.fft.fft(dataset))**2
    freq = np.fft.fftfreq(dataset.size, float(300)/dataset.size)
    plt.semilogy(freq[freq > 0], psd[freq > 0]/dataset.size**2, color='r')
for dataset2 in [fft2]:
    dataset2 = np.asarray(dataset2)
    psd2 = np.abs(np.fft.fft(dataset2))**2
    freq2 = np.fft.fftfreq(dataset2.size, float(300)/dataset2.size)
    plt.semilogy(freq2[freq2 > 0], psd2[freq2 > 0]/dataset2.size**2, color='b')
What I get:
What I need:
Any ideas? Welch does not work for me: as you can see, I don't want to smooth my chart, but rather reduce the noise to the level shown in the second picture.
This is what Welch does:
and a bit of code:
freqs, psd = scipy.signal.welch(dataset, fs=300, window='hamming')
Updated Welch:
A bit of code:
# Loop for Welch PSD estimates
from scipy.signal import welch
for dataset in [fft1]:
    dataset = np.asarray(dataset)
    freqs, psd = welch(dataset, fs=266336/300, window='hamming', nperseg=512)
    plt.semilogy(freqs, psd/dataset.size**2, color='r')
for dataset2 in [fft2]:
    dataset2 = np.asarray(dataset2)
    freqs2, psd2 = welch(dataset2, fs=266336/300, window='hamming', nperseg=512)
    plt.semilogy(freqs2, psd2/dataset2.size**2, color='b')
As you can see, Welch is now well configured: it shows the 60 Hz power line and its harmonics. It is almost good, but it completely smoothed my plot; the second graph above is what I want. By the way, the y scale is wrong in the Welch plot, but that is just a matter of raising the data to the power of two.
I have changed to nperseg=8192 and it worked. Look at the results.
Here is an example that shows how to use nperseg to control the frequency resolution vs. noise reduction tradeoff:
Setting nperseg to the length of the signal is more or less equivalent to using the FFT without any averaging.
Here is the code to generate this image:
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
plt.figure(figsize=[8, 12])
n = 2**21
fs = 887
# example data
x = np.random.randn(n)
x += np.sin(np.cumsum(0.42 + np.random.randn(n) * 0.01)) * 5
x = signal.lfilter([1, 0.5], 2, x)
plt.subplot(3, 2, 1)
plt.semilogy(np.abs(np.fft.fft(x)[:n//2])**2 / n**2, label='FFT')
plt.legend(loc='best')
for i, nperseg in enumerate([128, 512, 8192, 65536, n]):
    plt.subplot(3, 2, i+2)
    f, psd = signal.welch(x, fs=fs, window='hamming', nperseg=nperseg, noverlap=0)
    plt.semilogy(f, psd, label='nperseg={}'.format(nperseg))
    plt.legend(loc='best')
plt.show()

How to use Python to draw a normal probability plot by using certain column data in dataFrame

I have a DataFrame that contains two columns named "thousands of dollars per year" and "EMPLOY".
I create a new variable in this DataFrame named "cubic_Root" by computing it from df['thousands of dollars per year']:
df['cubic_Root'] = -1 / df['thousands of dollars per year'] ** (1. / 3)
The data in df['cubic_Root'] looks like this:
ID cubic_Root
1 -0.629961
2 -0.405480
3 -0.329317
4 -0.480750
5 -0.305711
6 -0.449644
7 -0.449644
8 -0.480750
Now, how can I draw a normal probability plot using the data in df['cubic_Root']?
You want the "Probability" Plots.
So for a single plot, you'd have something like below.
import scipy.stats
import numpy as np
import matplotlib.pyplot as plt
# 100 values from a normal distribution with a std of 3 and a mean of 0.5
data = 3.0 * np.random.randn(100) + 0.5
counts, start, dx, _ = scipy.stats.cumfreq(data, numbins=20)
x = np.arange(counts.size) * dx + start
plt.plot(x, counts, 'ro')
plt.xlabel('Value')
plt.ylabel('Cumulative Frequency')
plt.show()
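If what you are after is a normal probability (Q-Q) plot in the strict sense, scipy.stats.probplot draws the ordered data against theoretical normal quantiles. A minimal sketch on the same kind of sample (for the question above, pass df['cubic_Root'].values instead of data):
import scipy.stats
import numpy as np
import matplotlib.pyplot as plt
data = 3.0 * np.random.randn(100) + 0.5            # sample as in the example above
scipy.stats.probplot(data, dist='norm', plot=plt)  # ordered data vs. normal quantiles plus a least-squares fit line
plt.show()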
If you want to plot a distribution, and you know it, define it as a function and plot it like so:
import numpy as np
from matplotlib import pyplot as plt
def my_dist(x):
    return np.exp(-x ** 2)
x = np.arange(-100, 100)
p = my_dist(x)
plt.plot(x, p)
plt.show()
If you don't have the exact distribution as an analytical function, perhaps you can generate a large sample, take a histogram and somehow smooth the data:
import numpy as np
from scipy.interpolate import UnivariateSpline
from matplotlib import pyplot as plt
N = 1000
n = N // 10  # number of bins (integer, so np.histogram accepts it)
s = np.random.normal(size=N) # generate your data sample with N elements
p, x = np.histogram(s, bins=n) # bin it into n = N/10 bins
x = x[:-1] + (x[1] - x[0])/2 # convert bin edges to centers
f = UnivariateSpline(x, p, s=n)
plt.plot(x, f(x))
plt.show()
You can increase or decrease s (the smoothing factor) in the UnivariateSpline call to get more or less smoothing; different values of s give visibly different fits.
Probability density function (PDF) of inter-arrival times of events:
import numpy as np
import scipy.stats
# generate data samples
data = scipy.stats.expon.rvs(loc=0, scale=1, size=1000, random_state=123)
A kernel density estimation can then be obtained by simply calling
scipy.stats.gaussian_kde(data, bw_method=bw)
where bw is an (optional) parameter of the estimation procedure. For this data set, considering three values of bw, the fit is shown below:
# test values for the bw_method option ('None' is the default value)
bw_values = [None, 0.1, 0.01]
# generate a list of kde estimators for each bw
kde = [scipy.stats.gaussian_kde(data,bw_method=bw) for bw in bw_values]
# plot (normalized) histogram of the data
import matplotlib.pyplot as plt
plt.hist(data, 50, density=True, facecolor='green', alpha=0.5)
# plot density estimates
t_range = np.linspace(-2,8,200)
for i, bw in enumerate(bw_values):
    plt.plot(t_range, kde[i](t_range), lw=2, label='bw = ' + str(bw))
plt.xlim(-1, 6)
plt.legend(loc='best')
plt.show()
Reference:
Python: Matplotlib - probability plot for several data set
how to plot Probability density Function (PDF) of inter-arrival time of events?
