I'm working with MEG signals, and to address the problem of spatial leakage I wanted to orthogonalize the time series.
To get a better understanding, I started working with a sine signal.
I wrote some code in Python and used a function from the [spectral python][1] module to do the orthogonalization.
If you run the following code and plot the signals, you will get something like the attached picture. I think the orthogonalized signal should still look like the original signal; it should not change drastically.
Please let me know if there are better ways to orthogonalize a signal.
[Plot of the original signals and the orthogonalized output][2]
Here is my code:
import numpy as np
import matplotlib.pyplot as plt
import spectral
# Compute two sine waves
t = np.linspace(-0.02, 0.05, 1000)
sig = 325 * np.sin(2*np.pi*50*t)
sig1 = 200 * np.sin(2*np.pi*50*t)
signal = np.zeros((2, 1000))
signal[0] = sig
signal[1] = sig1
#Orthogonalisation
ortho = spectral.orthogonalize(signal)
#Plotting
plt.figure()
plt.plot(signal[0], 'Blue')
plt.plot(signal[1], 'Green')
plt.title('Signal')
plt.figure()
plt.plot(ortho[0], 'Red')
plt.plot(ortho[1], 'Black')
plt.title('Ortho')
[1]: https://www.spectralpython.net/class_func_ref.html#orthogonalize
[2]: https://i.stack.imgur.com/GKVZ9.png
The problem is that sig1 is just a scalar multiple of sig, so there is no orthogonal component between the two signals. The black "orthogonal" component you are plotting is just noise at the level of machine precision/round-off. Try this instead:
sig1 = sig + 200 * np.cos(2*np.pi*50*t)
The result is shown below and the two signals are orthogonal:
In [4]: np.dot(ortho[0], ortho[1])
Out[4]: 1.9263943063278366e-16
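For comparison, here is a minimal sketch (plain numpy, not the spectral package) that projects sig1 onto sig and subtracts: because sig1 is a scalar multiple of sig, the residual is nothing but round-off noise, consistent with the black trace you are seeing.
import numpy as np

t = np.linspace(-0.02, 0.05, 1000)
sig = 325 * np.sin(2*np.pi*50*t)
sig1 = 200 * np.sin(2*np.pi*50*t)

# Project sig1 onto sig and subtract; the residual is the component of sig1 orthogonal to sig.
residual = sig1 - (np.dot(sig1, sig) / np.dot(sig, sig)) * sig
print(np.max(np.abs(residual)))   # on the order of 1e-13, i.e. pure round-off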
I'm currently working on an application that aims to measure the wow & flutter of analogue audio devices. The key task is to demodulate the test signal. I found some code and some information about FM demodulation, but I still have major issues. The code below is my "test field", but it doesn't seem to work well: the amplitude of the demodulated signal increases over time, and I don't know why. I really hope for help, because understanding this problem is crucial for me. Thanks for the help in advance.
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import hilbert
sampling_freq = 44100
T = 1/sampling_freq
fm = 4
fc = 3150
ts = np.arange(0,1-T,T)
ts2 = np.arange(0,44098)
xm = np.sin(2*np.pi*fm*ts) # Modulating signal
mf=100 # Modulation factor
xc = np.sin(2*np.pi*(fc+mf*xm)*ts) #Modulated carrier
def fm_demod(x, df=1.0, fc=0.0):
# Making complex signal from real values input
z = hilbert(x)
# Remove carrier.
n = np.arange(0,len(z))
rx = z*np.exp(-1j*2*np.pi*fc*n)
# Extract phase of carrier.
phi = np.arctan2(np.imag(rx), np.real(rx))
# Calculate frequency from phase.
y = np.diff(np.unwrap(phi)/(2*np.pi*df))
return y
xd = fm_demod(xc,10,fc) # Demodulated signal
data = np.fft.rfft(xd) # Demodulated signal spectrum
fig, (ax1,ax2,ax3,ax4) = plt.subplots(4,1)
ax1.plot(ts,xm)
ax1.set_title('Modulating signal')
ax2.plot(ts,xc)
ax2.set_title('Modulated signal')
ax3.plot(ts2,xd)
ax3.set_title('Demodulated signal')
ax4.stem(abs(data))
ax4.set_title('Demodulated signal spectrum')
plt.show()
Plots from the program
Your modulation formula is wrong; this is the correct form:
xc = np.sin(2*np.pi*fc*ts + (mf*xm))
(Also remember that with this form the demodulated output is proportional to the derivative of xm, since instantaneous frequency is the derivative of phase. Your original formula, sin(2*pi*(fc + mf*xm)*ts), has an instantaneous frequency of fc + mf*xm + mf*ts*xm', where xm' is the derivative of xm, and it is that last term, growing with ts, that makes the demodulated amplitude increase over time.)
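For completeness, a minimal sketch (using the same parameters as the question: 44.1 kHz sampling, a 4 Hz message, a 3150 Hz carrier, 100 Hz peak deviation) that integrates the message before modulating, so that the Hilbert-based demodulator recovers the message itself rather than its derivative:
import numpy as np
from scipy.signal import hilbert

fs = 44100
T = 1.0/fs
fm, fc, df = 4, 3150, 100                     # message frequency, carrier frequency, peak deviation (Hz)
t = np.arange(0, 1, T)

xm = np.sin(2*np.pi*fm*t)                     # message
phase = 2*np.pi*df*np.cumsum(xm)*T            # integral of the frequency deviation
xc = np.sin(2*np.pi*fc*t + phase)             # FM carrier: instantaneous frequency fc + df*xm

z = hilbert(xc)                               # analytic signal
z_bb = z*np.exp(-1j*2*np.pi*fc*t)             # shift the carrier to baseband (note: t in seconds, not a bare sample index)
inst_freq = np.diff(np.unwrap(np.angle(z_bb)))/(2*np.pi*T)   # instantaneous frequency deviation in Hz
# inst_freq/df now tracks xm, apart from edge effects of the Hilbert transform.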
I am trying to compute the envelope of a signal using the Hilbert transform in Scipy.
Here is the code,
import numpy as np
from scipy.signal import hilbert

A = 2
lamb = 20
w = 2*np.pi*100
# The time vector was missing from the original snippet; assume 1 s sampled at 8 kHz,
# as in the answer below.
t = np.arange(8000) / 8000
signal = A**(-lamb*t)*(np.sin(w*t+5)+np.cos(w*t+5))
analytic_signal = hilbert(signal)
amplitude_envelope = np.abs(analytic_signal)
If one plots the signal and the envelope, the latter shows quite high values both at the beginning and at the end... see the attached figure. Any tips on how to fix this issue and get a better envelope?
Thanks in advance.
A basic assumption of hilbert is that the input signal is periodic. If you extended your signal to be periodic, then at t=1 there would be a big jump from the long, flat tail to the repetition of the initial burst of the signal.
One way to handle this is to apply hilbert to an even extension of the signal, such as the concatenation of the signal with a reversed copy of itself, e.g. np.concatenate((signal[::-1], signal)). Here's a modified version of your script that does this:
import numpy as np
from scipy.signal import hilbert
import matplotlib.pyplot as plt
A = 2
lamb = 20
w = 2*np.pi*100
fs = 8000
T = 1.0
t = np.arange(int(fs*T)) / fs
signal = A**(-lamb*t)*(np.sin(w*t+5)+np.cos(w*t+5))
# Make an even extension of `signal`.
signal2 = np.concatenate((signal[::-1], signal))
analytic_signal2 = hilbert(signal2)
# Get the amplitude of the second half of analytic_signal2
amplitude_envelope = np.abs(analytic_signal2[len(t):])
plt.plot(t, signal, label='signal')
plt.plot(t, amplitude_envelope, label='envelope')
plt.xlabel('t')
plt.legend(framealpha=1, shadow=True)
plt.grid()
plt.show()
Here's the plot that is created by the script:
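As a quick sanity check (a sketch using the same parameters as the script above): since sin(x) + cos(x) = sqrt(2)*sin(x + pi/4), the true envelope of the test signal is sqrt(2)*A**(-lamb*t), so the Hilbert-based estimate can be compared against it directly.
import numpy as np
from scipy.signal import hilbert

A, lamb, w, fs = 2, 20, 2*np.pi*100, 8000
t = np.arange(fs) / fs                  # 1 second of samples
sig = A**(-lamb*t)*(np.sin(w*t + 5) + np.cos(w*t + 5))

est = np.abs(hilbert(np.concatenate((sig[::-1], sig)))[len(t):])
true_env = np.sqrt(2)*A**(-lamb*t)
# The estimate should stay close to the true envelope once past the initial edge region.
print(np.max(np.abs(est - true_env)[100:]))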
How can I validate whether the downsampled output is correct? I made a small example, but I am not sure whether the output is right or not.
Any ideas on how to validate it?
Code
import numpy as np
import matplotlib.pyplot as plt # For ploting
from scipy import signal
import mne
fs = 100 # sample rate
rsample=50 # downsample frequency
fTwo=400 # frequency of the signal
x = np.arange(fs)
y = [ np.sin(2*np.pi*fTwo * (i/fs)) for i in x]
f_res = signal.resample(y, rsample)
xnew = np.linspace(0, 100, f_res.size, endpoint=False)
#
# ##############################
#
plt.figure(1)
plt.subplot(211)
plt.stem(x, y)
plt.subplot(212)
plt.stem(xnew, f_res, 'r')
plt.show()
Plotting the data is a good first take at verification. Here I made a regular plot with the points connected by lines. The lines are useful since they give a guide for where you expect the down-sampled data to lie, and they also emphasize what the down-sampled data is missing. (It would also work to show lines only for the original data, but the vertical lines of a stem plot are too confusing, imho.)
import numpy as np
import matplotlib.pyplot as plt # For ploting
from scipy import signal
fs = 100 # sample rate
rsample=43 # downsample frequency
fTwo=13 # frequency of the signal
x = np.arange(fs, dtype=float)
y = np.sin(2*np.pi*fTwo * (x/fs))
print(y)
f_res = signal.resample(y, rsample)
xnew = np.linspace(0, 100, f_res.size, endpoint=False)
#
# ##############################
#
plt.figure()
plt.plot(x, y, 'o')
plt.plot(xnew, f_res, 'or')
plt.show()
A few notes:
If you're trying to make a general algorithm, use non-round numbers, otherwise you could easily introduce bugs that don't show up when things are even multiples. Similarly, if you need to zoom in to verify, look at a few random places, not, for example, only the start.
Note that I changed fTwo to be significantly less than the number of samples: you need more than one sample per oscillation (the Nyquist criterion) if you want the down-sampled data to make sense.
I also removed the loop for calculating y: in general, you should try to vectorize calculations when using numpy.
The spectrum of the resampled signal should have a tone at the same frequency as the input signal, just within a smaller Nyquist bandwidth.
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
import scipy.fftpack as fft
fs = 100 # sample rate
rsample=50 # downsample frequency
fTwo=10 # frequency of the signal
n = np.arange(1024)
y = np.sin(2*np.pi*fTwo/fs*n)
y_res = signal.resample(y, len(n)//2)
Y = fft.fftshift(fft.fft(y))
f = fs*np.arange(-512, 512)/1024
Y_res = fft.fftshift(fft.fft(y_res, 1024))
f_res = fs/2*np.arange(-512, 512)/1024
plt.figure(1)
plt.subplot(211)
plt.stem(f, abs(Y))
plt.subplot(212)
plt.stem(f_res, abs(Y_res))
plt.show()
The tone is still at 10 Hz.
If you downsample a signal, both signals will still have the exact same value at a given time, so just loop through time and check that the values are the same. In your case you go from a sample rate of 100 to 50. Assuming you have 1 second's worth of data (from building x with fs), just loop through t = 0 to t = 1 in increments of 1/50 and make sure that Yd(t) = Ys(t), where Yd is the downsampled signal and Ys is the original one. Or, to say it simply, Yd(n) = Ys(2n) for n = 0, 1, 2, ..., total_samples/2 - 1, as sketched below.
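A minimal sketch of that check (assuming the setup from the question: fs = 100, 1 s of data, downsampled by a factor of 2, with the tone kept low enough to avoid aliasing):
import numpy as np
from scipy import signal

fs, f_sig = 100, 13
t = np.arange(fs)/fs
ys = np.sin(2*np.pi*f_sig*t)        # original signal, sampled at 100 Hz
yd = signal.resample(ys, fs//2)     # downsampled to 50 Hz

# Yd(n) should match Ys(2n); resample is FFT-based, so allow a small numerical tolerance.
print(np.allclose(yd, ys[::2], atol=1e-9))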
I am writing code for a mono-energetic gamma beam for which the dominant interaction is photoelectric absorption, with mu = 2 cm⁻¹. I need to generate 50000 random numbers and sample the interaction depth (which I am not sure I did correctly).
I know that the mean free path = 1/mu, but I need to find the mean free path from the simulation as well as from mu and compare them. Is what I did in the code right or not?
import matplotlib.pyplot as plt
import numpy as np

mu = 2
np.random.seed(0)                 # seed the generator for reproducibility
data = np.random.randn(50000)*10  # samples from a Gaussian distribution
bins = np.arange(data.min(), data.max()+1e-8, 0.1)
meanfreepath = 1/mu
print(meanfreepath)
plt.hist(data, bins=bins)
plt.show()
Well, the interaction depth distribution is an exponential one, not a Gaussian.
So the code would be:
import numpy as np
import matplotlib.pyplot as plt

lmbda = 2  # cm^-1
beta = 1.0/lmbda
data = np.random.exponential(scale=beta, size=50000)
mfp = np.mean(data)
print(mfp)
# build the histogram of sampled depths
plt.hist(data, bins=100)
plt.show()
More details at https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.random.exponential.html
The code above produced
0.4977168417102998
which looks like 2⁻¹ to me.
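An equivalent sketch using inverse-transform sampling, which is how interaction depths are usually drawn in Monte Carlo transport codes (d = -ln(1 - U)/mu with U uniform on [0, 1)):
import numpy as np

mu = 2.0                              # cm^-1
u = np.random.uniform(size=50000)     # U ~ Uniform[0, 1)
depths = -np.log(1.0 - u)/mu          # exponentially distributed interaction depths; 1 - u avoids log(0)
print(depths.mean())                  # should be close to 1/mu = 0.5 cm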
I was trying to filter a signal using the scipy module in Python and I wanted to see which of lfilter or filtfilt is better.
I tried to compare them and got the following plot from my MWE:
import numpy as np
import scipy.signal as sp
import matplotlib.pyplot as plt
frequency = 100. #cycles/second
samplingFrequency = 2500. #samples/second
amplitude = 16384
signalDuration = 2.3
cycles = frequency*signalDuration
time = np.linspace(0, 2*np.pi*cycles, int(signalDuration*samplingFrequency))
freq = np.fft.fftfreq(time.shape[-1])
inputSine = amplitude*np.sin(time)
#Create IIR Filter
b, a = sp.iirfilter(1, 0.3, btype = 'lowpass')
#Apply filter to input
filteredSignal = sp.filtfilt(b, a, inputSine)
filteredSignalInFrequency = np.fft.fft(filteredSignal)
filteredSignal2 = sp.lfilter(b, a, inputSine)
filteredSignal2InFrequency = np.fft.fft(filteredSignal2)
plt.close('all')
plt.figure(1)
plt.subplot(121)
plt.title('Sine filtered with filtfilt')
plt.plot(freq, abs(filteredSignalInFrequency))
plt.subplot(122)
plt.title('Sine filtered with lfilter')
plt.plot(freq, abs(filteredSignal2InFrequency))
print(max(abs(filteredSignalInFrequency)))
print(max(abs(filteredSignal2InFrequency)))
plt.show()
Can someone please explain why there is a difference in the magnitude response?
Thanks a lot for your help.
Looking at your graphs shows that the signal filtered with filtfilt has a peak magnitude of 4.43×10⁷ in the frequency domain, compared with 4.56×10⁷ for the signal filtered with lfilter. In other words, the peak magnitude with filtfilt is about 0.97 times the one obtained with lfilter.
Now we should note that scipy.signal.filtfilt applies the filter twice (once forward, once backward), whereas scipy.signal.lfilter only applies it once. As a result, the input signal is attenuated twice, i.e. the attenuation in dB is doubled. To confirm this, we can look at the frequency response of the Butterworth filter you have used (obtained with iirfilter) around the input tone's normalized frequency of 100/2500 = 0.04 cycles/sample:
which indeed shows that a single application of this filter causes an attenuation of ~0.97 at that frequency. The lfilter output is therefore attenuated by ~0.97 and the filtfilt output by ~0.97² ≈ 0.94, and the ratio between the two is exactly the ~0.97 difference observed in the peak magnitudes above.
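The same number can be read off numerically with scipy.signal.freqz; a short sketch assuming the same first-order Butterworth low-pass as in the question, evaluated at the tone's frequency of 0.04 cycles/sample (0.08 × Nyquist):
import numpy as np
import scipy.signal as sp

b, a = sp.iirfilter(1, 0.3, btype='lowpass')   # same filter as in the question
w, h = sp.freqz(b, a, worN=[0.08*np.pi])       # response at the tone's frequency, in rad/sample
g = np.abs(h[0])
print(g)       # single-pass gain, as seen by lfilter  (~0.97)
print(g**2)    # forward-backward gain, as seen by filtfilt (~0.94)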