How can I write this same MATLAB code in Python? What modules should I use?
player = audioplayer(y, Fs);
play(player);
% y = Audio signal represented by a vector or two-dimensional array containing
%single, double, int8, uint8, or int16 values.
%Fs = Sampling rate in Hz
The simpleaudio package allows playback of NumPy arrays containing time-domain audio signals.
Here is a simple usage example taken from the simpleaudio package's documentation:
import numpy as np
import simpleaudio as sa
# calculate note frequencies
A_freq = 440
Csh_freq = A_freq * 2 ** (4 / 12)
E_freq = A_freq * 2 ** (7 / 12)
# get timesteps for each sample, T is note duration in seconds
sample_rate = 44100
T = 0.25
t = np.linspace(0, T, int(T * sample_rate), False)  # the sample count must be an integer
# generate sine wave notes
A_note = np.sin(A_freq * t * 2 * np.pi)
Csh_note = np.sin(Csh_freq * t * 2 * np.pi)
E_note = np.sin(E_freq * t * 2 * np.pi)
# concatenate notes
audio = np.hstack((A_note, Csh_note, E_note))
# normalize to 16-bit range
audio *= 32767 / np.max(np.abs(audio))
# convert to 16-bit data
audio = audio.astype(np.int16)
# start playback
play_obj = sa.play_buffer(audio, 1, 2, sample_rate)
# wait for playback to finish before exiting
play_obj.wait_done()
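For the MATLAB snippet itself, a minimal sketch of the direct equivalent, assuming y is already a NumPy array of floats in the range -1..1 (mono) and Fs is an integer sampling rate, both names taken from the question:
import numpy as np
import simpleaudio as sa

# y: float audio vector in -1..1 (mono), Fs: sampling rate in Hz
audio = (y * 32767 / np.max(np.abs(y))).astype(np.int16)  # scale to 16-bit PCM
play_obj = sa.play_buffer(audio, 1, 2, Fs)  # 1 channel, 2 bytes per sample
play_obj.wait_done()  # block until playback finishes, like MATLAB's playblocking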
For a given audio signal I want to split it into 50 ms chunks and perform a Fourier Transform on each one. The problem is that when I use the usual split method in NumPy, the abrupt cuts add high-frequency components.
To solve this, I heard we have to use a proper window function.
Splitting with the plain NumPy function is equivalent to using a rectangular window, which in signal processing is a not-so-clever method!
So I need help with how to use a Blackman window to split my audio signal into chunks.
So here is what I have used for a similar application in audio signal processing. To be exact I was making a spectrogram finder for songs. For that I also wanted to find the Fourier Transform of small chunks.
import numpy as np

# data: the 1-D audio signal, sample_rate: its sampling rate in Hz
spacing = 0.1       # seconds between the start of consecutive windows
window_dura = 0.2   # window duration in seconds
signal_N = len(data)                       # length of the whole audio signal
window_N = int(sample_rate * window_dura)  # samples per window
spacing_N = int(spacing * sample_rate)     # samples between window starts

# Blackman-Nuttall window coefficients
a0, a1, a2, a3 = 0.3635819, 0.4891775, 0.1365995, 0.0106411
xWindow = np.arange(0, window_N, 1)
yWindow = a0 - a1 * np.cos(2 * np.pi * xWindow / window_N) + a2 * np.cos(4 * np.pi * xWindow / window_N) - a3 * np.cos(6 * np.pi * xWindow / window_N)

# Number of chunks that fit: each window starts inside the signal and
# does not run past its end
n = int((signal_N - window_N) / spacing_N)

window_Array = np.zeros(shape=(n, signal_N))  # all windows, one per row, zero-padded to full length
print("window_Array shape:", window_Array.shape)
for i in range(n):
    window_Array[i, i*spacing_N:i*spacing_N + window_N] = yWindow
Also, see how these windows are aligned (only the first 10 windows are displayed):
import matplotlib.pyplot as plt

# See what each full-length row of window_Array looks like
max_plot_N = 10
fig, ax = plt.subplots(min(max_plot_N, n), 1, figsize=(30, 4 * min(max_plot_N, n)))
for i in range(min(max_plot_N, n)):
    ax[i].plot(window_Array[i])
plt.show()
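Since the goal was the Fourier Transform of each chunk, here is a minimal sketch of windowing each chunk directly and taking its FFT, rather than building the full zero-padded window_Array. It assumes the same data, sample_rate, window_N and spacing_N as above, and uses np.blackman since the question asked specifically about the Blackman window:
import numpy as np

# Assumed from the snippets above: data (1-D signal), sample_rate,
# window_N (samples per chunk) and spacing_N (samples between chunk starts).
window = np.blackman(window_N)  # plain Blackman window, as asked in the question
spectra = []
for start in range(0, len(data) - window_N, spacing_N):
    chunk = data[start:start + window_N] * window  # taper the chunk to avoid edge artifacts
    spectra.append(np.fft.rfft(chunk))             # one-sided FFT of the windowed chunk
spectra = np.array(spectra)                           # shape: (num_chunks, window_N // 2 + 1)
freqs = np.fft.rfftfreq(window_N, d=1 / sample_rate)  # frequency axis in Hz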
I'm working on a nerd project for fun.
The project is analog video recorded onto an audio cassette.
The challenge lies in the very limited bandwidth.
I have a method to record color video at 24fps along with mono audio.
I got the video stuff working but need some help with the audio.
Here is the signal I have to work with (note: using the YUV color space):
Left channel: sync pulses & Y (luma) data
Right channel: U & V (chroma) data mixed with mono audio (amplitude modulated at 14 kHz)
I'm not sure how to separate the color data from the audio. I've looked into FFT with numpy a bit but am not fully understanding it.
Basically what I need is a band-pass filter to isolate 13990 Hz - 14010 Hz (to account for wow and flutter).
OK, here is a little test script that shows how this works.
import matplotlib.pyplot as plt
import numpy as np
import math as mt
from scipy.signal import butter, sosfilt, lfilter
def butter_bandpass(lowcut, highcut, fs, order=5):
nyq = 0.5 * fs
low = lowcut / nyq
high = highcut / nyq
sos = butter(order, [low, high], analog=False, btype='band', output='sos')
return sos
def bandpass(data, lowcut, highcut, fs, order=5):
sos = butter_bandpass(lowcut, highcut, fs, order=order)
y = sosfilt(sos, data)
return y
def bandstop(data, lowcut, highcut, fs, order):
nyq = 0.5 * fs
low = lowcut / nyq
high = highcut / nyq
i, u = butter(order, [low, high], btype='bandstop')
y = lfilter(i, u, data)
return y
# Modulation & Bandpass Test
# note that the data is just test data and isn't actual audio or color data
Fs = 192000 # rate
f = 14000 # carrier frequency (Hz)
sample = 192000 # total length
x = np.arange(sample)
signal = (np.sin(2 * np.pi * 1000 * x / Fs)) # audio
noise = 0.5*(np.sin(2 * np.pi * 10000 * x / Fs)) # color data
y = [] # combined AM audio and color data
for t in range(0, sample, 1):
amp = (signal[t] + 1) / 2
sine = amp * (mt.sin(2*mt.pi * f * t / Fs))
y.append((sine+noise[t])/2)
# y is raw signal
w = bandpass(y, 1600, 1800, 24000, order=1) # only AM audio signal
v = bandstop(y, 1450, 1950, 24000, order=6) # Rest of the signal
# Note: I lowered the sample rate passed to the filters (and scaled the
# frequencies accordingly) since the filter design didn't behave well at 192 kHz.
# The color data does get impacted by the bandstop filter, but this is
# mostly not noticeable: since the YUV color space is used, the color
# resolution can be reduced drastically without a noticeable effect.
plt.plot(y)
plt.plot(w)
plt.plot(v)
plt.xlabel('sample')
plt.ylabel('amplitude')
plt.show()
If you want to check out the full code along with a wav of the signal and an example video of the output here's a link:
https://drive.google.com/drive/folders/18ogpK4n43d_Q0tjdmlm2uIRZBIrRu01y?usp=sharing
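As a side note, the workaround of lowering the sample rate can often be avoided by designing the filter as second-order sections, which stay numerically better behaved for narrow bands at high sample rates. A hedged sketch, reusing the same band as the test script (1600-1800 Hz at 24 kHz corresponds to 12800-14400 Hz at the full 192 kHz rate); the variable names y_arr, am_audio and rest are mine:
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 192000                  # full tape sample rate; no downsampling needed
low, high = 12800, 14400     # the test script's band, scaled back to 192 kHz

# Second-order sections are numerically more robust than (b, a) coefficients
# for narrow bands; a much narrower band (e.g. 13990-14010 Hz) can be tried
# the same way, but may still need care.
sos = butter(4, [low, high], btype='bandpass', fs=fs, output='sos')

y_arr = np.asarray(y)               # y is the combined test signal built above
am_audio = sosfiltfilt(sos, y_arr)  # zero-phase filtering keeps the AM envelope aligned
rest = y_arr - am_audio             # rough complement: everything outside the carrier band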
I just started learning about isochronic tones and wrote a basic Python script to generate them. The script does the following:
Generate a trapezoidal wave at the beat frequency
Modulate a sine wave at the base frequency with that trapezoidal wave
#!/usr/bin/env python
# encoding: utf-8
"""
Small program for creating Isochronic Tones of desired base frequency, beat frequency and ramp frequencies
"""
import math
import wave
import struct
import array
import sys
def make_isochronic_wave(beat_freq, beat_ramp_percent, beat_sampl_rate, beat_time, base_freq, amplitude):
#
# The time for which the trapezoidal beat frequency wave is present in the entire cycle
#
up_time = 1 / beat_freq
#
# Gap between consecutive trapezoidal beats
#
gap_percent = up_time * 0.15
#
# To accommodate gaps
#
up_time = up_time * 0.85
#
# Total number of samples
#
data_size = beat_sampl_rate * beat_time
#
# No. of gaps per sec = No. of beats per sec
#
no_of_gaps = beat_freq
#
# Samples per gap = percentage of total time allocated for gaps * No. of samples per sec
#
sampls_per_gap = gap_percent * beat_sampl_rate
#
# Total number of samples in all the gaps
#
gap_sampls = no_of_gaps * sampls_per_gap
#
# Change the beat sample rate to accommodate gaps
#
beat_sampl_rate = beat_sampl_rate - gap_sampls
#
# nsps = Number of Samples Per Second
#
# NOTE: Please see the image at:
#
beat_nsps_defined = beat_sampl_rate * up_time
beat_nsps_inc = beat_nsps_defined * beat_ramp_percent
beat_nsps_dec = beat_nsps_defined * beat_ramp_percent
beat_nsps_stable = beat_nsps_defined - (beat_nsps_inc + beat_nsps_dec)
beat_nsps_undefined = beat_sampl_rate - beat_nsps_defined
#
# Trapezoidal values
#
values = []
#
# Trapezoidal * sine == Isochronic values
#
isoch_values = []
#
# Samples constructed
#
sampls_const = 0
#
# Iterate till all the samples in data_size are constructed
#
while sampls_const < data_size:
prev_sampl_value_inc = 0.0
prev_sampl_value_dec = 1.0
#
# Construct one trapezoidal beat wave (remember this is not the entire sample)
#
for sampl_itr in range(0, int(beat_sampl_rate)):
if sampl_itr < beat_nsps_inc:
value = prev_sampl_value_inc + (1 / beat_nsps_inc)
prev_sampl_value_inc = value
values.append(value)
if sampl_itr > beat_nsps_inc:
if sampl_itr < (beat_nsps_inc + beat_nsps_stable):
value = 1
values.append(value)
elif (sampl_itr > (beat_nsps_inc + beat_nsps_stable)) and (sampl_itr < (beat_nsps_inc + beat_nsps_stable + beat_nsps_dec)):
value = prev_sampl_value_dec - (1 / beat_nsps_dec)
prev_sampl_value_dec = value
values.append(value)
#
# Add the gap cycles
#
for gap_iter in range(0, int(sampls_per_gap)):
values.append(0)
#
# Increment the number of samples constructed to reflect the values
#
sampls_const = sampls_const + beat_nsps_defined + gap_sampls
#
# Open the wave file
#
wav_file = wave.open("beat_wave_%s_%s.wav" % (base_freq, beat_freq), "w")
#
# Define parameters
#
nchannels = 2
sampwidth = 2
framerate = beat_sampl_rate
nframes = data_size
comptype = "NONE"
compname = "not compressed"
wav_file.setparams((nchannels, sampwidth, framerate, nframes, comptype, compname))
#
# Calculate isochronic wave point values
#
value_iter = 0
for value in values:
isoch_value = value * math.sin(2 * math.pi * base_freq * (value_iter / beat_sampl_rate))
value_iter = value_iter + 1
isoch_values.append(isoch_value)
#
# Create the wave file (in .wav format)
#
for value in isoch_values:
data = array.array('h')
data.append(int(value * amplitude / 2)) # left channel
data.append(int(value * amplitude / 2)) # right channel
wav_file.writeframes(data.tostring())
wav_file.close()
try:
base_freq = int(sys.argv[1], 10)
beat_freq = int(sys.argv[2], 10)
sample_rate = int(sys.argv[3], 10)
output_time = int(sys.argv[4], 10)
ramp_percent = float(sys.argv[5])
amplitude = float(sys.argv[6])
make_isochronic_wave(beat_freq, ramp_percent, sample_rate, output_time, base_freq, amplitude)
except:
msg = """
<program> <base freqency> <beat frequency> <sample rate> <output time> <ramp percent> <amplitude>
"""
print(msg)
The above code works fine, and I get a wave of the format below (similar to what the IsoMod plugin generates in Audacity). However, I would now like the beat frequency to decrease as a ramp. For this, I enhanced the script to call the trapezoidal wave generation in a loop, once every second. But because data_size changes as beat_freq changes across the ramp, the WAV file parameters have to be set multiple times, and I get the following error:
G:\>python gen_isochronic_tones.py 70 10 5 11025 5 0.15 8000
gen_isochronic_tones.py:195: DeprecationWarning: tostring() is deprecated. Use tobytes() instead.
make_isochronic_wave(beat_freq_start, beat_freq_end, ramp_percent, sample_rate, output_time, base_freq, amplitude)
Traceback (most recent call last):
File "gen_isochronic_tones.py", line 195, in <module>
make_isochronic_wave(beat_freq_start, beat_freq_end, ramp_percent, sample_rate, output_time, base_freq, amplitude)
File "gen_isochronic_tones.py", line 156, in make_isochronic_wave
wav_file.setparams((nchannels, sampwidth, framerate, data_size, comptype, compname))
File "G:\Python37-32\lib\wave.py", line 399, in setparams
raise Error('cannot change parameters after starting to write')
wave.Error: cannot change parameters after starting to write
Looks like the wave module allows the parameters (namely data_size above) to be set only once. Any idea how to make this work with a changing data_size?
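For what it's worth, the error just means setparams() may not be called again after the first writeframes() on the same Wave_write object. A minimal sketch of one way around it: accumulate every sample first, then open the file and write once. The generate_samples helper in the commented usage is a hypothetical stand-in for the per-second trapezoidal generation above:
import wave
import array

def write_wav_once(filename, samples, sample_rate, amplitude=8000):
    """Open the WAV file only after all samples are known, so setparams()
    is called exactly once before any frames are written."""
    wav_file = wave.open(filename, "w")
    # nchannels, sampwidth, framerate, nframes, comptype, compname
    wav_file.setparams((2, 2, sample_rate, len(samples), "NONE", "not compressed"))
    frames = array.array('h')
    for value in samples:
        frames.append(int(value * amplitude / 2))  # left channel
        frames.append(int(value * amplitude / 2))  # right channel
    wav_file.writeframes(frames.tobytes())
    wav_file.close()

# Hypothetical usage: build the whole ramp in memory first, then write once.
# all_samples = []
# for second, beat_freq in enumerate(beat_freq_ramp):
#     all_samples.extend(generate_samples(beat_freq, ...))  # per-second generation
# write_wav_once("ramped_isochronic.wav", all_samples, 11025)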
Is there a way in Python to generate a continuous series of beeps of increasing amplitude and export it to a WAV file?
I've based this on the answer to the previous question and added a lot of comments. Hopefully this makes it clear. You'll probably want to introduce a for loop to control the number of beeps and the increasing volume; a sketch of such a loop follows the code below.
#!/usr/bin/python
# based on : www.daniweb.com/code/snippet263775.html
import math
import wave
import struct
# Audio will contain a long list of samples (i.e. floating point numbers describing the
# waveform). If you were working with a very long sound you'd want to stream this to
# disk instead of buffering it all in memory like this. But most sounds will fit in
# memory.
audio = []
sample_rate = 44100.0
def append_silence(duration_milliseconds=500):
"""
Adding silence is easy - we add zeros to the end of our array
"""
num_samples = duration_milliseconds * (sample_rate / 1000.0)
for x in range(int(num_samples)):
audio.append(0.0)
return
def append_sinewave(
freq=440.0,
duration_milliseconds=500,
volume=1.0):
"""
The sine wave generated here is the standard beep. If you want something
more aggressive you could try a square or saw tooth waveform. Though there
are some rather complicated issues with making high quality square and
sawtooth waves... which we won't address here :)
"""
global audio # using global variables isn't cool.
num_samples = duration_milliseconds * (sample_rate / 1000.0)
for x in range(int(num_samples)):
audio.append(volume * math.sin(2 * math.pi * freq * ( x / sample_rate )))
return
def save_wav(file_name):
# Open up a wav file
wav_file=wave.open(file_name,"w")
# wav params
nchannels = 1
sampwidth = 2
# 44100 is the industry standard sample rate - CD quality. If you need to
# save on file size you can adjust it downwards. The standard for low quality
# is 8000 or 8kHz.
nframes = len(audio)
comptype = "NONE"
compname = "not compressed"
wav_file.setparams((nchannels, sampwidth, sample_rate, nframes, comptype, compname))
# WAV files here are using short, 16 bit, signed integers for the
# sample size. So we multiply the floating point data we have by 32767, the
# maximum value for a short integer. NOTE: It is theoretically possible to
# use the floating point -1.0 to 1.0 data directly in a WAV file but not
# obvious how to do that using the wave module in python.
for sample in audio:
wav_file.writeframes(struct.pack('h', int( sample * 32767.0 )))
wav_file.close()
return
append_sinewave(volume=0.25)
append_silence()
append_sinewave(volume=0.5)
append_silence()
append_sinewave()
save_wav("output.wav")
I added minor improvements to the JCx code above. As the author said, it's not cool to use global variables, so I wrapped the solution in a class, and it works just fine:
import math
import wave
import struct
class BeepGenerator:
def __init__(self):
# Audio will contain a long list of samples (i.e. floating point numbers describing the
# waveform). If you were working with a very long sound you'd want to stream this to
# disk instead of buffering it all in memory like this. But most sounds will fit in
# memory.
self.audio = []
self.sample_rate = 44100.0
def append_silence(self, duration_milliseconds=500):
"""
Adding silence is easy - we add zeros to the end of our array
"""
num_samples = duration_milliseconds * (self.sample_rate / 1000.0)
for x in range(int(num_samples)):
self.audio.append(0.0)
return
def append_sinewave(
self,
freq=440.0,
duration_milliseconds=500,
volume=1.0):
"""
The sine wave generated here is the standard beep. If you want something
more aggressive you could try a square or saw tooth waveform. Though there
are some rather complicated issues with making high quality square and
sawtooth waves... which we won't address here :)
"""
num_samples = duration_milliseconds * (self.sample_rate / 1000.0)
for x in range(int(num_samples)):
self.audio.append(volume * math.sin(2 * math.pi * freq * ( x / self.sample_rate )))
return
def save_wav(self, file_name):
# Open up a wav file
wav_file=wave.open(file_name,"w")
# wav params
nchannels = 1
sampwidth = 2
# 44100 is the industry standard sample rate - CD quality. If you need to
# save on file size you can adjust it downwards. The standard for low quality
# is 8000 or 8kHz.
nframes = len(self.audio)
comptype = "NONE"
compname = "not compressed"
wav_file.setparams((nchannels, sampwidth, self.sample_rate, nframes, comptype, compname))
# WAV files here are using short, 16 bit, signed integers for the
# sample size. So we multiply the floating point data we have by 32767, the
# maximum value for a short integer. NOTE: It is theoretically possible to
# use the floating point -1.0 to 1.0 data directly in a WAV file but not
# obvious how to do that using the wave module in python.
for sample in self.audio:
wav_file.writeframes(struct.pack('h', int( sample * 32767.0 )))
wav_file.close()
return
if __name__ == "__main__":
bg = BeepGenerator()
bg.append_sinewave(volume=0.25, duration_milliseconds=100)
bg.append_silence()
bg.append_sinewave(volume=0.5, duration_milliseconds=700)
bg.append_silence()
bg.save_wav("output.wav")
I adjusted it a bit further: it should now be a lot faster, and I added a function for playing multiple tones at the same time.
import numpy as np
import scipy.io.wavfile
class BeepGenerator:
def __init__(self):
# Audio will contain a long list of samples (i.e. floating point numbers describing the
# waveform). If you were working with a very long sound you'd want to stream this to
# disk instead of buffering it all in memory like this. But most sounds will fit in
# memory.
self.audio = []
self.sample_rate = 44100.0
def append_silence(self, duration_milliseconds=500):
"""
Adding silence is easy - we add zeros to the end of our array
"""
num_samples = duration_milliseconds * (self.sample_rate / 1000.0)
for x in range(int(num_samples)):
self.audio.append(0.0)
return
def append_sinewave(
self,
freq=440.0,
duration_milliseconds=500,
volume=1.0):
"""
The sine wave generated here is the standard beep. If you want something
more aggressive you could try a square or saw tooth waveform. Though there
are some rather complicated issues with making high quality square and
sawtooth waves... which we won't address here :)
"""
num_samples = duration_milliseconds * (self.sample_rate / 1000.0)
x = np.arange(int(num_samples))
sine_wave = volume * np.sin(2 * np.pi * freq * (x / self.sample_rate))
self.audio.extend(list(sine_wave))
return
def append_sinewaves(
self,
freqs=[440.0],
duration_milliseconds=500,
volumes=[1.0]):
"""
The sine wave generated here is the standard beep. If you want something
more aggressive you could try a square or saw tooth waveform. Though there
are some rather complicated issues with making high quality square and
sawtooth waves... which we won't address here :)
len(freqs) must be the same as len(volumes)
"""
volumes = list(np.array(volumes)/sum(volumes))
num_samples = duration_milliseconds * (self.sample_rate / 1000.0)
x = np.arange(int(num_samples))
first_it = True
for volume, freq in zip(volumes, freqs):
print(freq)
if first_it:
sine_wave = volume * np.sin(2 * np.pi * freq * (x / self.sample_rate))
first_it = False
else:
sine_wave += volume * np.sin(2 * np.pi * freq * (x / self.sample_rate))
self.audio.extend(list(sine_wave))
return
def save_wav(self, file_name):
# Open up a wav file and write out the samples.
# 44100 is the industry standard sample rate - CD quality. If you need to
# save on file size you can adjust it downwards. The standard for low quality
# is 8000 or 8kHz.
# scipy.io.wavfile.write accepts float32 data directly; the samples are
# expected to lie in the range -1.0 to 1.0 and the file is written as a
# 32-bit float WAV, so no scaling to 16-bit integers is needed here.
self.audio = np.array(self.audio).astype(np.float32)
scipy.io.wavfile.write(file_name, int(self.sample_rate), np.array(self.audio))
return
if __name__ == "__main__":
bg = BeepGenerator()
bg.append_sinewave(volume=1, duration_milliseconds=100)
bg.append_silence()
bg.append_sinewave(volume=0.5, duration_milliseconds=700)
bg.append_silence()
bg.append_sinewaves(volumes=[1, 1], duration_milliseconds=700, freqs=[880, 660])
bg.append_silence()
bg.save_wav("output.wav")
SciPy/Numpy seems to support many filters, but not the root-raised cosine filter. Is there a trick to easily create one rather than calculating the transfer function? An approximation would be fine as well.
The commpy package has several filters included with it. The order of the return variables was switched in an earlier version (as of this edit, the current version is 0.7.0). To install, follow the instructions in the commpy documentation.
Here's a use example for 1024 symbols of QAM16:
import numpy as np
from commpy.modulation import QAMModem
from commpy.filters import rrcosfilter
N = 1024 # Number of symbols
os = 8 #over sampling factor
# Create modulation. QAM16 makes 4 bits/symbol
mod1 = QAMModem(16)
# Generate the bit stream for N symbols
sB = np.random.randint(0, 2, N*mod1.num_bits_symbol)
# Generate N complex-integer valued symbols
sQ = mod1.modulate(sB)
sQ_upsampled = np.zeros(os*(len(sQ)-1)+1,dtype = np.complex64)
sQ_upsampled[::os] = sQ
# Create a filter with limited bandwidth. Parameters:
# N: Filter length in samples
# 0.8: Roll off factor alpha
# 1: Symbol period in time-units
# os: Sample rate in 1/time-units (the oversampling factor defined above)
sPSF = rrcosfilter(N, alpha=0.8, Ts=1, Fs=os)[1]
# Analog signal has N/2 leading and trailing near-zero samples
qW = np.convolve(sPSF, sQ_upsampled)
Here's some explanation of the parameters. N is the number of baud samples. You need 4 times as many bits (in the case of QAM) as samples. I made the sPSF array return with N elements so we can see the signal with leading and trailing samples. See the Wikipedia Root-raised-cosine filter page for explanation of parameter alpha. Ts is the symbol period in seconds and Fs is the number of filter samples per Ts. I like to pretend Ts=1 to keep things simple (unit symbol rate). Then Fs is the number of complex waveform samples per baud point.
If you use return element 0 from rrcosfilter to get the sample time indexes, you need to insert the correct symbol period and filter sample rate in Ts and Fs for the index values to be correctly scaled.
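For example, a minimal sketch of using that first return element, assuming the same N, Ts and Fs convention as above:
import matplotlib.pyplot as plt

# rrcosfilter returns (time_index, impulse_response); keeping both gives a
# correctly scaled time axis for plotting the taps.
time_idx, taps = rrcosfilter(N, alpha=0.8, Ts=1, Fs=os)

plt.plot(time_idx, taps)
plt.xlabel('time (symbol periods, since Ts=1)')
plt.ylabel('filter tap value')
plt.show()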
It would be nice to have the root-raised cosine filter standardized in a common package. Here is my implementation in the meantime, based on commpy. It is vectorized with numpy and normalized without consideration of the symbol rate.
def raised_root_cosine(upsample, num_positive_lobes, alpha):
"""
Root raised cosine (RRC) filter (FIR) impulse response.
upsample: number of samples per symbol
num_positive_lobes: number of positive overlapping symbols
length of filter is 2 * num_positive_lobes + 1 samples
alpha: roll-off factor
"""
N = upsample * (num_positive_lobes * 2 + 1)
t = (np.arange(N) - N / 2) / upsample
# result vector
h_rrc = np.zeros(t.size, dtype=float)   # the np.float alias was removed in NumPy 1.24
# index for special cases
sample_i = np.zeros(t.size, dtype=bool)
# deal with special cases
subi = t == 0
sample_i = np.bitwise_or(sample_i, subi)
h_rrc[subi] = 1.0 - alpha + (4 * alpha / np.pi)
subi = np.abs(t) == 1 / (4 * alpha)
sample_i = np.bitwise_or(sample_i, subi)
h_rrc[subi] = (alpha / np.sqrt(2)) \
* (((1 + 2 / np.pi) * (np.sin(np.pi / (4 * alpha))))
+ ((1 - 2 / np.pi) * (np.cos(np.pi / (4 * alpha)))))
# base case
sample_i = np.bitwise_not(sample_i)
ti = t[sample_i]
h_rrc[sample_i] = np.sin(np.pi * ti * (1 - alpha)) \
+ 4 * alpha * ti * np.cos(np.pi * ti * (1 + alpha))
h_rrc[sample_i] /= (np.pi * ti * (1 - (4 * alpha * ti) ** 2))
return h_rrc
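As a quick usage sketch (the parameter values are illustrative, not from the answer above): build the taps, upsample some symbols, and pulse-shape them by convolution.
import numpy as np

sps = 8  # samples per symbol (the 'upsample' argument)
h = raised_root_cosine(upsample=sps, num_positive_lobes=4, alpha=0.35)

# Toy BPSK symbols, upsampled by inserting zeros between them
symbols = np.random.choice([-1.0, 1.0], size=64)
upsampled = np.zeros(len(symbols) * sps)
upsampled[::sps] = symbols

shaped = np.convolve(upsampled, h)  # pulse-shaped waveform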
commpy doesn't seem to be released yet. But here is my nugget of knowledge.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

beta = 0.20            # roll-off factor
Tsample = 1.0          # sampling period; the sample rate should be at least twice the symbol rate
oversampling_rate = 8  # oversampling of the bit stream; this gives the samples per symbol
                       # and must be at least 2x the bit rate
Tsymbol = oversampling_rate * Tsample # pulse duration should be at least 2 * Ts
span = 50 # number of symbols to span, must be even
n = span*oversampling_rate # length of the filter = samples per symbol * symbol span
# t_step runs from -span/2 to +span/2 symbols;
# each symbol spans `oversampling_rate` samples.
t_step = Tsample * np.linspace(-n/2,n/2,n+1) # n+1 to include 0 time
BW = (1 + beta) / Tsymbol
a = np.zeros_like(t_step)
for i, t in enumerate(t_step):
    # t is n * Tsample
    if (1 - (2.0 * beta * t / Tsymbol) ** 2) == 0:
        # singularity of the denominator at |t| = Tsymbol / (2 * beta)
        a[i] = np.pi / 4 * np.sinc(t / Tsymbol)
        print('i = %d' % i)
    elif t == 0:
        a[i] = np.cos(beta * np.pi * t / Tsymbol) / (1 - (2.0 * beta * t / Tsymbol) ** 2)
        print('t = 0 captured')
        print('i = %d' % i)
    else:
        # np.sinc is the normalised sinc, so it already includes the factor of pi
        numerator = np.sinc(t / Tsymbol) * np.cos(np.pi * beta * t / Tsymbol)
        denominator = 1.0 - (2.0 * beta * t / Tsymbol) ** 2
        a[i] = numerator / denominator
#a = a/sum(a) # normalize total power
plot_filter = 0
if plot_filter == 1:
w,h = signal.freqz(a)
fig = plt.figure()
plt.subplot(2,1,1)
plt.title('Digital filter (raised cosine) frequency response')
ax1 = fig.add_subplot(211)
plt.plot(w/np.pi, 20*np.log10(abs(h)),'b')
#plt.plot(w/np.pi, abs(h),'b')
plt.ylabel('Amplitude (dB)', color = 'b')
plt.xlabel(r'Normalized Frequency ($\pi$ rad/sample)')
ax2 = ax1.twinx()
angles = np.unwrap(np.angle(h))
plt.plot(w/np.pi, angles, 'g')
plt.ylabel('Angle (radians)', color = 'g')
plt.grid()
plt.axis('tight')
plt.show()
plt.subplot(2,1,2)
plt.stem(a)
plt.show()
I think the correct response is to generate the desired impulse response. For a raised cosine filter the function is
h(n) = (sinc(n/T)*cos(pi * alpha* n /T)) / (1-4*(alpha*n/T)**2)
Select the number of points for your filter and generate the weights.
output = scipy.signal.convolve(signal_in, h)
This is basically the same function as in CommPy but much smaller in code:
def rcosfilter(N, beta, Ts, Fs):
t = (np.arange(N) - N / 2) / Fs
return np.where(np.abs(2*t) == Ts / beta,
np.pi / 4 * np.sinc(t/Ts),
np.sinc(t/Ts) * np.cos(np.pi*beta*t/Ts) / (1 - (2*beta*t/Ts) ** 2))
SciPy will support any filter. Just calculate the impulse response and use any of the appropriate scipy.signal filter/convolve functions.
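For instance, a hedged sketch of that workflow using the rcosfilter helper defined just above (the parameter values are arbitrary):
import numpy as np
from scipy import signal

Fs = 8       # samples per symbol period
Ts = 1       # symbol period in the same time units
beta = 0.35  # roll-off factor

h = rcosfilter(N=8 * 16, beta=beta, Ts=Ts, Fs=Fs)  # 16 symbols' worth of taps

# Filter an upsampled symbol stream with the impulse response
symbols = np.random.choice([-1.0, 1.0], size=128)
x = np.zeros(len(symbols) * Fs)
x[::Fs] = symbols
shaped = signal.fftconvolve(x, h)  # equivalent to convolving with the impulse response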