I am trying to analyze my audio output (I routed it back as an audio input using a virtual cable) and get specific frequencies with their amplitudes.
I managed to find some code that finds the frequency with the highest amplitude. I have tried to understand the FFT and everything around it, but I can't seem to figure out how to filter on specific frequencies.
This is what I have so far:
import pyaudio
import numpy as np
# Audio variables
CHUNK = 4096
RATE = 44100
power = 12
# Opens audio stream
p = pyaudio.PyAudio()
stream=p.open(format=pyaudio.paInt16,channels=1,rate=RATE,input=True,frames_per_buffer=CHUNK)
while True:
    # Read a chunk of samples from the stream
    data = np.frombuffer(stream.read(CHUNK), dtype=np.int16)
    # Rough amplitude estimate for the bar display
    peak = np.average(np.abs(data)) * 2
    # Build the bars for the amplitude
    bars = "#" * int(50 * peak / 2**power)
    # Find the frequency at which the peak occurs
    data = data * np.hanning(len(data))
    fft = np.abs(np.fft.fft(data))
    fft = fft[:int(len(fft) / 2)]
    freq = np.fft.fftfreq(CHUNK, 1.0 / RATE)
    freq = freq[:int(len(freq) / 2)]
    freqPeak = freq[np.where(fft == np.max(fft))[0][0]] + 1
    # Show the peak frequency and the bars for the amplitude
    print("peak frequency: %d Hz" % freqPeak + " " + bars)
stream.stop_stream()
stream.close()
p.terminate()
Can one of you help me and explain how to filter on specific frequencies?
(If it isn't clear what I want or am trying to do, just ask and I'll try to explain better.)
I'm trying to detect certain frequencies in the audio output of the computer. For some reason, the frequencies that I find are roughly half the expected value. The total program is a little long, so I'll only post the relevant parts.
I use a fork of pyaudio to stream the output sound (https://github.com/intxcc/pyaudio_portaudio). From this, I read in some audio data, apply scipy's rfft, and plot the spectrum.
frames = np.frombuffer(
    self.stream.read(int(self.sampleRate/10)), np.int16)
xf, yf = audioUtils.getSpectrum(frames, self.sampleRate)
self.plotWidget.plot(xf, yf)
The rfft:
# imports assumed; the question only says scipy's rfft is used
from scipy.fft import rfft, rfftfreq
import numpy as np

def getSpectrum(frames, sampleRate):
    n = len(frames)
    yf = rfft(frames)
    yf = np.abs(yf)
    xf = rfftfreq(n, 1 / sampleRate)
    return xf, yf
I've been testing the code with an online tone generator (https://www.szynalski.com/tone-generator/). For a 10 kHz tone, the main peak in the resulting spectrum sits at roughly half the expected frequency, and there also seems to be a peak mirrored around ~12 kHz. A second example, with a 20 kHz tone, behaves similarly.
My sampling rate is 44100 Hz.
Am I not doing the rfft right?
The problem was that the stream was reading in audio from a 2-channel source, so the samples of the two channels were interleaved. A simple slice fixed it.
The working code:
frames = np.frombuffer(
    self.stream.read(int(self.sampleRate)), np.int16)
frames = frames[::2]
xf, yf = audioUtils.getSpectrum(frames, self.sampleRate)
self.plotWidget.plot(xf, yf)
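As a side note, if you ever need both channels rather than just one, a common alternative to the `frames[::2]` slice is to reshape the interleaved buffer into one column per channel. A minimal sketch, assuming interleaved 16-bit stereo data (names here are only illustrative):

import numpy as np

def deinterleave(raw_bytes, channels=2):
    """Split an interleaved int16 byte buffer into one column per channel."""
    samples = np.frombuffer(raw_bytes, dtype=np.int16)
    # Interleaved frames [L0, R0, L1, R1, ...] -> array of shape (n_frames, channels)
    return samples.reshape(-1, channels)

# stereo = deinterleave(stream.read(CHUNK))
# left, right = stereo[:, 0], stereo[:, 1]
# mono = stereo.mean(axis=1)  # or analyze a single channel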
Hi, I am currently using Librosa for an audio project I am working on, and I was wondering how I can get the amplitude of a frequency at a specific time frame in an audio file. I don't know if it is straightforward, but I have looked online and can't find anything. I know that you can produce a spectrogram, but how can you get information such as the amplitude of a frequency at a given timestamp?
EDIT: I meant the amplitude at a timestamp.
The spectrogram is a discrete time-frequency representation. In librosa the frequency bins are along the first axis and time along the second axis. The frequency bins depend on the FFT size (n_fft), and the time bins depend on the hop length.
The example below shows how to get the amplitude at a given location in the spectrogram, together with the time and frequency associated with that location.
import librosa
import numpy

filename = librosa.util.example_audio_file()
y, sr = librosa.load(filename)
n_fft = 1024
hop_length = 512
# Magnitude spectrogram: shape (1 + n_fft/2 frequency bins, n_frames)
spec = numpy.abs(librosa.core.stft(y, n_fft=n_fft, hop_length=hop_length))
# Center frequency (Hz) of each bin, and time (s) of each frame
freqs = librosa.core.fft_frequencies(sr=sr, n_fft=n_fft)
times = librosa.core.frames_to_time(numpy.arange(spec.shape[1]), sr=sr,
                                    n_fft=n_fft, hop_length=hop_length)
print('spectrogram size', spec.shape)
fft_bin = 14
time_idx = 1000
print('freq (Hz)', freqs[fft_bin])
print('time (s)', times[time_idx])
print('amplitude', spec[fft_bin, time_idx])
Similarly you can go from frequency and time to an index in the spectrogram. But since it has been discretized you always have to round to the closest index.
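As a rough sketch of that reverse lookup (reusing the freqs, spec, sr and hop_length defined above; note that librosa.core.time_to_frames floors to the frame containing the given time):

import numpy

def nearest_bin_and_frame(freq_hz, time_s):
    # Closest frequency bin to freq_hz
    fft_bin = int(numpy.argmin(numpy.abs(freqs - freq_hz)))
    # Frame index containing time_s
    time_idx = int(librosa.core.time_to_frames(time_s, sr=sr, hop_length=hop_length))
    return fft_bin, time_idx

bin_idx, frame_idx = nearest_bin_and_frame(440.0, 10.0)
print('amplitude', spec[bin_idx, frame_idx])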
I'm trying to create a home-made spectrum analyzer with 8 strips of LEDs.
The part I'm struggling with is performing the FFT and understanding how to use the results.
So far this is what I have:
import opc
import time
import pyaudio
import wave
import sys
import numpy
import math
CHUNK = 1024
# Gets the pitch from the audio
def pitch(signal):
    # NOT SURE IF ANY OF THIS IS CORRECT
    signal = numpy.fromstring(signal, 'Int16')
    print "signal = ", signal
    testing = numpy.fft.fft(signal)
    print "testing = ", testing
wf = wave.open(sys.argv[1], 'rb')
RATE = wf.getframerate()
p = pyaudio.PyAudio() # Instantiate PyAudio
# Open Stream
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                channels=wf.getnchannels(),
                rate=wf.getframerate(),
                output=True)
# Read data
data = wf.readframes(CHUNK)
# Play Stream
while data != '':
    stream.write(data)
    data = wf.readframes(CHUNK)
    frequency = pitch(data)
    print "%f frequency" % frequency
I'm struggling with what to do in the pitch method. I know I need to perform an FFT on the data that is passed in, but I'm really unsure how to do it.
Also, should I be using this function?
Because of the way np.fft.fft works, if you use 1024 data points you will get values for 512 frequencies (plus a value for zero Hz, the DC offset). If you only want 8 frequencies you have to feed it 16 data points.
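A quick sanity check of those bin counts with numpy's real-valued FFT (purely illustrative, not part of the LED code):

import numpy as np

# rfft keeps only the non-negative frequencies: N//2 + 1 bins, DC included
print(np.fft.rfft(np.zeros(1024)).shape)  # (513,) -> DC + 512 frequencies
print(np.fft.rfft(np.zeros(16)).shape)    # (9,)   -> DC + 8 frequencies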
You might be able to do what you want by down-sampling by a factor of 64 - then 16 down-sampled points would be time-equivalent to 1024 original points. I've never explored this, so I don't know what it entails or what the pitfalls might be.
You're going to have to do some learning - The Scientist and Engineer's Guide to Digital Signal Processing really is an excellent resource, at least it was for me.
Keep in mind that for an audio CD .wav file the sample rate is 44100 Hz - a 1024-sample chunk is only about 23 ms of the sound.
scipy.io.wavfile.read makes getting the data easy.
samp_rate, data = scipy.io.wavfile.read(filename)
data is a 2-d numpy array with one channel in column zero, data[:,0], and the other in column one, data[:,1].
Matplotlib's specgram and psd functions can give you the data you want. A graphing analog to what you are trying to do would be:
from matplotlib import pyplot as plt
import scipy.io.wavfile
samp_rate, data = scipy.io.wavfile.read(filename)
Pxx, freqs, bins, im = plt.specgram(data[:1024,0], NFFT = 16, noverlap = 0, Fs = samp_rate)
plt.show()
plt.close()
Since you aren't doing any plotting, just use matplotlib.mlab.specgram.
Pxx, freqs, t = matplotlib.mlab.specgram(data[:1024,0], NFFT = 16, noverlap = 0, Fs = samp_rate)
Its return values (Pxx, freqs, t) are
- *Pxx*: 2-D array, columns are the periodograms of successive segments
- *freqs*: 1-D array of frequencies corresponding to the rows in Pxx
- *t*: 1-D array of times corresponding to midpoints of segments.
Pxx[1:, 0] would be the values for the frequencies for T0, Pxx[1:, 1] for T1, Pxx[1:, 2] for T2, ... This is what you would feed to your display. You don't use Pxx[0, :] because it is for 0 Hz.
power spectral density - matplotlib.mlab.psd()
Maybe another strategy to get down to 8 bands would be to use large chunks and normalize the values. Then you could break the values up into eight segments and take the sum of each segment. I think this is valid - maybe only for the power spectral density. sklearn.preprocessing.normalize
w = sklearn.preprocessing.normalize(Pxx[1:,:], norm = 'l1', axis = 0)
But then again, I just made all that up.
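To make that banding idea concrete, here is a small sketch of one way to collapse the specgram output above into 8 LED levels. It assumes the Pxx from matplotlib.mlab.specgram and simply sums equal-sized groups of the non-DC rows, then scales to 0-1; equal-width groups (rather than perceptual bands) are used only to keep the example short:

import numpy as np

def to_led_levels(Pxx, n_leds=8):
    """Collapse a (freq_bins, time_segments) periodogram into n_leds levels per segment."""
    bands = np.array_split(Pxx[1:, :], n_leds, axis=0)        # drop the 0 Hz row, group the rest
    levels = np.vstack([band.sum(axis=0) for band in bands])  # shape (n_leds, time_segments)
    return levels / max(levels.max(), 1e-12)                  # normalize to 0..1 for brightness

# led_levels = to_led_levels(Pxx)
# column t of led_levels gives the brightness of the 8 LED strips for segment t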
I don't know about the scipy.io.wavfile.read function that @wwii mentions in his answer, but it seems that his suggestion is the way to go for loading the signal. However, I just wanted to comment on the Fourier transform.
What I imagine you intend to do with your LED setup is to change each LED's brightness according to the power of the spectrum in each of the 8 frequency bands that you intend to use. So what I understand you need is to compute the power over time in some way. The first complication is: how do you compute the spectral power?
The best way to do this is with numpy.fft.rfft, which computes the Fourier transform for signals that only contain real numbers (not complex numbers). In contrast, numpy.fft.fft is a general-purpose function that can compute the fast Fourier transform of signals with complex numbers. The conceptual difference is that numpy.fft.fft can be used to study travelling waves and their propagation direction, because the returned amplitudes correspond to positive or negative frequencies that indicate how the wave travels. numpy.fft.rfft yields the amplitudes only for the non-negative frequencies given by numpy.fft.rfftfreq, which is what you need.
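A tiny illustration of that difference (independent of the pitch function below):

import numpy as np

# For an 8-sample signal sampled at 8 Hz:
print(np.fft.fftfreq(8, d=0.125))   # [ 0.  1.  2.  3. -4. -3. -2. -1.]  positive and negative
print(np.fft.rfftfreq(8, d=0.125))  # [0. 1. 2. 3. 4.]                   non-negative only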
The last issue is choosing appropriate frequency bands in which to compute the spectral power. The human ear has a huge frequency range, and the width of each band varies a lot, with the low-frequency bands being very narrow and the high-frequency bands very wide. Googling around, I found this nice resource that defines 7 relevant frequency bands:
Sub-bass: 20 to 60 Hz
Bass: 60 to 250 Hz
Low midrange: 250 to 500 Hz
Midrange: 500 Hz to 2 kHz
Upper midrange: 2 to 4 kHz
Presence: 4 to 6 kHz
Brilliance: 6 to 20 kHz
I would suggest using these bands, but splitting the upper midrange into 2-3 kHz and 3-4 kHz. That way you'll be able to use your 8-LED setup. Here is an updated pitch function for you to use:
wf = wave.open(sys.argv[1], 'rb')
CHUNK = 1024
RATE = wf.getframerate()
DT = 1./float(RATE)  # time between two successive audio frames
FFT_FREQS = numpy.fft.rfftfreq(CHUNK, DT)
FFT_FREQS_INDS = -numpy.ones_like(FFT_FREQS)
bands_bounds = [[20, 60],       # Sub-bass
                [60, 250],      # Bass
                [250, 500],     # Low midrange
                [500, 2000],    # Midrange
                [2000, 3000],   # Upper midrange 0
                [3000, 4000],   # Upper midrange 1
                [4000, 6000],   # Presence
                [6000, 20000]]  # Brilliance
# Map each FFT frequency to the index of the band (LED) it falls into
for f_ind, freq in enumerate(FFT_FREQS):
    for led_ind, bounds in enumerate(bands_bounds):
        if freq >= bounds[0] and freq < bounds[1]:
            FFT_FREQS_INDS[f_ind] = led_ind
# Returns the spectral power in each of the 8 bands assigned to the LEDs
def pitch(signal):
    # CONSIDER SWITCHING TO scipy.io.wavfile.read TO GET SIGNAL
    signal = numpy.fromstring(signal, 'Int16')
    amplitude = numpy.fft.rfft(signal.astype(float))
    power = [numpy.sum(numpy.abs(amplitude[FFT_FREQS_INDS == led_ind])**2)
             for led_ind in range(len(bands_bounds))]
    return power
The first part of the code computes the FFT frequencies and constructs the array FFT_FREQS_INDS, which records which of the 8 frequency bands each FFT frequency belongs to. Then, in pitch, the spectral power in each of the bands is computed. Of course this can be optimized, but I tried to make the code self-explanatory.
I am doing a project where I want to use the data of a .wav file to drive an animation. The problems I am facing are mainly due to the fact that the animation runs at 25 fps while the .wav file has 44100 samples per second, so I've broken the data into chunks of 44100/25 samples each. Working with the amplitude is fine, and I created an initial test to try it out and it worked. This is the code:
import wave
import struct
wav = wave.open('test.wav', 'rb')
rate = 44100
nframes = wav.getnframes()
data = wav.readframes(-1)
wav.close()
data_c = [data[offset::2] for offset in range(2)]
ch1 = struct.unpack('%ih' % nframes, data_c[0])
ch2 = struct.unpack('%ih' % nframes, data_c[1])
kf = []
for i in range(0, len(ch2), 44100/25):
    cur1 = 0
    cur2 = 0
    for j in range(i, i+44100/25):
        cur1 += ch2[j]
        cur2 += ch1[j]
    cur = (cur1+cur2) / 44100. / 25. / 2.
    kf.append(cur)
min_v = min(kf)
max_v = max(kf)
if abs(max_v) > abs(min_v):
    kf = [float(i)/max_v for i in kf]
else:
    kf = [float(i)/min_v for i in kf]
Now I want to get the spectrum for each separate keyframe as I do for the amplitude, but I'm struggling to think of a way to do it. I can get the spectrum for the whole file using FFT, but that's not what I want, because ideally I would like the objects to move differently in response to different frequencies.
Look at scipy.io.wavfile. It'll turn the wave file into a numpy array. Numpy also has FFT functions, and matplotlib can plot a spectrogram of the entire signal.
from scipy.io import wavfile
sample_rate, data = wavfile.read(filename)
Then you have to work out the timing of how you want to read the data. Matplotlib has animation tools that will call a function at a given interval. The other way of doing it is to use PyAudio. If you use PyAudio you can listen to the audio while it is displayed.
Next run the data through the FFT. Store the FFT values in a spectrogram array and use matplotlib imshow to display the spectrogram array. You will probably have to rotate the array in some fashion when you display the spectrogram.
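A minimal sketch of that idea, assuming a 44100 Hz .wav file and the 25 fps chunking from the question (the file name and variable names here are only illustrative):

import numpy as np
from scipy.io import wavfile

sample_rate, data = wavfile.read('test.wav')
if data.ndim > 1:                     # stereo: keep one channel
    data = data[:, 0]

samples_per_kf = sample_rate // 25    # samples per animation keyframe at 25 fps
spectra = []
for start in range(0, len(data) - samples_per_kf + 1, samples_per_kf):
    chunk = data[start:start + samples_per_kf].astype(float)
    # Windowed FFT magnitude for this keyframe
    spectra.append(np.abs(np.fft.rfft(chunk * np.hanning(len(chunk)))))

spectra = np.array(spectra)           # shape (n_keyframes, n_freq_bins)
freqs = np.fft.rfftfreq(samples_per_kf, 1.0 / sample_rate)
# spectra[k] is the spectrum available to drive the animation at keyframe k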
From personal experience, be careful with Python threads. Threading works for I/O, but for calculations a thread can dominate the whole application and slow everything down. Also, GUI elements (like plotting) don't really work in threads. Use matplotlib's animation tools for the plotting.
OK, what I'm trying to do is a kind of audio-processing software that can detect a prevalent frequency, and if that frequency is played for long enough (a few ms) I know I have a positive match. I know I would need to use FFT or something similar, but in this field of math I suck; I did search the internet but did not find code that could do only this.
The goal I'm trying to achieve is to make myself a custom protocol to send data through sound. I need a very low bitrate (5-10 bps), but I'm also very limited on the transmitting end, so the receiving software will need to be fully custom (I can't use an actual hardware/software modem). I also want this to be software only (no additional hardware except the sound card).
Thanks a lot for the help.
The aubio libraries have been wrapped with SWIG and can thus be used from Python. Among their many features are several methods for pitch detection/estimation, including the YIN algorithm and some harmonic comb algorithms.
However, if you want something simpler, I wrote some code for pitch estimation some time ago and you can take it or leave it. It won't be as accurate as using the algorithms in aubio, but it might be good enough for your needs. I basically just took the FFT of the data times a window (a Blackman window in this case), squared the FFT values, found the bin that had the highest value, and used a quadratic interpolation around the peak using the log of the max value and its two neighboring values to find the fundamental frequency. The quadratic interpolation I took from some paper that I found.
It works fairly well on test tones, but it will not be as robust or as accurate as the other methods mentioned above. The accuracy can be increased by increasing the chunk size (or reduced by decreasing it). The chunk size should be a power of 2 to make full use of the FFT. Also, I am only determining the fundamental pitch for each chunk, with no overlap. I used PyAudio to play the sound through while writing out the estimated pitch.
Source Code:
# Read in a WAV and find the freq's
import pyaudio
import wave
import numpy as np
chunk = 2048
# open up a wave
wf = wave.open('test-tones/440hz.wav', 'rb')
swidth = wf.getsampwidth()
RATE = wf.getframerate()
# use a Blackman window
window = np.blackman(chunk)
# open stream
p = pyaudio.PyAudio()
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                channels=wf.getnchannels(),
                rate=RATE,
                output=True)
# read some data
data = wf.readframes(chunk)
# play stream and find the frequency of each chunk
while len(data) == chunk*swidth:
    # write data out to the audio stream
    stream.write(data)
    # unpack the data and multiply by the Blackman window
    indata = np.array(wave.struct.unpack("%dh" % (len(data)/swidth),
                                         data)) * window
    # Take the fft and square each value
    fftData = abs(np.fft.rfft(indata))**2
    # find the maximum
    which = fftData[1:].argmax() + 1
    # use quadratic interpolation around the max
    if which != len(fftData)-1:
        y0, y1, y2 = np.log(fftData[which-1:which+2:])
        x1 = (y2 - y0) * .5 / (2 * y1 - y2 - y0)
        # find the frequency and output it
        thefreq = (which + x1) * RATE / chunk
        print "The freq is %f Hz." % (thefreq)
    else:
        thefreq = which * RATE / chunk
        print "The freq is %f Hz." % (thefreq)
    # read some more data
    data = wf.readframes(chunk)
if data:
    stream.write(data)
stream.close()
p.terminate()
If you're going to use FSK (frequency shift keying) for encoding data, you're probably better off using the Goertzel algorithm so you can check just the frequencies you want, instead of a full DFT/FFT.
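For reference, a rough sketch of the Goertzel algorithm in plain Python (treat it as an illustration of checking the power at one target frequency per block, not as production code):

import math

def goertzel_power(samples, sample_rate, target_freq):
    """Power of `samples` at `target_freq` (Hz) via the Goertzel algorithm."""
    n = len(samples)
    k = int(0.5 + n * target_freq / sample_rate)   # nearest DFT bin
    omega = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(omega)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Squared magnitude of the DFT bin closest to target_freq
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

# For FSK you would compare, per block, something like:
# power_mark = goertzel_power(chunk, 44100, 1200.0)
# power_space = goertzel_power(chunk, 44100, 2200.0)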
You can find the frequency spectrum of sliding windows over your sound from here, and then check for the presence of the prevalent frequency band by finding the area under the frequency-spectrum curve for that band, from here.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import auc
np.random.seed(0)
# Sine wave with a frequency of 5 Hz plus some noise
sr = 32 # sampling rate
y = np.linspace(0, 5 * 2*np.pi, sr)
y = np.tile(np.sin(y), 5)
y += np.random.normal(0, 1, y.shape)
t = np.arange(len(y)) / float(sr)
# Generate frequency spectrum
spectrum, freqs, _ = plt.magnitude_spectrum(y, sr)
# Calculate percentage for a frequency range
lower_frq, upper_frq = 4, 6
ind_band = np.where((freqs > lower_frq) & (freqs < upper_frq))
plt.fill_between(freqs[ind_band], spectrum[ind_band], color='red', alpha=0.6)
frq_band_perc = auc(freqs[ind_band], spectrum[ind_band]) / auc(freqs, spectrum)
print('{:.1%}'.format(frq_band_perc))
# 19.8%
While I haven't tried audio processing with Python before, perhaps you could build something based on SciPy (or NumPy, which it builds on), a framework for efficient scientific/engineering numerical computation. You might start by looking at scipy.fftpack for your FFT.