How can I detect a specific sound from a live application? - python

Is there a way to capture live audio from an application (like Chrome or Mozilla) and execute some code
when a specific sound is played on a website?

If you have a mic on whatever device you are using, you can use it to pick up the sound coming out of your computer. You can then compare the audio frames you record against a sound file of the sound you are looking for.
Of course, this approach is very vulnerable to background noise, so you will have to filter that out somehow.
Here is an example using the PyAudio and wave libraries:
import pyaudio
import wave

wf = wave.open("websiteSound.wav", "rb")
amountFrames = 100  # just an arbitrary number; could be anything
sframes = wf.readframes(amountFrames)
currentSoundFrame = 0

chunk = 1024                     # Record in chunks of 1024 samples
sample_format = pyaudio.paInt16  # 16 bits per sample
channels = 2
fs = 44100                       # Record at 44100 samples per second
seconds = 3
frames = []                      # recorded chunks

p = pyaudio.PyAudio()            # Create an interface to PortAudio
stream = p.open(format=sample_format,
                channels=channels,
                rate=fs,
                frames_per_buffer=chunk,
                input=True)

# Store data in chunks for 3 seconds
for i in range(0, int(fs / chunk * seconds)):
    data = stream.read(chunk)
    if data == sframes[currentSoundFrame]:
        currentSoundFrame += 1
        if currentSoundFrame == len(sframes):  # the whole sound was played
            print("Sound was played!")
    frames.append(data)

# Stop and close the stream
stream.stop_stream()
stream.close()
# Terminate the PortAudio interface
p.terminate()
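Note that an exact byte-for-byte comparison like the one above will essentially never match a live microphone signal. A more forgiving variant (still only a minimal sketch, assuming a mono 16-bit reference clip and an arbitrary detection threshold; real detection usually needs proper cross-correlation or spectral fingerprinting) is to keep a rolling buffer of the most recent samples and compare it to the reference with a normalized correlation:

import numpy as np
import pyaudio
import wave

wf = wave.open("websiteSound.wav", "rb")            # reference clip (assumed mono, 16-bit)
ref = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16).astype(np.float32)

chunk = 1024
rate = wf.getframerate()
p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=rate,
                frames_per_buffer=chunk, input=True)

ring = np.zeros(len(ref), dtype=np.float32)          # rolling buffer, same length as the clip
THRESHOLD = 0.5                                      # hypothetical value; tune on real recordings

for _ in range(int(rate / chunk * 10)):              # listen for roughly 10 seconds
    block = np.frombuffer(stream.read(chunk), dtype=np.int16).astype(np.float32)
    ring = np.roll(ring, -len(block))                # slide the buffer left
    ring[-len(block):] = block                       # append the newest samples
    denom = np.linalg.norm(ring) * np.linalg.norm(ref) + 1e-9
    similarity = float(np.dot(ring, ref)) / denom    # normalized correlation in [-1, 1]
    if similarity > THRESHOLD:
        print("Sound was (probably) played!")

stream.stop_stream()
stream.close()
p.terminate()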

Related

Specify minimum trigger frequency for recording audio in Python

I'm writing a script for sound-activated recording in Python using pyaudio. I want to trigger a 5s recording after a sound that is above a prespecified volume and frequency. I've managed to get the volume part working but don't know how to specify the minimum trigger frequency (I'd like it to trigger at frequencies above 10kHz, for example):
import pyaudio
import wave
from array import array
import time

FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 44100
CHUNK = 1024
RECORD_SECONDS = 5

audio = pyaudio.PyAudio()
stream = audio.open(format=FORMAT, channels=CHANNELS,
                    rate=RATE,
                    input=True,
                    frames_per_buffer=CHUNK)

nighttime = True
while nighttime:
    data = stream.read(CHUNK)
    data_chunk = array('h', data)
    vol = max(data_chunk)
    if vol >= 3000:
        print("recording triggered")
        frames = []
        for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
            data = stream.read(CHUNK)
            frames.append(data)
        print("recording saved")
        # write to file
        words = ["RECORDING-", time.strftime("%Y%m%d-%H%M%S"), ".wav"]
        FILE_NAME = "".join(words)
        wavfile = wave.open(FILE_NAME, 'wb')
        wavfile.setnchannels(CHANNELS)
        wavfile.setsampwidth(audio.get_sample_size(FORMAT))
        wavfile.setframerate(RATE)
        wavfile.writeframes(b''.join(frames))
        wavfile.close()
    # check if still nighttime
    nighttime = True

stream.stop_stream()
stream.close()
audio.terminate()
I'd like to change the line if(vol>=3000): to something like if(vol>=3000 and frequency>10000):, but I don't know how to compute frequency. How can I do this?
To retrieve the frequency content of a signal you can compute its Fourier transform, switching to the frequency domain (freq in the code below). The next step is to compute the relative amplitude of the signal (amp); the latter is proportional to the sound volume.
import numpy as np

spec = np.abs(np.fft.rfft(audio_array))
freq = np.fft.rfftfreq(len(audio_array), d=1 / sampling_freq)
amp = spec / spec.sum()
Mind that 3000 isn't a sound volume either; the true volume information was lost when the signal was digitized. You are only working with relative numbers now, so you can simply check whether, for example, 1/3 of the energy in a frame lies above 10 kHz.
Here's some code to illustrate the concept:
idx_above_10khz = np.argmax(freq > 10000)
amp_below_10k = amp[:idx_above_10khz].sum()
amp_above_10k = amp[idx_above_10khz:].sum()
Now you can specify a threshold on the ratio of amp_below_10k to amp_above_10k at which your program should trigger.
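As a minimal sketch of how this could be wired into the trigger loop from the question (assuming mono 16-bit chunks as above; the helper name high_freq_ratio and the 0.3 threshold are arbitrary choices, not part of the original answer):

import numpy as np

def high_freq_ratio(data_chunk, sampling_freq=44100, cutoff_hz=10000):
    """Fraction of spectral energy above cutoff_hz in one audio chunk."""
    audio_array = np.array(data_chunk, dtype=np.float64)
    spec = np.abs(np.fft.rfft(audio_array))
    freq = np.fft.rfftfreq(len(audio_array), d=1 / sampling_freq)
    amp = spec / (spec.sum() + 1e-12)          # relative amplitudes, sum to ~1
    return amp[freq > cutoff_hz].sum()

# inside the while-loop of the question:
#   vol = max(data_chunk)
#   if vol >= 3000 and high_freq_ratio(data_chunk, RATE) > 0.3:   # 0.3 is arbitrary
#       print("recording triggered")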

pyaudio audio recording python

I am trying to record audio from the microphone with Python, and I have the following code:
import pyaudio
import wave
import threading

FORMAT = pyaudio.paInt16
CHANNELS = 2
RATE = 44100
CHUNK = 1024
WAVE_OUTPUT_FILENAME = "file.wav"

stop_ = False
audio = pyaudio.PyAudio()
stream = audio.open(format=FORMAT, channels=CHANNELS,
                    rate=RATE, input=True,
                    frames_per_buffer=CHUNK)

def stop():
    global stop_
    while True:
        if not input('Press Enter >>>'):
            print('exit')
            stop_ = True

t = threading.Thread(target=stop, daemon=True).start()

frames = []
while True:
    data = stream.read(CHUNK)
    frames.append(data)
    if stop_:
        break

stream.stop_stream()
stream.close()
audio.terminate()

waveFile = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
waveFile.setnchannels(CHANNELS)
waveFile.setsampwidth(audio.get_sample_size(FORMAT))
waveFile.setframerate(RATE)
waveFile.writeframes(b''.join(frames))
waveFile.close()
The code runs without errors, but when I play back my recording, I don't hear any sound in the output file (file.wav).
Why does this happen and how do I fix it?
Your code is working fine. The problem you are facing is microphone access rights: the audio stream delivers constant zero data, therefore you can't hear any sound in the generated wav file. I suppose your microphone device is installed and working properly. If you are not sure about its status, check it as follows for your operating system:
macOS:
System Preferences -> Sound -> Input; the level bars should move as you make some sound. Make sure the selected device type is Built-in.
Windows:
Open the Sound settings and test the microphone by clicking "Listen to this device"; you may uncheck it later, because it loops your voice back to the speakers and creates loud noise.
Most probably you are using macOS. I had a similar issue because I was using the Atom editor to run the Python code. Try to run your code from the macOS Terminal (or PowerShell if you are on Windows); in case a popup appears asking for microphone access on macOS, press OK. That's it! Your code will record fine. As a test, please run the code below to check whether you can visualize the sound, and make sure to run it through the Terminal (no editors or IDEs).
import queue
import sys

from matplotlib.animation import FuncAnimation
import matplotlib.pyplot as plt
import numpy as np
import sounddevice as sd

# Audio settings: we will use the default PC or laptop mic as input
device = 0       # id of the audio device, by default
window = 1000    # window for the data, in milliseconds
downsample = 1   # downsampling factor (keep every Nth sample)
channels = [1]   # a list of audio channels
interval = 30    # update interval for the plot, in milliseconds

# queue that carries audio blocks from the callback to the plot
q = queue.Queue()

# Please note that sd.query_devices has an s at the end.
device_info = sd.query_devices(device, 'input')
samplerate = device_info['default_samplerate']
length = int(window * samplerate / (1000 * downsample))
print("Sample Rate: ", samplerate)  # a typical sample rate is 44100

# buffer that holds the samples currently shown in the plot:
# a matrix with `length` rows and one column per channel
plotdata = np.zeros((length, len(channels)))
print("plotdata shape: ", plotdata.shape)

fig, ax = plt.subplots(figsize=(8, 4))
ax.set_title("PyShine")
# matplotlib.lines.Line2D plot items, drawn in green (R,G,B = 0,1,0.29)
lines = ax.plot(plotdata, color=(0, 1, 0.29))

# The audio callback puts the incoming data into the queue
def audio_callback(indata, frames, time, status):
    q.put(indata[::downsample, [0]])

# This function takes audio blocks from the queue and updates the lines
def update_plot(frame):
    global plotdata
    while True:
        try:
            data = q.get_nowait()
        except queue.Empty:
            break
        shift = len(data)
        # elements that roll beyond the last position are re-introduced
        plotdata = np.roll(plotdata, -shift, axis=0)
        plotdata[-shift:, :] = data
    for column, line in enumerate(lines):
        line.set_ydata(plotdata[:, column])
    return lines

ax.set_facecolor((0, 0, 0))
ax.set_yticks([0])
ax.yaxis.grid(True)

""" INPUT FROM MIC """
stream = sd.InputStream(device=device, channels=max(channels),
                        samplerate=samplerate, callback=audio_callback)

""" OUTPUT """
ani = FuncAnimation(fig, update_plot, interval=interval, blit=True)
with stream:
    plt.show()
Save this file as voice.py to a folder (let's say AUDIO). Then cd to the AUDIO folder from the terminal and execute it using:
python3 voice.py
or
python voice.py
depending on your Python environment name.
By using print(sd.query_devices()), I see a list of devices as below:
Microsoft Sound Mapper - Input, MME (2 in, 0 out)
Microphone (AudioHubNano2D_V1.5, MME (2 in, 0 out)
Internal Microphone (Conexant S, MME (2 in, 0 out)
...
However, if I use device = 0, I still receive sound from the USB microphone, which is device number 1. Is it that, by default, all audio signals go to the Sound Mapper? That would mean that with device = 0 I get the audio signal from all inputs, and that if I just want audio input from one particular device, I need to choose its number x as device = x.
I have another question: is it possible to capture the audio signals from devices 1 and 2 in one application, but separately?
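One possible approach (a minimal sketch, not from the answer above): sounddevice lets you open one InputStream per device, each with its own callback and queue, so two microphones can be captured separately in one application. The device indices 1 and 2 below are assumptions based on the listing above and may differ on your machine:

import queue
import sounddevice as sd

q_usb = queue.Queue()       # blocks from the USB microphone
q_internal = queue.Queue()  # blocks from the internal microphone

def make_callback(q):
    def callback(indata, frames, time, status):
        q.put(indata.copy())  # keep a copy, the buffer is reused by PortAudio
    return callback

# device indices 1 and 2 match the listing above; adjust to your own machine
usb_stream = sd.InputStream(device=1, channels=1, samplerate=44100,
                            callback=make_callback(q_usb))
internal_stream = sd.InputStream(device=2, channels=1, samplerate=44100,
                                 callback=make_callback(q_internal))

with usb_stream, internal_stream:
    sd.sleep(5000)  # capture both devices for 5 seconds

print("USB blocks:", q_usb.qsize(), "internal blocks:", q_internal.qsize())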

calculating FFT in frames and writing to a file

I'm new to Python. I'm trying to get the FFT of an uploaded wav file and write the FFT of each frame on its own line of a text file (using GCP),
using scipy or librosa.
The frame rate I require is 30 fps, and the wav file has a 48k sample rate.
So my questions are:
how do I divide the samples of the whole wav file into the samples of each frame?
how do I add empty samples to make the length of each frame a power of 2 (as 48000/30 = 1600, add 448 empty samples to make it 2048)?
how do I normalize the resulting FFT array to [-1, 1]?
You can use pyaudio with a callback to achieve what you are doing.
import pyaudio
import wave
import time
import struct
import sys
import numpy as np

if len(sys.argv) < 2:
    print("Plays a wave file.\n\nUsage: %s filename.wav" % sys.argv[0])
    sys.exit(-1)

wf = wave.open(sys.argv[1], 'rb')

# instantiate PyAudio (1)
p = pyaudio.PyAudio()

def callback_test(data, frame_count, time_info, status):
    frame_count = 1024
    elm = wf.readframes(frame_count)        # read n frames
    da_i = np.frombuffer(elm, dtype='<i2')  # convert to little-endian int pairs
    da_fft = np.fft.rfft(da_i)              # fast Fourier transform for real values
    da_ifft = np.fft.irfft(da_fft)          # inverse fast Fourier transform for real values
    da_i = da_ifft.astype('<i2')            # convert back to little-endian int pairs
    da_m = da_i.tobytes()                   # convert to bytes
    return (da_m, pyaudio.paContinue)

# open stream using callback (3)
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                channels=wf.getnchannels(),
                rate=wf.getframerate(),  # sampling frequency
                output=True,
                stream_callback=callback_test)

# start the stream (4)
stream.start_stream()

# wait for stream to finish (5)
while stream.is_active():
    time.sleep(0.1)

# stop stream (6)
stream.stop_stream()
stream.close()
wf.close()

# close PyAudio (7)
p.terminate()
Please refer to these links for further study:
https://people.csail.mit.edu/hubert/pyaudio/docs/#example-callback-mode-audio-i-o
and
Python change pitch of wav file
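The callback above transforms and plays the file back, but it doesn't cover the framing, padding and normalization steps asked in the question. Here is a minimal sketch of those three steps (assuming a mono 48 kHz wav read with scipy.io.wavfile; the file names are placeholders):

import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read("input.wav")   # assumed mono, 48000 Hz
samples = samples.astype(np.float64)
frame_len = rate // 30                      # 1600 samples per video frame at 30 fps
fft_len = 2048                              # next power of two; 448 zeros of padding

with open("fft_frames.txt", "w") as out:
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        padded = np.zeros(fft_len)
        padded[:frame_len] = frame          # zero-pad 1600 -> 2048
        magnitude = np.abs(np.fft.rfft(padded))
        peak = magnitude.max()
        # magnitudes are non-negative, so peak scaling lands in [0, 1];
        # map with 2*x - 1 afterwards if you literally need [-1, 1]
        normalized = magnitude / peak if peak > 0 else magnitude
        out.write(" ".join(f"{v:.6f}" for v in normalized) + "\n")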

Pyaudio recording wav file with soundcard gives an empty recording

I've been trying to work on a project to detect the time shift between two streaming audio signals. I'm working with Python 3 and PyAudio, and I'm using a Motux828 sound card with a Neumann KU-100 microphone, which takes a stereo input. When I check my input_device_index, I have the correct one, which is the 4th one, connected to the MOTU sound card.
However, when I record with:
import time
import pyaudio
import wave

CHUNK = 1024 * 3  # Chunk is the bytes which are currently processed
FORMAT = pyaudio.paInt16
RATE = 44100
RECORD_SECONDS = 2
WAVE_OUTPUT = "temp.wav"

p = pyaudio.PyAudio()
stream = p.open(format=FORMAT, channels=2, rate=RATE, input=True,
                frames_per_buffer=CHUNK, input_device_index=4)

frames = []  # np array storing all the data
for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
    data = stream1.read(CHUNK)
    frames.append(data1)

stream.stop_stream()
stream.close()
p.terminate()

wavef = wave.open(WAVE_OUTPUT, 'wb')  # opening the file
wavef.setnchannels(1)
wavef.setsampwidth(p.get_sample_size(FORMAT))
wavef.setframerate(RATE)
wavef.writeframes(b''.join(frames1))  # writing the data to be saved
wavef.close()
I get a wave file with no sound and almost no noise (naturally).
I can also record with 3rd-party software using this specific microphone, and that works completely fine.
NOTE:
The sound card normally has 24-bit depth; I also tried paInt24, which records a wave file of pure noise.
I think you used the wrong variable names, as far as I can see from your code. The wrong lines are:
data = stream1.read(CHUNK)
frames.append(data1)
wavef.writeframes(b''.join(frames1))
the correct lines are:
data = stream.read(CHUNK)
frames.append(data)
wavef.writeframes(b''.join(frames))

How to handle in_data in Pyaudio callback mode?

I'm doing a project on signal processing in Python. So far I've had a little success with the non-blocking mode, but it gave a considerable amount of delay and clipping in the output.
I want to implement a simple real-time audio filter using PyAudio and scipy.signal, but in the callback function provided in the PyAudio example, I can't process in_data when I read it. I've tried converting it in various ways, but with no success.
Here's the code of what I want to achieve (read data from the mic, filter it, and output it as soon as possible):
import pyaudio
import time
import numpy as np
import scipy.signal as signal

WIDTH = 2
CHANNELS = 2
RATE = 44100

p = pyaudio.PyAudio()
b, a = signal.iirdesign(0.03, 0.07, 5, 40)
fulldata = np.array([])

def callback(in_data, frame_count, time_info, status):
    data = signal.lfilter(b, a, in_data)
    return (data, pyaudio.paContinue)

stream = p.open(format=pyaudio.paFloat32,
                channels=CHANNELS,
                rate=RATE,
                output=True,
                input=True,
                stream_callback=callback)

stream.start_stream()

while stream.is_active():
    time.sleep(5)

stream.stop_stream()
stream.close()
p.terminate()
What is the right way to do this?
I found the answer to my question in the meantime; the callback looks like this:
def callback(in_data, frame_count, time_info, flag):
    global b, a, fulldata  # global variables for filter coefficients and array
    audio_data = np.frombuffer(in_data, dtype=np.float32)
    # do whatever with the data; in my case I want to hear my data filtered in real time
    audio_data = signal.filtfilt(b, a, audio_data, padlen=200).astype(np.float32).tobytes()
    fulldata = np.append(fulldata, audio_data)  # saves the filtered data in an array
    return (audio_data, pyaudio.paContinue)
I had a similar issue trying to work with the PyAudio callback mode, but my requirements were:
Working with stereo output (2 channels).
Processing in real time.
Processing the input signal with an arbitrary impulse response that could change in the middle of the process.
I succeeded after a few tries, and here are fragments of my code (based on the PyAudio example found here):
import time

import pyaudio
import scipy.signal as ss
import numpy as np
import librosa

track1_data, track1_rate = librosa.load('path/to/wav/track1', sr=44.1e3, dtype=np.float64)
track2_data, track2_rate = librosa.load('path/to/wav/track2', sr=44.1e3, dtype=np.float64)
track3_data, track3_rate = librosa.load('path/to/wav/track3', sr=44.1e3, dtype=np.float64)

# instantiate PyAudio (1)
p = pyaudio.PyAudio()

count = 0
IR_left = first_IR_left    # Replace for actual IR
IR_right = first_IR_right  # Replace for actual IR

# define callback (2)
def callback(in_data, frame_count, time_info, status):
    global count

    # slice the next frame out of each track
    track1_frame = track1_data[frame_count*count : frame_count*(count+1)]
    track2_frame = track2_data[frame_count*count : frame_count*(count+1)]
    track3_frame = track3_data[frame_count*count : frame_count*(count+1)]

    # convolve each frame with the left and right impulse responses
    track1_left = ss.fftconvolve(track1_frame, IR_left)
    track1_right = ss.fftconvolve(track1_frame, IR_right)
    track2_left = ss.fftconvolve(track2_frame, IR_left)
    track2_right = ss.fftconvolve(track2_frame, IR_right)
    track3_left = ss.fftconvolve(track3_frame, IR_left)
    track3_right = ss.fftconvolve(track3_frame, IR_right)

    # mix the three tracks
    track_left = 1/3 * track1_left + 1/3 * track2_left + 1/3 * track3_left
    track_right = 1/3 * track1_right + 1/3 * track2_right + 1/3 * track3_right

    # interleave left and right samples into one output buffer
    ret_data = np.empty((track_left.size + track_right.size), dtype=track1_left.dtype)
    ret_data[1::2] = track_left
    ret_data[0::2] = track_right
    ret_data = ret_data.astype(np.float32).tobytes()

    count += 1
    return (ret_data, pyaudio.paContinue)

# open stream using callback (3)
stream = p.open(format=pyaudio.paFloat32,
                channels=2,
                rate=int(track1_rate),
                output=True,
                stream_callback=callback,
                frames_per_buffer=2**16)

# start the stream (4)
stream.start_stream()

# wait for stream to finish (5), cycling through the impulse responses
while_count = 0
while stream.is_active():
    while_count += 1
    if while_count % 3 == 0:
        IR_left = first_IR_left     # Replace for actual IR
        IR_right = first_IR_right   # Replace for actual IR
    elif while_count % 3 == 1:
        IR_left = second_IR_left    # Replace for actual IR
        IR_right = second_IR_right  # Replace for actual IR
    elif while_count % 3 == 2:
        IR_left = third_IR_left     # Replace for actual IR
        IR_right = third_IR_right   # Replace for actual IR
    time.sleep(10)

# stop stream (6)
stream.stop_stream()
stream.close()

# close PyAudio (7)
p.terminate()
Here are some important reflections about the code above:
Working with librosa instead of wave allows me to use numpy arrays for processing which is much better than the chunks of data from wave.readframes.
The data type you set in p.open(format= must match the format of the ret_data bytes. And PyAudio works with float32 at most.
Even index bytes in ret_data go to the right headphone, and odd index bytes go to the left one.
Just to clarify, this code sends the mix of three tracks to the output audio in stereo, and every 10 seconds it changes the impulse response and thus the filter being applied.
I used this for testing a 3D audio app I'm developing, so the impulse responses were Head-Related Impulse Responses (HRIRs) that changed the position of the sound every 10 seconds.
EDIT:
This code had a problem: the output had noise at a frequency corresponding to the frame size (higher frequency when the frames were smaller). I fixed that by manually doing an overlap-add of the frames. Basically, ss.oaconvolve returned an array of size track_frame.size + IR.size - 1, so I split that array into the first track_frame.size elements (which were then used for ret_data) and saved the last IR.size - 1 elements for later. Those saved elements are then added to the first IR.size - 1 elements of the next frame; the first frame adds zeros.
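For reference, here is a minimal sketch of that overlap-add bookkeeping (assuming 1-D numpy arrays and frames at least as long as the impulse response; the helper name is arbitrary and not from the original answer):

import numpy as np
import scipy.signal as ss

def make_overlap_add_convolver(ir):
    """Convolve successive frames with `ir`, carrying the tail into the next frame."""
    tail = np.zeros(len(ir) - 1)

    def process(frame):
        nonlocal tail
        full = ss.fftconvolve(frame, ir)   # length: len(frame) + len(ir) - 1
        out = full[:len(frame)].copy()
        out[:len(tail)] += tail            # add the tail saved from the previous frame
        tail = full[len(frame):]           # save the new tail for the next call
        return out

    return process

# usage sketch: one convolver per ear, applied to each frame inside the callback
# convolve_left = make_overlap_add_convolver(IR_left)
# track1_left = convolve_left(track1_frame)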
