PyAudio audio recording in Python

I am trying to record audio from the microphone with Python, and I have the following code:
import pyaudio
import wave
import threading

FORMAT = pyaudio.paInt16
CHANNELS = 2
RATE = 44100
CHUNK = 1024
WAVE_OUTPUT_FILENAME = "file.wav"

stop_ = False

audio = pyaudio.PyAudio()
stream = audio.open(format=FORMAT, channels=CHANNELS,
                    rate=RATE, input=True,
                    frames_per_buffer=CHUNK)

def stop():
    global stop_
    while True:
        if not input('Press Enter >>>'):
            print('exit')
            stop_ = True

t = threading.Thread(target=stop, daemon=True).start()

frames = []
while True:
    data = stream.read(CHUNK)
    frames.append(data)
    if stop_:
        break

stream.stop_stream()
stream.close()
audio.terminate()

waveFile = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
waveFile.setnchannels(CHANNELS)
waveFile.setsampwidth(audio.get_sample_size(FORMAT))
waveFile.setframerate(RATE)
waveFile.writeframes(b''.join(frames))
waveFile.close()
My code seems to work fine, but when I play my recording I don't hear any sound in the final output file (file.wav).
Why does this happen, and how do I fix it?

Your code is working fine. The problem you are facing is due to microphone access permissions, not the code itself: the recorded audio contains constant zeros, which is why you can't hear any sound in the generated wav file. I suppose your microphone device is installed and working properly. If you are not sure about the microphone's status, check it as follows for your operating system:
macOS:
System Preferences -> Sound -> Input, where you can watch the level bars move as you make some sound. Make sure the selected device type is Built-in.
Windows:
Open the Sound settings and test the microphone by clicking "Listen to this device". You may want to uncheck it afterwards, because it loops your voice back to the speakers and can create loud feedback.
Most probably you are using macOS. I had a similar issue because I was using the Atom editor to run the Python code. Try running your code from the macOS Terminal (or PowerShell if you are using Windows); in case a popup asking for microphone access appears on macOS, press OK. That's it! Your code will record fine. As a test, please run the code below to check that you can visualize the sound, and make sure to run it from the terminal (no editors or IDEs).
import queue
import sys

from matplotlib.animation import FuncAnimation
import matplotlib.pyplot as plt
import numpy as np
import sounddevice as sd

# Let's define the audio variables.
# We will use the default PC or laptop mic as the sound input.
device = 0       # id of the audio device, 0 by default
window = 1000    # window for the data, in ms
downsample = 1   # how many samples to drop
channels = [1]   # a list of audio channels
interval = 30    # update interval for the plot, in milliseconds

# Make a queue to pass data from the audio callback to the plot.
q = queue.Queue()

# Please note that sd.query_devices has an s at the end.
device_info = sd.query_devices(device, 'input')
samplerate = device_info['default_samplerate']
length = int(window * samplerate / (1000 * downsample))
print("Sample Rate: ", samplerate)
# A typical sample rate is 44100 Hz.

# Now we need a variable to hold the samples.
plotdata = np.zeros((length, len(channels)))
print("plotdata shape: ", plotdata.shape)
# So it's a matrix with 44100 rows and 1 column.

# Next, make the figure and axis with matplotlib.
fig, ax = plt.subplots(figsize=(8, 4))
ax.set_title("PyShine")
# Make a matplotlib.lines.Line2D plot item of color green (R,G,B = 0,1,0.29).
lines = ax.plot(plotdata, color=(0, 1, 0.29))

# We use an audio callback function to put the data in the queue.
def audio_callback(indata, frames, time, status):
    q.put(indata[::downsample, [0]])

# This function takes frames of audio samples from the queue
# and updates the plotted lines.
def update_plot(frame):
    global plotdata
    while True:
        try:
            data = q.get_nowait()
        except queue.Empty:
            break
        shift = len(data)
        # Elements that roll beyond the last position are re-introduced.
        plotdata = np.roll(plotdata, -shift, axis=0)
        plotdata[-shift:, :] = data
    for column, line in enumerate(lines):
        line.set_ydata(plotdata[:, column])
    return lines

ax.set_facecolor((0, 0, 0))
# Add the grid.
ax.set_yticks([0])
ax.yaxis.grid(True)

""" INPUT FROM MIC """
stream = sd.InputStream(device=device, channels=max(channels),
                        samplerate=samplerate, callback=audio_callback)

""" OUTPUT """
ani = FuncAnimation(fig, update_plot, interval=interval, blit=True)
with stream:
    plt.show()
Save this file as voice.py to a folder (let's say AUDIO). Then cd to the AUDIO folder from the terminal and execute it using:
python3 voice.py
or
python voice.py
depending on your Python environment.
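As an extra check on the original symptom, here is a short sketch (assuming the recording was saved as file.wav with paInt16 samples, as in the question) that prints the peak amplitude of the file; a peak of 0 means pure silence, i.e. the permission problem described above:

import wave
import numpy as np

with wave.open("file.wav", "rb") as wf:  # the file recorded by the code in the question
    raw = wf.readframes(wf.getnframes())

# paInt16 recordings are little-endian signed 16-bit samples.
samples = np.frombuffer(raw, dtype=np.int16)
print("peak amplitude:", np.abs(samples).max() if samples.size else 0)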

By using print(sd.query_devices()), I see a list of devices as below:
Microsoft Sound Mapper - Input, MME (2 in, 0 out)
Microphone (AudioHubNano2D_V1.5, MME (2 in, 0 out)
Internal Microphone (Conexant S, MME (2 in, 0 out)
...
However, if I use device = 0, I still receive sound from the USB microphone, which is device number 1. Does all the audio signal go to the Sound Mapper by default? That would mean that with device = 0 I get the audio signal from all audio inputs, and that if I want the input from one particular device only, I need to choose its number x as device = x.
I have another question: is it possible to capture the audio signals from devices 1 and 2 in one application, but separately?
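As a sketch of that second part (assuming sounddevice, with hypothetical device indices 1 and 2): you can open one InputStream per device, each with its own callback and queue, so the two signals stay separate:

import queue
import sounddevice as sd

q1, q2 = queue.Queue(), queue.Queue()

def callback1(indata, frames, time, status):
    q1.put(indata.copy())  # samples from device 1 only

def callback2(indata, frames, time, status):
    q2.put(indata.copy())  # samples from device 2 only

# Hypothetical device indices; pick yours from print(sd.query_devices()).
stream1 = sd.InputStream(device=1, channels=1, samplerate=44100, callback=callback1)
stream2 = sd.InputStream(device=2, channels=1, samplerate=44100, callback=callback2)

with stream1, stream2:
    sd.sleep(5000)  # capture both devices, separately, for 5 seconds

print("device 1 chunks:", q1.qsize(), "device 2 chunks:", q2.qsize())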

Related

How can I detect a specific sound from a live application?

Is there a way to collect live audio from an application (like Chrome or Mozilla) and execute some code
when a specific sound is played on a website?
If you have a mic on whatever device you are using, you can use it to read whatever sound is coming out of your computer. Then you can compare the audio frames you are recording against a sound file of the sound you are looking for.
Of course, this leaves it very vulnerable to background noise, so you will somehow have to filter that out.
Here is an example using the PyAudio and wave libraries:
import pyaudio
import wave

wf = wave.open("websitSound.wav", "rb")
amountFrames = 100  # just an arbitrary number; could be anything
sframes = wf.readframes(amountFrames)
currentSoundFrame = 0

chunk = 1024  # Record in chunks of 1024 samples
sample_format = pyaudio.paInt16  # 16 bits per sample
channels = 2
fs = 44100  # Record at 44100 samples per second
seconds = 3

p = pyaudio.PyAudio()  # Create an interface to PortAudio
stream = p.open(format=sample_format,
                channels=channels,
                rate=fs,
                frames_per_buffer=chunk,
                input=True)

frames = []  # missing in the original snippet; holds the recorded chunks
# Store data in chunks for 3 seconds
for i in range(0, int(fs / chunk * seconds)):
    data = stream.read(chunk)
    # Note: this compares a whole chunk of bytes to a single byte of the
    # reference, so it will rarely match; see the sketch below the code.
    if data == sframes[currentSoundFrame]:
        currentSoundFrame += 1
        if currentSoundFrame == len(sframes):  # the whole sound was played
            print("Sound was played!")
    frames.append(data)

# Stop and close the stream
stream.stop_stream()
stream.close()
# Terminate the PortAudio interface
p.terminate()
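Byte-for-byte equality between live chunks and the reference file will almost never fire in practice. A more robust direction (just a sketch, using a similarity score instead of byte comparison; the 0.7 threshold and the one-second window are assumptions) is to decode both signals into numpy arrays and compare them numerically:

import numpy as np
import pyaudio
import wave

# Reference sound; assumed to be a mono 16-bit wav.
wf = wave.open("websitSound.wav", "rb")
ref = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16).astype(np.float64)
ref = ref[:wf.getframerate()]  # compare against at most one second of it

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1,
                rate=wf.getframerate(), input=True)

for _ in range(100):  # listen for ~100 windows of the reference length
    live = np.frombuffer(stream.read(len(ref)), dtype=np.int16).astype(np.float64)
    # Cosine similarity between the live window and the reference;
    # 1.0 would be a perfect, time-aligned match.
    denom = np.linalg.norm(live) * np.linalg.norm(ref)
    score = abs(np.dot(live, ref)) / denom if denom else 0.0
    if score > 0.7:  # hypothetical threshold; tune for your signal and noise
        print("Sound detected, score:", round(score, 3))

stream.stop_stream()
stream.close()
p.terminate()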

How to change microphone data in Python sounddevice in real time

I play sound from my microphone with sounddevice (Python 3.6.3, Win 7) in real time.
I made my x variable global:
import sounddevice as sd

global x
for n in range(10):
    x = n + 1

def callback(indata, outdata, frames, time):
    outdata[:] = indata * x

with sd.Stream(device=(1, 3), samplerate=44100, dtype='float32',
               latency=None, channels=2, callback=callback):
    input()
I actually need to change the volume of the left channel independently from the right channel. Imagine it as "indata * x". I know how to do it with a numerical constant, but I don't know how to do it in real time in a loop; the data doesn't change. Maybe I must stop the stream after one loop, but I don't know how. I'm a newbie. I'd like to avoid functions or map if possible. Thanks for understanding :)
I also wanted to use PyAudio, but I don't know how to access the left and right channel data so I can change the volume of each channel:
import pyaudio

CHUNK = 1024
WIDTH = 2
CHANNELS = 2
RATE = 44100

p = pyaudio.PyAudio()
stream = p.open(format=p.get_format_from_width(WIDTH),
                channels=CHANNELS,
                rate=RATE,
                input=True,
                output=True,
                frames_per_buffer=CHUNK)

data = []
for i in range(1):
    data.append(stream.read(CHUNK))
sound = [bytes(data[0])]
stream.write(sound.pop(0), CHUNK)
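As a sketch of the idea being asked about (not from the original thread; the full-duplex sounddevice Stream and the gain values are assumptions): indata arrives in the callback as a NumPy array of shape (frames, channels), so each column can be scaled by its own gain variable, and the callback picks up new values set by the main thread in real time:

import sounddevice as sd

left_gain, right_gain = 1.0, 0.5  # hypothetical starting volumes

def callback(indata, outdata, frames, time, status):
    outdata[:, 0] = indata[:, 0] * left_gain   # scale the left channel
    outdata[:, 1] = indata[:, 1] * right_gain  # scale the right channel

with sd.Stream(samplerate=44100, channels=2, dtype='float32',
               callback=callback):
    # Rebinding the globals here changes the volume mid-stream,
    # with no need to stop and restart it; stop with Ctrl+C.
    while True:
        left_gain = float(input('left gain > '))
        right_gain = float(input('right gain > '))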

Pyaudio recording wav file with soundcard gives an empty recording

I've been trying to work on a project to detect the time shift between two streaming audio signals. I'm working with Python 3 and PyAudio, and I'm using a Motux828 sound card with a Neumann KU-100 microphone, which takes a stereo input. When I check my input_device_index, I have the correct one: the 4th one, connected to the MOTU soundcard.
However, when I record with:
import time
import pyaudio
import wave

CHUNK = 1024 * 3  # Chunk is the bytes which are currently processed
FORMAT = pyaudio.paInt16
RATE = 44100
RECORD_SECONDS = 2
WAVE_OUTPUT = "temp.wav"

p = pyaudio.PyAudio()
stream = p.open(format=FORMAT, channels=2, rate=RATE, input=True,
                frames_per_buffer=CHUNK, input_device_index=4)

frames = []  # np array storing all the data

for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
    data = stream1.read(CHUNK)
    frames.append(data1)

stream.stop_stream()
stream.close()
p.terminate()

wavef = wave.open(WAVE_OUTPUT, 'wb')  # opening the file
wavef.setnchannels(1)
wavef.setsampwidth(p.get_sample_size(FORMAT))
wavef.setframerate(RATE)
wavef.writeframes(b''.join(frames1))  # writing the data to be saved
wavef.close()
I get a wave file with no sound, and (naturally) almost no noise.
I can also record with third-party software using this specific microphone, and that works completely fine.
NOTE:
The sound card is normally 24-bit depth. I also tried paInt24, but that records a wave file with pure noise.
I think you used some wrong variable names; looking at your code, the wrong variables are:
data = stream1.read(CHUNK)
frames.append(data1)
wavef.writeframes(b''.join(frames1))
The correct lines are:
data = stream.read(CHUNK)
frames.append(data)
wavef.writeframes(b''.join(frames))

Play multiple sounds at the same time in python

I have been looking into ways to play sounds from a list of samples, and I found some modules that can do this.
I am using the AudioLazy module to play the sound, using the following script:
from audiolazy import AudioIO

sound = Somelist
with AudioIO(True) as player:
    player.play(sound, rate=44100)
The problem with this code is that it stops the whole application until the sound stops playing, and I can't play multiple sounds at the same time.
My program is interactive, so I want to be able to play multiple sounds at the same time. For instance, I could run this script to play a 5-second sound, and then 2 seconds in, play another 5-second sound.
And I don't want the whole program to stop until the sounds finish playing.
Here is a simpler solution using pydub.
Using the overlay function of the AudioSegment module, you can very easily superimpose multiple audio segments on top of each other.
Here is working code to combine three audio files. Using the same concept, you can combine any number of audio files.
More on the overlay function here.
pydub supports multiple audio formats as well.
from pydub import AudioSegment
from pydub.playback import play
audio1 = AudioSegment.from_file("chunk1.wav") #your first audio file
audio2 = AudioSegment.from_file("chunk2.wav") #your second audio file
audio3 = AudioSegment.from_file("chunk3.wav") #your third audio file
mixed = audio1.overlay(audio2) #combine , superimpose audio files
mixed1 = mixed.overlay(audio3) #Further combine , superimpose audio files
#If you need to save mixed file
mixed1.export("mixed.wav", format='wav') #export mixed audio file
play(mixed1) #play mixed audio file
Here are updates as per our discussions.
First we create a 44.1 kHz signal and save it to sound.wav.
Next we read the wave file and save the signal to a text file (test.txt).
Then we create three variations of the input signal to test overlay; the original signal has dtype int16.
Then we create three audio segments and mix/overlay them as above.
Working modified code:
import numpy as np
from scipy.io.wavfile import read
from pydub import AudioSegment
from pydub.playback import play
import wave, struct, math

# Create a 44.1 kHz signal and save it to 'sound.wav'
sampleRate = 44100.0  # hertz
duration = 1.0        # seconds
frequency = 440.0     # hertz

wavef = wave.open('sound.wav', 'wb')
wavef.setnchannels(1)  # mono
wavef.setsampwidth(2)
wavef.setframerate(sampleRate)
for i in range(int(duration * sampleRate)):
    value = int(32767.0 * math.cos(frequency * math.pi * float(i) / float(sampleRate)))
    data = struct.pack('<h', value)
    wavef.writeframesraw(data)
wavef.writeframes(b'')
wavef.close()

# Read the wave file and save the signal to a text file
rate, signal = read("sound.wav")
np.savetxt('test.txt', signal, delimiter=',')

# Load wav data from the text file
wavedata1 = np.loadtxt("test.txt", comments="#", delimiter=",", unpack=False, dtype=np.int16)
# Create a variation of the signal
wavedata2 = np.loadtxt("test.txt", comments="#", delimiter=",", unpack=False, dtype=np.int32)
# Create another variation of the signal
wavedata3 = np.loadtxt("test.txt", comments="#", delimiter=",", unpack=False, dtype=np.float16)

# Create the first audio segment
audio_segment1 = AudioSegment(
    wavedata1.tobytes(),
    frame_rate=rate,
    sample_width=2,
    channels=1
)
# Create the second audio segment
audio_segment2 = AudioSegment(
    wavedata2.tobytes(),
    frame_rate=rate,
    sample_width=2,
    channels=1
)
# Create the third audio segment
audio_segment3 = AudioSegment(
    wavedata3.tobytes(),
    frame_rate=rate,
    sample_width=2,
    channels=1
)

# Play audio (requires ffplay, or pyaudio):
play(audio_segment1)
play(audio_segment2)
play(audio_segment3)

# Mix the three audio segments
mixed1 = audio_segment1.overlay(audio_segment2)  # combine / superimpose audio files
mixed2 = mixed1.overlay(audio_segment3)          # further combine / superimpose
# If you need to save the mixed file:
mixed2.export("mixed.wav", format='wav')  # export mixed audio file
play(mixed2)  # play mixed audio file
Using multiple threads will solve your problem. Each sound gets its own player in its own thread, so starting a second sound doesn't wait for the first one to finish:

import threading
from audiolazy import AudioIO

sound = Somelist

def play_sound(sound):
    # Each thread opens its own player and keeps it alive until playback ends.
    with AudioIO(True) as player:
        player.play(sound, rate=44100)

t = threading.Thread(target=play_sound, args=(sound,))
t.start()
I suggest using PyAudio to do this.

import numpy
import pyaudio
import wave

sound1 = wave.open("/path/to/sound1", 'rb')
sound2 = wave.open("/path/to/sound2", 'rb')

def callback(in_data, frame_count, time_info, status):
    data1 = sound1.readframes(frame_count)
    data2 = sound2.readframes(frame_count)
    decodeddata1 = numpy.frombuffer(data1, numpy.int16)
    decodeddata2 = numpy.frombuffer(data2, numpy.int16)
    # Average the two signals sample by sample; trim to the shorter one
    # so the mix still works when one file runs out first.
    n = min(len(decodeddata1), len(decodeddata2))
    newdata = (decodeddata1[:n] * 0.5 + decodeddata2[:n] * 0.5).astype(numpy.int16)
    return (newdata.tobytes(), pyaudio.paContinue)
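To actually hear the mix, the callback needs to be wired into an output stream. A minimal sketch of that wiring (assuming both wav files share the sample rate and channel count of sound1):

import time

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16,
                channels=sound1.getnchannels(),
                rate=sound1.getframerate(),
                output=True,
                stream_callback=callback)

stream.start_stream()
while stream.is_active():
    time.sleep(0.1)  # the callback mixes and plays both sounds in the background

stream.close()
p.terminate()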

How to handle in_data in Pyaudio callback mode?

I'm doing a project on signal processing in Python. So far I've had a little success with the non-blocking mode, but it gave a considerable amount of delay and clipping in the output.
I want to implement a simple real-time audio filter using PyAudio and scipy.signal, but in the callback function provided in the PyAudio example, I can't process in_data when I read it. I've tried converting it in various ways, but with no success.
Here is the code for what I want to achieve (read data from the mic, filter it, and output it ASAP):
import pyaudio
import time
import numpy as np
import scipy.signal as signal

WIDTH = 2
CHANNELS = 2
RATE = 44100

p = pyaudio.PyAudio()
b, a = signal.iirdesign(0.03, 0.07, 5, 40)
fulldata = np.array([])

def callback(in_data, frame_count, time_info, status):
    data = signal.lfilter(b, a, in_data)
    return (data, pyaudio.paContinue)

stream = p.open(format=pyaudio.paFloat32,
                channels=CHANNELS,
                rate=RATE,
                output=True,
                input=True,
                stream_callback=callback)

stream.start_stream()
while stream.is_active():
    time.sleep(5)

stream.stop_stream()
stream.close()
p.terminate()
What is the right way to do this?
Found the answer to my question in the meantime; the callback looks like this:

def callback(in_data, frame_count, time_info, flag):
    global b, a, fulldata  # global variables for the filter coefficients and the array
    audio_data = np.frombuffer(in_data, dtype=np.float32)
    # Do whatever with the data; in my case I want to hear my data filtered in real time
    filtered = signal.filtfilt(b, a, audio_data, padlen=200).astype(np.float32)
    fulldata = np.append(fulldata, filtered)  # saves the filtered data in an array
    return (filtered.tobytes(), pyaudio.paContinue)
I had a similar issue trying to work with the PyAudio callback mode, but my requirements were:
Working with stereo output (2 channels).
Processing in real time.
Processing the input signal using an arbitrary impulse response that could change in the middle of the process.
I succeeded after a few tries; here are fragments of my code (based on the PyAudio example found here):
import time

import pyaudio
import scipy.signal as ss
import numpy as np
import librosa

track1_data, track1_rate = librosa.load('path/to/wav/track1', sr=44.1e3, dtype=np.float64)
track2_data, track2_rate = librosa.load('path/to/wav/track2', sr=44.1e3, dtype=np.float64)
track3_data, track3_rate = librosa.load('path/to/wav/track3', sr=44.1e3, dtype=np.float64)

# instantiate PyAudio (1)
p = pyaudio.PyAudio()

count = 0
IR_left = first_IR_left    # Replace for actual IR
IR_right = first_IR_right  # Replace for actual IR

# define callback (2)
def callback(in_data, frame_count, time_info, status):
    global count

    track1_frame = track1_data[frame_count*count : frame_count*(count+1)]
    track2_frame = track2_data[frame_count*count : frame_count*(count+1)]
    track3_frame = track3_data[frame_count*count : frame_count*(count+1)]

    # Convolve each track's frame with the impulse response for each ear
    track1_left = ss.fftconvolve(track1_frame, IR_left)
    track1_right = ss.fftconvolve(track1_frame, IR_right)
    track2_left = ss.fftconvolve(track2_frame, IR_left)
    track2_right = ss.fftconvolve(track2_frame, IR_right)
    track3_left = ss.fftconvolve(track3_frame, IR_left)
    track3_right = ss.fftconvolve(track3_frame, IR_right)

    # Mix the three tracks with equal weights
    track_left = 1/3 * track1_left + 1/3 * track2_left + 1/3 * track3_left
    track_right = 1/3 * track1_right + 1/3 * track2_right + 1/3 * track3_right

    # Interleave the two channels into one output buffer
    ret_data = np.empty((track_left.size + track_right.size), dtype=track1_left.dtype)
    ret_data[1::2] = track_left
    ret_data[0::2] = track_right
    ret_data = ret_data.astype(np.float32).tobytes()
    count += 1
    return (ret_data, pyaudio.paContinue)

# open stream using callback (3)
stream = p.open(format=pyaudio.paFloat32,
                channels=2,
                rate=int(track1_rate),
                output=True,
                stream_callback=callback,
                frames_per_buffer=2**16)

# start the stream (4)
stream.start_stream()

# wait for stream to finish (5), cycling the impulse response every 10 seconds
while_count = 0
while stream.is_active():
    while_count += 1
    if while_count % 3 == 0:
        IR_left = first_IR_left     # Replace for actual IR
        IR_right = first_IR_right   # Replace for actual IR
    elif while_count % 3 == 1:
        IR_left = second_IR_left    # Replace for actual IR
        IR_right = second_IR_right  # Replace for actual IR
    elif while_count % 3 == 2:
        IR_left = third_IR_left     # Replace for actual IR
        IR_right = third_IR_right   # Replace for actual IR
    time.sleep(10)

# stop stream (6)
stream.stop_stream()
stream.close()

# close PyAudio (7)
p.terminate()
Here are some important notes about the code above:
Working with librosa instead of wave allows me to use numpy arrays for processing, which is much better than the chunks of data from wave.readframes.
The data type you set in p.open(format=...) must match the format of the ret_data bytes, and PyAudio works with float32 at most.
Even-index samples in ret_data go to the right headphone, and odd-index samples go to the left one.
Just to clarify, this code sends the mix of three tracks to the audio output in stereo, and every 10 seconds it changes the impulse response and thus the filter being applied.
I used this to test a 3D audio app I'm developing, so the impulse responses were Head Related Impulse Responses (HRIRs), which changed the position of the sound every 10 seconds.
EDIT:
This code had a problem: the output had noise at a frequency corresponding to the size of the frames (a higher frequency for smaller frame sizes). I fixed that by manually doing an overlap-and-add of the frames. Basically, ss.oaconvolve returns an array of size track_frame.size + IR.size - 1, so I separated that array into its first track_frame.size elements (which were then used for ret_data), and saved the last IR.size - 1 elements for later. Those saved elements are then added to the first IR.size - 1 elements of the next frame. The first frame adds zeros.
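A minimal sketch of that overlap-and-add bookkeeping as described (the helper name convolve_frame and the module-level tail variable are illustrative, not from the original code; it assumes IR.size - 1 <= frame.size):

import numpy as np
import scipy.signal as ss

tail = None  # convolution tail carried over from the previous frame

def convolve_frame(frame, IR):
    # Convolve one frame with an impulse response, overlap-adding the tails.
    global tail
    full = ss.fftconvolve(frame, IR)  # length: frame.size + IR.size - 1
    out = full[:frame.size].copy()    # the part that belongs to this frame
    if tail is not None:
        out[:tail.size] += tail       # add the tail saved from the previous frame
    tail = full[frame.size:]          # save the last IR.size - 1 samples for the next frame
    return out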
