I've been tinkering around with pyaudio for a while now, trying to reverse a simple wave file with no success.
My theory was that I only have to iterate through the file from end to beginning, and on every pyaudio callback (1024 frames) fetch the audio data at the corresponding index in the file, reverse the resulting byte string, and play it.
Here is my code (only pyaudio callback and file handling, the rest is untouched from the example code):
import pyaudio
import wave
import time
import sys
if len(sys.argv) < 2:
    print("Plays a wave file.\n\nUsage: %s filename.wav" % sys.argv[0])
    sys.exit(-1)

index = 40*1024

wf = wave.open(sys.argv[1], 'rb')
wf.setpos(index)
p = pyaudio.PyAudio()

def callback(in_data, frame_count, time_info, status):
    global index
    data = wf.readframes(frame_count)
    data = data[::-1]
    index -= 1024
    wf.setpos(index)
    return (data, pyaudio.paContinue)
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                channels=wf.getnchannels(),
                rate=wf.getframerate(),
                output=True,
                stream_callback=callback)
stream.start_stream()
while stream.is_active():
    time.sleep(0.1)
stream.stop_stream()
stream.close()
wf.close()
p.terminate()
I know this will crash when it reaches the beginning of the file, but it should play 40 × 1024 frames of reversed audio before that...
If the file you want reversed is small enough to fit in memory, your best bet is to load it entirely, reverse the data, and then stream it:
import pyaudio
import wave
wavefile_name = 'wavefile.wav'
wf = wave.open(wavefile_name, 'rb')
p = pyaudio.PyAudio()
stream = p.open(format =
p.get_format_from_width(wf.getsampwidth()),
channels = wf.getnchannels(),
rate = wf.getframerate(),
output = True)
full_data = []
data = wf.readframes(1024)
while data:
    full_data.append(data)
    data = wf.readframes(1024)
data = b''.join(full_data)[::-1]  # wave returns bytes, so join with b''
for i in range(0, len(data), 1024):
    stream.write(data[i:i+1024])
However, if the file is too big to fit in memory, you'll need some way of reading the file backwards and feeding it into the sound system. That's an entirely different problem, because it involves some lower-level I/O programming to handle the backward reading.
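As a sketch of what that backward reading could look like (this helper is my own illustration, not tested against the asker's setup): seek progressively earlier with wave.setpos() and reverse each chunk frame-by-frame, so multi-byte samples are not byte-swapped:

```python
import wave

def reversed_chunks(path, chunk_frames=1024):
    """Yield the audio in `path` back-to-front, one chunk at a time,
    with the frames inside each chunk reversed as well."""
    wf = wave.open(path, 'rb')
    frame_size = wf.getsampwidth() * wf.getnchannels()  # bytes per frame
    pos = wf.getnframes()
    try:
        while pos > 0:
            n = min(chunk_frames, pos)
            pos -= n
            wf.setpos(pos)
            data = wf.readframes(n)
            # reverse frame-by-frame so multi-byte samples stay intact
            frames = [data[i:i + frame_size]
                      for i in range(0, len(data), frame_size)]
            yield b''.join(reversed(frames))
    finally:
        wf.close()
```

Each yielded chunk could then be handed to stream.write() or returned from a callback, and only one chunk lives in memory at a time.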
Edit: after seeing your full code, I don't see any errors. The code runs properly on my machine, failing only at the end of playback. However, you can do two things. First, seek to the end of the wave file so the entire sound plays backwards:
wf = wave.open(sys.argv[1], 'rb')
index = wf.getnframes() - 1024
wf.setpos(index)
Second, you have to modify the callback so it doesn't fail when the seek head goes beyond the beginning of the file:
def callback(in_data, frame_count, time_info, status):
    global index
    data = wf.readframes(frame_count)
    data = data[::-1]
    index -= 1024
    if index < 0:
        return (data, pyaudio.paAbort)
    else:
        wf.setpos(max(index, 0))
        return (data, pyaudio.paContinue)
Other than that, it works pretty ok.
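One caveat with the code above: data[::-1] reverses individual bytes, so for 16-bit (or wider) samples it also swaps the bytes inside each sample, which distorts the audio. A small helper (my own, as an illustration) that reverses whole frames instead:

```python
def reverse_frames(data, sample_width, num_channels):
    """Reverse audio frame-by-frame so multi-byte samples stay intact."""
    frame_size = sample_width * num_channels
    # walk the buffer from the last frame to the first
    return b''.join(data[i:i + frame_size]
                    for i in range(len(data) - frame_size, -1, -frame_size))
```

In the callback you would call reverse_frames(data, wf.getsampwidth(), wf.getnchannels()) instead of data[::-1].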
I'm not knowledgeable in programming, but in following along with the wire and wire-callback examples on the documentation website I am struggling to figure out how to access the stream.read(CHUNK) data in callback mode so it can be processed by audioop.rms() from the audioop library.
Below are two slightly modified examples. The former is the wire method in blocking mode, where the data is processed by the rms function successfully. The latter is the wire method in non-blocking mode, where I do not know how to access the same data.
"""
PyAudio Example: Make a wire between input and output (i.e., record a
few samples and play them back immediately).
"""
import pyaudio
import audioop # new
CHUNK = 1024
WIDTH = 2
CHANNELS = 2
RATE = 44100
RECORD_SECONDS = 5
p = pyaudio.PyAudio()
stream = p.open(format=p.get_format_from_width(WIDTH),
                channels=CHANNELS,
                rate=RATE,
                input=True,
                output=True,
                frames_per_buffer=CHUNK)
print("* recording")
while True:  # <-----------------------------------------
    data = stream.read(CHUNK)
    stream.write(data, CHUNK)
    rms = audioop.rms(data, WIDTH)  # new
    print(rms)  # compute audio power
print("* done")
stream.stop_stream()
stream.close()
p.terminate()
non-blocking
"""
PyAudio Example: Make a wire between input and output (i.e., record a
few samples and play them back immediately).
This is the callback (non-blocking) version.
"""
import pyaudio
import time
import audioop # new
WIDTH = 2
CHANNELS = 2
RATE = 44100
p = pyaudio.PyAudio()
def callback(in_data, frame_count, time_info, status):
    return in_data, pyaudio.paContinue
stream = p.open(format=p.get_format_from_width(WIDTH),
                channels=CHANNELS,
                rate=RATE,
                input=True,
                output=True,
                stream_callback=callback)
stream.start_stream()
while stream.is_active():  # <--------------------------------------------
    time.sleep(0.1)
    # data = stream.read(1024)  # the docs say not to call this
    data = stream.get_read_available()  # not sure what to do
    print(data)  # new
    rms = audioop.rms(data, WIDTH)  # compute audio power
    print(rms)  # new
stream.stop_stream()
stream.close()
p.terminate()
I see this is from almost a year ago, but it may help others. You don't need (and in fact, as you discovered, can't use) stream.read() when using non-blocking mode with a callback. The callback function is automatically invoked on a separate thread whenever the stream you opened has data to input or output.
Since you want to do something with each chunk (compute the rms value), you need to do that inside the callback function as it processes each chunk. However, if you try to do too much, it can't process it all before the next chunk is ready, and you lose data. File writes and print statements are therefore not something you should do in the callback. (Think of it like an interrupt handler: only do what's essential). So in the working code below, I just compute the rms in the callback function, and make it a global variable so that you can access it from the main program for printing.
I didn't try to compute the audio time for each chunk you're passing to the callback, so by keeping the time.sleep() value at 0.1 it's possible that the code below is missing intermediate rms values.
"""
PyAudio Example: Make a wire between input and output (i.e., record a
few samples and play them back immediately).
This is the callback (non-blocking) version.
"""
import pyaudio
import time
import audioop # new
WIDTH = 2
CHANNELS = 2
RATE = 44100
p = pyaudio.PyAudio()
rms = None
def callback(in_data, frame_count, time_info, status):
    # print(in_data)  # takes too long in callback
    global rms
    rms = audioop.rms(in_data, WIDTH)  # compute audio power
    # print(rms)  # takes too long in callback
    return in_data, pyaudio.paContinue
stream = p.open(format=p.get_format_from_width(WIDTH),
                channels=CHANNELS,
                rate=RATE,
                input=True,
                output=True,
                stream_callback=callback)
stream.start_stream()
while stream.is_active():  # <--------------------------------------------
    print(rms)  # may be losing some values if sleeping too long, didn't check
    time.sleep(0.1)
stream.stop_stream()
stream.close()
p.terminate()
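If you do need every chunk's value rather than just the latest, a thread-safe queue is a common alternative to the global: the callback only enqueues, and the main thread drains at its leisure. A minimal sketch, with the rms computation done in pure Python (audioop was removed in Python 3.13) and a stand-in return flag so the snippet runs without pyaudio installed:

```python
import math
import queue
import struct

WIDTH = 2  # bytes per sample, matching paInt16
chunks = queue.Queue()  # thread-safe handoff out of the audio callback

def callback(in_data, frame_count, time_info, status):
    # only enqueue inside the callback; Queue.put is cheap and thread-safe
    chunks.put(in_data)
    return in_data, 0  # 0 stands in for pyaudio.paContinue here

def rms(data, width=WIDTH):
    """Pure-Python stand-in for audioop.rms (signed little-endian samples)."""
    n = len(data) // width
    samples = struct.unpack('<%dh' % n, data)
    return int(math.sqrt(sum(s * s for s in samples) / n))
```

The main loop then calls chunks.get() (optionally with a timeout) instead of polling a shared global, so no values are lost between sleeps.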
I want to record an audio track and save it to 2 different .wav files. The tracks should be saved with a delay of about 6 seconds, and each .wav should be 12 seconds long.
I tried to do it with multiprocessing and pyaudio, but I can't manage to get it working.
Please note that I am a beginner in Python and that this is my first post on Stack Overflow!
from multiprocessing import Process

def func1():
    # Record and save a 12 seconds long .wav
    ...

def func2():
    # Record and save a 12 seconds long .wav
    ...

if __name__ == '__main__':
    p1 = Process(target=func1)
    p1.start()
    p2 = Process(target=func2)  # should start 6 seconds after func1
    p2.start()
    p1.join()
    p2.join()
I would expect a data structure like this:
|---1.wav---|---1.wav---|---1.wav---|
      |---2.wav---|---2.wav---|---2.wav---|
0sec  6sec  12sec 18sec 24sec 30sec 36sec 42sec
EDIT:
I came up with a bit of code that seems to work reasonably well. It has a delay of about .144 seconds. I'm happy to hear improvements to this code. It uses threading instead of multiprocessing.
import pyaudio
import wave
from threading import Thread
import time
from datetime import datetime
FORMAT = pyaudio.paInt16
CHANNELS = 2
RATE = 44100
CHUNK = 1024
CHUNK1 = 1024
RECORD_SECONDS = 12
WAVE_OUTPUT_FILENAME1 = name = "outputs/output_1"+datetime.now().strftime("%m:%d:%Y-")
WAVE_OUTPUT_FILENAME2 = name = "outputs/output_2"+datetime.now().strftime("%m:%d:%Y-")
def func1():
    while 1 == 1:
        global FORMAT
        global CHANNELS
        global RATE
        global CHUNK
        global RECORD_SECONDS
        global WAVE_OUTPUT_FILENAME1
        WAVE_OUTPUT_FILENAME1 = name = "outputs/output1_"  # +datetime.now().strftime("%m:%d:%Y-")
        audio = pyaudio.PyAudio()
        stream = audio.open(format=FORMAT, channels=CHANNELS,
                            rate=RATE, input=True,
                            frames_per_buffer=CHUNK)
        print("recording...")
        frames = []
        WAVE_OUTPUT_FILENAME1 = WAVE_OUTPUT_FILENAME1 + datetime.now().strftime("%H;%M;%S.%f--")
        for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
            data = stream.read(CHUNK)
            frames.append(data)
        WAVE_OUTPUT_FILENAME1 = WAVE_OUTPUT_FILENAME1 + datetime.now().strftime("%H;%M;%S.%f") + ".wav"
        print("finished recording")
        # stop Recording
        stream.stop_stream()
        stream.close()
        audio.terminate()
        waveFile = wave.open(WAVE_OUTPUT_FILENAME1, 'wb')
        waveFile.setnchannels(CHANNELS)
        waveFile.setsampwidth(audio.get_sample_size(FORMAT))
        waveFile.setframerate(RATE)
        waveFile.writeframes(b''.join(frames))
        waveFile.close()
def func2():
    time.sleep(6)
    while 1 == 1:
        global FORMAT
        global CHANNELS
        global RATE
        global CHUNK1
        global RECORD_SECONDS
        global WAVE_OUTPUT_FILENAME2
        WAVE_OUTPUT_FILENAME2 = name = "outputs/output2_"  # +datetime.now().strftime("%m:%d:%Y-")
        audio = pyaudio.PyAudio()
        stream = audio.open(format=FORMAT, channels=CHANNELS,
                            rate=RATE, input=True,
                            frames_per_buffer=CHUNK1)
        print("recording...")
        frames = []
        WAVE_OUTPUT_FILENAME2 = WAVE_OUTPUT_FILENAME2 + datetime.now().strftime("%H;%M;%S.%f--")
        for i in range(0, int(RATE / CHUNK1 * RECORD_SECONDS)):
            data = stream.read(CHUNK1)
            frames.append(data)
        WAVE_OUTPUT_FILENAME2 = WAVE_OUTPUT_FILENAME2 + datetime.now().strftime("%H;%M;%S.%f") + ".wav"
        print("finished recording")
        # stop Recording
        stream.stop_stream()
        stream.close()
        audio.terminate()
        waveFile = wave.open(WAVE_OUTPUT_FILENAME2, 'wb')
        waveFile.setnchannels(CHANNELS)
        waveFile.setsampwidth(audio.get_sample_size(FORMAT))
        waveFile.setframerate(RATE)
        waveFile.writeframes(b''.join(frames))
        waveFile.close()
if __name__ == '__main__':
    Thread(target=func1).start()
    Thread(target=func2).start()
Why do you think you need multiprocessing? I think it just complicates things.
How about just recording in 6-second (or smaller) chunks/frames and writing the correct frames to each file?
I've got a bit carried away, and written a nice class to do this:
import pyaudio
import wave
import time
class OverlappedRecorder:
    def __init__(
        self, secs_per_file, secs_between_file, *,
        num_channels=2, sample_rate=48000,
        sample_format=pyaudio.paInt16,
    ):
        # various constants needed later
        self.num_channels = num_channels
        self.sample_width = pyaudio.get_sample_size(sample_format)
        self.sample_rate = sample_rate
        self.frames_between_start = int(secs_between_file * sample_rate)
        self.frames_per_file = int(secs_per_file * sample_rate)
        # mutable state needed to keep everything going
        self.files = []
        self.frames_till_next_file = 0
        self.pa = pyaudio.PyAudio()
        self.stream = self.pa.open(
            format=sample_format, channels=num_channels,
            rate=sample_rate, frames_per_buffer=1024,
            input=True, start=False,
            stream_callback=self._callback,
        )

    def sleep_while_active(self):
        while self.stream.is_active():
            time.sleep(0.2)

    def begin_wave_file(self):
        "internal function to start a new WAV file"
        path = time.strftime(
            'recording-%Y-%m-%d-%H.%M.%S.wav',
            time.localtime()
        )
        file = wave.open(path, 'wb')
        file.setnchannels(self.num_channels)
        file.setsampwidth(self.sample_width)
        file.setframerate(self.sample_rate)
        self.files.append(file)

    # context manager stuff, recording starts when entered using "with"
    def __enter__(self):
        self.stream.start_stream()
        return self

    # exiting shuts everything down
    def __exit__(self, exc_type, exc_val, exc_tb):
        self.stream.stop_stream()
        self.stream.close()
        self.pa.terminate()
        for file in self.files:
            file.close()

    # called by pyaudio when a new set of frames are ready
    def _callback(self, data, frame_count, time_info, status):
        self.frames_till_next_file -= frame_count
        # see if we need to start a new file
        if self.frames_till_next_file < 0:
            self.frames_till_next_file += self.frames_between_start
            self.begin_wave_file()
        # can't remove from lists while iterating
        # keep a list of files to close and remove later
        done = []
        for file in self.files:
            remain = self.frames_per_file - file.getnframes()
            # add appropriate amount of data to all open files
            if frame_count < remain:
                file.writeframesraw(data)
            else:
                remain *= self.sample_width * self.num_channels
                file.writeframesraw(data[:remain])
                done.append(file)
        # close anything that finished
        for file in done:
            file.close()
            self.files.remove(file)
        # tell pyaudio to keep going
        return (None, pyaudio.paContinue)
Basic usage: create an object, enter it using with and it'll start recording; when you exit the with block it'll stop and clean up.
rec = OverlappedRecorder(12, 6)
with rec:
    time.sleep(30)
will let it run for 30 seconds, or you could do:
with OverlappedRecorder(12, 6) as rec:
    rec.sleep_while_active()
to let it run until you hit Ctrl+C to kill the program, or you could put a call to input() in there to make it stop when you press enter, or whatever else you like.
a few comments on the code you posted:
you only need to declare global variables if you're going to modify them
why do you have separate functions? why not have a single function, and just delay start()ing the second Thread?
why are you setting WAVE_OUTPUT_FILENAME1 so many times? just save the start_time and end_time, then format the string in one go
you don't have to read() in chunks, if you know it's going to fit in memory just read everything all in one go
you shouldn't need to keep starting and stopping recording, just open it once in each thread and if you're lucky samples will accumulate in the buffer while you're writing the wav file to disk
something like:
import pyaudio
import wave
import time
from datetime import datetime
from threading import Thread
FORMAT = pyaudio.paInt16
CHANNELS = 2
RATE = 44100
RECORD_SECONDS = 12
def recorder(prefix):
    audio = pyaudio.PyAudio()
    stream = audio.open(
        format=FORMAT, channels=CHANNELS,
        rate=RATE, input=True,
    )
    try:
        while True:
            start_time = datetime.now()
            print("recording started", start_time)
            data = stream.read(RATE * RECORD_SECONDS, False)
            end_time = datetime.now()
            print("finished", end_time)
            name = f'{prefix}{start_time:%Y-%m-%d-%H-%M-%S.%f}-{end_time:%H-%M-%S.%f}.wav'
            print("writing", name)
            with wave.open(name, 'wb') as waveFile:
                waveFile.setnchannels(CHANNELS)
                waveFile.setsampwidth(audio.get_sample_size(FORMAT))
                waveFile.setframerate(RATE)
                waveFile.writeframes(data)
    finally:
        stream.stop_stream()
        stream.close()
        audio.terminate()
if __name__ == '__main__':
    Thread(target=recorder, args=('outputs/output_1-',)).start()
    time.sleep(6)
    Thread(target=recorder, args=('outputs/output_2-',)).start()
a few differences:
the version using threading is much less code!
my version allows an arbitrary number of files without using up multiple OS threads for each file (there's the Python thread and pyaudio has an internal thread looking after audio buffers)
my version saves partial files
hope all this helps / makes sense!
I'm trying to create a simple app that loads wav files (one for each note of a keyboard) and plays specific ones when a MIDI note is pressed (or played). So far, I've created a MIDI input stream using mido and an audio stream using pyaudio in two separate threads. The goal is for the MIDI stream to update the currently playing notes, and for the pyaudio stream callback to check which notes are active and play them. The MIDI stream works fine, but my audio stream only seems to call the callback once, right when the script is started (print(notes)). Any idea how I can get the audio stream callback to run continuously?
import wave
from io import BytesIO
import os
from mido import MidiFile
import pyaudio
from time import sleep
from threading import Thread
import numpy
# Pipe: active, released
# Rank: many pipes
# Stop: one or more ranks
# Manual: multiple ranks
# Organ: multiple manuals
pipes = []
notes = []
p = pyaudio.PyAudio()
def mapRange(num, inMin, inMax, outMin, outMax):
    return int((num - inMin) * (outMax - outMin) / (inMax - inMin) + outMin)

def callback(in_data, frame_count, time_info, status):
    data = bytes(frame_count)
    print(notes)
    for note in notes:
        pipedata = bytes()
        if len(data) != 0:
            data1 = numpy.fromstring(data, numpy.int16)
            data2 = numpy.fromstring(note['sample'].readframes(frame_count), numpy.int16)
            pipedata = (data1 * 0.5 + data2 * 0.5).astype(numpy.int16)
        else:
            data2 = numpy.fromstring(note['sample'].readframes(frame_count), numpy.int16)
            pipedata = data2.astype(numpy.int16)
        data = pipedata.tostring()
    return (data, pyaudio.paContinue)
stream = p.open(format=pyaudio.paInt24,
                channels=2,
                rate=48000,
                output=True,
                stream_callback=callback,
                start=True)
# start the stream (4)
stream.start_stream()
for root, dirs, files in os.walk("samples"):
    for filename in files:
        file_on_disk = open(os.path.join(root, filename), 'rb')
        pipes.append(
            {"sample": wave.open(BytesIO(file_on_disk.read()), 'rb')})

for msg in MidiFile('test.mid').play():
    if msg.type == "note_on":
        notes.append(pipes[mapRange(msg.note, 36, 96, 0, 56)])
        print("on")
    if msg.type == "note_off":
        # notes[mapRange(msg.note, 36, 96, 0, 56)] = False
        print("off")

# wait for stream to finish (5)
while stream.is_active():
    sleep(0.1)
# stop stream (6)
stream.stop_stream()
stream.close()
# close PyAudio (7)
p.terminate()
I too faced this issue and found this question while hoping for an answer; I ended up figuring it out myself.
The data returned by the callback must match the number of frames (the frames_per_buffer parameter in p.open). I see you didn't specify one, so the default of 1024 applies.
The catch is that frames_per_buffer counts frames, not bytes, and a frame holds one sample per channel. Since you specify the format as pyaudio.paInt24, one sample is 3 bytes (24 / 8), and with 2 channels one frame is 6 bytes. So your callback should return 1024 × 6 = 6144 bytes, or it will not be called again.
If you were using blocking mode and not writing full buffers to stream.write(), it would result in a weird effect of slow and crackling audio.
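The byte arithmetic can be written down explicitly. This helper is my own (not a PyAudio API): since a PortAudio frame holds one sample per channel, the callback must return frames × channels × bytes-per-sample bytes.

```python
def expected_callback_bytes(frames_per_buffer, sample_bits, channels):
    """Bytes a PyAudio output callback must return per invocation."""
    bytes_per_sample = sample_bits // 8
    return frames_per_buffer * bytes_per_sample * channels

# paInt24 with 2 channels and the default 1024 frames per buffer:
print(expected_callback_bytes(1024, 24, 2))  # 6144
```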
I want to play input data from a microphone without a buffering delay. I tried, but there is buffering. Here is my code.
import pyaudio
import wave
import urllib.request
import struct
import numpy as np
import sounddevice as sd
import matplotlib.pyplot as plt
# Callback function---------------------------------
def callback(indata, outdata, frames, time, status):
    # if status:
    #     print(status)
    outdata[:] = indata
#---------------------------------------------------
# Parameters ----------------------------------------------
Window_Size = 22050 # Point
FORMAT_D = pyaudio.paFloat32; FORMAT_W = pyaudio.paInt32
CHANNELS = 1 # Mono
Sample_Rate = 22050 # Hz
dT = 1/Sample_Rate
RECORD_SECONDS = 20 # s
NOFFRAMES = int(Sample_Rate/Window_Size * RECORD_SECONDS)
WAVE_OUTPUT_FILENAME = "output.wav"
#-----------------------------------------------------------
p = pyaudio.PyAudio()
stream_D = p.open(format=FORMAT_D,
                  channels=CHANNELS,
                  rate=Sample_Rate,
                  input=True,
                  frames_per_buffer=Window_Size)
stream_W = p.open(format=FORMAT_W,
                  channels=CHANNELS,
                  rate=Sample_Rate,
                  input=True,
                  frames_per_buffer=Window_Size)
print("* recording")
frames = []
# "I think the problem appears from here"------------------------------
for i in range(0, int(Sample_Rate/Window_Size * RECORD_SECONDS)):
    data_D = stream_D.read(Window_Size)
    # data_W = stream_W.read(Window_Size)
    decoded = np.fromstring(data_D, 'Float32')
    # np.savetxt(str(i)+'ttt.txt', transform)
    sd.play(decoded, 22050)
    # frames.append(data_W)
#-------------------------------------------------------
print("* done recording")
stream_D.stop_stream()
stream_D.close()
p.terminate()
#plt.plot(transform)
#plt.show()
# Save as a wave file---------------------------
#wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
#wf.setnchannels(CHANNELS)
#wf.setsampwidth(p.get_sample_size(FORMAT_W))
#wf.setframerate(Sample_Rate)
#wf.writeframes(b''.join(frames))
#wf.close()
#-------------------------------------------
This code saves input data from the microphone at 1 s intervals, converts the byte data to a numpy array (np.fromstring()), and plays the data through the speaker (sd.play()). It works, but there is a hiccup every time the for loop starts again. I want the microphone audio to play smoothly. When I asked before, someone recommended using a callback function, so I added one, but I don't know how to use it. How do I get rid of the buffering? Are there any examples? Should I use threads or multiprocessing?
The delay is due to the buffer size ... you will get a negligible delay using a 1k buffer, as per:
# Window_Size = 22050 # Point
Window_Size = 1024 # Point
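The arithmetic behind that advice: a blocking read returns only once a full buffer has been captured, so each buffer adds frames/rate seconds of delay. A small helper to illustrate (the name is mine, for illustration only):

```python
def buffer_latency(frames_per_buffer, sample_rate):
    """Seconds of audio one buffer holds, i.e. the delay a blocking read adds."""
    return frames_per_buffer / sample_rate

print(buffer_latency(22050, 22050))  # 1.0 -> the one-second stutter in the question
print(buffer_latency(1024, 22050))   # ~0.046 s with the 1k buffer
```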
I'm trying to plot sound in real time in Python. I need to get chunks from my microphone.
Using PyAudio, I try the following:
import pyaudio
import wave
import sys
chunk = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 44100
RECORD_SECONDS = 5
WAVE_OUTPUT_FILENAME = "output.wav"
p = pyaudio.PyAudio()
stream = p.open(format = FORMAT,
                channels = CHANNELS,
                rate = RATE,
                input = True,
                frames_per_buffer = chunk)
print "* recording"
all = []
for i in range(0, RATE / chunk * RECORD_SECONDS):
    data = stream.read(chunk)
    all.append(data)
print "* done recording"
stream.close()
p.terminate()
Afterwards, I get the following error:
* recording
Traceback (most recent call last):
File "gg.py", line 23, in <module>
data = stream.read(chunk)
File "/usr/lib64/python2.7/site-packages/pyaudio.py", line 564, in read
return pa.read_stream(self._stream, num_frames)
IOError: [Errno Input overflowed] -9981
I can't understand this buffer. I want to use blocking IO mode, so if chunks are not available, I want to wait for them. But when I create a try/except segment or sleep(0.1), I hear clicks, so this is not what I want.
Can you suggest the best solution for my problem?
pyaudio.Stream.read() has a keyword parameter exception_on_overflow; set it to False.
For your sample code, that would look like:
import pyaudio
import wave
import sys
chunk = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 44100
RECORD_SECONDS = 5
WAVE_OUTPUT_FILENAME = "output.wav"
p = pyaudio.PyAudio()
stream = p.open(format = FORMAT,
                channels = CHANNELS,
                rate = RATE,
                input = True,
                frames_per_buffer = chunk)
print "* recording"
all = []
for i in range(0, RATE / chunk * RECORD_SECONDS):
    data = stream.read(chunk, exception_on_overflow = False)
    all.append(data)
print "* done recording"
stream.close()
p.terminate()
See the PyAudio documentation for more details.
It seems like a lot of people are encountering this issue. I dug a bit into it and I think it means that between the previous call to stream.read() and this current call, data from the stream was lost (i.e. the buffer filled up faster than you cleared it).
From the doc for Pa_ReadStream() (the PortAudio function that stream.read() eventually ends up calling):
@return On success PaNoError will be returned, or PaInputOverflowed if
input data was discarded by PortAudio after the previous call and
before this call.
(PaInputOverflowed then causes an IOError in the pyaudio wrapper).
If it's OK for you to not capture every single frame, then you may ignore this error. If it's absolutely critical for you to have every frame, then you'll need to find a way to increase the priority of your application. I'm not familiar enough with Python to know a pythonic way to do this, but it's worth trying a simple nice command, or changing the scheduling policy to SCHED_DEADLINE.
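As a sketch of the nice approach from Python itself (my own helper, for illustration): os.nice() adjusts the process niceness, and lowering it (negative delta) raises scheduling priority, which on Linux requires root or CAP_SYS_NICE, so a fallback is needed:

```python
import os

def raise_priority(delta=-10):
    """Try to lower our niceness (raise priority); fall back gracefully.
    Returns the resulting niceness either way."""
    try:
        return os.nice(delta)  # negative delta needs privileges on Linux
    except PermissionError:
        return os.nice(0)  # no-op: just report the current niceness
```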
Edit:
One issue right now is that when IOError is thrown, you lose all the frames collected in that call. To instead ignore the overflow and just return what we have, you can apply the patch below, which will cause stream.read() to ignore output underrun and input overflow errors from PortAudio (but still throw something if a different error occurred). A better way would be to make this behaviour (throw/no throw) customizable depending on your needs.
diff --git a/src/_portaudiomodule.c b/src/_portaudiomodule.c
index a8f053d..0878e74 100644
--- a/src/_portaudiomodule.c
+++ b/src/_portaudiomodule.c
@@ -2484,15 +2484,15 @@ pa_read_stream(PyObject *self, PyObject *args)
} else {
/* clean up */
_cleanup_Stream_object(streamObject);
+
+ /* free the string buffer */
+ Py_XDECREF(rv);
+
+ PyErr_SetObject(PyExc_IOError,
+ Py_BuildValue("(s,i)",
+ Pa_GetErrorText(err), err));
+ return NULL;
}
-
- /* free the string buffer */
- Py_XDECREF(rv);
-
- PyErr_SetObject(PyExc_IOError,
- Py_BuildValue("(s,i)",
- Pa_GetErrorText(err), err));
- return NULL;
}
return rv;
I got the same error when I ran your code. I looked at the default sample rate of my default audio device (my MacBook's internal microphone): it was 48000 Hz, not 44100 Hz.
p.get_device_info_by_index(0)['defaultSampleRate']
Out[12]: 48000.0
When I changed RATE to this value, it worked.
I worked on this on OS X 10.10. I got the same error while trying to get audio from the microphone in a SYBA USB card (C-Media chipset) and process it in real time with FFTs and more:
IOError: [Errno Input overflowed] -9981
The overflow was completely solved by using callback mode instead of blocking mode, as written by libbkmz (https://www.python.org/dev/peps/pep-0263/).
Based on that, the bit of the working code looked like this:
"""
Creating the audio stream from our mic
"""
rate = 48000
self.chunk = 2**12
width = 2
p = pyaudio.PyAudio()

# callback function to stream audio, another thread.
def callback(in_data, frame_count, time_info, status):
    self.audio = numpy.fromstring(in_data, dtype=numpy.int16)
    return (self.audio, pyaudio.paContinue)

# create a pyaudio object
self.inStream = p.open(format=p.get_format_from_width(width, unsigned=False),
                       channels=1,
                       rate=rate,
                       input=True,
                       frames_per_buffer=self.chunk,
                       stream_callback=callback)

"""
Setting up the array that will handle the timeseries of audio data from our input
"""
self.audio = numpy.empty((self.buffersize), dtype="int16")

self.inStream.start_stream()
while True:
    try:
        self.ANY_FUNCTION()  # any function to run parallel to the audio thread, running forever, until Ctrl+C is pressed.
    except KeyboardInterrupt:
        self.inStream.stop_stream()
        self.inStream.close()
        p.terminate()
        print("* Killed Process")
        quit()
This code creates a callback function, then creates a stream object, starts it, and loops in an arbitrary function. A separate thread streams the audio, and that stream is closed when the main loop is stopped. self.audio is available from any function. The thread also ran forever if not terminated.
Since pyaudio runs this stream in a separate thread, the audio stream stayed stable, whereas blocking mode might have been saturating depending on the speed or timing of the rest of the processes in the script.
Note that the chunk size is 2^12, but smaller chunks work just as well. There are other parameters I considered and played around with to make sure they all made sense:
Chunk size larger or smaller (no effect)
Number and format of bits for the words in the buffer, signed 16-bit in this case
Signedness of variables (tried unsigned and got saturation patterns)
Nature of mic input, and selection as default in the system, gain etc.
Hope that works for someone!
My other answer solved the problem in most cases. However sometimes the error still occurs.
That was the reason why I scrapped pyaudio and switched to pyalsaaudio. My Raspy now smoothly records any sound.
import alsaaudio
import numpy as np
import array
# constants
CHANNELS = 1
INFORMAT = alsaaudio.PCM_FORMAT_FLOAT_LE
RATE = 44100
FRAMESIZE = 1024
# set up audio input
recorder=alsaaudio.PCM(type=alsaaudio.PCM_CAPTURE)
recorder.setchannels(CHANNELS)
recorder.setrate(RATE)
recorder.setformat(INFORMAT)
recorder.setperiodsize(FRAMESIZE)
buffer = array.array('f')
while <some condition>:
    buffer.fromstring(recorder.read()[1])
data = np.array(buffer, dtype='f')
FORMAT = pyaudio.paInt16
Make sure to set the correct format, my internal microphone was set to 24 Bit (see Audio-Midi-Setup application).
I had the same issue on the really slow raspberry pi, but I was able to solve it (for most cases) by using the faster array module for storing the data.
import array
import pyaudio
FORMAT = pyaudio.paInt16
CHANNELS = 1
INPUT_CHANNEL = 2
RATE = 48000
CHUNK = 512
RECORD_SECONDS = 5  # was missing; needed by the loop below
p = pyaudio.PyAudio()
stream = p.open(format=FORMAT,
                channels=CHANNELS,
                rate=RATE,
                input=INPUT_CHANNEL,
                frames_per_buffer=CHUNK)
print("* recording")
try:
    data = array.array('h')
    for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
        data.fromstring(stream.read(CHUNK))
finally:
    stream.stop_stream()
    stream.close()
    p.terminate()

print("* done recording")
The content of data is binary afterwards, but you can use numpy.array(data, dtype='i') to get a numpy array of integers.
Instead of
chunk = 1024
use:
chunk = 4096
It worked for me on a USB microphone.
This was helpful for me:
input_ = stream.read(chunk, exception_on_overflow=False)
For me this helped: https://stackoverflow.com/a/46787874/5047984
I used multiprocessing to write the file in parallel to recording audio. This is my code:
recordAudioSamples.py
import pyaudio
import wave
import datetime
import signal
import ftplib
import sys
import os
# configuration for assos_listen
import config
# run the audio capture and send sound sample processes
# in parallel
from multiprocessing import Process
# CONFIG
CHUNK = config.chunkSize
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = config.samplingRate
RECORD_SECONDS = config.sampleLength
# HELPER FUNCTIONS
# write to ftp
def uploadFile(filename):
    print("start uploading file: " + filename)
    # connect to container
    ftp = ftplib.FTP(config.ftp_server_ip, config.username, config.password)
    # write file
    ftp.storbinary('STOR ' + filename, open(filename, 'rb'))
    # close connection
    ftp.quit()
    print("finished uploading: " + filename)

# write to sd-card
def storeFile(filename, frames):
    print("start writing file: " + filename)
    wf = wave.open(filename, 'wb')
    wf.setnchannels(CHANNELS)
    wf.setsampwidth(p.get_sample_size(FORMAT))
    wf.setframerate(RATE)
    wf.writeframes(b''.join(frames))
    wf.close()
    print(filename + " written")

# abort the sampling process
def signal_handler(signal, frame):
    print('You pressed Ctrl+C!')
    # close stream and pyAudio
    stream.stop_stream()
    stream.close()
    p.terminate()
    sys.exit(0)
# MAIN FUNCTION
def recordAudio(p, stream):
    sampleNumber = 0
    while (True):
        print("* recording")
        sampleNumber = sampleNumber + 1
        frames = []
        startDateTimeStr = datetime.datetime.now().strftime("%Y_%m_%d_%I_%M_%S_%f")
        for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
            data = stream.read(CHUNK)
            frames.append(data)
        fileName = str(config.sensorID) + "_" + startDateTimeStr + ".wav"
        # create a store process to write the file in parallel
        storeProcess = Process(target=storeFile, args=(fileName, frames))
        storeProcess.start()
        if (config.upload == True):
            # since waiting for the upload to finish will take some time
            # and we do not want to have gaps in our sample
            # we start the upload process in parallel
            print("start uploading...")
            uploadProcess = Process(target=uploadFile, args=(fileName,))
            uploadProcess.start()

# ENTRYPOINT FROM CONSOLE
if __name__ == '__main__':
    p = pyaudio.PyAudio()
    stream = p.open(format=FORMAT,
                    channels=CHANNELS,
                    rate=RATE,
                    input=True,
                    frames_per_buffer=CHUNK)
    # directory to write and read files from
    os.chdir(config.storagePath)
    # abort by pressing Ctrl+C
    signal.signal(signal.SIGINT, signal_handler)
    print('\n\n--------------------------\npress Ctrl+C to stop the recording')
    # start recording
    recordAudio(p, stream)
config.py
### configuration file for assos_listen
# upload
upload = False
# config for this sensor
sensorID = "al_01"
# sampling rate & chunk size
chunkSize = 8192
samplingRate = 44100 # 44100 needed for Aves sampling
# choices=[4000, 8000, 16000, 32000, 44100] :: default 16000
# sample length in seconds
sampleLength = 10
# configuration for assos_store container
ftp_server_ip = "192.168.0.157"
username = "sensor"
password = "sensor"
# storage on assos_listen device
storagePath = "/home/pi/assos_listen_pi/storage/"