Playing a Lot of Sounds at Once - python

I am attempting to create a program in Python that plays a particular harpsichord note when a certain key is pressed, and that stays responsive so you can keep playing more notes (like a normal electric piano). However, because the wav files the notes are stored in are about 7-10 seconds long, I am running into problems. I can press at least 10 keys per second, so over the duration of one note I could have around 100 different wav files playing at once. I tried winsound, but it was unable to play multiple wav files at once. I then moved on to PyAudio, and it kind of works. The only way I found to accomplish what I wanted was this:
from msvcrt import getch
import pyaudio
import wave
import multiprocessing as mp

#This function is just code for playing a sound in PyAudio
def playNote(filename):
    CHUNK = 1024
    wf = wave.open(filename, 'rb')
    p = pyaudio.PyAudio()
    stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                    channels=wf.getnchannels(),
                    rate=wf.getframerate(),
                    output=True)
    data = wf.readframes(CHUNK)
    while data:  #readframes returns an empty string/bytes at end of file
        stream.write(data)
        data = wf.readframes(CHUNK)
    stream.stop_stream()
    stream.close()
    p.terminate()

if __name__ == "__main__":
    while True:
        #read one keypress per loop iteration, then dispatch on it
        key = ord(getch())
        #If the 'a' key is pressed: start a new process that calls playNote
        #and pass in the file name for a note.
        #(raw strings keep the backslashes in the Windows paths literal)
        if key == 97:  #a
            mp.Process(target=playNote,
                       args=(r"F:\Project Harpsichord\The wavs\A1.wav",)).start()
        #If the 's' key is pressed: start a new process that calls playNote
        #and pass in the file name for another note.
        if key == 115:  #s
            mp.Process(target=playNote,
                       args=(r"F:\Project Harpsichord\The wavs\A0.wav",)).start()
Basically, whenever I want to play a new wav, I have to start a new process that runs the code in the playNote function. As I already stated, I can potentially have up to 100 of these playing at once, and suffice it to say, a hundred copies of the Python interpreter all running at once almost crashed my computer. I also tried a similar approach with multithreading, but had the same problems.
This post shows a way to mix multiple wav files together so they can be played at the same time, but since my program will not necessarily be starting the sounds at the same time I am unsure if this will work.
I need an efficient way to play multiple notes at the same time. Whether this comes in the form of another library, or even a different language, I really don't care.

I checked out pygame as J.F. Sebastian suggested, and it ended up being exactly what I needed. I used pygame.mixer.Sound() in conjunction with pygame.mixer.set_num_channels(). Here's what I came up with:
import pygame as pg
import time

pg.mixer.init()
pg.init()

#raw strings keep the backslashes in the Windows paths literal
a1Note = pg.mixer.Sound(r"F:\Project Harpsichord\The wavs\A1.wav")
a2Note = pg.mixer.Sound(r"F:\Project Harpsichord\The wavs\A0.wav")
pg.mixer.set_num_channels(50)

for i in range(25):
    a1Note.play()
    time.sleep(0.3)
    a2Note.play()
    time.sleep(0.3)
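For reference, here's a minimal sketch of wiring the same idea up to real key presses with pygame's event loop (the paths and key bindings mirror the snippet above; note that pygame needs a window before it will deliver keyboard events):

import pygame as pg

pg.mixer.init()
pg.init()
pg.mixer.set_num_channels(50)
screen = pg.display.set_mode((200, 200))  #a window is required to receive key events

#same placeholder paths as above - substitute your own note files
notes = {
    pg.K_a: pg.mixer.Sound(r"F:\Project Harpsichord\The wavs\A1.wav"),
    pg.K_s: pg.mixer.Sound(r"F:\Project Harpsichord\The wavs\A0.wav"),
}

running = True
while running:
    for event in pg.event.get():
        if event.type == pg.QUIT:
            running = False
        elif event.type == pg.KEYDOWN and event.key in notes:
            notes[event.key].play()  #each play() grabs a free mixer channel
pg.quit()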

This doesn't really solve your problem, but it's too long for the comments, and it may be useful. I gave it a bash, got defeated on a few fronts - giving up and going for pizza. Audio is really not my thing, but it was quite a lot of fun playing around with it.
Give Pydub a look. I've played around with a couple of methods, but haven't had any satisfactory success. This answer here explains quite a few things regarding adding two signals together nicely. I assume that the static you have is because of clipping.
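For reference, a minimal sketch of the headroom idea (my own illustration, not the linked answer's code): sum the two int16 signals in a wider dtype, then clip back into the int16 range, so the wrap-around that produces audible static can't happen.

import numpy as np

def mix_int16(a, b):
    """Mix two int16 signals of equal length without wrap-around."""
    mixed = a.astype(np.int32) + b.astype(np.int32)  #widen first so the sum can't overflow
    return np.clip(mixed, -32768, 32767).astype(np.int16)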
Sorry that I didn't deliver, but I may as well post all the things I've created in case you or someone else wants to grab something from it:
#using python 2.7
#example animal sounds from http://www.wavsource.com/animals/animals.htm
#note that those sounds have lots of different sampling rates and encoding types. Causes problems.
#required installs:
#numpy
#scipy
#matplotlib
#pyaudio -sudo apt-get install python-pyaudio
#pydub: -pip install pydub
def example():
    "example sounds and random inputs"
    sExampleSoundsDir = "/home/roman/All/Code/sound_files"
    sExampleFile1 = 'bird.wav'
    sExampleFile2 = 'frog.wav'
    oJ = Jurgenmeister(sExampleSoundsDir)

    #load audio into numpy array
    dSound1 = oJ.audio2array(sExampleFile1)
    dSound2 = oJ.audio2array(sExampleFile2)

    #Simply adding the arrays is noisy...
    dResSound1 = oJ.resample(dSound1)
    dResSound2 = oJ.resample(dSound2)
    dJoined = oJ.add_sounds(dResSound1, dResSound2)

    #pydub method
    oJ.overlay_sounds(sExampleFile1, sExampleFile2)

    #listen to the audio - mixed success with these sounds.
    oJ.play_array(dSound1)
    oJ.play_array(dSound2)
    oJ.play_array(dResSound1)
    oJ.play_array(dResSound2)
    oJ.play_array(dJoined)

    #see what the waveform looks like
    oJ.plot_audio(dJoined)
class Jurgenmeister:
    """
    Methods to play as many sounds on command as necessary
    Named in honour of OP, and it's as good a name as I can come up with myself.
    """

    def __init__(self, sSoundsDir):
        import os
        import random
        lAllSounds = os.listdir(sSoundsDir)
        self.sSoundsDir = sSoundsDir
        self.lAllSounds = lAllSounds
        self.sRandSoundName = lAllSounds[random.randint(0, len(lAllSounds)-1)]

    def play_wave(self, sFileName):
        """PyAudio play a wave file."""
        import pyaudio
        import wave
        iChunk = 1024
        sDir = "{}/{}".format(self.sSoundsDir, sFileName)
        oWave = wave.open(sDir, 'rb')
        oPyaudio = pyaudio.PyAudio()
        oStream = oPyaudio.open(
            format=oPyaudio.get_format_from_width(oWave.getsampwidth()),
            channels=oWave.getnchannels(),
            rate=oWave.getframerate(),
            output=True
        )
        sData = oWave.readframes(iChunk)
        while sData:  #readframes returns an empty string at end of file
            oStream.write(sData)
            sData = oWave.readframes(iChunk)
        oStream.stop_stream()
        oStream.close()
        oPyaudio.terminate()
    def audio2array(self, sFileName):
        """
        Returns monotone data for a wav audio file in the form:
        iSampleRate, aNumpySignalArray, aNumpyTimeArray
        Should perhaps do this with scipy again, but I threw that code away because I wanted
        to try the pyaudio package because of its streaming functions. They defeated me.
        """
        import wave
        import numpy as np
        sDir = "{}/{}".format(self.sSoundsDir, sFileName)
        oWave = wave.open(sDir, "rb")
        tParams = oWave.getparams()
        iSampleRate = tParams[2]  #frames per second
        iLen = tParams[3]  #number of frames
        #depending on the type of encoding of the file. Usually 16
        try:
            sSound = oWave.readframes(iLen)
            oWave.close()
            aSound = np.fromstring(sSound, np.int16)
        except ValueError:
            raise ValueError("""wave package seems to want all wav encodings to be in int16, else it throws a mysterious error.
            Short way around it: find audio encoded in the right format. Or use scipy.io.wavfile.
            """)
        aTime = np.array([float(i)/iSampleRate for i in range(len(aSound))])
        dRet = {
            'iSampleRate': iSampleRate,
            'aTime': aTime,
            'aSound': aSound,
            'tParams': tParams
        }
        return dRet
    def resample(self, dSound, iResampleRate=11025):
        """resample audio arrays
        common audio sample rates are 44100, 22050, 11025, 8000
        #creates very noisy results sometimes.
        """
        from scipy import interpolate
        import numpy as np
        aSound = np.array(dSound['aSound'])
        iOldRate = dSound['iSampleRate']
        iOldLen = len(aSound)
        rPeriod = float(iOldLen)/iOldRate
        iNewLen = int(rPeriod*iResampleRate)
        aTime = np.arange(0, rPeriod, 1.0/iOldRate)
        aTime = aTime[0:iOldLen]
        oInterp = interpolate.interp1d(aTime, aSound)
        aResTime = np.arange(0, aTime[-1], 1.0/iResampleRate)
        aResSound = oInterp(aResTime)
        aResSound = np.array(aResSound, np.int16)
        tParams = list(x for x in dSound['tParams'])
        tParams[2] = iResampleRate
        tParams[3] = iNewLen
        tParams = tuple(tParams)
        dResSound = {
            'iSampleRate': iResampleRate,
            'aTime': aResTime,
            'aSound': aResSound,
            'tParams': tParams
        }
        return dResSound
    def add_sounds(self, dSound1, dSound2):
        """join two sounds together and return a new array
        This method creates a lot of clipping. Not sure how to get around that.
        """
        if dSound1['iSampleRate'] != dSound2['iSampleRate']:
            raise ValueError('sample rates must be the same. Please resample first.')
        import numpy as np
        aSound1 = dSound1['aSound']
        aSound2 = dSound2['aSound']
        if len(aSound1) < len(aSound2):
            aRet = aSound2.copy()
            aRet[:len(aSound1)] += aSound1
            aTime = dSound2['aTime']
            tParams = dSound2['tParams']
        else:
            aRet = aSound1.copy()
            aRet[:len(aSound2)] += aSound2
            aTime = dSound1['aTime']
            tParams = dSound1['tParams']
        aRet = np.array(aRet, np.int16)
        dRet = {
            'iSampleRate': dSound1['iSampleRate'],
            'aTime': aTime,
            'aSound': aRet,
            'tParams': tParams
        }
        return dRet
    def overlay_sounds(self, sFileName1, sFileName2):
        """I think this method warrants a bit more exploration.
        Also very noisy."""
        from pydub import AudioSegment
        sDir1 = "{}/{}".format(self.sSoundsDir, sFileName1)
        sDir2 = "{}/{}".format(self.sSoundsDir, sFileName2)
        sound1 = AudioSegment.from_wav(sDir1)
        sound2 = AudioSegment.from_wav(sDir2)
        #mix sound2 with sound1, starting at 0ms into sound1
        output = sound1.overlay(sound2, position=0)
        #save the result
        sDir = "{}/{}".format(self.sSoundsDir, 'OUTPUT.wav')
        output.export(sDir, format="wav")
    def array2audio(self, dSound, sDir=None):
        """
        writes a .wav audio file to disk from an array
        """
        import struct
        import wave
        if sDir is None:
            sDir = "{}/{}".format(self.sSoundsDir, 'OUTPUT.wav')
        aSound = dSound['aSound']
        tParams = dSound['tParams']
        sSound = struct.pack('h'*len(aSound), *aSound)
        oWave = wave.open(sDir, "wb")
        oWave.setparams(tParams)
        oWave.writeframes(sSound)
        oWave.close()

    def play_array(self, dSound):
        """Tried to use pyaudio to play the array by just streaming it. It didn't behave, and I moved on.
        I'm just not getting the pyaudio stream to play without weird distortion
        when not loading from file. Perhaps you have more luck.
        """
        self.array2audio(dSound)
        self.play_wave('OUTPUT.wav')

    def plot_audio(self, dSound):
        "just plots the audio array. Nice to see plots when things are going wrong."
        import matplotlib.pyplot as plt
        plt.plot(dSound['aTime'], dSound['aSound'])
        plt.show()
if __name__ == "__main__":
    example()
I also get this error when I use wave. It still works, so I just ignore it.
The problem seems to be widespread. Error lines:
ALSA lib pcm_dsnoop.c:618:(snd_pcm_dsnoop_open) unable to open slave
ALSA lib pcm_dmix.c:1022:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
bt_audio_service_open: connect() failed: Connection refused (111)
bt_audio_service_open: connect() failed: Connection refused (111)
bt_audio_service_open: connect() failed: Connection refused (111)
bt_audio_service_open: connect() failed: Connection refused (111)
ALSA lib pcm_dmix.c:1022:(snd_pcm_dmix_open) unable to open slave
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
Good luck!

Related

How to change volume of stem files while playing using python

I'm attempting to write a Python project that plays multiple parts of a song at the same time.
For background: a song is split into "stems", and each stem is played simultaneously to recreate the full song. What I am trying to achieve is using potentiometers to control the volume of each stem, so that the user can mix songs differently. For a commercial comparison, the Stem Player from Kanye West is what I am aiming for.
I can change the volume of the overlaid song at the end, but what I want to do is change the volume of each stem, using a potentiometer, while the song is playing. Is this even possible using pydub? Below is the code I have right now.
from pydub import AudioSegment
from pydub.playback import play
vocals = AudioSegment.from_file("walkin_vocals.mp3")
drums = AudioSegment.from_file("walkin_drums.mp3")
bass = AudioSegment.from_file("walkin_bass.mp3")
vocalsDrums = vocals.overlay(drums)
bassVocalsDrums = vocalsDrums.overlay(bass)
songQuiet = bassVocalsDrums - 20
play(songQuiet)
I solved this by using pyaudio instead of pydub.
With pyaudio, I was able to define a custom stream_callback function. Within this callback, I multiply each stem by a modifier, then add the stems together into one audio output.
def callback(in_data, frame_count, time_info, status):
    global drumsMod, vocalsMod, bassMod, otherMod
    drums = drumsWF.readframes(frame_count)
    vocals = vocalsWF.readframes(frame_count)
    bass = bassWF.readframes(frame_count)
    other = otherWF.readframes(frame_count)
    decodedDrums = numpy.frombuffer(drums, numpy.int16)
    decodedVocals = numpy.frombuffer(vocals, numpy.int16)
    decodedBass = numpy.frombuffer(bass, numpy.int16)
    decodedOther = numpy.frombuffer(other, numpy.int16)
    newdata = (decodedDrums*drumsMod + decodedVocals*vocalsMod
               + decodedBass*bassMod + decodedOther*otherMod).astype(numpy.int16)
    return (newdata.tobytes(), pyaudio.paContinue)
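For context, here's a minimal sketch of the setup around a callback like this (the stem file names and the *WF wave readers are assumptions based on the snippet above): in pyaudio's callback mode, the stream pulls data by invoking the function whenever the device needs another block of audio.

import wave
import numpy
import pyaudio

#assumed stem files - substitute your own
drumsWF = wave.open("walkin_drums.wav", 'rb')
vocalsWF = wave.open("walkin_vocals.wav", 'rb')
bassWF = wave.open("walkin_bass.wav", 'rb')
otherWF = wave.open("walkin_other.wav", 'rb')

#volume modifiers, e.g. updated from potentiometer readings while playing
drumsMod = vocalsMod = bassMod = otherMod = 1.0

p = pyaudio.PyAudio()
stream = p.open(format=p.get_format_from_width(drumsWF.getsampwidth()),
                channels=drumsWF.getnchannels(),
                rate=drumsWF.getframerate(),
                output=True,
                stream_callback=callback)  #pyaudio calls callback() for each block
stream.start_stream()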

I want to know how to write an audio play callback in python-vlc

I wrote code to stream audio, as simple as the following.
If no callback is registered (the ** part), this code works fine.
But I would like to register a play callback to preprocess the streamed data.
The python-vlc documentation shows how to register a callback function,
but I can't figure out how to write the callback function itself.
The second argument to the callback function, samples, is a pointer to the data to be played.
How should I write a callback function at the (*) part with this pointer?
Any reply would be appreciated.
import vlc
import re
import requests
import ctypes

url = "http://serpent0.duckdns.org:8088/kbsfm.pls"
res = requests.get(url)
res.raise_for_status()

# retrieve url
p = re.compile("https://.+")
m = p.search(res.text)
url = m.group()

# AudioPlayCb = ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.c_void_p, ctypes.c_uint, ctypes.c_int64)
@vlc.CallbackDecorators.AudioPlayCb
def play_callback(opaque, samples, count, pts):
    """
    @param data: data pointer as passed to L{libvlc_audio_set_callbacks}() [IN].
    @param samples: pointer to a table of audio samples to play back [IN].
    @param count: number of audio samples to play back.
    @param pts: expected play time stamp (see libvlc_delay()).
    """
    # HOW DO I RETURN SOMETHING FROM HERE? (*)
    pass

instance = vlc.Instance(["--prefetch-buffer-size=2000 --prefetch-read-size=5000 --network-caching=1000"]) #define VLC instance
media = instance.media_new(url)
player = media.player_new_from_media()
player.audio_set_format("f32l", 48000, 2)
# PLAYS WELL WITHOUT THIS LINE (**)
player.audio_set_callbacks(play=play_callback, pause=None, resume=None, flush=None, drain=None, opaque=None)
player.play() #Play the media
c = input()
I wrote the play callback function as below.
I think the buffer_array passed in should be copied to another buffer in order to play the samples.
@vlc.CallbackDecorators.AudioPlayCb
def play_callback(data, samples, count, pts):
    bytes_read = count * 4 * 2  #f32l stereo: 4 bytes per sample * 2 channels
    buffer_array = ctypes.cast(samples, ctypes.POINTER(ctypes.c_char * bytes_read))
    # This code is invalid. To do it right, where should buffer_array be copied to?
    buffer = bytearray(bytes_read)
    buffer[:] = buffer_array.contents
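For what it's worth, one way to get a usable copy of the samples is ctypes.string_at(), which copies the requested number of bytes out of libvlc's buffer; numpy can then reinterpret them as float32 frames. A sketch under that assumption (note that once a play callback is registered, libvlc no longer renders the audio itself, so to actually hear anything you would have to forward the frames to an output device, e.g. a PyAudio stream opened as 32-bit float, 48000 Hz, stereo):

import ctypes
import numpy as np
#assumes 'import vlc' and the player setup from the question above

@vlc.CallbackDecorators.AudioPlayCb
def play_callback(data, samples, count, pts):
    bytes_read = count * 4 * 2                     #f32l stereo: 4 bytes per sample, 2 channels
    raw = ctypes.string_at(samples, bytes_read)    #copies the buffer out of libvlc's memory
    frames = np.frombuffer(raw, dtype=np.float32)  #interleaved L/R samples (read-only view)
    #preprocess frames.copy() here, then write the result to your own audio output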

Reading QAudioProbe buffer

The Qt documentation (https://doc.qt.io/qtforpython-5/PySide2/QtMultimedia/QAudioBuffer.html) says that we should read the buffer from QAudioProbe like this:
// With a 16bit sample buffer:
quint16 *data = buffer->data<quint16>(); // May cause deep copy
This is C++, but I need to write this in Python.
I am not sure how to use the Qt quint16 data type or even how to import it.
Here is my full code:
#!/bin/python3
from PySide2.QtMultimedia import QMediaPlayer, QMediaContent, QAudioProbe, QAudioBuffer
from PySide2.QtCore import QUrl, QCoreApplication, QObject, Signal, Slot
import sys
def main():
app = QCoreApplication()
player = QMediaPlayer()
url = QUrl.fromLocalFile("/home/ubuntu/sound.wav")
content = QMediaContent(url)
player.setMedia(content)
player.setVolume(50)
probe = QAudioProbe()
probe.setSource(player)
probe.audioBufferProbed.connect(processProbe)
player.play()
def processProbe(probe):
print(probe.data())
if __name__ == "__main__":
main()
Output:
shiboken2.shiboken2.VoidPtr(Address 0x2761000, Size 0, isWritable False)
shiboken2.shiboken2.VoidPtr(Address 0x2761000, Size 0, isWritable False)
shiboken2.shiboken2.VoidPtr(Address 0x2761000, Size 0, isWritable False)
shiboken2.shiboken2.VoidPtr(Address 0x2761000, Size 0, isWritable False)
...
I ran into the same issue with a fresh PySide2 5.13.2 environment: running print(probe.data().toBytes()) returned chunks of size 0, which I knew couldn't be right because other built-in functionality was accessing the data.
I hate this hack as much as anyone else, but if you want to test things, it is possible to access the buffer contents this way (please do not use this in production code):
1. Find out the datatype, endianness, etc. of your buffer via format(), and infer the proper C type you'll need (e.g. signed int 16).
2. Extract the printed address from the VoidPtr printout and convert it to an integer.
3. Create a numpy array by reading at the given address, with the given type, for the given number of frames.
Code:
First of all, somewhere in your app, you'll be connecting your QAudioProbe to your source via setSource, and then the audioBufferProbed signal to a method e.g.:
self.audio_probe.audioBufferProbed.connect(self.on_audio_probed)
Then, the following on_audio_probed functionality will fetch the numpy array and print its norm, which should increase in presence of sound:
import numpy as np
import ctypes

def get_buffer_info(buf):
    """Infer the numpy/ctypes datatypes and geometry of a QAudioBuffer."""
    num_bytes = buf.byteCount()
    num_frames = buf.frameCount()
    #
    fmt = buf.format()
    sample_type = fmt.sampleType()  #float, int, uint
    bytes_per_frame = fmt.bytesPerFrame()
    sample_rate = fmt.sampleRate()
    #
    if sample_type == fmt.Float and bytes_per_frame == 4:
        dtype = np.float32
        ctype = ctypes.c_float
    elif sample_type == fmt.SignedInt and bytes_per_frame == 2:
        dtype = np.int16
        ctype = ctypes.c_int16
    elif sample_type == fmt.UnsignedInt and bytes_per_frame == 2:
        dtype = np.uint16
        ctype = ctypes.c_uint16
    #
    return dtype, ctype, num_bytes, num_frames, bytes_per_frame, sample_rate
def on_audio_probed(audio_buffer):
    """Read the probed buffer at its raw address and print the signal norm."""
    cdata = audio_buffer.constData()
    (dtype, ctype, num_bytes, num_frames,
     bytes_per_frame, sample_rate) = get_buffer_info(audio_buffer)
    pointer_addr_str = str(cdata).split("Address ")[1].split(", Size")[0]
    pointer_addr = int(pointer_addr_str, 16)
    arr = np.array((ctype * num_frames).from_address(pointer_addr))
    print(np.linalg.norm(arr))  #should increase in the presence of sound
I just tested it with a QAudioRecorder using 16-bit unsigned wavs, and it worked "fine" (the audio looked and sounded good). Again, this is basically meme code, so anything beyond showing your fancy audio-buffered app to your cousins will be extremely risky; do not use it in serious code. But in any case, let me know if any other workarounds worked for you, or if this also worked in a different context! Hopefully if the devs see that people are actually using this approach they'll fix the issue much sooner :)
Cheers!
Andres

Python: Changing the speed of sound during playback

This is my first post. Is it possible to change the speed of a sound during playback? I want to simulate a car engine sound, and the first step for this is to change the speed of a looped sample according to the RPM of the engine. I know how to increase the speed of a complete sample using pyaudio by changing the rate of the wave file, but I want a continuous change of the rate. Is this possible without using the scikits.samplerate package, which allows resampling (and is quite old), or pysonic, which is super old?
This is what I have at the moment:
import pygame, sys
import numpy as np
import pyaudio
import wave
from pygame.locals import *
import random as rd
import os
import time

pygame.init()

class AudioFile:
    chunk = 1024

    def __init__(self, file, speed):
        """ Init audio stream """
        self.wf = wave.open(file, 'rb')
        self.speed = speed
        self.p = pyaudio.PyAudio()
        self.stream = self.p.open(
            format=self.p.get_format_from_width(self.wf.getsampwidth()),
            channels=1,
            rate=speed,
            output=True)

    def play(self):
        """ Play entire file """
        data = self.wf.readframes(self.chunk)
        while data != '':
            self.stream.write(data)

    def close(self):
        """ Graceful shutdown """
        self.stream.close()
        self.p.terminate()

a = AudioFile("wave.wav", 44100)  #second argument is the playback rate in Hz
a.play()
You should be able to do something with numpy. I'm not really familiar with wave etc., and I would expect your play() method to include a readframes() inside the loop in some way (as I attempt to do here), but you can probably get the idea from this:
def play(self):
    """ Play entire file """
    x0 = np.linspace(0.0, self.chunk - 1.0, self.chunk)
    x1 = np.linspace(0.0, self.chunk - 1.0, int(self.chunk * self.factor))  # i.e. 0.5 will play twice as fast
    while True:
        raw = self.wf.readframes(self.chunk)
        if not raw:
            break
        f_data = np.fromstring(raw, dtype=np.int16).astype(np.float)  # need to use floats for interpolation
        if len(f_data) < self.chunk:  # shorter final chunk: truncate both grids to match
            x0 = x0[:len(f_data)]
            x1 = x1[:int(len(f_data) * self.factor)]
        data = np.interp(x1, x0, f_data).astype(np.int16)
        self.stream.write(data.tobytes())
Obviously this uses the same speed up or slow down factor for the whole play. If you wanted to change it mid play you would have to modify x1 inside the while loop.
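As a rough sketch of that idea (my own illustration; get_rpm() is a hypothetical function returning the current engine RPM), the interpolation grid can simply be rebuilt from a fresh factor on every chunk:

def play_variable(self):
    """Like play(), but recomputes the speed factor every chunk,
    so the pitch can follow e.g. an engine RPM value."""
    x0 = np.linspace(0.0, self.chunk - 1.0, self.chunk)
    while True:
        raw = self.wf.readframes(self.chunk)
        if not raw:
            break
        f_data = np.fromstring(raw, dtype=np.int16).astype(np.float)
        factor = 3000.0 / get_rpm()  #hypothetical: higher RPM -> smaller factor -> faster playback
        x1 = np.linspace(0.0, len(f_data) - 1.0, int(len(f_data) * factor))
        data = np.interp(x1, x0[:len(f_data)], f_data).astype(np.int16)
        self.stream.write(data.tobytes())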

Use decodebin with adder

I'm trying to create an audio stream that has a constant audio source (in this case, audiotestsrc) to which I can occasionally add sounds from files (of various formats, which is why I'm using decodebin) through the play_file() method. I use an adder for that purpose. However, for some reason, I cannot add the second sound correctly: not only does the program play the new sound incorrectly, it also completely stops the original audiotestsrc. Here's my code so far:
import gst
import gobject
gobject.threads_init()

pipe = gst.Pipeline()
adder = gst.element_factory_make("adder", "adder")
first_sink = adder.get_request_pad('sink%d')
pipe.add(adder)

test = gst.element_factory_make("audiotestsrc", "test")
test.set_property('freq', 100)
pipe.add(test)
testsrc = test.get_pad("src")
testsrc.link(first_sink)

output = gst.element_factory_make("alsasink", "output")
pipe.add(output)
adder.link(output)

pipe.set_state(gst.STATE_PLAYING)
raw_input('Press key to play sound')

def play_file(filename):
    adder_sink = adder.get_request_pad('sink%d')
    audiofile = gst.element_factory_make('filesrc', 'audiofile')
    audiofile.set_property('location', filename)
    decoder = gst.element_factory_make('decodebin', 'decoder')

    def on_new_decoded_pad(element, pad, last):
        pad.link(adder_sink)

    decoder.connect('new-decoded-pad', on_new_decoded_pad)
    pipe.add(audiofile)
    pipe.add(decoder)
    audiofile.link(decoder)
    pipe.set_state(gst.STATE_PAUSED)
    pipe.set_state(gst.STATE_PLAYING)

play_file('sample.wav')

while True:
    pass
Thanks to moch on #gstreamer, I realized that all adder inputs should have the same format. I modified the above script so that the caps "audio/x-raw-int, endianness=(int)1234, channels=(int)1, width=(int)16, depth=(int)16, signed=(boolean)true, rate=(int)11025" (for example) are enforced before every input to the adder, as sketched below.
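A sketch of what that might look like with the old gst 0.10 bindings (my reconstruction, not the exact fix: an audioconvert + audioresample + capsfilter chain in front of each adder pad forces every input to the common format):

CAPS = gst.Caps("audio/x-raw-int, endianness=(int)1234, channels=(int)1, "
                "width=(int)16, depth=(int)16, signed=(boolean)true, rate=(int)11025")

def link_through_caps(src_pad, adder_sink):
    """Convert and constrain a source pad to the common format before the adder."""
    convert = gst.element_factory_make('audioconvert')
    resample = gst.element_factory_make('audioresample')
    capsfilter = gst.element_factory_make('capsfilter')
    capsfilter.set_property('caps', CAPS)
    for el in (convert, resample, capsfilter):
        pipe.add(el)
        el.set_state(gst.STATE_PLAYING)  #match the already-playing pipeline
    src_pad.link(convert.get_pad('sink'))
    convert.link(resample)
    resample.link(capsfilter)
    capsfilter.get_pad('src').link(adder_sink)

#e.g. inside play_file(), instead of pad.link(adder_sink):
#    def on_new_decoded_pad(element, pad, last):
#        link_through_caps(pad, adder.get_request_pad('sink%d'))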
