How to change microphone data in Python sounddevice in real time

I play sound from my microphone with sounddevice (Python 3.6.3, Win 7) in real time.
I made my x global:

import sounddevice as sd

global x
for n in range(10):
    x = n + 1

def callback(indata, outdata, frames, time, status):
    outdata[:] = indata * x

with sd.Stream(device=(1, 3), samplerate=44100, dtype='float32',
               latency=None, channels=2, callback=callback):
    input()
I actually need to change the volume of the left channel independently from the right channel. Imagine it's "indata * x". I know how to set it permanently with a numerical constant, but I don't know how to change it in real time in a loop; the data doesn't change. Maybe I must stop the stream after one loop, but I don't know how. I'm a newbie. I'd like to avoid functions or map if possible. Thanks for understanding :)
I also wanted to use PyAudio, but I don't know how to access the left and right channel data separately so that I can change the volume of each channel:
import pyaudio

CHUNK = 1024
WIDTH = 2
CHANNELS = 2
RATE = 44100

p = pyaudio.PyAudio()
stream = p.open(format=p.get_format_from_width(WIDTH),
                channels=CHANNELS,
                rate=RATE,
                input=True,
                output=True,
                frames_per_buffer=CHUNK)

data = []
for i in range(1):
    data.append(stream.read(CHUNK))
sound = [bytes(data[0])]
stream.write(sound.pop(0), CHUNK)
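
For what it's worth, here is a minimal sketch of the real-time behavior being asked about (not from the original post; default devices are assumed): the callback reads a pair of per-channel gains on every block, so updating them from the main thread changes the left and right volume immediately, without restarting the stream.

import numpy as np
import sounddevice as sd

gains = np.array([1.0, 1.0])  # [left, right]; updated from the main thread

def callback(indata, outdata, frames, time, status):
    if status:
        print(status)
    outdata[:] = indata * gains  # broadcasts the two gains over the two channels

with sd.Stream(samplerate=44100, channels=2, dtype='float32',
               callback=callback):
    while True:
        line = input('new gains "left right" (blank to quit): ')
        if not line:
            break
        parts = line.split()
        if len(parts) == 2:
            gains[:] = [float(p) for p in parts]

The key point is that the callback reads a mutable object on every block, so no loop over stream restarts is needed.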

Related

Realtime step detection from wav file

My goal is to take a real-time audio stream and find the steps in it, to signal my lights to flash along with them.
Right now I have this code:
import pyaudio
import numpy as np
import matplotlib.pyplot as plt

CHUNK = 2**5
RATE = 44100
LEN = 3

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                input=True, frames_per_buffer=CHUNK)
print(1)

frames = []
n = 0
for i in range(int(LEN * RATE / CHUNK)):  # go for LEN seconds
    n += 1
    data = np.frombuffer(stream.read(CHUNK), dtype=np.int16)
    num = 0
    for ii in data:
        num += abs(ii)
    print(num)
    frames.append(data)

stream.stop_stream()
stream.close()
p.terminate()

plt.figure(1)
plt.title("Signal Wave...")
plt.plot(frames)
open("frames.txt", "w").write(str(frames))
It takes the live audio stream created by pyaudio in this format
[[0,0,-1,0,0,0,0,-1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,0,0,0,0,0,0,0,0,0],[1,0,-1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,0,0,0,0,0,0,0,0,0]]
(this is depicting silence)
and adds all of the numbers together after they have gone through the abs() (absolute value) function.
This gives an accurate(ish) representation of the volume over time. I see the numbers getting larger, and the big jumps should be easy to detect, but the smaller jumps are almost indistinguishable from silence.
I found this answer that seems right, but I don't know how to use it.
Any help would be appreciated. Thanks!
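
One way to make the smaller jumps detectable (a sketch, not from the original thread): instead of a fixed threshold, compare each chunk's energy to a running average of the recent chunks, so a step only needs to stand out against the local background. The energies here are the per-chunk sums of abs(samples), i.e. the num values printed in the loop above, collected into a list.

def detect_onsets(energies, window=50, factor=1.5):
    """Return indices of chunks whose energy jumps above the recent average.

    energies: per-chunk sums of abs(samples) (the `num` values above).
    window:   how many past chunks form the running background level.
    factor:   how far above the background counts as a step; tune both.
    """
    onsets = []
    history = []
    for i, e in enumerate(energies):
        if history and e > factor * (sum(history) / len(history)):
            onsets.append(i)
        history.append(e)
        if len(history) > window:
            history.pop(0)
    return onsets

# usage: collect the per-chunk sums into a list during the loop, then
# step_indices = detect_onsets(per_chunk_energies)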

Specify minimum trigger frequency for recording audio in Python

I'm writing a script for sound-activated recording in Python using pyaudio. I want to trigger a 5s recording after a sound that is above a prespecified volume and frequency. I've managed to get the volume part working but don't know how to specify the minimum trigger frequency (I'd like it to trigger at frequencies above 10kHz, for example):
import pyaudio
import wave
from array import array
import time

FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 44100
CHUNK = 1024
RECORD_SECONDS = 5

audio = pyaudio.PyAudio()
stream = audio.open(format=FORMAT, channels=CHANNELS,
                    rate=RATE,
                    input=True,
                    frames_per_buffer=CHUNK)

nighttime = True
while nighttime:
    data = stream.read(CHUNK)
    data_chunk = array('h', data)
    vol = max(data_chunk)
    if vol >= 3000:
        print("recording triggered")
        frames = []
        for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
            data = stream.read(CHUNK)
            frames.append(data)
        print("recording saved")
        # write to file
        words = ["RECORDING-", time.strftime("%Y%m%d-%H%M%S"), ".wav"]
        FILE_NAME = "".join(words)
        wavfile = wave.open(FILE_NAME, 'wb')
        wavfile.setnchannels(CHANNELS)
        wavfile.setsampwidth(audio.get_sample_size(FORMAT))
        wavfile.setframerate(RATE)
        wavfile.writeframes(b''.join(frames))
        wavfile.close()
    # check if still nighttime
    nighttime = True

stream.stop_stream()
stream.close()
audio.terminate()
I'd like to change the line if vol >= 3000: to something like if vol >= 3000 and frequency > 10000:, but I don't know how to compute frequency. How do I do this?
To retrieve the frequency content of a signal you can compute its Fourier transform, thus switching to the frequency domain (freq in the code below). The next step is to compute the relative amplitude of the signal (amp); the latter is proportional to the sound volume.
spec = np.abs(np.fft.rfft(audio_array))
freq = np.fft.rfftfreq(len(audio_array), d=1 / sampling_freq)
amp = spec / spec.sum()
Mind that 3000 isn't a sound volume either: the true sound volume information was lost when the signal was digitised, and you now work only with relative numbers. So you can, for example, check whether a third of the energy in a frame lies above 10 kHz.
Here's some code to illustrate the concept:
idx_above_10khz = np.argmax(freq > 10000)  # index of the first bin above 10 kHz
amp_below_10k = amp[:idx_above_10khz].sum()
amp_above_10k = amp[idx_above_10khz:].sum()
Now you can specify that once the ratio of amp_above_10k to amp_below_10k exceeds a threshold of your choosing, your program should trigger.
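
Putting the pieces together, a hedged sketch of how this could slot into the question's while loop (data_chunk, vol and RATE come from that loop; the one-third ratio is just an example threshold):

import numpy as np

audio_array = np.array(data_chunk, dtype=np.float64)
spec = np.abs(np.fft.rfft(audio_array))
freq = np.fft.rfftfreq(len(audio_array), d=1 / RATE)
amp = spec / spec.sum()

idx_above_10khz = np.argmax(freq > 10000)
amp_above_10k = amp[idx_above_10khz:].sum()

if vol >= 3000 and amp_above_10k >= 1 / 3:  # example: a third of the energy above 10 kHz
    print("recording triggered")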

Analyzing ambient room volume

I am looking for a function/package that basically returns an integer corresponding to the ambient volume in the room.
I thought many people might already have wanted such a function; however, searching the internet did not yield a result.
Any help is much appreciated!
Cheers!
This code does what I want:
import pyaudio
import numpy as np

CHUNK = 2 ** 11
RATE = 44100
THRESHOLD = 500  # arbitrary example value; tune it to your room and microphone

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE, input=True,
                frames_per_buffer=CHUNK)

while True:
    data = np.frombuffer(stream.read(CHUNK), dtype=np.int16)
    peak = np.mean(np.abs(data))  # mean absolute amplitude of this chunk
    if peak > THRESHOLD:
        pass  # do stuff
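
If you want literally "an integer corresponding to the ambient volume", here is a small wrapper around the same idea (a sketch reusing the stream, CHUNK and numpy import from the snippet above):

def ambient_volume(stream, chunk=CHUNK):
    """Read one chunk and return its mean absolute amplitude as an int."""
    data = np.frombuffer(stream.read(chunk), dtype=np.int16)
    return int(np.mean(np.abs(data)))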

pyaudio audio recording python

I am trying to record audio from the microphone with Python, and I have the following code:
import pyaudio
import wave
import threading

FORMAT = pyaudio.paInt16
CHANNELS = 2
RATE = 44100
CHUNK = 1024
WAVE_OUTPUT_FILENAME = "file.wav"

stop_ = False

audio = pyaudio.PyAudio()
stream = audio.open(format=FORMAT, channels=CHANNELS,
                    rate=RATE, input=True,
                    frames_per_buffer=CHUNK)

def stop():
    global stop_
    while True:
        if not input('Press Enter >>>'):
            print('exit')
            stop_ = True

threading.Thread(target=stop, daemon=True).start()

frames = []
while True:
    data = stream.read(CHUNK)
    frames.append(data)
    if stop_:
        break

stream.stop_stream()
stream.close()
audio.terminate()

waveFile = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
waveFile.setnchannels(CHANNELS)
waveFile.setsampwidth(audio.get_sample_size(FORMAT))
waveFile.setframerate(RATE)
waveFile.writeframes(b''.join(frames))
waveFile.close()
My code seems to work fine, but when I play back my recording, I don't hear any sound in the final output file (file.wav).
Why does this problem occur, and how do I fix it?
Your code is working fine. The problem you are facing is due to microphone access rights: the audio file contains constant 0 data, and therefore you can't hear any sound in the generated wav file. I assume your microphone device is installed and working properly. If you are not sure about the audio installation status, follow these steps for your operating system:
macOS:
System Preferences -> Sound -> Input, where you can watch the bars move as you make some sound. Make sure the selected device type is Built-in.
Windows:
Open the Sound settings and test the microphone by clicking "Listen to this device"; you may want to uncheck it afterwards, because it loops your voice back to the speakers and can create loud feedback.
Most probably you are using macOS. I had a similar issue because I was using the Atom editor to run the Python code. Try to run your code from the macOS Terminal (or PowerShell if you are on Windows); in case a popup appears on macOS asking for access to the microphone, press OK. That's it, your code will record fine. As a test, run the code below to check that you can visualize the sound, and make sure to run it from the terminal (not from an editor or IDE).
import queue

from matplotlib.animation import FuncAnimation
import matplotlib.pyplot as plt
import numpy as np
import sounddevice as sd

# Audio settings: we use the default PC or laptop mic as input
device = 0        # id of the audio device (default)
window = 1000     # window for the plotted data, in milliseconds
downsample = 1    # how many samples to drop
channels = [1]    # a list of audio channels
interval = 30     # plot update interval in milliseconds

# Queue that carries audio blocks from the callback to the plot
q = queue.Queue()

# Please note that sd.query_devices has an "s" at the end
device_info = sd.query_devices(device, 'input')
samplerate = device_info['default_samplerate']
length = int(window * samplerate / (1000 * downsample))
print("Sample Rate: ", samplerate)  # a typical sample rate is 44100

# Buffer holding the samples currently on screen: a matrix with
# `length` rows and one column per channel
plotdata = np.zeros((length, len(channels)))
print("plotdata shape: ", plotdata.shape)

fig, ax = plt.subplots(figsize=(8, 4))
ax.set_title("PyShine")
# A matplotlib.lines.Line2D plot item of color green (R,G,B = 0,1,0.29)
lines = ax.plot(plotdata, color=(0, 1, 0.29))

# The audio callback puts the incoming data into the queue
def audio_callback(indata, frames, time, status):
    q.put(indata[::downsample, [0]])

# update_plot takes blocks of audio samples from the queue and
# shifts them into the plotted buffer
def update_plot(frame):
    global plotdata
    while True:
        try:
            data = q.get_nowait()
        except queue.Empty:
            break
        shift = len(data)
        # Elements that roll beyond the last position are re-introduced
        plotdata = np.roll(plotdata, -shift, axis=0)
        plotdata[-shift:, :] = data
    for column, line in enumerate(lines):
        line.set_ydata(plotdata[:, column])
    return lines

ax.set_facecolor((0, 0, 0))
ax.set_yticks([0])
ax.yaxis.grid(True)

""" INPUT FROM MIC """
stream = sd.InputStream(device=device, channels=max(channels),
                        samplerate=samplerate, callback=audio_callback)
""" OUTPUT """
ani = FuncAnimation(fig, update_plot, interval=interval, blit=True)
with stream:
    plt.show()
Save this file as voice.py in a folder (say, AUDIO). cd into the AUDIO folder from the terminal, then execute it with python3 voice.py or python voice.py, depending on your Python environment.
By using print(sd.query_devices()), I see a list of devices like this:
Microsoft Sound Mapper - Input, MME (2 in, 0 out)
Microphone (AudioHubNano2D_V1.5, MME (2 in, 0 out)
Internal Microphone (Conexant S, MME (2 in, 0 out)
...
However, if I use device = 0, I still receive sound from the USB microphone, which is device number 1. Is it that, by default, all audio signals go to the Sound Mapper? That would mean that with device = 0 I get the audio from all inputs, and if I want audio from one particular device only, I need to choose its number x as device = x.
I have another question: is it possible to capture the audio signals from devices 1 and 2 in one application, but separately?
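A sketch of that idea (assuming device indices 1 and 2 exist as inputs and the host API allows two simultaneous streams): open one sounddevice InputStream per device, each with its own callback, so the two signals never mix.

import numpy as np
import sounddevice as sd

def make_callback(name):
    def callback(indata, frames, time, status):
        # each device's data arrives in its own callback, kept separate
        print(name, np.abs(indata).max())
    return callback

stream1 = sd.InputStream(device=1, channels=1, samplerate=44100,
                         callback=make_callback('device 1'))
stream2 = sd.InputStream(device=2, channels=1, samplerate=44100,
                         callback=make_callback('device 2'))

with stream1, stream2:
    input('Recording from both devices; press Enter to stop.')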

How to handle in_data in Pyaudio callback mode?

I'm doing a project on signal processing in Python. So far I've had a little success with the non-blocking mode, but it gave a considerable amount of delay and clipping in the output.
I want to implement a simple real-time audio filter using PyAudio and scipy.signal, but in the callback function provided in the PyAudio example, when I want to read in_data I can't process it. I've tried converting it in various ways, but with no success.
Here is the code for what I want to achieve (read data from the mic, filter it, and output it as soon as possible):
import pyaudio
import time
import numpy as np
import scipy.signal as signal

WIDTH = 2
CHANNELS = 2
RATE = 44100

p = pyaudio.PyAudio()

b, a = signal.iirdesign(0.03, 0.07, 5, 40)
fulldata = np.array([])

def callback(in_data, frame_count, time_info, status):
    data = signal.lfilter(b, a, in_data)
    return (data, pyaudio.paContinue)

stream = p.open(format=pyaudio.paFloat32,
                channels=CHANNELS,
                rate=RATE,
                output=True,
                input=True,
                stream_callback=callback)

stream.start_stream()

while stream.is_active():
    time.sleep(5)

stream.stop_stream()
stream.close()
p.terminate()
What is the right way to do this?
I found the answer to my question in the meantime; the callback looks like this:
def callback(in_data, frame_count, time_info, flag):
    global b, a, fulldata  # global variables for the filter coefficients and the storage array
    audio_data = np.frombuffer(in_data, dtype=np.float32)
    # do whatever with the data; in my case I want to hear my data filtered in real time
    filtered = signal.filtfilt(b, a, audio_data, padlen=200).astype(np.float32)
    fulldata = np.append(fulldata, filtered)  # saves the filtered data in an array
    return (filtered.tobytes(), pyaudio.paContinue)
I had a similar issue trying to work with the PyAudio callback mode, but my requirements were:
Working with stereo output (2 channels).
Processing in real time.
Processing the input signal using an arbitrary impulse response that could change in the middle of the process.
I succeeded after a few tries; here are fragments of my code (based on the PyAudio example found here):
import pyaudio
import time
import scipy.signal as ss
import numpy as np
import librosa

track1_data, track1_rate = librosa.load('path/to/wav/track1', sr=44.1e3, dtype=np.float64)
track2_data, track2_rate = librosa.load('path/to/wav/track2', sr=44.1e3, dtype=np.float64)
track3_data, track3_rate = librosa.load('path/to/wav/track3', sr=44.1e3, dtype=np.float64)

# instantiate PyAudio (1)
p = pyaudio.PyAudio()

count = 0
IR_left = first_IR_left    # Replace for actual IR
IR_right = first_IR_right  # Replace for actual IR

# define callback (2)
def callback(in_data, frame_count, time_info, status):
    global count
    # slice the next frame of each track
    track1_frame = track1_data[frame_count*count : frame_count*(count+1)]
    track2_frame = track2_data[frame_count*count : frame_count*(count+1)]
    track3_frame = track3_data[frame_count*count : frame_count*(count+1)]
    # convolve each track with the impulse response for each ear
    track1_left = ss.fftconvolve(track1_frame, IR_left)
    track1_right = ss.fftconvolve(track1_frame, IR_right)
    track2_left = ss.fftconvolve(track2_frame, IR_left)
    track2_right = ss.fftconvolve(track2_frame, IR_right)
    track3_left = ss.fftconvolve(track3_frame, IR_left)
    track3_right = ss.fftconvolve(track3_frame, IR_right)
    # mix the three tracks per channel
    track_left = 1/3 * track1_left + 1/3 * track2_left + 1/3 * track3_left
    track_right = 1/3 * track1_right + 1/3 * track2_right + 1/3 * track3_right
    # interleave the two channels: odd indices -> left, even -> right
    ret_data = np.empty((track_left.size + track_right.size), dtype=track1_left.dtype)
    ret_data[1::2] = track_left
    ret_data[0::2] = track_right
    ret_data = ret_data.astype(np.float32).tobytes()
    count += 1
    return (ret_data, pyaudio.paContinue)

# open stream using callback (3)
stream = p.open(format=pyaudio.paFloat32,
                channels=2,
                rate=int(track1_rate),
                output=True,
                stream_callback=callback,
                frames_per_buffer=2**16)

# start the stream (4)
stream.start_stream()

# wait for the stream to finish (5), cycling the impulse responses
while_count = 0
while stream.is_active():
    while_count += 1
    if while_count % 3 == 0:
        IR_left = first_IR_left     # Replace for actual IR
        IR_right = first_IR_right   # Replace for actual IR
    elif while_count % 3 == 1:
        IR_left = second_IR_left    # Replace for actual IR
        IR_right = second_IR_right  # Replace for actual IR
    elif while_count % 3 == 2:
        IR_left = third_IR_left     # Replace for actual IR
        IR_right = third_IR_right   # Replace for actual IR
    time.sleep(10)

# stop stream (6)
stream.stop_stream()
stream.close()

# close PyAudio (7)
p.terminate()
Here are some important reflections about the code above:
Working with librosa instead of wave allows me to use numpy arrays for processing, which is much better than the chunks of data from wave.readframes.
The data type you set in p.open(format= must match the format of the ret_data bytes, and PyAudio works with float32 at most.
Even-indexed samples in ret_data go to the right headphone, and odd-indexed samples go to the left one.
Just to clarify, this code sends the mix of three tracks to the output audio in stereo, and every 10 seconds it changes the impulse response and thus the filter being applied.
I used this to test a 3D audio app I'm developing, so the impulse responses were Head Related Impulse Responses (HRIRs), which changed the position of the sound every 10 seconds.
EDIT:
This code had a problem: the output had noise at a frequency corresponding to the size of the frames (higher frequency when the frame size was smaller). I fixed that by manually doing an overlap-add of the frames. Basically, ss.oaconvolve returned an array of size track_frame.size + IR.size - 1, so I separated that array into its first track_frame.size elements (which were used for ret_data) and saved the last IR.size - 1 elements for later. Those saved elements are then added to the first IR.size - 1 elements of the next frame; the first frame adds zeros.
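
A hedged sketch of that overlap-add fix (names and frame handling assumed, not the author's exact code): keep the convolution tail from each frame and add it to the head of the next one.

import numpy as np
import scipy.signal as ss

tail = None  # leftover IR.size - 1 samples from the previous frame

def convolve_frame(frame, IR):
    global tail
    out = ss.fftconvolve(frame, IR)   # size: frame.size + IR.size - 1
    if tail is None:
        tail = np.zeros(IR.size - 1)  # first frame adds zeros
    out[:IR.size - 1] += tail         # overlap-add the previous tail
    tail = out[frame.size:].copy()    # save this frame's tail for the next call
    return out[:frame.size]           # exactly frame.size samples go to output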
