PyAudio Print Sound Level From 1 To 100 - python

I am looking to get the sound coming from my microphone printed out as a loudness level from 1 to 100 using PyAudio. Currently my code just prints the raw audio bytes, which show up as a jumble of numbers and letters. How would I turn that into a scale from 1 to 100? Here is my code so far:
import pyaudio
import wave
import threading

CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNEL = 1
RATE = 44100

pa = pyaudio.PyAudio()
stream = pa.open(format=FORMAT, channels=CHANNEL,
                 rate=RATE, input=True,
                 frames_per_buffer=CHUNK)

def getdata():
    threading.Timer(1, getdata).start()
    audio_data = stream.read(CHUNK)
    print(audio_data)

getdata()
I am quite a beginner, so please explain things thoroughly. Thanks!
EDIT: Here is a small sample of what is outputted:
0\xdd\x00\xdf\x00\xd6\x00\xd4\x00\xd8\x00\xc3\x00\xb6\x00\xc5\x00\xd0\x00\xc1\x00\xbb\x00\xbf\x00\xc5\x00\xc6\x00\xcf\x00\xb7\x00\xb1\x00\xcb\x00\xc2\x00\xc8\x00\xc5\x00\xc6\x00\xbe\x00\xaa\x00\xac\x00\xb1\x00\xa8\x00\xa7\x00\xb3\x00\xaa\x00\xa6\x00\xaa\x00\xa4\x00\x98\x00\x92\x00\xa0\x00\x9a\x00\x99\x00\x95\x00\x9f\x00\xb0\x00\x90\x00\x94\x00\x91\x00\x98\x00\xa2\x00\xa3\x00\xaa\x00\x94\x00\x98\x00\xa1\x00\x9d\x00\x96\x00\x90\x00\x91\x00\x89\x00\x85\x00{\x00\x83\x00\x84\x00\x8b\x00\x85\x00|\x00z\x00\x83\x00\x88\x00\x89\x00\x8a\x00\x8b\x00\x84\x00\x8f\x00\x83\x00o\x00p\x00p\x00\x88\x00\x8c\x00\x8b\x00\x8d\x00\x89\x00y\x00r\x00s\x00w\x00q\x00a\x00q\x00i\x00
SOLVED: Found an answer here:
Pyaudio : how to check volume
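For anyone landing here later: the linked answer works from the RMS of the samples. A sketch of mapping int16 samples onto a 1 to 100 scale in the same spirit (the function name, the assumed -90 dB floor, and the exact mapping are my own illustrative choices, not from the linked post):

```python
import numpy as np

def loudness_1_to_100(samples, max_amp=32767):
    """Map a chunk of int16 samples to a 1-100 loudness level via RMS."""
    rms = np.sqrt(np.mean(samples.astype(np.float64) ** 2))
    if rms < 1:
        return 1  # effectively silent
    # Decibels relative to full scale; int16 spans roughly -90 dB .. 0 dB.
    db = 20 * np.log10(rms / max_amp)
    level = int(round(100 + db * (99 / 90)))  # -90 dB -> 1, 0 dB -> 100
    return max(1, min(100, level))

# In the question's getdata() this would become:
#   audio_data = stream.read(CHUNK)
#   samples = np.frombuffer(audio_data, dtype=np.int16)
#   print(loudness_1_to_100(samples))

print(loudness_1_to_100(np.zeros(1024, dtype=np.int16)))         # silence -> 1
print(loudness_1_to_100(np.full(1024, 32767, dtype=np.int16)))   # full scale -> 100
```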

Related

CHUNKING an audio signal with python

When I chunk an audio file (a sine wave in this case), the sound changes!
I want to stream out an audio signal (a sine wave). First of all, I tried streaming the whole original signal:
import numpy as np
import pyaudio as py
from scipy.io.wavfile import read

fs, y = read('Sinus_440Hz.wav')

p = py.PyAudio()
stream = p.open(format=p.get_format_from_width(y.dtype.itemsize),
                channels=1,
                rate=fs,
                output=True,
                frames_per_buffer=1024)

stream.write(y)  # Output 1 to 1 original sound (WORKS FINE)
stream.stop_stream()
stream.close()
p.terminate()  # terminate() belongs to the PyAudio instance, not the module
This works fine and I hear the original sine wave without any artefacts or modifications.
I need to process the data in chunks and then stream it out, so I did it this way:
import numpy as np
import pyaudio as py
from scipy.io.wavfile import read

fs, y = read('Sinus_440Hz.wav')
totalSamps = len(y)
sample = 128
seg = 0

p = py.PyAudio()
stream = p.open(format=p.get_format_from_width(y.dtype.itemsize),
                channels=1,
                rate=fs,
                output=True,
                frames_per_buffer=1024)

while True:
    inds = 1 + np.mod((np.arange(sample) + sample * seg), totalSamps)  # chunks of 128
    Output = y[inds]
    stream.write(Output)  # Signal is not the same and has a lot of artefacts!!
    seg = seg + 1

stream.stop_stream()
stream.close()
p.terminate()  # terminate() belongs to the PyAudio instance, not the module
I haven't altered the signal at all yet, and the sound of the sine wave has already changed.
Why is the signal changing even though I haven't modified anything? I'm just splitting it
into chunks and streaming it out.
Thanks in advance!
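For what it's worth, two things in the loop could plausibly cause the artefacts: the MATLAB-style `1 +` offset combined with the modular wrap introduces a one-sample discontinuity every time the index wraps past the end of the file, and writing tiny 128-sample buffers can starve the output stream. A small sketch (using a synthetic sine as a stand-in for `Sinus_440Hz.wav`) showing that plain 0-based slicing reproduces the signal exactly, while the shifted indexing does not:

```python
import numpy as np

# Stand-in for the file: one second of a 440 Hz sine at 44.1 kHz, int16.
fs = 44100
y = (0.5 * 32767 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)).astype(np.int16)

sample = 128

# 0-based contiguous chunks: concatenating them gives back the signal bit-exactly.
chunks = [y[i:i + sample] for i in range(0, len(y), sample)]
assert np.array_equal(np.concatenate(chunks), y)

# The question's indexing: already the first chunk is shifted by one sample.
seg = 0
inds = 1 + np.mod(np.arange(sample) + sample * seg, len(y))
assert not np.array_equal(y[inds], y[:sample])
```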

Realtime step detection from wav file

My goal is to take a real time audio stream, and find the steps in it to signal my lights to flash to it.
Right now I have this code:
import pyaudio
import numpy as np
import matplotlib.pyplot as plt

CHUNK = 2**5
RATE = 44100
LEN = 3

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE, input=True, frames_per_buffer=CHUNK)
print(1)

frames = []
n = 0
for i in range(int(LEN*RATE/CHUNK)):  # go for LEN seconds
    n += 1
    data = np.frombuffer(stream.read(CHUNK), dtype=np.int16)  # np.fromstring is deprecated
    num = 0
    for ii in data:
        num += abs(ii)
    print(num)
    frames.append(data)

stream.stop_stream()
stream.close()
p.terminate()

plt.figure(1)
plt.title("Signal Wave...")
plt.plot(frames)
open("frames.txt", "w").write(str(frames))
plt.show()  # without this the plot window never appears
It takes the live audio stream created by PyAudio in this format
[[0,0,-1,0,0,0,0,-1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,0,0,0,0,0,0,0,0,0],[1,0,-1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,0,0,0,0,0,0,0,0,0]]
(this is depicting silence)
and adds all of the numbers together after they have gone through the abs() function (absolute value).
This gives an accurate(ish) representation of what a graph like this looks like.
I see the numbers getting larger, and the big jumps should be easy to detect, but the smaller jumps are almost indistinguishable from silence.
I found this answer that seems right, but I don't know how to use it.
Any help would be appreciated.
Thanks!
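One simple way to separate the smaller jumps from silence is to compare each chunk's summed energy against a short running average rather than a fixed threshold. A rough sketch (the `ratio` and `window` values are arbitrary starting points to tune, and the energies below are synthetic, not real microphone data):

```python
def detect_onsets(chunk_energies, ratio=2.0, window=8):
    """Flag chunk indices whose energy jumps well above the recent average."""
    onsets = []
    for i, e in enumerate(chunk_energies):
        recent = chunk_energies[max(0, i - window):i]
        if recent and e > ratio * (sum(recent) / len(recent) + 1e-9):
            onsets.append(i)
    return onsets

# Synthetic per-chunk energies: near-silence with two sharp jumps.
energies = [1.0] * 10 + [50.0] + [1.0] * 10 + [80.0] + [1.0] * 5
print(detect_onsets(energies))  # -> [10, 21]
```

In the loop above, `num` (the summed absolute values of a chunk) would be appended to a list and fed to a function like this instead of being printed.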

Analyzing ambient room volume

I am looking for a function/package which basically returns an integer corresponding to the ambient volume in the room.
I thought many people might already have wanted such a function; however, searching the internet did not yield a result.
Any help is much appreciated!
Cheers!
This code does what I want:
import pyaudio
import numpy as np

CHUNK = 2 ** 11
RATE = 44100
THRESHOLD = 500  # was undefined in the original; tune to your room's noise floor

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE, input=True,
                frames_per_buffer=CHUNK)

while True:
    data = np.frombuffer(stream.read(CHUNK), dtype=np.int16)
    peak = np.mean(np.abs(data))
    if peak > THRESHOLD:
        pass  # do stuff
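One caveat with this approach: `np.mean(np.abs(data))` for a single chunk jumps around with every transient, so for a steadier ambient reading an exponential moving average over the per-chunk values can help. A small sketch (the `alpha` value is an arbitrary choice):

```python
def smoothed_level(levels, alpha=0.2):
    """Exponential moving average over per-chunk volume readings."""
    out, avg = [], None
    for v in levels:
        avg = v if avg is None else alpha * v + (1 - alpha) * avg
        out.append(avg)
    return out

# A single transient spike is damped relative to the raw reading.
readings = [100, 120, 3000, 110, 105]
print(smoothed_level(readings))
```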

How can I detect specific sound from live application?

Is there a way to collect live audio from an application (like Chrome or Mozilla) and execute some code when a specific sound is played on a website?
If you have a mic on whatever device you are using, you can use it to record whatever sound is coming out of your computer. Then you can compare the audio frames you are recording to a sound file of the sound you are looking for.
Of course, this leaves it very vulnerable to background noise, so you will somehow have to filter that out.
Here is an example using the PyAudio and wave libraries:
import pyaudio
import wave

chunk = 1024  # Record in chunks of 1024 samples
sample_format = pyaudio.paInt16  # 16 bits per sample
channels = 2
fs = 44100  # Record at 44100 samples per second
seconds = 3

# Read the reference sound in chunk-sized byte strings so each one can be
# compared against a recorded chunk of the same size.
wf = wave.open("websitSound.wav", "rb")
sframes = []
while True:
    ref = wf.readframes(chunk)
    if not ref:
        break
    sframes.append(ref)
currentSoundFrame = 0

p = pyaudio.PyAudio()  # Create an interface to PortAudio
stream = p.open(format=sample_format,
                channels=channels,
                rate=fs,
                frames_per_buffer=chunk,
                input=True)

frames = []  # recorded chunks

# Store data in chunks for 3 seconds
for i in range(0, int(fs / chunk * seconds)):
    data = stream.read(chunk)
    # Note: an exact byte-for-byte match of microphone input is very unlikely
    # in practice; this shows the idea, not a robust detector.
    if currentSoundFrame < len(sframes) and data == sframes[currentSoundFrame]:
        currentSoundFrame += 1
        if currentSoundFrame == len(sframes):  # the whole entire sound was played
            print("Sound was played!")
    frames.append(data)

# Stop and close the stream
stream.stop_stream()
stream.close()
# Terminate the PortAudio interface
p.terminate()
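Since comparing raw microphone bytes for equality will essentially never match in practice, a more workable alternative is to slide the reference sound across the recording and look for a high normalized cross-correlation. Everything below (function name, threshold, synthetic signals) is an illustrative sketch, not part of the answer above:

```python
import numpy as np

def contains_pattern(recording, pattern, threshold=0.8):
    """True if `pattern` appears somewhere in `recording`,
    judged by peak normalized cross-correlation."""
    rec = recording.astype(np.float64)
    pat = pattern.astype(np.float64)
    pat = (pat - pat.mean()) / (pat.std() + 1e-12)
    n = len(pat)
    best = 0.0
    for i in range(len(rec) - n + 1):
        win = rec[i:i + n]
        win = (win - win.mean()) / (win.std() + 1e-12)
        best = max(best, float(np.dot(win, pat)) / n)  # Pearson correlation
    return best >= threshold

# Synthetic demo: a short sine burst buried in mild noise.
pattern = np.sin(2 * np.pi * 0.05 * np.arange(200))
noise = np.random.default_rng(0).normal(0, 0.1, 2000)
recording = noise.copy()
recording[500:700] += pattern

print(contains_pattern(recording, pattern))  # True
print(contains_pattern(noise, pattern))      # False
```

The brute-force loop is fine for short clips; for longer audio one would typically use an FFT-based correlation instead.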

Pyaudio recording wav file with soundcard gives an empty recording

I've been trying to work on a project to detect the time shift between two streaming audio signals. I work with Python 3 and PyAudio, and I'm using a Motux828 sound card with a Neumann KU-100 microphone, which takes a stereo input. When I check my input_device_index, the correct one is the 4th, connected to the MOTU sound card.
However, when I record with:
import time
import pyaudio
import wave
import time
import pyaudio
import wave

CHUNK = 1024 * 3  # Chunk is the bytes which are currently processed
FORMAT = pyaudio.paInt16
RATE = 44100
RECORD_SECONDS = 2
WAVE_OUTPUT = "temp.wav"

p = pyaudio.PyAudio()
stream = p.open(format=FORMAT, channels=2, rate=RATE, input=True,
                frames_per_buffer=CHUNK, input_device_index=4)

frames = []  # np array storing all the data
for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
    data = stream1.read(CHUNK)
    frames.append(data1)

stream.stop_stream()
stream.close()
p.terminate()

wavef = wave.open(WAVE_OUTPUT, 'wb')  # opening the file
wavef.setnchannels(1)
wavef.setsampwidth(p.get_sample_size(FORMAT))
wavef.setframerate(RATE)
wavef.writeframes(b''.join(frames1))  # writing the data to be saved
wavef.close()
I get a wave file with no sound and almost no noise (naturally).
I can also record with third-party software using this specific microphone,
and that works completely fine.
NOTE:
The sound card is normally 24-bit depth; I also tried paInt24, which records a wave file of pure noise.
I think you used the wrong variable names; looking at your code, the incorrect lines are:
data = stream1.read(CHUNK)
frames.append(data1)
wavef.writeframes(b''.join(frames1))
The correct lines are:
data = stream.read(CHUNK)
frames.append(data)
wavef.writeframes(b''.join(frames))
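After applying those fixes, a quick sanity check like the following can confirm the recording actually contains signal. The file name, threshold, and the demo tone are arbitrary examples:

```python
import wave

import numpy as np

def wav_is_silent(path, threshold=50):
    """True if the file's peak 16-bit sample magnitude stays below threshold."""
    with wave.open(path, "rb") as wf:
        data = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)
    return data.size == 0 or int(np.abs(data).max()) < threshold

# Demo: a tenth of a second of a 440 Hz tone should not read as silent.
tone = (0.5 * 32767 * np.sin(2 * np.pi * 440 * np.arange(4410) / 44100)).astype(np.int16)
with wave.open("check_tone.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)  # 2 bytes = 16-bit samples
    out.setframerate(44100)
    out.writeframes(tone.tobytes())

print(wav_is_silent("check_tone.wav"))  # False
```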
