I'm working on an experiment concerned with spatial sound perception. In this experiment, different sounds should be presented simultaneously from up to eight speakers. For this purpose, I would like to write Python code for OS X (10.10.5) that can read from a multichannel sound file and send each channel of this sound file to a designated speaker (via an appropriate hardware device).
I came across a rather convenient solution for the "present from multiple speakers" part of the problem: following this post, it can easily be done for mono/stereo files by adding more entries to the channel_map in PyAudio. The (slightly modified) code looks like this:
import pyaudio
import wave
import sys

chunk = 4096
PyAudio = pyaudio.PyAudio

if len(sys.argv) < 2:
    print("Plays a wave file.\n\nUsage: %s filename.wav" % sys.argv[0])
    sys.exit(-1)

wf = wave.open(sys.argv[1], 'rb')
p = PyAudio()

# Route the file's two channels to outputs 0 and 1; -1 mutes an output.
channel_map = (0, 1, -1, -1, -1, -1, -1, -1)

try:
    stream_info = pyaudio.PaMacCoreStreamInfo(
        flags=pyaudio.PaMacCoreStreamInfo.paMacCorePlayNice,  # default
        channel_map=channel_map)
except AttributeError:
    print("Sorry, couldn't find PaMacCoreStreamInfo. Make sure that "
          "you're running on Mac OS X.")
    sys.exit(-1)

print("Stream Info Flags:", stream_info.get_flags())
print("Stream Info Channel Map:", stream_info.get_channel_map())
print("channels", wf.getnchannels())
print("sample width", wf.getsampwidth())

stream = p.open(
    format=p.get_format_from_width(wf.getsampwidth()),
    channels=wf.getnchannels(),
    rate=wf.getframerate(),
    output=True,
    output_host_api_specific_stream_info=stream_info)

# readframes() returns bytes, so compare against b''
# (comparing against '' would loop forever on Python 3).
data = wf.readframes(chunk)
while data != b'':
    stream.write(data)
    data = wf.readframes(chunk)

stream.stop_stream()
stream.close()
p.terminate()
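Presumably the map just grows with the hardware; a minimal, untested sketch for an 8-output device (assuming, as I understand the map's semantics, that entry i names the file channel sent to device output i, with -1 meaning mute):

# hypothetical: route file channel i to device output i on an 8-channel interface
channel_map = (0, 1, 2, 3, 4, 5, 6, 7)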
However, I wonder whether this also works with multichannel sound files in PyAudio. Is it possible to read from a multichannel (i.e., 8-channel) sound file and send specific channels to different output devices (e.g., speakers) with PyAudio?
If yes, can someone provide an example of how it can be done? I don't mind digging some more into libraries/modules, but an example for the code in question would help me (as a novice) a lot.
If PyAudio is not the right choice, I would really appreciate any further recommendations/ideas/comments on how it can be done.
Thanks a lot!
Malte
I am trying to use pyAudioAnalysis to analyse an audio stream in real time from an HTTP stream. My goal is to use the Zero Crossing Rate (ZCR) and other methods in this library to identify events in the stream.
pyAudioAnalysis only supports input from a file, but converting an HTTP stream to a .wav file would create large overhead and temporary-file management that I would like to avoid.
My method is as follows:
Using ffmpeg I was able to get the raw audio bytes into a subprocess pipe.
import subprocess

song = subprocess.Popen(
    ["ffmpeg", "-i", "https://media-url/example", "-acodec", "pcm_s16le",
     "-ac", "1", "-f", "wav", "pipe:1"],
    stdout=subprocess.PIPE)
I then buffered this data using PyAudio, in the hope of being able to use the bytes in pyAudioAnalysis:
import pyaudio

CHUNK = 65536
p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16,
                channels=1,
                rate=44100,
                output=True)

data = song.stdout.read(CHUNK)
while len(data) > 0:
    stream.write(data)
    data = song.stdout.read(CHUNK)
However, feeding this data into AudioBasicIO.read_audio_generic() produces an empty numpy array.
Is there a valid solution to this problem without temporary file creation?
You can try my ffmpegio package:
pip install ffmpegio
import ffmpegio
# read entire stream
fs, x = ffmpegio.audio.read("https://media-url/example", ac=1, sample_fmt='s16')
# fs - sampling rate
# x - [nx1] numpy array
# or read a block at a time:
with ffmpegio.open("https://media-url/example", "ra", blocksize=1024, ac=1, sample_fmt='s16') as f:
    fs = f.rate
    for x in f:
        # x: [1024x1] numpy array (or shorter for the last block)
        process_data(x)
Note that if you need normalized samples, you can set sample_fmt to 'flt' or 'dbl'.
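For example (the same call as above, only sample_fmt changes; I'd expect float samples normalized to [-1, 1], but verify on your data):

# samples come back as floats instead of int16
fs, x = ffmpegio.audio.read("https://media-url/example", ac=1, sample_fmt='flt')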
If you prefer to keep dependencies low, the key when calling the ffmpeg subprocess is to use a raw output format:
import subprocess as sp
import numpy as np

song = sp.Popen(["ffmpeg", "-i", "https://media-url/example",
                 "-f", "s16le", "-c:a", "pcm_s16le", "-ac", "1", "pipe:1"],
                stdout=sp.PIPE)

CHUNK = 65536  # bytes per read; 2 bytes/sample, so CHUNK // 2 samples per block
data = np.frombuffer(song.stdout.read(CHUNK), np.int16)
while len(data) > 0:
    # process the block of samples here, then read the next one
    data = np.frombuffer(song.stdout.read(CHUNK), np.int16)
I cannot speak to pyAudioAnalysis, but I suspect it expects samples, not bytes.
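As a sanity check that the piped bytes really are usable samples, you can compute a zero-crossing rate by hand on one of the int16 blocks from the loop above (a plain numpy sketch of the quantity, not pyAudioAnalysis's own API):

import numpy as np

def zero_crossing_rate(block):
    # fraction of adjacent sample pairs whose signs differ
    signs = np.sign(block)
    return np.mean(signs[:-1] != signs[1:])

# e.g. with `data` from the loop above:
# print(zero_crossing_rate(data))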
I'm trying to stream audio that is playing internally. Currently, my script has a more hardware-based solution: the audio outputs through the auxiliary out, which connects to a USB auxiliary line-in adapter. For simplicity, it would be much better to record the audio internally rather than having to use hardware to loop the audio signal back into itself.
My relevant code:
def encode(**kwarg):
    global audio
    print('Encoding: ' + str(kwarg))
    # encoding algorithm goes here.
    writeSaveFlag = False
    # self.queueEncoding()
    print('process encode')
    # create pyaudio stream
    stream = False
    while not stream:
        try:
            stream = audio.open(format=kwarg['resolution'],
                                rate=kwarg['sampleRate'],
                                channels=kwarg['channels'],
                                input_device_index=kwarg['deviceIndex'],
                                input=True,
                                frames_per_buffer=kwarg['chunk'])
        except:
            audio.terminate()
            audio = pyaudio.PyAudio()
    self.rewindSong()
    t = Timer(songDuration - encodingDelayTolarance, checkStatus,
              kwargs={'currentSong': kwarg['currentSong'], 'tolerance': kwarg['tolerance']})
    t.start()
    startTime = time.time()
    playFlag = False
    print("recording")
    frames = []
    # loop through stream and append audio chunks to frame array
    for ii in range(0, int((kwarg['sampleRate'] / kwarg['chunk']) * kwarg['encodeDuration'])):
        # if time.time() - startTime > 2000 and playFlag == False:
        #     self.play()
        data = stream.read(kwarg['chunk'])
        frames.append(data)
    print("finished recording")
    stream.stop_stream()
    stream.close()
    # save the audio frames as .wav file
    wavefile = wave.open(saveFilePath + kwarg['outputFileName'] + '.wav', 'wb')
    wavefile.setnchannels(kwarg['channels'])
    wavefile.setsampwidth(audio.get_sample_size(kwarg['resolution']))
    wavefile.setframerate(kwarg['sampleRate'])
    wavefile.writeframes(b''.join(frames))
    wavefile.close()
    processEncode(trackID=kwarg['trackID'])
    # clear memory
    gc.collect()
    # create a new instance for next recording
    self.queueEncoding()
I found this related question, but the only answer posted suggests looping the audio as I already have. Would it be better to use an alternative library for this internal recording functionality? Does ALSA recognize the internal audio as an audio device? Does PyAudio recognize non-physical audio devices, such as an internal audio stream?
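One way to check the last question is to enumerate the input devices PyAudio reports; a minimal sketch (whether a loopback source appears depends on the sound system, e.g. PulseAudio exposes "monitor" sources of outputs as inputs, which is an assumption about the local setup):

import pyaudio

p = pyaudio.PyAudio()
for i in range(p.get_device_count()):
    info = p.get_device_info_by_index(i)
    if info.get('maxInputChannels', 0) > 0:
        # a PulseAudio "monitor" source is an internal loopback of an output
        print(i, info['name'], info['maxInputChannels'])
p.terminate()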
I'm trying to read a Twitch stream via streamlink (https://streamlink.github.io/api_guide.html) into OpenCV for further processing in Python.
What works: reading the stream into a stream.ts file via Popen and then into OpenCV:
import subprocess
import os
import time
import cv2

def create_new_streaming_file(stream_filename="stream0", stream_link="https://www.twitch.tv/tsm_viss"):
    try:
        os.remove('./Engine/streaming_util/' + stream_filename + '.ts')
    except OSError:
        pass
    cmd = "streamlink --force --output ./Engine/streaming_util/" + stream_filename + ".ts " + stream_link + " best"
    subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

create_new_streaming_file()
video_capture = cv2.VideoCapture('./Engine/streaming_util/stream0.ts')
This is very slow, and the stream stops after about 30 seconds.
I would like to read the byte stream directly into OpenCV from streamlink's Python API.
What works is printing out the latest n bytes of a stream into the console:
import streamlink

streams = streamlink.streams("https://www.twitch.tv/grimmmz")
stream = streams["best"]
fd = stream.open()

while True:
    data = fd.read(1024)
    print(data)
I'm looking for something like this (it does not work, but you'll get the concept):
streams = streamlink.streams("https://www.twitch.tv/grimmmz")
stream = streams["best"]
fd = stream.open()

buf = b''
while True:
    # to read an MJPEG frame: look for the JPEG start/end markers
    buf += fd.read(1024)
    a = buf.find(b'\xff\xd8')
    b = buf.find(b'\xff\xd9')
    if a != -1 and b != -1:
        jpg = buf[a:b + 2]
        buf = buf[b + 2:]
        img = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
        cv2.imwrite('messigray.png', img)
        cv2.imshow('cam2', img)
    else:
        continue
Thanks a lot in advance!
This can be done, though it was quite tricky to accomplish with reasonable performance.
Check out the project: https://github.com/DanielTea/rage-analytics/blob/master/README.md
The main file is realtime_Videostreamer.py in the engine folder. If you initialize this object, it creates an ffmpeg subprocess and fills a queue with video frames in an extra thread. This architecture prevents the main thread from blocking, so depending on your network speed and CPU power, a couple of streams can be analyzed in parallel.
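The pattern itself is roughly this (a minimal sketch, not the project's actual code: the channel URL, the 1280x720 frame size, and the bgr24 pixel format are assumptions to adapt, and it relies on the selected stream exposing a .url):

import subprocess, threading, queue
import numpy as np
import streamlink

W, H = 1280, 720                  # assumed frame size of the "best" stream
FRAME_BYTES = W * H * 3           # bgr24: 3 bytes per pixel

url = streamlink.streams("https://www.twitch.tv/grimmmz")["best"].url
proc = subprocess.Popen(
    ["ffmpeg", "-i", url, "-f", "rawvideo", "-pix_fmt", "bgr24", "pipe:1"],
    stdout=subprocess.PIPE)

frames = queue.Queue(maxsize=16)  # bounded, so a slow consumer drops behind instead of exhausting memory

def reader():
    # read exactly one raw frame per iteration and hand it to the main thread
    while True:
        raw = proc.stdout.read(FRAME_BYTES)
        if len(raw) < FRAME_BYTES:
            break
        frames.put(np.frombuffer(raw, np.uint8).reshape(H, W, 3))

threading.Thread(target=reader, daemon=True).start()

# main thread: e.g. frame = frames.get(); cv2.imshow('cam2', frame)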
This solution works very well with Twitch streams; I didn't try other streaming sites.
I have been pulling my hair out trying to get a proxy working. I need to decrypt the packets from a server and client (these may arrive out of order), then decompress everything but the packet header.
The first two packets (10101 and 20104) are not compressed, and they decrypt, destruct, and decompile properly.
Alas, to no avail: zlib.error: Error -5 while decompressing data: incomplete or truncated stream
I get the same error while attempting to decompress the encrypted version of the packet.
When I include the packet header, I get a -3 error instead.
I have also tried changing -zlib.MAX_WBITS to zlib.MAX_WBITS, as well as a few others, but still get the same error.
Here's the code:
import socket, sys, os, struct, zlib
from Crypto.Cipher import ARC4 as rc4

cwd = os.getcwd()
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ss = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('192.168.2.12', 9339))
s.listen(1)
client, addr = s.accept()

key = "fhsd6f86f67rt8fw78fw789we78r9789wer6renonce"
cts = rc4.new(key)
stc = rc4.new(key)
skip = 'a' * len(key)
cts.encrypt(skip)
stc.encrypt(skip)

ss.connect(('game.boombeachgame.com', 9339))
ss.settimeout(0.25)
s.settimeout(0.25)

def io():
    while True:
        try:
            pack = client.recv(65536)
            decpack = cts.decrypt(pack[7:])
            msgid, paylen = dechead(pack)
            if msgid != 10101:
                decopack = zlib.decompress(decpack, -zlib.MAX_WBITS)
            print "ID:", msgid
            print "Payload Length", paylen
            print "Payload:\n", decpack
            ss.send(pack)
            dump(msgid, decpack)
        except socket.timeout:
            pass
        try:
            pack = ss.recv(65536)
            msgid, paylen = dechead(pack)
            decpack = stc.decrypt(pack[7:])
            if msgid != 20104:
                decopack = zlib.decompress(decpack, -zlib.MAX_WBITS)
            print "ID:", msgid
            print "Payload Length", paylen
            print "Payload:\n", decpack
            client.send(pack)
            dump(msgid, decpack)
        except socket.timeout:
            pass

def dump(msgid, decpack):
    global cwd
    pdf = open(cwd + "/" + str(msgid) + ".bin", 'wb')
    pdf.write(decpack)
    pdf.close()

def dechead(pack):
    msgid = struct.unpack('>H', pack[0:2])[0]
    print int(struct.unpack('>H', pack[5:7])[0])
    payload_bytes = struct.unpack('BBB', pack[2:5])
    payload_len = ((payload_bytes[0] & 255) << 16) | ((payload_bytes[1] & 255) << 8) | (payload_bytes[2] & 255)
    return msgid, payload_len

io()
I realize it's messy, disorganized and very bad, but it all works as intended minus the decompression.
Yes, I am sure the packets are zlib compressed.
What is going wrong here and why?
Full Traceback:
Traceback (most recent call last):
File "bbproxy.py", line 68, in <module>
io()
File "bbproxy.py", line 33, in io
decopack = zlib.decompress(decpack, zlib.MAX_WBITS)
zlib.error: Error -5 while decompressing data: incomplete or truncated stream
I ran into the same problem while trying to decompress a file using zlib with Python 2.7. The issue had to do with the size of the stream (or file input) exceeding the size that could be buffered in memory. (My PC has 16 GB of memory, so it was not exceeding the physical memory size; the default buffer size is 16384.)
The easiest fix was to change the code from:
import zlib
f_in = open('my_data.zz', 'rb')
comp_data = f_in.read()
data = zlib.decompress(comp_data)
To:
import zlib
f_in = open('my_data.zz', 'rb')
comp_data = f_in.read()
zobj = zlib.decompressobj() # obj for decompressing data streams that won’t fit into memory at once.
data = zobj.decompress(comp_data)
It handles the stream by buffering it and feeding it into the decompressor in manageable chunks.
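If the input really is too big to read in one go, the same object also supports a fully chunked loop; a minimal sketch (the chunk size and file names are arbitrary):

import zlib

zobj = zlib.decompressobj()
with open('my_data.zz', 'rb') as f_in, open('my_data.out', 'wb') as f_out:
    while True:
        chunk = f_in.read(16384)      # read compressed input in pieces
        if not chunk:
            break
        f_out.write(zobj.decompress(chunk))
    f_out.write(zobj.flush())         # emit any buffered tail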
I hope this helps save you time trying to figure out the problem. I had help from my friend Jordan! Before that, I was trying all kinds of different window sizes (wbits).
Edit: Even with the below working on partial gz files, for some files the decompression returned an empty byte array, and everything I tried kept coming back empty even though the function succeeded. Eventually I resorted to running a gunzip process, which always works:
import subprocess

def gunzip_string(the_string):
    proc = subprocess.Popen('gunzip', stdout=subprocess.PIPE,
                            stdin=subprocess.PIPE, stderr=subprocess.DEVNULL)
    proc.stdin.write(the_string)
    proc.stdin.close()
    body = proc.stdout.read()
    proc.wait()
    return body
Note that the above can return a non-zero exit code, indicating that the input string is incomplete, but it still performs the decompression; that is why stderr is swallowed. You may wish to check for errors to allow for this case.
/edit
I think the zlib decompression library is throwing an exception because you are not passing in a complete file, just a 65536-byte chunk (ss.recv(65536)). If you change from this:
decopack = zlib.decompress(decpack, -zlib.MAX_WBITS)
to
decompressor = zlib.decompressobj(-zlib.MAX_WBITS)
decopack = decompressor.decompress(decpack)
it should work, since a decompression object can handle streaming input.
As the docs say:
zlib.decompressobj - Returns a decompression object, to be used for decompressing data streams that won't fit into memory at once.
Even if the data does fit into memory, you might only want to decompress the beginning of the file.
Try this:
decopack = zlib.decompressobj(zlib.MAX_WBITS).decompress(decpack)
I want to use my Raspberry Pi camera module to scan QR codes.
For detecting and decoding QR codes, I want to use zbar.
My current code:
import io
import time
import picamera
import zbar
import Image
from sys import argv

if len(argv) < 2:
    exit(1)

# Create an in-memory stream
my_stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.start_preview()
    # Camera warm-up time
    time.sleep(2)
    camera.capture(my_stream, 'jpeg')

scanner = zbar.ImageScanner()
scanner.parse_config('enable')

pil = Image.open(argv[1]).convert('L')
width, height = pil.size
raw = pil.tostring()

my_stream = zbar.Image(width, height, 'Y800', raw)
scanner.scan(image)
for symbol in image:
    print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data
As you may see, I want to create a picture stream, send this stream to zbar, and check whether a QR code is contained in the picture.
I am not able to run this code; this error is the result:
Segmentation fault
------------------ (program exited with code: 139) Press return to continue
I can't find any solution for how to fix this bug. Any ideas?
Kind regards,
The shortcoming of all the other answers is that they have a large amount of DELAY: what they scan and display on the screen is actually a frame taken several seconds earlier.
This is due to the slow CPU of the Raspberry Pi, which means the camera's frame rate is much higher than the rate at which our software can read and scan frames.
With a lot of effort, I finally made this code, which has LITTLE DELAY: when you give it a QR code/barcode, it returns a result in less than a second.
The trick I use is explained in the code.
import cv2
import cv2.cv as cv
import numpy
import zbar
import time
import threading

'''
LITTLE-DELAY BarCodeScanner
Author: Chen Jingyi (From FZYZ Junior High School, China)
PS. If your pi's V4L is not available, the cv-Window may have some error sometimes, but other parts of this code work fine.
'''

class BarCodeScanner(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.WINDOW_NAME = 'Camera'
        self.CV_SYSTEM_CACHE_CNT = 5  # Cv has 5-frame cache
        self.LOOP_INTERVAL_TIME = 0.2
        cv.NamedWindow(self.WINDOW_NAME, cv.CV_WINDOW_NORMAL)
        self.cam = cv2.VideoCapture(-1)

    def scan(self, aframe):
        imgray = cv2.cvtColor(aframe, cv2.COLOR_BGR2GRAY)
        raw = str(imgray.data)
        scanner = zbar.ImageScanner()
        scanner.parse_config('enable')
        #print 'ScanZbar', time.time()
        width = int(self.cam.get(cv.CV_CAP_PROP_FRAME_WIDTH))
        height = int(self.cam.get(cv.CV_CAP_PROP_FRAME_HEIGHT))
        imageZbar = zbar.Image(width, height, 'Y800', raw)
        scanner.scan(imageZbar)
        #print 'ScanEnd', time.time()
        for symbol in imageZbar:
            print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data

    def run(self):
        #print 'BarCodeScanner run', time.time()
        while True:
            #print time.time()
            ''' Why read several times and throw the data away: I guess OpenCV has a `cache-queue` whose length is 5.
            `read()` will *dequeue* a frame from it if it is not empty, and otherwise wait until there is one.
            When the camera has a new frame, if the queue is not full the frame is *enqueued*, otherwise it is thrown away.
            In this case, the frame rate is far higher than the rate at which the while loop executes, so by the time the code gets here the queue is full.
            Therefore, if we want the newest frame, we first need to dequeue the 5 stale frames in the queue. That's why.
            '''
            for i in range(0, self.CV_SYSTEM_CACHE_CNT):
                #print 'Read2Throw', time.time()
                self.cam.read()
            #print 'Read2Use', time.time()
            img = self.cam.read()
            self.scan(img[1])
            cv2.imshow(self.WINDOW_NAME, img[1])
            cv.WaitKey(1)
            #print 'Sleep', time.time()
            time.sleep(self.LOOP_INTERVAL_TIME)
        self.cam.release()

scanner = BarCodeScanner()
scanner.start()
In the line
scanner.scan(image)
you're using a variable that hasn't appeared in the code before. Because zbar is written in C, it doesn't catch that the variable is undefined, and the library tries to read garbage data as if it were an image. Hence, the segfault. I'm guessing you meant my_stream instead of image.
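In other words, the tail of the question's script would become (only the variable name changes):

my_stream = zbar.Image(width, height, 'Y800', raw)
scanner.scan(my_stream)
for symbol in my_stream:
    print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data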
I'm using QR decoding on a Raspberry Pi for my project. I solved it by using the subprocess module.
Here is my function for QR decoding:
import subprocess

def detect():
    """Detects qr code from camera and returns string that represents that code.

    return -- qr code from image as string
    """
    subprocess.call(["raspistill -n -t 1 -w 120 -h 120 -o cam.png"], shell=True)
    process = subprocess.Popen(["zbarimg -D cam.png"], stdout=subprocess.PIPE, shell=True)
    (out, err) = process.communicate()

    qr_code = None
    # out looks like "QR-code: Xuz213asdY" so you need
    # to remove first 8 characters plus whitespaces
    if len(out) > 8:
        qr_code = out[8:].strip()
    return qr_code
You can easily add parameters to the function, such as img_width and img_height,
and change this part of the code:
"raspistill -n -t 1 -w 120 -h 120 -o cam.png"
to
"raspistill -n -t 1 -w %d -h %d -o cam.png" % (img_width, img_height)
if you want a different image size for decoding.
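Put together, the parameterized version might look like this (just a sketch of the change described above, nothing more):

def detect(img_width=120, img_height=120):
    # same as above, but with a configurable capture size
    subprocess.call(["raspistill -n -t 1 -w %d -h %d -o cam.png"
                     % (img_width, img_height)], shell=True)
    process = subprocess.Popen(["zbarimg -D cam.png"],
                               stdout=subprocess.PIPE, shell=True)
    (out, err) = process.communicate()
    return out[8:].strip() if len(out) > 8 else None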
After reading this, I was able to come up with a pythonic solution involving OpenCV.
First, you build OpenCV on the Pi by following these instructions. That will probably take several hours to complete.
Now reboot the Pi and use the following script (assuming you have python-zbar installed) to get the QR/barcode data:
import cv2
import cv2.cv as cv
import numpy
import zbar

class test():
    def __init__(self):
        cv.NamedWindow("w1", cv.CV_WINDOW_NORMAL)
        # self.capture = cv.CaptureFromCAM(camera_index) # for some reason, this doesn't work
        self.capture = cv.CreateCameraCapture(-1)
        self.vid_contour_selection()

    def vid_contour_selection(self):
        while True:
            self.frame = cv.QueryFrame(self.capture)
            aframe = numpy.asarray(self.frame[:, :])
            g = cv.fromarray(aframe)
            g = numpy.asarray(g)
            imgray = cv2.cvtColor(g, cv2.COLOR_BGR2GRAY)
            raw = str(imgray.data)
            scanner = zbar.ImageScanner()
            scanner.parse_config('enable')
            imageZbar = zbar.Image(self.frame.width, self.frame.height, 'Y800', raw)
            scanner.scan(imageZbar)
            for symbol in imageZbar:
                print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data
            cv2.imshow("w1", aframe)
            c = cv.WaitKey(5)
            if c == 110:  # pressing the 'n' key will cause the program to exit
                exit()

p = test()
Note: I had to turn the Raspi Camera's lens counterclockwise about 1/4 - 1/3 of a turn before zbar was able to detect the QR/barcodes.
With the above code, whenever zbar detects a QR/barcode, the decoded data is printed to the console. It runs continuously, stopping only when the n key is pressed.
For anyone who is still looking for a solution to this...
This code is ugly, but it works pretty well from a regular webcam; I haven't tried the Pi camera yet. I'm new to Python, so this is the best I could come up with that works in both Python 2 and 3.
Make a bash script called kill.sh and make it executable (chmod +x):
#kill all running zbar tasks ... call from python
ps -face | grep zbar | awk '{print $2}' | xargs kill -s KILL
Then do a system call from Python like so:
import sys
import os

def start_cam():
    while True:
        # Initializes an instance of zbar on the command line to detect barcode data-strings.
        p = os.popen('/usr/bin/zbarcam --prescale=300x200', 'r')
        # Barcode variable read by Python from the command line.
        print("Please Scan a QRcode to begin...")
        barcode = p.readline()
        barcodedata = str(barcode)[8:]
        if barcodedata:
            print("{0}".format(barcodedata))
            # Kills the webcam window by executing the bash file
            os.system("/home/pi/Desktop/kill.sh")

start_cam()
Hopefully this helps people with the same questions in the future!
Quite a late response, but I ran into a number of issues while trying to get zbar working. Though I was using a USB webcam, I had to install multiple libraries before I could get zbar to install. I installed fswebcam, python-zbar, and libzbar-dev, and finally ran setup.py.
More importantly, the zbar from SourceForge did not work for me, but the one from GitHub, which has a Python wrapper, did.
I documented my steps at http://techblog.saurabhkumar.com/2015/09/scanning-barcodes-using-raspberry-pi.html in case it helps.
Just a small modification of Dan2theR's solution, because I don't want to create another shell file.
import sys
import os

p = os.popen('/usr/bin/zbarcam --prescale=300x300 -Sdisable -Sqrcode.enable', 'r')

def start_scan():
    global p
    while True:
        print('Scanning')
        data = p.readline()
        qrcode = str(data)[8:]
        if qrcode:
            print(qrcode)

try:
    start_scan()
except KeyboardInterrupt:
    print('Stop scanning')
finally:
    p.close()