I am trying to use an if statement to display a video stream parameter (the bitrate) when it exists, or to display a message ("can not measure bitrate") when there is no bitrate, so that my GUI does not crash.
Where is the mistake in my code, and how can I make it better?
import json
import shlex
import subprocess
cmd = "ffprobe -v quiet -print_format json -show_streams"
args = shlex.split(cmd)
myurl = "udp://#239.168.2.6:2113"# this is my video stream
args.append(myurl)
ffprobeOutput = subprocess.check_output(args).decode('utf-8')
ffprobeOutput = json.loads(ffprobeOutput)
video_stream = next((stream for stream in ffprobeOutput['streams']
                     if stream['codec_type'] == 'video'))

if int(video_stream['bit_rate']) == True:
    bit_rate1 = int(video_stream['bit_rate'])
    print(bit_rate1)
else:
    print('can not measure bitrate')
My video stream does have a bitrate parameter, but my if statement always takes the else branch (print('can not measure bitrate')).
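For illustration, this is the kind of existence check I assume I need (just a sketch; I am guessing that 'bit_rate' is simply missing from the stream dictionary when ffprobe cannot measure it):

# Sketch, assuming the same ffprobe JSON as above: test whether the key exists
# instead of comparing the integer value to True.
bit_rate = video_stream.get('bit_rate')
if bit_rate is not None:
    print(int(bit_rate))
else:
    print('can not measure bitrate')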
Hello Stack community,
I'm reading frames from an IP-camera stream and storing them in a list so I can later create a video file.
I'm using the Python OpenCV library and it works well, but...
The frames sent from the IP camera should be h264-compressed, but when I check the size of the frames they are 25 MB each for a 4K stream, so I run out of memory quickly.
This is not the code, but similar to it:
import cv2

cap = cv2.VideoCapture(0)
frames = []

while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        frame = cv2.flip(frame, 0)
        frames.append(frame)

cap.release()

out = cv2.VideoWriter('output.avi', -1, 20.0, (640, 480))
for frm in frames:
    out.write(frm)
out.release()

cv2.destroyAllWindows()
It seems like ret, frame = cap.read() unpacks (decodes) the frame?
This adds extra processing on every loop iteration and is unnecessary for what I intend to do with the script. Is there a way to retrieve frames without unpacking them?
Sorry in advance for my probable ignorance.
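For what it's worth, the workaround I'm currently considering is to write each frame straight to the VideoWriter inside the loop, so nothing accumulates in a Python list (a rough sketch, not my real code; the 640x480 size and 20 FPS are placeholders that would have to match the actual capture):

import cv2

cap = cv2.VideoCapture(0)
# Placeholder size/FPS; these must match the frames actually produced by cap.
out = cv2.VideoWriter('output.avi', -1, 20.0, (640, 480))

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    frame = cv2.flip(frame, 0)
    out.write(frame)  # write immediately instead of appending to a list

cap.release()
out.release()
cv2.destroyAllWindows()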
I built a test sample for reading an h264 stream into memory using ffmpeg-python.
The sample reads the data from a file (I don't have a camera for testing it).
I also tested the code reading from an RTSP stream.
Here is the code (please read the comments):
import ffmpeg
import threading
import io
import subprocess as sp  # needed below for sp.TimeoutExpired

in_filename = 'test_vid.264'  # Input file for testing (".264" or ".h264" is a convention for an elementary h264 video stream file)

## Build synthetic video, for testing:
################################################
# ffmpeg -y -r 10 -f lavfi -i testsrc=size=192x108:rate=1 -c:v libx264 -crf 23 -t 50 test_vid.264
width, height = 192, 108

(
    ffmpeg
    .input('testsrc=size={}x{}:rate=1'.format(width, height), f='lavfi')
    .output(in_filename, vcodec='libx264', crf=23, t=50)
    .overwrite_output()
    .run()
)
################################################

# Use ffprobe to get video frames resolution
###############################################
# p = ffmpeg.probe(in_filename, select_streams='v');
# width = p['streams'][0]['width']
# height = p['streams'][0]['height']
###############################################

# Stream the video as an array of bytes (simulate the stream from the camera for testing)
###############################################
## https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md
#streaming_process = (
#    ffmpeg
#    .input(in_filename)
#    .video                           # Video only (no audio).
#    .output('pipe:', format='h264')
#    .run_async(pipe_stdout=True)     # Run asynchronously, and stream to stdout
#)
###############################################

# Read from stdout in chunks of 16K bytes
def reader():
    chunk_len_in_byte = 16384  # I don't know what the optimal chunk size is

    # Read until the number of bytes read is less than chunk_len_in_byte
    # Also stop after 10000 chunks (just for testing)
    chunks_counter = 0
    while chunks_counter < 10000:
        in_bytes = process.stdout.read(chunk_len_in_byte)  # Read 16KBytes from PIPE.
        stream.write(in_bytes)  # Write data to the in-memory bytes stream
        chunks_counter += 1

        if len(in_bytes) < chunk_len_in_byte:
            break

# Use a public RTSP stream for testing:
# in_stream = "rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov"

# Execute ffmpeg as an asynchronous sub-process.
# The input is in_filename, and the output is a PIPE.
# Note: you should replace the input from file to camera (I might have forgotten an argument that tells ffmpeg to expect an h264 input stream).
process = (
    ffmpeg
    .input(in_filename)  # .input(in_stream)
    .video
    .output('pipe:', format='h264')
    .run_async(pipe_stdin=True, pipe_stdout=True)
)

# Open the in-memory bytes stream
stream = io.BytesIO()

thread = threading.Thread(target=reader)
thread.start()

# Join the thread, and wait for the process to end.
thread.join()

try:
    process.wait(timeout=5)
except sp.TimeoutExpired:
    process.kill()  # Kill the subprocess in case of a timeout (there might be a timeout because the input stream is still live).

#streaming_process.wait()  # if streaming_process is used

stream.seek(0)  # Seek to the beginning of the stream.

# Write the result to "in_vid.264" file for testing (the file is playable).
with open("in_vid.264", "wb") as f:
    f.write(stream.getvalue())
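If you want to point the same pipeline at the public RTSP test stream mentioned in the comments instead of the local file, the change is roughly the following (a sketch; rtsp_transport='tcp' is my assumption here, drop it if your source only supports UDP):

# Rough sketch: same pipeline, reading the public RTSP test stream from the
# comments above instead of the local test file.
in_stream = "rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov"

process = (
    ffmpeg
    .input(in_stream, rtsp_transport='tcp')  # rtsp_transport is an assumption
    .video
    .output('pipe:', format='h264')
    .run_async(pipe_stdout=True)
)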
In case you find it useful, I may add some more background descriptions before the code.
Please let me know if the code is working with a camera, and what you had to modify.
I'm working on an experiment concerned with spatial sound perception. In this experiment, different sounds should be presented simultaneously from up to eight speakers. For this purpose, I would like to write Python code for OS X (10.10.5) that can read a multichannel sound file and send each channel of this sound file to a designated speaker (via an appropriate hardware device).
I came across a rather convenient solution for the "present from multiple speakers" part of the problem: following this post, it can easily be done for mono/stereo files by adding more entries to the channel_map in PyAudio. The (slightly modified) code looks like this:
import pyaudio
import wave
import sys

chunk = 4096
PyAudio = pyaudio.PyAudio

if len(sys.argv) < 2:
    print("Plays a wave file.\n\nUsage: %s filename.wav" % sys.argv[0])
    sys.exit(-1)

wf = wave.open(sys.argv[1], 'rb')
p = PyAudio()

channel_map = (0, 1, -1, -1, -1, -1, -1, -1)

try:
    stream_info = pyaudio.PaMacCoreStreamInfo(
        flags=pyaudio.PaMacCoreStreamInfo.paMacCorePlayNice,  # default
        channel_map=channel_map)
except AttributeError:
    print("Sorry, couldn't find PaMacCoreStreamInfo. Make sure that "
          "you're running on Mac OS X.")
    sys.exit(-1)

print("Stream Info Flags:", stream_info.get_flags())
print("Stream Info Channel Map:", stream_info.get_channel_map())
print("channels", wf.getnchannels())
print('sample width', wf.getsampwidth())

stream = p.open(
    format=p.get_format_from_width(wf.getsampwidth()),
    channels=wf.getnchannels(),
    rate=wf.getframerate(),
    output=True,
    output_host_api_specific_stream_info=stream_info)

data = wf.readframes(chunk)

while data != b'':  # readframes() returns bytes, so compare with b''
    stream.write(data)
    data = wf.readframes(chunk)

stream.stop_stream()
stream.close()

p.terminate()
However, does this also work with multichannel sound files in PyAudio? Is it possible to read a multichannel (i.e. 8-channel) sound file and send specific channels to different output devices (e.g. speakers) with PyAudio?
If yes, can someone provide an example of how it can be done? I don't mind digging some more into libraries/modules but providing an example for the code in question would help me (as a novice) a lot.
If PyAudio is not the right choice, I would really appreciate any further recommendations/ideas/comments on how it can be done.
Thanks a lot!
Malte
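P.S. To make the goal a bit more concrete, this is roughly what I imagine the 8-channel case looking like (just a sketch; I don't know whether PyAudio actually supports it, and it assumes both the output device and the WAV file have eight channels):

import pyaudio

# Sketch only: route WAV channel i to output channel i on an 8-channel device.
channel_map = (0, 1, 2, 3, 4, 5, 6, 7)
stream_info = pyaudio.PaMacCoreStreamInfo(
    flags=pyaudio.PaMacCoreStreamInfo.paMacCorePlayNice,
    channel_map=channel_map)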
I want to process mms video stream with OpenCV using Python.
The stream comes from an IP camera I have no control over (traffic monitor).
The stream is available as mms or mmst schemes -
mms://194.90.203.111/cam2
plays on both VLC and Windows Media Player.
mmst://194.90.203.111/cam2
works only on VLC.
I've tried to change the scheme to HTTP by re-streaming with FFmpeg and VLC but it didn't work.
As far as I understand, mms streams are encoded with Windows Media Video. I had no luck adding '.mjpeg' at the end of the URI, and I have yet to find out which types of streams OpenCV accepts.
Here's my code -
import cv2, platform
#import numpy as np

cam = "mms://194.90.203.111/cam2"
#cam = 0  # Use local webcam.

cap = cv2.VideoCapture(cam)
if not cap:
    print("!!! Failed VideoCapture: invalid parameter!")

while(True):
    # Capture frame-by-frame
    ret, current_frame = cap.read()
    if type(current_frame) == type(None):
        print("!!! Couldn't read frame!")
        break

    # Display the resulting frame
    cv2.imshow('frame', current_frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# release the capture
cap.release()
cv2.destroyAllWindows()
What am I missing? What type of video streams can OpenCV capture?
Is there an elegant solution without scheme change or transcoding?
Thanks!
Python 2.7.8 and OpenCV 2.4.9, both x86, on Windows 7 x64.
Solved using FFmpeg and FFserver. Note FFserver only works on Linux.
The solution uses python code from here as suggested by Ryan.
The flow is as follows:
1. Start an FFserver background process with the desired configuration (mjpeg in this case).
2. Run FFmpeg with the mmst stream as input, streaming its output to localhost.
3. Run the Python script to open the localhost stream and decode it frame by frame.
Run FFserver
ffserver -d -f /etc/ffserver.conf
On a second terminal run FFmpeg
ffmpeg -i mmst://194.90.203.111/cam2 http://localhost:8090/cam2.ffm
The Python code. In this case, the code will open a window with the video stream.
import cv2, platform
import numpy as np
import urllib
import os

cam2 = "http://localhost:8090/cam2.mjpeg"
stream = urllib.urlopen(cam2)

bytes = ''
while True:
    # to read an mjpeg frame -
    bytes += stream.read(1024)
    a = bytes.find('\xff\xd8')  # JPEG start-of-image marker
    b = bytes.find('\xff\xd9')  # JPEG end-of-image marker
    if a != -1 and b != -1:
        jpg = bytes[a:b+2]
        bytes = bytes[b+2:]
        frame = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8), cv2.CV_LOAD_IMAGE_COLOR)
        # we now have the frame stored in frame.
        cv2.imshow('cam2', frame)

    # Press 'q' to quit
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()
ffserver.config -
Port 8090
BindAddress 0.0.0.0
MaxClients 10
MaxBandWidth 50000
CustomLog -
#NoDaemon
<Feed cam2.ffm>
File /tmp/cam2.ffm
FileMaxSize 1G
ACL allow 127.0.0.1
ACL allow localhost
</Feed>
<Stream cam2.mjpeg>
Feed cam2.ffm
Format mpjpeg
VideoFrameRate 25
VideoBitRate 10240
VideoBufferSize 20480
VideoSize 320x240
VideoQMin 3
VideoQMax 31
NoAudio
Strict -1
</Stream>
<Stream stat.html>
Format status
# Only allow local people to get the status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</Stream>
<Redirect index.html>
URL http://www.ffmpeg.org/
</Redirect>
Note that this ffserver.config needs more fine-tuning, but it works rather well and produces frames very close to the source, with only a little frame freezing.
I want to use my Raspberry Pi camera module to scan QR codes.
For detecting and decoding QR codes I want to use zbar.
My current code:
import io
import time
import picamera
import zbar
import Image
from sys import argv  # argv is used below

if len(argv) < 2: exit(1)

# Create an in-memory stream
my_stream = io.BytesIO()

with picamera.PiCamera() as camera:
    camera.start_preview()
    # Camera warm-up time
    time.sleep(2)
    camera.capture(my_stream, 'jpeg')

scanner = zbar.ImageScanner()
scanner.parse_config('enable')

pil = Image.open(argv[1]).convert('L')
width, height = pil.size
raw = pil.tostring()

my_stream = zbar.Image(width, height, 'Y800', raw)

scanner.scan(image)

for symbol in image:
    print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data
As you can see, I want to create a picture stream and send this stream to zbar to check whether the picture contains a QR code.
I am not able to run this code; this error is the result:
Segmentation fault
------------------ (program exited with code: 139) Press return to continue
I can't find any solution for how to fix this bug. Any ideas?
Kind regards
The shortcoming of all the other answers is that they have a large amount of DELAY: what they scan and display on the screen is actually a frame taken several seconds earlier.
This is due to the slow CPU of the Raspberry Pi: the camera's frame rate is much higher than the rate at which our software can read and scan frames.
With a lot of effort, I finally made this code, which has LITTLE DELAY. When you give it a QR code/barcode, it will give you a result in less than a second.
The trick I use is explained in the code.
import cv2
import cv2.cv as cv
import numpy
import zbar
import time
import threading

'''
LITTLE-DELAY BarCodeScanner
Author: Chen Jingyi (From FZYZ Junior High School, China)
PS. If your pi's V4L is not available, the cv window may show some errors sometimes, but the other parts of this code work fine.
'''

class BarCodeScanner(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.WINDOW_NAME = 'Camera'
        self.CV_SYSTEM_CACHE_CNT = 5  # Cv has a 5-frame cache
        self.LOOP_INTERVAL_TIME = 0.2

        cv.NamedWindow(self.WINDOW_NAME, cv.CV_WINDOW_NORMAL)
        self.cam = cv2.VideoCapture(-1)

    def scan(self, aframe):
        imgray = cv2.cvtColor(aframe, cv2.COLOR_BGR2GRAY)
        raw = str(imgray.data)

        scanner = zbar.ImageScanner()
        scanner.parse_config('enable')

        #print 'ScanZbar', time.time()
        width = int(self.cam.get(cv.CV_CAP_PROP_FRAME_WIDTH))
        height = int(self.cam.get(cv.CV_CAP_PROP_FRAME_HEIGHT))
        imageZbar = zbar.Image(width, height, 'Y800', raw)
        scanner.scan(imageZbar)
        #print 'ScanEnd', time.time()

        for symbol in imageZbar:
            print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data

    def run(self):
        #print 'BarCodeScanner run', time.time()
        while True:
            #print time.time()
            ''' Why read several times and throw the data away: I guess OpenCV has a `cache-queue` whose length is 5.
            `read()` will *dequeue* a frame from it if it is not empty, otherwise it waits until there is one.
            When the camera has a new frame, if the queue is not full, the frame is *enqueued*, otherwise it is thrown away.
            So in this case, the frame rate is far higher than the rate at which the while loop executes. So when the code gets here, the queue is full.
            Therefore, if we want the newest frame, we need to dequeue the 5 frames already in the queue, which are useless because they are old. That's why.
            '''
            for i in range(0, self.CV_SYSTEM_CACHE_CNT):
                #print 'Read2Throw', time.time()
                self.cam.read()
            #print 'Read2Use', time.time()
            img = self.cam.read()
            self.scan(img[1])

            cv2.imshow(self.WINDOW_NAME, img[1])
            cv.WaitKey(1)
            #print 'Sleep', time.time()
            time.sleep(self.LOOP_INTERVAL_TIME)

        self.cam.release()

scanner = BarCodeScanner()
scanner.start()
In the line
scanner.scan(image)
you're using a variable that hasn't appeared in the code before. Because zbar is written in C, it doesn't catch that the variable is undefined, and the library tries to read garbage data as if it were an image. Hence, the segfault. I'm guessing you meant my_stream instead of image.
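For completeness, the tail of the question's script with that guess applied might look like this (a sketch; everything above it stays as-is):

# Sketch of the corrected tail: scan and iterate over the zbar image that was
# actually created (my_stream), instead of the undefined name `image`.
my_stream = zbar.Image(width, height, 'Y800', raw)
scanner.scan(my_stream)

for symbol in my_stream:
    print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data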
I'm using QR decoding on a Raspberry Pi for my project. I solved it by using the subprocess module.
Here is my function for QR decoding:
import subprocess

def detect():
    """Detects a QR code from the camera and returns the string that code represents.

    return -- QR code from the image, as a string
    """
    subprocess.call(["raspistill -n -t 1 -w 120 -h 120 -o cam.png"], shell=True)

    process = subprocess.Popen(["zbarimg -D cam.png"], stdout=subprocess.PIPE, shell=True)
    (out, err) = process.communicate()

    qr_code = None

    # out looks like "QR-code: Xuz213asdY" so you need
    # to remove the first 8 characters plus whitespace
    if len(out) > 8:
        qr_code = out[8:].strip()

    return qr_code
You can easily add parameters to the function, such as img_width and img_height, and change this part of the code:
"raspistill -n -t 1 -w 120 -h 120 -o cam.png"
to
"raspistill -n -t 1 -w %d -h %d -o cam.png" % (img_width, img_height)
if you want a different image size for decoding.
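For example, the parameterized variant described above could look roughly like this (a sketch; img_width and img_height default to the original 120x120):

import subprocess

def detect(img_width=120, img_height=120):
    """Detects a QR code from the camera; returns the decoded string (or None)."""
    # Same flow as above, but the capture size is now a parameter.
    subprocess.call(["raspistill -n -t 1 -w %d -h %d -o cam.png" % (img_width, img_height)],
                    shell=True)
    process = subprocess.Popen(["zbarimg -D cam.png"], stdout=subprocess.PIPE, shell=True)
    (out, err) = process.communicate()

    qr_code = None
    # out looks like "QR-code: Xuz213asdY", so strip the first 8 characters plus whitespace
    if len(out) > 8:
        qr_code = out[8:].strip()
    return qr_code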
After reading this, I was able to come up with a pythonic solution involving OpenCV.
First, you build OpenCV on the Pi by following these instructions. That will probably take several hours to complete.
Now reboot the Pi and use the following script (assuming you have python-zbar installed) to get the QR/barcode data:
import cv2
import cv2.cv as cv
import numpy
import zbar

class test():
    def __init__(self):
        cv.NamedWindow("w1", cv.CV_WINDOW_NORMAL)
        # self.capture = cv.CaptureFromCAM(camera_index)  # for some reason, this doesn't work
        self.capture = cv.CreateCameraCapture(-1)
        self.vid_contour_selection()

    def vid_contour_selection(self):
        while True:
            self.frame = cv.QueryFrame(self.capture)
            aframe = numpy.asarray(self.frame[:,:])
            g = cv.fromarray(aframe)
            g = numpy.asarray(g)

            imgray = cv2.cvtColor(g, cv2.COLOR_BGR2GRAY)
            raw = str(imgray.data)

            scanner = zbar.ImageScanner()
            scanner.parse_config('enable')

            imageZbar = zbar.Image(self.frame.width, self.frame.height, 'Y800', raw)
            scanner.scan(imageZbar)

            for symbol in imageZbar:
                print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data

            cv2.imshow("w1", aframe)

            c = cv.WaitKey(5)
            if c == 110:  # pressing the 'n' key will cause the program to exit
                exit()

p = test()
Note: I had to turn the Raspberry Pi camera's lens counterclockwise about 1/4 to 1/3 of a turn before zbar was able to detect the QR/barcodes.
With the above code, whenever zbar detects a QR/barcode, the decoded data is printed to the console. It runs continuously, stopping only when the n key is pressed.
For anyone who is still looking for a solution to this...
This code is ugly, but it works pretty well from a regular webcam; I haven't tried the Pi camera yet. I'm new to Python, so this is the best I could come up with that works in both Python 2 and 3.
Make a bash script called kill.sh and make it executable (chmod +x)...
#kill all running zbar tasks ... call from python
ps -face | grep zbar | awk '{print $2}' | xargs kill -s KILL
Then do a system call from Python like so...
import sys
import os

def start_cam():
    while True:
        # Initializes an instance of zbarcam on the command line to detect barcode data strings.
        p = os.popen('/usr/bin/zbarcam --prescale=300x200', 'r')
        # Barcode variable read by Python from the command line.
        print("Please Scan a QRcode to begin...")
        barcode = p.readline()
        barcodedata = str(barcode)[8:]
        if barcodedata:
            print("{0}".format(barcodedata))
            # Kills the webcam window by executing the bash file
            os.system("/home/pi/Desktop/kill.sh")

start_cam()
Hopefully this helps people with the same questions in the future!
Quite a late response, but I ran into a number of issues while trying to get zbar working. Though I was using a USB webcam, I had to install multiple libraries before I could get zbar to install. I installed fswebcam, python-zbar, libzbar-dev and finally ran setup.py.
More importantly, the zbar from SourceForge did not work for me, but the one from GitHub, which has a Python wrapper, worked for me.
I documented my steps at http://techblog.saurabhkumar.com/2015/09/scanning-barcodes-using-raspberry-pi.html in case it helps.
Just a small modification of Dan2theR's answer, because I don't want to create another shell file.
import sys
import os

p = os.popen('/usr/bin/zbarcam --prescale=300x300 --Sdisable -Sqrcode.enable', 'r')

def start_scan():
    global p
    while True:
        print('Scanning')
        data = p.readline()
        qrcode = str(data)[8:]
        if(qrcode):
            print(qrcode)

try:
    start_scan()
except KeyboardInterrupt:
    print('Stop scanning')
finally:
    p.close()