I'm making a program that lets me record video using the Raspberry Pi camera library for Python (picamera). There's one small issue: once you run camera = picamera.PiCamera(), the camera is enabled and in use until the end of the program. What I would like to do is enable it only while recording and release it when recording is done, while still keeping my program active.
What I need:
How do I create a global variable for the picamera object, and how do I terminate it?
Part of my code that's relevant:
camera = picamera.PiCamera()
camera.resolution = (1920, 1080)
filename = ""

# Start recording video into a raw file
def start_record():
    print("Starting recording")
    reset_tmp()
    global filename
    filename = "vid/" + str(int(time.time()))
    camera.start_recording(filename + ".h264")
# Stop recording, convert the h264 raw file to mp4, and remove the raw file
def stop_record():
    print("Stopping recording")
    reset_tmp()
    global filename
    camera.stop_recording()
    os.system("MP4Box -fps 30 -add " + filename + ".h264 " + filename + ".mp4")
    os.system("rm " + filename + ".h264")
Updated, working version of the code:
For those looking for the solution in the title: you can use the del keyword to get rid of variables, but the picamera library provides a .close() method to terminate the camera object. Here's my fixed code:
camera = None
filename = ""

# Start recording video into a raw file
def start_record():
    print("Starting recording")
    reset_tmp()
    global filename
    filename = "vid/" + str(int(time.time()))
    global camera
    camera = picamera.PiCamera()
    camera.resolution = (1920, 1080)
    camera.start_recording(filename + ".h264")

# Stop recording, convert the h264 raw file to mp4, and remove the raw file
def stop_record():
    print("Stopping recording")
    reset_tmp()
    global filename
    global camera
    camera.stop_recording()
    camera.close()
    os.system("MP4Box -fps 30 -add " + filename + ".h264 " + filename + ".mp4")
    os.system("rm " + filename + ".h264")
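As an aside, objects that expose a .close() method can often be managed with a context manager so the resource is released even if an error occurs (picamera's PiCamera supports this directly: `with picamera.PiCamera() as camera:`). A minimal sketch of the mechanism, using a hypothetical Resource class as a stand-in for the camera handle:

```python
class Resource:
    """Hypothetical stand-in for a device handle such as picamera.PiCamera."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()   # released even if the body raised
        return False   # do not swallow exceptions

with Resource() as r:
    pass               # record here
print(r.closed)        # → True
```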
No need for globals here. Just use return values:
import os
import subprocess
import time

import picamera

def start_record(resolution=(1920, 1080)):
    """Start recording video into a raw file"""
    print("Starting recording")
    camera = picamera.PiCamera()
    camera.resolution = resolution
    reset_tmp()
    filename = os.path.join('vid', '{}.h264'.format(int(time.time())))
    camera.start_recording(filename)
    return camera, filename

def stop_record(camera, filename):
    """Stop recording, convert the h264 raw file to mp4, and remove the raw file"""
    print("Stopping recording")
    reset_tmp()
    camera.stop_recording()
    camera.close()  # release the camera so it can be re-created next time
    mp4_fn = os.path.splitext(filename)[0] + '.mp4'
    subprocess.call(['MP4Box', '-fps', '30', '-add', filename, mp4_fn])
    os.remove(filename)
Now call the start function:
camera, filename = start_record()
and later the stop function:
stop_record(camera, filename)
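The pattern above, returning resource handles instead of mutating globals, can be exercised without a camera by swapping in a stand-in recorder (the FakeCamera class below is mine, for illustration only):

```python
import os
import time

class FakeCamera:
    """Hypothetical stand-in for picamera.PiCamera."""
    def __init__(self):
        self.recording = False

    def start_recording(self, filename):
        self.recording = True

    def stop_recording(self):
        self.recording = False

def start_record():
    camera = FakeCamera()
    filename = os.path.join('vid', '{}.h264'.format(int(time.time())))
    camera.start_recording(filename)
    return camera, filename      # hand the state back to the caller

def stop_record(camera, filename):
    camera.stop_recording()
    return os.path.splitext(filename)[0] + '.mp4'  # derive the mp4 name

camera, filename = start_record()
mp4_name = stop_record(camera, filename)
print(mp4_name.endswith('.mp4'))  # → True
```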
How about:
camera = None
filename = ""

# Start recording video into a raw file
def start_record():
    print("Starting recording")
    reset_tmp()
    global filename
    filename = "vid/" + str(int(time.time()))
    global camera
    camera = picamera.PiCamera()
    camera.resolution = (1920, 1080)
    camera.start_recording(filename + ".h264")
The basic idea is to move the code that starts the camera into a function, where global can be used to modify a global variable.
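As a minimal illustration of how global lets a function rebind a module-level name (the string "opened" stands in for a real picamera.PiCamera() instance):

```python
camera = None

def start():
    global camera       # rebinding the module-level name requires `global`
    camera = "opened"   # stand-in for picamera.PiCamera()

start()
print(camera)           # → opened
```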
Related
I'm looking to record a Twitch livestream by feeding it the direct livestream URL using streamlink.streams(url) (which returns an .m3u8 URL). With this, I have no problem reading the stream and even writing a few images from it, but when it comes to writing it as a video, I get errors.
P.S.: Yes, I know there are other options like Streamlink and yt-dlp, but I want to operate solely in Python, not via the CLI, which I believe is all those two offer (for recording).
Here's what I currently have:
if streamlink.streams(url):
    stream = streamlink.streams(url)['best']
    stream = str(stream).split(', ')
    stream = stream[1].strip("'")

cap = cv2.VideoCapture(stream)
gst_out = "appsrc ! video/x-raw, format=BGR ! queue ! nvvidconv ! omxh264enc ! h264parse ! qtmux ! filesink location=stream "
out = cv2.VideoWriter(gst_out, cv2.VideoWriter_fourcc(*'mp4v'), 30, (1920, 1080))

while True:
    _, frame = cap.read()
    out.write(frame)
For this code, I get this error msg:
[tls # 0x1278a74f0] Error in the pull function.
And if I remove gst_out and feed stream instead, as well as moving cap and out into the while loop like so:
if streamlink.streams(url):
    stream = streamlink.streams(url)['best']
    stream = str(stream).split(', ')
    stream = stream[1].strip("'")

while True:
    cap = cv2.VideoCapture(stream)
    _, frame = cap.read()
    out = cv2.VideoWriter(stream, cv2.VideoWriter_fourcc(*'mp4v'), 30, (1920, 1080))
    out.write(frame)
I get:
OpenCV: FFMPEG: tag 0x7634706d/'mp4v' is not supported with codec id 12 and format 'hls / Apple HTTP Live Streaming'
What am I missing here?
The first part uses GStreamer syntax, and OpenCV for Python is most likely not built with GStreamer support.
The answer is going to focus on the second part (also because I don't know GStreamer that well).
There are several issues:
cap = cv2.VideoCapture(stream) should be before the while True loop.
out = cv2.VideoWriter(stream, cv2.VideoWriter_fourcc(*'mp4v'), 30, (1920, 1080)) should be before the while True loop.
The first argument of cv2.VideoWriter should be MP4 file name, and not stream.
For getting a valid output file, we have to execute out.release() after the loop, but the loop may never end.
It is recommended to get frame size and rate of the input video, and set VideoWriter accordingly:
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))
video_file_name = 'output.mp4'
out = cv2.VideoWriter(video_file_name, cv2.VideoWriter_fourcc(*'mp4v'), fps, (width, height)) # Open video file for writing
It is recommended to break the loop if ret is False:
ret, frame = cap.read()
if not ret:
    break
One option for ending the recording is when the user presses the Esc key.
Break the loop if cv2.waitKey(1) == 27.
cv2.waitKey(1) only works after executing cv2.imshow.
A simple solution is executing cv2.imshow every 30 frames (for example).
if frame_counter % 30 == 0:
    cv2.imshow('frame', frame)  # Show frame every 30 frames (for testing)

if cv2.waitKey(1) == 27:  # Press Esc to stop recording (cv2.waitKey only works when cv2.imshow is used)
    break
Complete code sample:
from streamlink import Streamlink
import cv2

def stream_to_url(url, quality='best'):
    session = Streamlink()
    streams = session.streams(url)
    if streams:
        return streams[quality].to_url()
    else:
        raise ValueError('Could not locate your stream.')

url = 'https://www.twitch.tv/noraexplorer'  # Need to log in to twitch.tv first (using the browser)...
quality = 'best'
stream_url = stream_to_url(url, quality)  # Get the video URL
cap = cv2.VideoCapture(stream_url, cv2.CAP_FFMPEG)  # Open video stream for capturing

# Get frame size and rate of the input video
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))

video_file_name = 'output.mp4'
out = cv2.VideoWriter(video_file_name, cv2.VideoWriter_fourcc(*'mp4v'), fps, (width, height))  # Open video file for writing

frame_counter = 0

while True:
    ret, frame = cap.read()
    if not ret:
        break
    if frame_counter % 30 == 0:
        cv2.imshow('frame', frame)  # Show frame every 30 frames (for testing)
    out.write(frame)  # Write frame to output.mp4
    if cv2.waitKey(1) == 27:  # Press Esc to stop recording (cv2.waitKey only works when cv2.imshow is used)
        break
    frame_counter += 1

cap.release()
out.release()
cv2.destroyAllWindows()
Testing the setup using FFplay and subprocess module:
from streamlink import Streamlink
import subprocess

def stream_to_url(url, quality='best'):
    session = Streamlink()
    streams = session.streams(url)
    if streams:
        return streams[quality].to_url()
    else:
        raise ValueError('Could not locate your stream.')

#url = 'https://www.twitch.tv/noraexplorer'  # Need to log in to twitch.tv first (using the browser)...
url = 'https://www.twitch.tv/valorant'
quality = 'best'
stream_url = stream_to_url(url, quality)  # Get the video URL
subprocess.run(['ffplay', stream_url])
Update:
Using ffmpeg-python for reading the video, and OpenCV for recording the video:
In cases where cv2.VideoCapture is not working, we may use the FFmpeg CLI as a sub-process.
The ffmpeg-python module is a Python binding for the FFmpeg CLI.
Using ffmpeg-python is almost like using the subprocess module; it is used here mainly to simplify the usage of FFprobe.
Using FFprobe for getting video frames resolution and framerate (without using OpenCV):
p = ffmpeg.probe(stream_url, select_streams='v')
width = p['streams'][0]['width']
height = p['streams'][0]['height']
r_frame_rate = p['streams'][0]['r_frame_rate']  # May return 60000/1001

if '/' in r_frame_rate:
    fps = float(r_frame_rate.split("/")[0]) / float(r_frame_rate.split("/")[1])  # Convert from 60000/1001 to 59.94
elif r_frame_rate != '0':
    fps = float(r_frame_rate)
else:
    fps = 30  # Used as default
Getting the framerate may be a bit of a challenge...
Note: ffprobe CLI should be in the execution path.
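The rational-rate parsing above can be wrapped in a small self-contained helper (the function name parse_frame_rate is mine, not from the original code):

```python
from fractions import Fraction

def parse_frame_rate(r_frame_rate, default=30.0):
    """Convert an FFprobe r_frame_rate string such as '60000/1001' to a float."""
    if r_frame_rate in ('', '0', '0/0'):
        return default                 # fall back when the rate is unknown
    return float(Fraction(r_frame_rate))  # Fraction accepts 'num/den' strings

print(round(parse_frame_rate('60000/1001'), 2))  # → 59.94
print(parse_frame_rate('25'))                    # → 25.0
print(parse_frame_rate('0'))                     # → 30.0
```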
Start FFmpeg sub-process with stdout as pipe:
ffmpeg_process = (
    ffmpeg
    .input(stream_url)
    .video
    .output('pipe:', format='rawvideo', pix_fmt='bgr24')
    .run_async(pipe_stdout=True)
)
Note: ffmpeg CLI should be in the execution path.
Reading a frame from the pipe and converting it from bytes to a NumPy array:
in_bytes = ffmpeg_process.stdout.read(width*height*3)
frame = np.frombuffer(in_bytes, np.uint8).reshape([height, width, 3])
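The bytes-to-array step can be checked without a stream by feeding synthetic raw BGR data (the 4x2 frame size here is just for illustration):

```python
import numpy as np

width, height = 4, 2
in_bytes = bytes(range(width * height * 3))  # fake raw BGR24 frame
frame = np.frombuffer(in_bytes, np.uint8).reshape([height, width, 3])

print(frame.shape)           # → (2, 4, 3)
print(frame[0, 0].tolist())  # → [0, 1, 2]  (B, G, R of the first pixel)
```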
Closing FFmpeg sub-process:
Closing stdout pipe ends FFmpeg (with "broken pipe" error).
ffmpeg_process.stdout.close()
ffmpeg_process.wait() # Wait for the sub-process to finish
Complete code sample:
from streamlink import Streamlink
import cv2
import numpy as np
import ffmpeg

def stream_to_url(url, quality='best'):
    session = Streamlink()
    streams = session.streams(url)
    if streams:
        return streams[quality].to_url()
    else:
        raise ValueError('Could not locate your stream.')

#url = 'https://www.twitch.tv/noraexplorer'  # Need to log in to twitch.tv first (using the browser)...
url = 'https://www.twitch.tv/valorant'
quality = 'best'
stream_url = stream_to_url(url, quality)  # Get the video URL
#subprocess.run(['ffplay', stream_url])  # Use FFplay for testing

# Use FFprobe to get video frame resolution and framerate.
################################################################################
p = ffmpeg.probe(stream_url, select_streams='v')
width = p['streams'][0]['width']
height = p['streams'][0]['height']
r_frame_rate = p['streams'][0]['r_frame_rate']  # May return 60000/1001

if '/' in r_frame_rate:
    fps = float(r_frame_rate.split("/")[0]) / float(r_frame_rate.split("/")[1])  # Convert from 60000/1001 to 59.94
elif r_frame_rate != '0':
    fps = float(r_frame_rate)
else:
    fps = 30  # Used as default

#cap = cv2.VideoCapture(stream_url, cv2.CAP_FFMPEG)  # Open video stream for capturing
#width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
#height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
#fps = int(cap.get(cv2.CAP_PROP_FPS))
################################################################################
# Use FFmpeg sub-process instead of using cv2.VideoCapture
################################################################################
ffmpeg_process = (
    ffmpeg
    .input(stream_url, an=None)  # an=None applies the -an argument (ignore the input audio - not required, just more elegant)
    .video
    .output('pipe:', format='rawvideo', pix_fmt='bgr24')
    .run_async(pipe_stdout=True)
)
################################################################################
video_file_name = 'output.mp4'
out = cv2.VideoWriter(video_file_name, cv2.VideoWriter_fourcc(*'mp4v'), fps, (width, height)) # Open video file for writing
frame_counter = 0

while True:
    #ret, frame = cap.read()
    in_bytes = ffmpeg_process.stdout.read(width*height*3)  # Read raw video frame from stdout as a bytes array
    if len(in_bytes) < width*height*3:  #if not ret:
        break
    frame = np.frombuffer(in_bytes, np.uint8).reshape([height, width, 3])  # Convert bytes array to NumPy array
    if frame_counter % 30 == 0:
        cv2.imshow('frame', frame)  # Show frame every 30 frames (for testing)
    out.write(frame)  # Write frame to output.mp4
    if cv2.waitKey(1) == 27:  # Press Esc to stop recording (cv2.waitKey only works when cv2.imshow is used)
        break
    frame_counter += 1
#cap.release()
ffmpeg_process.stdout.close() # Close stdout pipe (it also closes FFmpeg).
out.release()
cv2.destroyAllWindows()
ffmpeg_process.wait() # Wait for the sub-process to finish
Note:
In case you care about the quality of the recorded video, using cv2.VideoWriter is not the best choice...
At the moment I am reading an IP camera's live image using the following code:
def livestream(self):
    print("start")
    stream = urlopen('http://192.168.4.1:81/stream')
    bytes = b''
    while True:
        try:
            bytes += stream.read(1024)
            a = bytes.find(b'\xff\xd8')
            b = bytes.find(b'\xff\xd9')
            if a != -1 and b != -1:
                jpg = bytes[a:b+2]
                bytes = bytes[b+2:]
                getliveimage = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
                livestreamrotated1 = cv2.rotate(getliveimage, cv2.ROTATE_90_CLOCKWISE)  # here I am rotating the image
                print(type(livestreamrotated1))  # type at this point is <class 'numpy.ndarray'>
                cv2.imshow('video', livestreamrotated1)
                if cv2.waitKey(1) == 27:  # if user hits Esc
                    exit(0)  # exit program
        except Exception as e:
            print(e)
            print("failed at this point")
Now I want to integrate the resulting image into a Kivy GUI and get rid of the while loop, since it freezes my GUI. Unfortunately the loop is necessary to recreate the image byte by byte. I would like to use cv2.VideoCapture instead and schedule it multiple times per second. This is not working at all; I am not able to capture an image from the live stream this way. Where am I going wrong?
cap = cv2.VideoCapture('http://192.168.4.1:81/stream?dummy.jpg')
ret, frame = cap.read()
cv2.imshow('stream',frame)
I read in another post that a file ending like "dummy.jpg" is necessary here, but it is still not working; the program freezes.
Please help. Thank you in advance!
If you want to decouple your reading loop from your GUI loop you can use multithreading to separate the code. You can have a thread running your livestream function and dumping the image out to a global image variable where your GUI loop can pick it up and do whatever to it.
I can't really test out the livestream part of the code, but something like this should work. The read function is an example of how to write a generic looping function that will work with this code.
import cv2
import time
import threading
import numpy as np
from urllib.request import urlopen

# generic threading class
class Reader(threading.Thread):
    def __init__(self, func, *args):
        threading.Thread.__init__(self, target = func, args = args);
        self.start();

# globals for managing shared data
g_stop_threads = False;
g_lock = threading.Lock();
g_frame = None;

# reads frames from vidcap and stores them in g_frame
def read():
    # grab globals
    global g_stop_threads;
    global g_lock;
    global g_frame;

    # open vidcap
    cap = cv2.VideoCapture(0);

    # loop
    while not g_stop_threads:
        # get a frame from camera
        ret, frame = cap.read();

        # replace the global frame
        if ret:
            with g_lock:
                # copy so that we can quickly drop the lock
                g_frame = np.copy(frame);

        # sleep so that someone else can use the lock
        time.sleep(0.03); # in seconds

# your livestream func
def livestream():
    # grab globals
    global g_stop_threads;
    global g_lock;
    global g_frame;

    # open stream
    stream = urlopen('http://192.168.4.1:81/stream')
    bytes = b''

    # process stream into opencv image
    while not g_stop_threads:
        try:
            bytes += stream.read(1024)
            a = bytes.find(b'\xff\xd8')
            b = bytes.find(b'\xff\xd9')
            if a != -1 and b != -1:
                jpg = bytes[a:b+2]
                bytes = bytes[b+2:]
                getliveimage = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
                livestreamrotated1 = cv2.rotate(getliveimage, cv2.ROTATE_90_CLOCKWISE) # here I am rotating the image

                # acquire lock and replace image
                with g_lock:
                    g_frame = livestreamrotated1;

                # sleep to allow other threads to get the lock
                time.sleep(0.03); # in seconds
        except Exception as e:
            print(e)
            print("failed at this point")

def main():
    # grab globals
    global g_stop_threads;
    global g_lock;
    global g_frame;

    # start a thread
    # reader = Reader(read);
    reader = Reader(livestream);

    # show frames from g_frame
    my_frame = None;
    while True:
        # grab lock
        with g_lock:
            # show
            if not g_frame is None:
                # copy so we can drop the lock as fast as possible
                my_frame = np.copy(g_frame);

        # now we can do all the slow manipulation / gui stuff here without the lock
        if my_frame is not None:
            cv2.imshow("Frame", my_frame);

        # break out if 'q' is pressed
        if cv2.waitKey(1) == ord('q'):
            break;

    # stop the threads
    g_stop_threads = True;

if __name__ == "__main__":
    main();
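The producer/consumer handoff above can be exercised without a camera by substituting a synthetic frame source. Everything below is a self-contained sketch; the names are mine, not from the original code:

```python
import threading
import time
import numpy as np

g_lock = threading.Lock()
g_frame = None
g_stop = False

def producer():
    """Stand-in for the capture thread: publishes a new fake frame repeatedly."""
    global g_frame
    i = 0
    while not g_stop:
        frame = np.full((4, 4, 3), i % 256, dtype=np.uint8)  # fake image
        with g_lock:
            g_frame = frame
        i += 1
        time.sleep(0.01)

t = threading.Thread(target=producer, daemon=True)
t.start()

# consumer: copy under the lock, process outside it
my_frame = None
deadline = time.time() + 1.0
while my_frame is None and time.time() < deadline:
    with g_lock:
        if g_frame is not None:
            my_frame = np.copy(g_frame)

g_stop = True
t.join(timeout=1.0)
print(my_frame is not None and my_frame.shape == (4, 4, 3))  # → True
```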
Let me start by saying that I am very new to Python and Raspberry Pi.
I've "made" (more like copied from a lot of different sources and compiled) a module on Windows to capture images from a webcam on a key press and save them in a folder (code attached). It works fine on Windows and repeats the loop, but throws an error on Raspberry Pi after the first loop.
Code for windows:-
# Import Modules #######################################################################################################
from datetime import datetime
import cv2
import time
import queue
import threading

# Module Level Variables ###############################################################################################
inpath = "D:\\Python Projects\\OCR Trial2\\Input\\Training Data\\"
outpath = "D:\\Python Projects\\OCR Trial2\\Output\\"
intpath = "D:\\Python Projects\\OCR Trial2\\Intermediate\\"
file_Prefix = 'IMG100'
file_Extension = '.png'

# Class Definitions ####################################################################################################
class VideoCapture:

    def __init__(self, name):
        self.cap = cv2.VideoCapture(name)
        self.q = queue.Queue()
        t = threading.Thread(target=self._reader)
        t.daemon = True
        t.start()

    def _reader(self):
        while True:
            ret, frame = self.cap.read()
            if not ret:
                break
            if not self.q.empty():
                try:
                    self.q.get_nowait()
                except queue.Empty:
                    pass
            self.q.put(frame)

    def read(self):
        return self.q.get()

# Functions ############################################################################################################
def main():
    while True:
        try:
            windowName = "Live Video Feed"
            cv2.namedWindow(windowName)
            if cv2.waitKey(1) == ord("c"):
                time.sleep(1)
                now = datetime.now()
                formatted_time = now.strftime('%Y-%m-%d %H-%M-%S.%f')[:-3]
                cam = VideoCapture(0 + cv2.CAP_DSHOW)
                frame1 = cam.read()
                cv2.imshow(windowName, frame1)
                cv2.imwrite(intpath + file_Prefix + formatted_time + file_Extension, frame1)
                print(formatted_time)
            else:
                continue
        except:
            pass

# Execute Code #########################################################################################################
if __name__ == "__main__":
    main()
Output for Windows:-
2021-01-06 17-20-05.255
2021-01-06 17-20-07.404
2021-01-06 17-20-08.601
2021-01-06 17-20-10.766
2021-01-06 17-20-12.408
Process finished with exit code -1
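(Editorial aside: the drop-stale-frames trick used in the VideoCapture class above can be tested in isolation, with plain integers standing in for frames:)

```python
import queue

def put_latest(q, item):
    """Keep only the newest item: discard any stale entry before putting."""
    if not q.empty():
        try:
            q.get_nowait()
        except queue.Empty:
            pass
    q.put(item)

q = queue.Queue()
for frame in range(5):      # producer runs faster than the consumer
    put_latest(q, frame)

latest = q.get()
print(latest)               # → 4  (only the newest frame survives)
print(q.empty())            # → True
```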
Code for Raspberry Pi:-
# Import Modules #######################################################################################################
from datetime import datetime
import cv2
import time
import queue
import threading

# Module Level Variables ###############################################################################################
intpath = "/home/pi/Python Images/"
file_Prefix = 'IMG100'
file_Extension = '.png'

# Class Definitions ####################################################################################################
class VideoCapture:

    def __init__(self, name):
        self.cap = cv2.VideoCapture(name)
        self.q = queue.Queue()
        t = threading.Thread(target=self._reader)
        t.daemon = True
        t.start()

    def _reader(self):
        while True:
            ret, frame = self.cap.read()
            if not ret:
                break
            if not self.q.empty():
                try:
                    self.q.get_nowait()
                except queue.Empty:
                    pass
            self.q.put(frame)

    def read(self):
        return self.q.get()

# Functions ############################################################################################################
def main():
    while True:
        try:
            windowName = "Live Video Feed"
            cv2.namedWindow(windowName)
            if cv2.waitKey(1) == ord("c"):
                time.sleep(1)
                now = datetime.now()
                formatted_time = now.strftime('%Y-%m-%d %H-%M-%S.%f')[:-3]
                cam = VideoCapture(0)
                frame1 = cam.read()
                cv2.imshow(windowName, frame1)
                cv2.imwrite(intpath + file_Prefix + formatted_time + file_Extension, frame1)
                print(formatted_time)
            else:
                continue
        except:
            pass

# Execute Code #########################################################################################################
if __name__ == "__main__":
    main()
Output for Raspberry Pi :-
2021-01-06 17-07-59.501
[ WARN:4] global /tmp/pip-wheel-qd18ncao/opencv-python/opencv/modules/videoio/src/cap_v4l.cpp (893) open VIDEOIO(V4L2:/dev/video0): can't open camera by index
Process finished with exit code 137 (interrupted by signal 9: SIGKILL)
The OpenCV module on the Raspberry Pi was installed via pip, not compiled manually. General OpenCV functions like video capture and imshow work fine on the Raspberry Pi, and it captures the first photo successfully, but it cannot capture a second one.
Please suggest what the problem could be and what I can try next.
Edit 1 - Added the whole error after printing the exception:
/home/pi/PycharmProjects/pythonProject/venv/bin/python "/home/pi/PycharmProjects/pythonProject/Image Capture.py"
2021-01-07 15-07-36.555
[ WARN:4] global /tmp/pip-wheel-qd18ncao/opencv-python/opencv/modules/videoio/src/cap_v4l.cpp (893) open VIDEOIO(V4L2:/dev/video0): can't open camera by index
Traceback (most recent call last):
  File "/home/pi/PycharmProjects/pythonProject/Image Capture.py", line 72, in <module>
    main()
  File "/home/pi/PycharmProjects/pythonProject/Image Capture.py", line 59, in main
    frame1 = cam.read()
  File "/home/pi/PycharmProjects/pythonProject/Image Capture.py", line 42, in read
    return self.q.get()
  File "/usr/lib/python3.7/queue.py", line 170, in get
    self.not_empty.wait()
  File "/usr/lib/python3.7/threading.py", line 296, in wait
    waiter.acquire()
KeyboardInterrupt
Process finished with exit code 1
Your mistake may be cam = VideoCapture(0) inside the loop.
You should create it only once, before the loop.
If you try to open it a second time (for example, in the loop), the system can't access it because it is still in use by the previous VideoCapture(0).
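The create-once pattern this describes can be sketched with a hypothetical stand-in for cv2.VideoCapture (a real device typically allows only one open handle at a time):

```python
class FakeCapture:
    """Hypothetical stand-in for cv2.VideoCapture: the device allows one open handle."""
    _open_handles = 0

    def __init__(self):
        if FakeCapture._open_handles > 0:
            raise RuntimeError("device busy")  # what a second open runs into
        FakeCapture._open_handles += 1
        self.frame = 0

    def read(self):
        self.frame += 1
        return True, self.frame

    def release(self):
        FakeCapture._open_handles -= 1

cam = FakeCapture()                          # create once, before the loop
frames = [cam.read()[1] for _ in range(3)]   # reuse inside the loop
cam.release()
print(frames)                                # → [1, 2, 3]
```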
I made this script:
import cv2
import pyaudio
import wave
import threading
import time
import subprocess
import os
import keyboard

class VideoRecorder():

    # Video class based on openCV
    def __init__(self):
        self.open = True
        self.fps = 6                # fps should be the minimum constant rate at which the camera can
        self.fourcc = "MJPG"        # capture images (with no decrease in speed over time; testing is required)
        self.frameSize = (640,480)  # video formats and sizes also depend and vary according to the camera used
        self.video_filename = "temp_video.avi"
        self.video_cap = cv2.VideoCapture(0)
        self.video_writer = cv2.VideoWriter_fourcc(*self.fourcc)
        self.video_out = cv2.VideoWriter(self.video_filename, self.video_writer, self.fps, self.frameSize)
        self.frame_counts = 1
        self.start_time = time.time()

    # Video starts being recorded
    def record(self):
        timer_start = time.time()
        timer_current = 0
        while self.open == True:
            ret, video_frame = self.video_cap.read()
            if self.frame_counts > 10:
                break
            if ret == True:
                self.video_out.write(video_frame)
                self.frame_counts += 1
                print self.frame_counts
                time.sleep(0.16)  # 0.16 delay -> 6 fps
            else:
                #threading.Thread(target=self.stop).start()
                break
        time.sleep(1)
        self.video_out.release()
        cv2.VideoCapture(0).release()
        cv2.destroyAllWindows()
        dwhuiadhuiahdwia = raw_input("Testtidhwuia?")

    # Finishes the video recording therefore the thread too
    def stop(self):
        print "You made it"
        if self.open == True:
            self.open = False
            self.video_out.release()
            self.video_cap.release()
            cv2.destroyAllWindows()
            hduwahduiwahdiu = raw_input("Press enter to continue...")
        else:
            pass

    # Launches the video recording function using a thread
    def start(self):
        video_thread = threading.Thread(target=self.record)
        video_thread.start()

def start_video_recording():
    global video_thread
    video_thread = VideoRecorder()
    video_thread.start()

def stop_AVrecording():
    frame_counts = video_thread.frame_counts
    elapsed_time = time.time() - video_thread.start_time
    recorded_fps = frame_counts / elapsed_time
    print "total frames " + str(frame_counts)
    print "elapsed time " + str(elapsed_time)
    print "recorded fps " + str(recorded_fps)
    video_thread.stop()

    # Makes sure the threads have finished
    time.sleep(1)
    video_thread.stop()
    print ".."

start_video_recording()
duiwhaiudhwauidhwa = raw_input("hello")
It should record video from the camera for about 10 seconds, then save it, and then turn off the camera and close the script when the user presses Enter.
But it doesn't really work.
It records and saves the video, but the camera only turns off when I close the script (the camera is on but isn't recording).
I know this because the LED next to my camera doesn't turn off when I'm prompted to press Enter to continue, but does when I press Enter and the script closes.
I found this, and I haven't tested it yet. But if I were to use that solution and it worked, I'd have to apply it manually on every computer I run the script on.
I'm using the picamera library to create a PiCameraCircularIO stream to write the recording data to. I can't seem to find a reliable method to always create a video file 6 seconds in length. Depending on when I copy the video data to disk, I end up with anywhere between 3 and 6 seconds of video. Here is the code I am using to test this:
"""Testing the video recording settings"""
import datetime as dt
import os
from random import randint
import subprocess
import sys

import picamera

class Camera(object):
    """Camera device"""

    def __init__(self):
        self.device = None  # type: picamera.PiCamera
        self.stream = None  # type: picamera.PiCameraCircularIO

    def initialize_camera(self):
        """Initializes the camera to 1280x720 and start recording video to a circular buffer"""
        print "initializing camera"
        self.device = picamera.PiCamera()
        # set the camera's resolution (1280, 720)
        self.device.resolution = (1280, 720)
        # set the camera's framerate
        self.device.framerate = 24
        self.stream = picamera.PiCameraCircularIO(self.device, seconds=8)
        self.device.start_recording(self.stream, format='h264')
        return

    def stop_camera(self):
        """Stop the recording and turn the camera off"""
        if self.device is not None:
            if self.device.recording:
                self.device.stop_recording()
            self.device.close()
        if self.stream is not None:
            self.stream = None
        return

    def capture_video(self):
        """Store the video from the circular buffer to disk"""
        try:
            if self.device is None:
                # get the camera set up
                self.initialize_camera()
            video_name = dt.datetime.now().strftime(
                '%m-%d-%Y_%H:%M:%S').replace(':', '-') + '.h264'
            video_path = "/home/pi"
            # copy the buffer to disk
            print os.path.join(video_path, video_name)
            self.stream.copy_to(os.path.join(video_path, video_name), seconds=6)
            # add the MP4 wrapper to the h264 file using the MP4Box program
            mp4_command = "MP4Box -add '{0}' '{1}.mp4' -fps 24".format(
                os.path.join(video_path, video_name),
                os.path.join(video_path, os.path.splitext(os.path.basename(video_name))[0]))
            subprocess.call([mp4_command], shell=True)
            # remove the original h264 file
            os.remove(os.path.join(video_path, video_name))
        except BaseException as err:
            print 'error: {0} on line {1}\r'.format(err, sys.exc_info()[-1].tb_lineno)
        return

def main():
    """Main program"""
    camera = Camera()
    camera.initialize_camera()
    camera.device.wait_recording(timeout=randint(9, 20))
    camera.capture_video()
    camera.stop_camera()
    return

if __name__ == '__main__':
    main()
I'm using the random integer to simulate the different points at which I might attempt to copy the recorded data to disk. I have also tried extending/reducing the seconds parameter on the PiCameraCircularIO object, and specifically adding the intra_period, quality, or bitrate parameters to the start_recording call, without any apparent effect on the overall result. Is there something silly I'm overlooking?