I've got a Raspberry Pi Compute Module set up with two Pi NoIR camera modules.
I'm trying to capture two video streams to be used on a Raspberry Pi 3 at a later date. Here is the configuration:
Frame rate = 30
Resolution = 640x480
Language = Python (with picamera)
Chosen libraries = OpenCV 3.1.0
The idea originally was to use the hardware device (e.g. /dev/videoX), but the bcm2835_v4l2 kernel module currently only supports one camera, and I'm not experienced enough with developing kernel modules to try to get it to support two.
I've tried the test code from the picamera docs, but it only works for a single camera. I know that PiCamera(0) or PiCamera(1) will select either camera; however, I do not know how to get the two to record together.
I've tried the code below, and I've tried following this guide for working with Python and OpenCV. The guide only covers one camera, and I need both working.
#!/usr/bin/python
import picamera
import time

cameraOne = picamera.PiCamera(0)
cameraTwo = picamera.PiCamera(1)

cameraOne.resolution = (640, 480)
cameraTwo.resolution = (640, 480)
cameraOne.framerate = 30
cameraTwo.framerate = 30

cameraOne.start_recording('CameraOne.mjpg')
cameraTwo.start_recording('CameraTwo.mjpg')

counter = 0
while 1:
    cameraOne.wait_recording(0.1)
    cameraTwo.wait_recording(0.1)
    counter += 1
    if counter == 30:
        break

cameraOne.stop_recording()
cameraTwo.stop_recording()
The code snippet above generates two 10-second videos, each containing only a single frame from its camera.
I'm not sure where to go from here, as I'm not well versed in Python; I'm more experienced in C++, hence my preference for hardware device control (e.g. /dev/videoX).
All I require is the ability to record the streams of both cameras simultaneously to be used in processing stereo vision.
If you can provide me with either a pure python-picamera solution, or an opencv integrated solution, I would be very thankful.
Just as an update, I've still not gotten very far on this, and could really use some help.
I am using Python 3.9 and OpenCV (cv2) to read frames from a video stream and save them as JPGs.
My program seems to run OK. It captures the video stream fine, obtains frames, and saves them as JPGs.
However, the frames it is obtaining from the stream are out of date, sometimes by several minutes. The clock in the video stream is running accurately, but the clock displays in the JPGs are all identical to the second, yet one or more minutes earlier than the datetime in the program's print() output (and the saved JPG file time), and moving objects that were in view at the time the frames were saved are missing completely.
Strangely:
The JPG images are not identical in size. They grow by 10K - 20K as the sequence progresses. Even though they look identical to the eye, they show significant difference when compared using CV2 - but no difference if compared using PIL (which is about 10 - 15 times slower for image comparisons).
The camera can be configured to send a snapshot by email when it detects motion. These snapshots are up-to-date, and show moving objects that were in frame at the time (but no clock display). Enabling or disabling this facility has no effect on the out-of-date issue with JPGs extracted from the video stream. And, sadly, the snapshots are only about 60K, and too low resolution for our purposes (which is an AI application that needs images to be 600K or more).
The camera itself is ONVIF - and things like PTZ work nicely from Python code. Synology Surveillance Station works really well with it in every aspect. This model has reasonably good specs - zoom and good LPR anti-glare functionality. It is made in China - but I don't want to be 'a poor workman who blames his tools'.
Can anyone spot something in the program code that may be causing this?
Has anyone encountered this issue, and can suggest a work-around or different library / methodology?
(And if it is indeed an issue with this brand / model of camera, you are welcome to put in a plug for a mid-range LPR camera that works well for you in an application like this.)
Here is the current program code:
import datetime
from time import sleep
import cv2

goCapturedStream = None
# gcCameraLogin, gcCameraURL, & gcPhotoFolder are defined in the program, but omitted for simplicity / obfuscation.

def CaptureVideoStream():
    global goCapturedStream
    print(f"CaptureVideoStream({datetime.datetime.now()}): Capturing video stream...")
    goCapturedStream = cv2.VideoCapture(f"rtsp://{gcCameraLogin}#{gcCameraURL}:554/stream0")
    if not goCapturedStream.isOpened():
        print("Error: Video Capture Stream was not opened.")
    return

def TakePhotoFromVideoStream(pcPhotoName):
    llResult = False
    laFrame = None
    llResult, laFrame = goCapturedStream.read()
    print(f"TakePhotoFromVideoStream({datetime.datetime.now()}): Result is {llResult}, Frame data type is {type(laFrame)}, Frame length is {len(laFrame)}")
    if not ".jpg" in pcPhotoName.lower():
        pcPhotoName += ".jpg"
    lcFullPathName = f"{gcPhotoFolder}/{pcPhotoName}"
    cv2.imwrite(lcFullPathName, laFrame)

def ReleaseVideoStream():
    global goCapturedStream
    goCapturedStream.release()
    goCapturedStream = None

# Main Program: Obtain sequence of JPG images from captured video stream
CaptureVideoStream()
for N in range(1, 7):
    TakePhotoFromVideoStream(f"Test{N}.jpg")
    sleep(2)  # 2 seconds
ReleaseVideoStream()
Dan Masek's suggestions were very valuable.
The program (now enhanced significantly) saves up-to-date images correctly, when triggered by the camera's inbuilt motion detection (running in a separate thread and communicating through global variables).
The key tricks were:
A much faster loop reading the frames (and discarding most of them). I reduced the sleep to 0.1 (and even further to 0.01), and saved only the relatively few frames that were actually required to JPG files.
Slowing down the frame rate on the camera (from 25 to 10 fps - even tried 5 at one point). This meant that the camera didn't get ahead of the software and send unpredictable frames.
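The first trick, a tight read-and-discard loop that keeps only the newest frame, can be sketched generically. This is only a sketch of the pattern, not the enhanced program itself; `FakeSource` is a hypothetical stand-in for a `cv2.VideoCapture` object (anything with a `read()` returning `(ok, frame)` works), so the snippet is self-contained.

```python
import threading
import time

class LatestFrameReader:
    """Continuously drains a capture source on a background thread,
    keeping only the most recent frame so reads are never stale."""

    def __init__(self, source):
        self._source = source  # any object with .read() -> (ok, frame)
        self._lock = threading.Lock()
        self._latest = None
        self._running = True
        self._thread = threading.Thread(target=self._drain, daemon=True)
        self._thread.start()

    def _drain(self):
        while self._running:
            ok, frame = self._source.read()
            if ok:
                with self._lock:
                    self._latest = frame  # overwrite: older frames are discarded

    def read_latest(self):
        with self._lock:
            return self._latest

    def stop(self):
        self._running = False
        self._thread.join()

class FakeSource:
    """Hypothetical stand-in for cv2.VideoCapture; frames are just integers."""
    def __init__(self):
        self.n = 0
    def read(self):
        self.n += 1
        return True, self.n

reader = LatestFrameReader(FakeSource())
time.sleep(0.05)              # let the drain loop run for a moment
latest = reader.read_latest() # always the newest frame, never a buffered one
reader.stop()
```

With a real stream you would construct `LatestFrameReader(cv2.VideoCapture(url))` and call `read_latest()` only when a JPG is actually needed.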
I have a system with 3 actual webcams and one "webcam" (actually a gvUSB2, a USB converter from RCA jacks). When using capture software like OBS, I can access all cameras at the same time (although I do notice occasional glitching). When I try to do the same with OpenCV, the result depends on which cameras are plugged in, but it seems like I can only use OpenCV to open the gvUSB2 camera if only one other camera is plugged in. I don't get an error message when it fails; rather, when I access the gvUSB2 slot I get a duplicate of another camera. Using Python 3.7 and freshly installed OpenCV.
I've tried moving around the USB slots. The drivers should be up to date for the "webcam". Like I said, using capture software I am able to collect data from all cameras simultaneously, while I can't capture from the "webcam" at all in OpenCV.
My test program is very simple; I just rotate through the camera index values (i.e. 0, 1, 2, 3):
import cv2
import sys

print(sys.argv[1])
s_video = cv2.VideoCapture(int(sys.argv[1]))
while True:
    ret, img = s_video.read()
    cv2.imshow("Stream Video", img)
    key = cv2.waitKey(1) & 0xff
    if key == ord('q'):
        break
When the above code is run with 3 traditional webcams I am able to access all of them. However, if the converter "webcam" is installed, I get a duplicate image: with all 3 traditional cams, indexes 0, 1 and 2 show the traditional images, and then 3 is a duplicate of 2. On the other hand, with just 1 traditional webcam installed, 0 is the traditional webcam and 1 is the correct image of the converter "webcam".
My thinking:
It seems like I am not overwhelming my USB system, because it can handle all the traditional webcams, and the resolution of the converter is much lower than the traditional ones (704x480 IIRC). My guess is a driver problem with the converter, and unfortunately, since I am up to date, I may be out of luck. The counter-evidence is that a capture program like OBS IS capable of reading from all webcams, suggesting that this may be a problem with OpenCV (or, more likely, how I am using it). I can't find any Google posts coming within 10 miles of this problem, so I'm pretty stuck. Any ideas?
I'm currently using the Sony QX1 for wireless transfers of large images. The camera is being triggered over the USB port. Pictures from the camera are being transferred with urllib to a Raspberry Pi. (I can't use the API to trigger the camera; it has to be from this external source.)
The camera is triggered around every 2.5 seconds. Through timing tests, it seems I'm able to get the larger picture back to the Pi at ~3.2 seconds per image.
I've noticed that when the camera is triggered, my transfer is terminated. I'm assuming this has to do with the embedded design of the camera itself and that there isn't a way to get around this, but please correct me if I'm wrong!
Does the camera support the Range header? Basically, I grab the image size from the header. I'm trying to grab the first X bytes until the camera triggers again, then grab the next X bytes, until I get the entire image.
Thanks for the help and let me know if I need to give a deeper explanation of what is going on here.
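For reference, the chunked approach can be sketched with `urllib.request` and the standard HTTP `Range` header. Whether the QX1's web server honors it would need to be confirmed (a `206 Partial Content` response means yes; a plain `200` means it ignored the header and sent the whole file). The URL and sizes below are placeholders, not the camera's real values:

```python
import urllib.request

# Placeholders: the real URL comes from the camera, and the total size
# from the Content-Length header of a prior request.
image_url = "http://camera.local/image.jpg"
total_size = 1_000_000
chunk_size = 65536

def build_range_request(url, start, end):
    """Build a request for bytes [start, end] inclusive (RFC 7233 syntax)."""
    return urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})

# One request per chunk; each urlopen() call would run in the window
# between camera triggers, and the chunks are concatenated at the end.
requests = []
offset = 0
while offset < total_size:
    end = min(offset + chunk_size, total_size) - 1
    requests.append(build_range_request(image_url, offset, end))
    offset = end + 1
```

Each response body would be appended to a buffer until the buffer length reaches `total_size`.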
I don't know about the Range header, but it will still not allow you to take more pictures than your download speed allows (unless you have some larger-than-2.5-second intervals now and then).
Maybe you can reduce the image resolution to a size that fits into the 2.5 sec interval? Or (just some thinking outside the box :-) use two QX1s alternating, so you get a 5 second interval for each...
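The arithmetic behind this answer is quick to check, using the 2.5 s trigger interval and ~3.2 s transfer time from the question:

```python
trigger_interval = 2.5  # seconds between captures (from the question)
transfer_time = 3.2     # seconds to transfer one full image (from the question)

# Each capture falls further behind by this much:
deficit_per_image = transfer_time - trigger_interval  # ~0.7 s per image

# To keep up, the image would need to shrink to roughly this fraction
# of its current size (assuming transfer time scales with file size)...
required_scale = trigger_interval / transfer_time     # ~0.78

# ...or two cameras alternating give each one a window of:
two_camera_interval = 2 * trigger_interval            # 5.0 s, > 3.2 s transfer
```

So either an image roughly 78% of the current size, or the two-camera scheme, closes the gap.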
I'm recording video with a Raspberry Pi 2 and camera module in a Python script, using the picamera package. See minimal example below:
import picamera
import time

with picamera.PiCamera(resolution=(730, 1296), framerate=49) as camera:
    camera.rotation = 270
    camera.start_preview()
    time.sleep(0.5)
    camera.start_recording('test.h264')
    time.sleep(3)
    camera.stop_recording()
    camera.stop_preview()
Results
The result is a video with bad encoding:
first frame is ok
in the next 59 frames the scene is barely visible, almost all green or purple (it's not clear what causes the change between the two colors)
frame number 61 is ok
Basically only the I-frames are correctly encoded. This was confirmed by experimenting with different values of the intra_period parameter of the start_recording function.
Notes and attempts already made
First and foremost, I previously used the same code to correctly record video on the same Raspberry Pi and camera. It's not clear to me whether the problem appeared when reinstalling the complete image, during updates, or when installing other packages...
Also:
if I don't set the resolution parameter and rotation, the camera works fine
several video players have been tested on the same and on other machines, and the frames were inspected one by one with OpenCV; the problem really is in the video file
mjpeg format works fine
the same problem happens setting sensor_mode=5
Questions
The main question is how to correctly record video at the set resolution, either by correcting the code above or via a workaround.
Secondary question: I'm curious to know what could cause such behaviour.
I'm using OpenCV via Python on Linux (Ubuntu 12.04), and I have a Logitech C920 from which I'd like to grab images. Cheese is able to grab frames up to really high resolutions, but whenever I try to use OpenCV, I only get 640x480 images. I have tried:
import cv
cam = cv.CaptureFromCAM(-1)
cv.SetCaptureProperty(cam, cv.CV_CAP_PROP_FRAME_WIDTH, 1920)
cv.SetCaptureProperty(cam, cv.CV_CAP_PROP_FRAME_HEIGHT, 1080)
but this yields an output of "0" after each of the last two lines, and when I subsequently grab a frame via:
image = cv.QueryFrame(cam)
The resulting image is still 640x480.
I've tried installing what seemed to be related tools via (outside of python):
sudo apt-get install libv4l-dev v4l-utils qv4l2 v4l2ucp
and I can indeed apparently manipulate the camera's settings (again, outside of python) via:
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=1
v4l2-ctl --set-parm=30
and observe that:
v4l2-ctl -V
indeed suggests that something has been changed:
Format Video Capture:
Width/Height : 1920/1080
Pixel Format : 'H264'
Field : None
Bytes per Line : 3840
Size Image : 4147200
Colorspace : sRGB
But when I pop into the python shell, the above code behaves exactly the same as before (printing zeros when trying to set the properties and obtaining an image that is 640x480).
Being able to bump up the resolution of the capture is pretty mission critical for me, so I'd greatly appreciate any pointers anyone can provide.
From the docs,
The function cvSetCaptureProperty sets the specified property of video capturing. Currently the function supports only video files: CV_CAP_PROP_POS_MSEC, CV_CAP_PROP_POS_FRAMES, CV_CAP_PROP_POS_AVI_RATIO .
NB This function currently does nothing when using the latest CVS download on linux with FFMPEG (the function contents are hidden if 0 is used and returned).
I had the same problem as you. Ended up going into the OpenCV source and changing the default parameters in modules/highgui/src/cap_v4l.cpp, lines 245-246 and rebuilding the project.
#define DEFAULT_V4L_WIDTH 1920
#define DEFAULT_V4L_HEIGHT 1080
This is for OpenCV 2.4.8
It seems to vary by camera.
AFAIK, Logitech cameras have particularly bad Linux support (though it's gotten better). Most of their issues are with advanced features like focus control. I would advise sticking with basic cameras (i.e. manual-focus Logitech cameras) just to play it safe.
My built in laptop camera has no issue and displays at normal resolution.
My external Logitech Pro has issues initializing.
However, I can overcome the resolution issue with these two lines.
Yes, they are the same as you used.
cv.SetCaptureProperty(self.capture, cv.CV_CAP_PROP_FRAME_WIDTH, 1280)
cv.SetCaptureProperty(self.capture, cv.CV_CAP_PROP_FRAME_HEIGHT, 720)
My Logitech still throws errors but the resolution is fine.
Please make sure the resolution you set is supported by your camera, or v4l will yell at you. If I set an unsupported native resolution, I have zero success.
Not sure if it works, but you can try to force the parameters to your values after you instantiate the camera object:
import os
import cv

cam = cv.CaptureFromCAM(-1)
os.system("v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=1")
os.system("v4l2-ctl --set-parm=30")
image = cv.QueryFrame(cam)
That's a bit hacky, so expect a crash.
import cv2

## Sets up the camera to capture video
cap = cv2.VideoCapture(device)  # device is the camera index, e.g. 0
width = 1280
height = 720

# set the width and height (property ids 3 and 4 are
# cv2.CAP_PROP_FRAME_WIDTH and cv2.CAP_PROP_FRAME_HEIGHT)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)