I have a system with 3 actual webcams and one "webcam" (actually a gvUSB2, a USB converter for RCA jacks). Using capture software like OBS, I can access all cameras at the same time (although I do notice occasional glitching). When I try to do the same with OpenCV, the result depends on which cameras are plugged in, but it seems I can only open the gvUSB2 camera in OpenCV if at most one other camera is plugged in. I don't get an error message when it fails; instead, when I access the gvUSB2's index I get a duplicate of another camera. Using Python 3.7 and a freshly installed OpenCV.
I've tried moving the cameras around the USB slots. The drivers for the "webcam" should be up to date. Like I said, with capture software I am able to collect data from all cameras simultaneously, while in OpenCV I can't capture from the "webcam" at all.
My test program is very simple; I just rotate through the camera index values (i.e. 0, 1, 2, 3):
import cv2
import sys

print(sys.argv[1])
s_video = cv2.VideoCapture(int(sys.argv[1]))
while True:
    ret, img = s_video.read()
    if not ret:
        break
    cv2.imshow("Stream Video", img)
    key = cv2.waitKey(1) & 0xff
    if key == ord('q'):
        break
s_video.release()
cv2.destroyAllWindows()
When the above code is run with the 3 traditional webcams, I am able to access all of them. However, if the converter "webcam" is also installed, I get a duplicate image: i.e., with all 3 traditional cams plugged in, indices 0, 1, and 2 show the traditional images, and then 3 is a duplicate of 2. On the other hand, with just 1 traditional webcam installed, 0 is the traditional webcam and 1 is the correct image from the converter "webcam".
My thinking:
It doesn't seem like I am overwhelming my USB system, because it can handle all the traditional webcams, and the resolution of the converter is much lower than that of the traditional ones (704x480 IIRC). My guess is a driver problem with the converter, and unfortunately, since I am up to date, I may be out of luck. The counter-evidence is that a capture program like OBS IS capable of reading from all webcams, suggesting that this may be a problem with OpenCV (or, more likely, how I am using it). I can't find any Google results coming within 10 miles of this problem, so I'm pretty stuck. Any ideas?
Related
I've got a Raspberry pi compute module setup with two PiNoIR camera modules.
I'm trying to capture 2 video streams to be used on a Raspberry Pi 3 at a later date. Here is the configuration:
Frame rate = 30
resolution = 640x480
language = Python (with picamera)
chosen libraries = OpenCV 3.1.0
The idea originally was to use the hardware devices (e.g. /dev/videoX), but the bcm2835_v4l2 kernel module currently only supports 1 camera, and I'm not experienced enough with developing kernel modules to try to get it to support 2 cameras.
I've tried using the test code from the picamera docs, but this only works for a single camera. I know that defining PiCamera(0) or PiCamera(1) will select either camera, however I do not know how to get it to record two streams together.
I've tried the code below, and I've tried using this guide for working with Python and OpenCV. The guide is only focused on one camera, and I need both working.
#!/usr/bin/python
import picamera
import time

cameraOne = picamera.PiCamera(0)
cameraTwo = picamera.PiCamera(1)

cameraOne.resolution = (640, 480)
cameraTwo.resolution = (640, 480)
cameraOne.framerate = 30
cameraTwo.framerate = 30

cameraOne.start_recording('CameraOne.mjpg')
cameraTwo.start_recording('CameraTwo.mjpg')

counter = 0
while True:
    cameraOne.wait_recording(0.1)
    cameraTwo.wait_recording(0.1)
    counter += 1
    if counter == 30:
        break

cameraOne.stop_recording()
cameraTwo.stop_recording()
The code snippet above generates two 10-second videos, each containing only a single frame from its camera.
I'm not sure where to go from here, as I'm not well versed in Python; I'm more experienced in C++, hence the desire for hardware device control (e.g. /dev/videoX).
All I require is the ability to record the streams of both cameras simultaneously, to be used for stereo vision processing.
If you can provide me with either a pure python-picamera solution, or an opencv integrated solution, I would be very thankful.
Just as an update, I've still not gotten very far on this, and could really use some help.
I'm recording video with a Raspberry Pi 2 and camera module in a Python script, using the picamera package. See minimal example below:
import picamera
import time
with picamera.PiCamera(resolution=(730, 1296), framerate=49) as camera:
    camera.rotation = 270
    camera.start_preview()
    time.sleep(0.5)
    camera.start_recording('test.h264')
    time.sleep(3)
    camera.stop_recording()
    camera.stop_preview()
Results
The result is a video with bad encoding:
first frame is ok
in the next 59 frames the scene is barely visible, almost all green or purple (it's not clear what causes the change between the two colors)
frame number 61 is ok
Basically, only the I-frames are correctly encoded. This was confirmed by experimenting with different values of the intra_period parameter of the start_recording function.
Notes and attempts already made
First and foremost, I was using the same code to correctly record video in the past on the same Raspberry Pi and camera. It's not clear to me whether the problem appeared when reinstalling the complete image, during updates, or while installing other packages...
Also:
if I don't set the resolution and rotation parameters, the camera works fine
several video players have been tested on this and other machines, and I also processed the file frame by frame with OpenCV; the problem is really in the video file
the MJPEG format works fine
the same problem happens when setting sensor_mode=5
Questions
The main question is how to correctly record video at the set resolution, either by correcting the code above or with a workaround.
Secondary question: I'm curious to know what could cause such behaviour.
The quality of video recording required for our project is not met by webcams. Is it possible to use high-megapixel digital cameras (Sony, Canon, Olympus) with OpenCV?
How do you talk to digital cameras using OpenCV (and specifically from Python)?
Install the drivers for the required camera, connect it, and use cv2.VideoCapture(index). Instead of 0, use a different integer according to the camera; by default, 0 is the built-in webcam.
e.g.: cv2.VideoCapture(1)
I want to build a webcam-based 3D scanner, and since I'm going to use a lot of webcams, I'm doing tests beforehand.
I have ordered 3 identical cameras that I will drive in Python to take snapshots at the same time.
Obviously the bus is going to be saturated when there are 50 of them.
What I want to know is whether the cameras are able to hold the picture until it is transferred to the computer.
To simulate this behavior I'd like to slow down the USB bus and take a snapshot with 3 cameras.
I'm under Windows 7 Pro; is this possible?
Thanks.
PS: couldn't I saturate the USB bus by plugging in some USB external hard drives and doing some file transfers?
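To see why 50 cameras saturate a bus, a rough back-of-envelope calculation helps. All figures below are assumptions of mine: uncompressed YUY2 video at 2 bytes per pixel, a 640x480 stream at 30 fps, and roughly 35 MB/s of usable USB 2.0 payload (the 480 Mbit/s figure is raw signalling, not achievable throughput):

```python
# Back-of-envelope USB 2.0 bandwidth check (all figures are rough assumptions).
width, height, fps = 640, 480, 30       # typical webcam stream
bytes_per_frame = width * height * 2    # YUY2: 2 bytes per pixel
per_camera = bytes_per_frame * fps      # bytes/second for one uncompressed camera
usb2_effective = 35 * 1024 * 1024       # ~35 MB/s of usable USB 2.0 payload

max_cameras = usb2_effective // per_camera
print(per_camera)     # 18432000 -> about 18.4 MB/s per camera
print(max_cameras)    # 1 -> barely one uncompressed stream per bus
```

This suggests uncompressed streams cannot share one USB 2.0 bus at all at that scale; webcams get around it with on-device MJPEG compression, which is also why artificially slowing the bus tends to break or degrade the streams rather than make the cameras queue frames.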
What I want to know is whether the cameras are able to hold the picture until it is transferred to the computer.
That depends on the camera model, but since you mention in your post you are using "webcams", then the answer is almost certainly no. You could slow down the requests you make to the camera to take a picture though.
This sequence of events is possible:
wait
request camera takes picture
camera returns picture as normal
wait
This sequence of events is not possible (with webcams at least)
wait
request camera takes picture
wait
camera returns picture at a significantly later time that you want to have control over
wait
If you need the functionality displayed in the last sequence I provide (a controllable time between capture and readout of the picture) you will need to upgrade to a better camera, such as a machine vision camera. These cameras usually cost considerably more than webcams and are unlikely to interface over USB (though you might find some that do).
You might be able to find some other solution to your problem (for instance, what happens if you request 50 photos from 50 cameras and saturate the USB bus? Do the webcams you have buffer the data well enough to achieve your ultimate goal, or does this affect the quality of the picture?)
This question already has an answer here:
How to capture multiple camera streams with OpenCV?
I am using OpenCV 3 and Python 3.6 for my project. I want to set up multiple cameras at a time and see the video feed from all of them at once, in order to do facial recognition. But there seems to be no good way to do this. Here is one link which I followed, but nothing happens: Reading from two cameras in OpenCV at once
I have tried this blog post as well, but it can only capture one image at a time from each video stream and cannot show live video.
https://www.pyimagesearch.com/2016/01/18/multiple-cameras-with-the-raspberry-pi-and-opencv/
Previously people have done this with C++, but with Python it seems difficult to me.
The code below works and I've tested it. It assumes you're using two cameras, one a built-in webcam and the other a USB cam (adjust the VideoCapture numbers if both are USB cams):
import cv2

cap1 = cv2.VideoCapture(0)
cap2 = cv2.VideoCapture(1)

while True:
    ret1, img1 = cap1.read()
    ret2, img2 = cap2.read()
    if ret1 and ret2:
        cv2.imshow('img1', img1)
        cv2.imshow('img2', img2)
    k = cv2.waitKey(100)
    if k == 27:  # press Esc to exit
        break

cap1.release()
cap2.release()
cv2.destroyAllWindows()
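When sequential read() calls like the ones above stall each other or drop frames, a common workaround is to give each camera its own reader thread that keeps only the most recent frame. This is a sketch using only the standard library's threading module; the class name is my own, and any object with a cv2.VideoCapture-style read() method works as the source:

```python
import threading

class FrameGrabber(threading.Thread):
    """Read frames from a capture object on a background thread,
    keeping only the most recent frame so the main loop never blocks
    on a slow camera."""

    def __init__(self, cap):
        super().__init__(daemon=True)
        self.cap = cap
        self.frame = None
        self.lock = threading.Lock()
        self.running = True

    def run(self):
        # Drain the camera continuously; stale frames are overwritten.
        while self.running:
            ok, frame = self.cap.read()
            if ok:
                with self.lock:
                    self.frame = frame

    def latest(self):
        with self.lock:
            return self.frame

    def stop(self):
        self.running = False
```

Usage would look like `grabbers = [FrameGrabber(cv2.VideoCapture(i)) for i in (0, 1)]`, calling `.start()` on each and then displaying `.latest()` from the main loop; this decouples the display rate from each camera's delivery rate.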
My experience with a Raspberry Pi and 2 cams showed the limitation was the Pi's GPU.
I used the setup tool to allocate more GPU memory, up to 512 MB.
It would still slow down at more than 10 fps with 2 cams.
Also, the USB ports restricted the video stream.
One solution is to put each camera on its own USB controller. I did this using a 4-channel PCIe card. The card must have a separate controller for each port. I'm just finishing a project where I snap images from 4 ELP USB cameras, combine the images into one, and write it to disk. I spent days trying to make it work. I found examples for two cameras that worked with my laptop camera and an external camera, but not with two external cameras. The internal camera is on a different USB controller than the external ports...