I'm using OpenCV on OS X with my external webcam (Microsoft Cinema HD LifeCam) and its performance is very low, even with the simplest camera readout code.
import cv2

cap = cv2.VideoCapture(1)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("Output", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
I tried the same webcam with Photo Booth and it works well at a high FPS. I also tried the same code with the built-in FaceTime camera of my Mac and it ran pretty fast. So it seems like I have some kind of configuration issue in OpenCV.
Has anybody experienced something like this?
Thanks for your answers.
It seems I was able to solve my problem: I just had to decrease the resolution of the camera.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)   # property id 3
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)  # property id 4
I think Photo Booth sets the resolution automatically in order to increase the speed of the readout, whereas one has to set it manually in OpenCV. I'm not sure about the correctness of this explanation, though.
Try to enforce a specific reader implementation, see here. Options to try are CAP_QT and CAP_AVFOUNDATION; the full list is here. Note that OpenCV has to be built with support for the reader implementation you choose.
Related
I have this cam in test (https://www.amazon.de/gp/product/B01JLU20C0/ref=ppx_yo_dt_b_asin_title_o04_s00?ie=UTF8&psc=1) for a stereo vision project.
The module has two cameras which are connected to the computer via a single USB port. With this I would like to test depth detection for a project. If I only take photos, it works very well; only the live stream doesn't work the same for both cameras. I have already tried all possible resolutions, unfortunately without success. Does anyone have an idea?
THX
Windows 10, Python 3.7, CV4
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 120)
# Second cam: it needs its own device index, not 0 again,
# otherwise both captures open the same device
cap2 = cv2.VideoCapture(1)
cap2.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap2.set(cv2.CAP_PROP_FRAME_HEIGHT, 120)
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    ret2, frame2 = cap2.read()
    # Display the resulting frames
    cv2.imshow('frame 1', frame)
    cv2.imshow('frame 2', frame2)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# When everything is done, release the captures
cap.release()
cap2.release()
cv2.destroyAllWindows()
I have now switched to two other cameras, each connected to its own USB port. Now everything works fine. I'll install them in a housing and then the stereo vision can really get going. As soon as I'm finished, everything will be on GitHub and YouTube... but it will take some time ;-)
I am trying to create a video monitoring system on my Raspberry Pi. The OS version is the latest Raspbian Buster.
I recently moved to Buster since it had just come out; before, I was on Stretch, and it was working really well. The only problem I had was that when I tried to increase my window size, the FPS decreased, leading to a small latency. That is why I moved to Buster: to see whether it would work better or whether this was a limitation of the Raspberry Pi.
So here is my code:
import cv2

cap = cv2.VideoCapture(0)  # a device index is required; VideoCapture() alone opens nothing
cv2.namedWindow('frame', cv2.WINDOW_NORMAL)  # create the window once, outside the loop
while True:
    ret, im = cap.read()
    if not ret:
        break
    cv2.imshow('frame', im)
    key = cv2.waitKey(10)
This is what my expected output should look like:
This is my actual output:
Do you have any idea of what the problem could be? And how to solve it?
I have a Raspberry Pi running Raspbian 9 with OpenCV 4.0.1 installed and a USB webcam attached. The Raspberry is headless; I connect with ssh <user>@<IP> -X. The goal is to get a real-time video stream on my client computer.
The issue is that there is a considerable lag of around 2 seconds, and the stream playback is unsteady: slow, then quick again.
My guess is that SSH just cannot keep up with the camera's default 30 fps. I am therefore trying to reduce the frame rate, since I could live with a lower frame rate as long as there is no noticeable lag. My own attempts to reduce the frame rate have not worked.
My code; the parts I tried myself to reduce the frame rate, but which did not work, are commented out:
import cv2
#import time

cap = cv2.VideoCapture(0)
#cap.set(cv2.CAP_PROP_FPS, 5)
while True:
    ret, frame = cap.read()
    #time.sleep(1)
    #cv2.waitKey(100)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
What I tried to reduce the frame rate:
I tried cap.set(cv2.CAP_PROP_FPS, 5) (also with 10 and 1). If I then print(cap.get(cv2.CAP_PROP_FPS)) it reports the frame rate I just set, but it has no effect on the playback.
I tried to use time.sleep(1) in the while loop but it has no effect on the video.
I tried to use a second cv2.waitKey(100) in the while loop as suggested here on Quora: https://qr.ae/TUSPyN , but this also has no effect.
edit 1 (time.sleep and waitKey do work):
As pointed out in the comment, time.sleep(1) and cv2.waitKey(1000) should both work and indeed they did after all. It was necessary to put these at the end of the while loop, after cv2.imshow().
As pointed out in the first comment, it might be better to choose a different setup for streaming media, which is what I am looking at now to get rid of the lag.
edit 2 (xpra instead of ssh -X):
We found out that even after all attempts to reduce the frame rate, ssh -X was still laggy. We found xpra to be a lot quicker, i.e. it required no lowering of the frame rate or resolution and had no noticeable lag.
I'm making a Kivy app to recognize characters with the camera in real time.
However, there is no documentation for this except for recognizing faces.
I think there must be a way, because picamera does something very similar (creating an OpenCV image from the camera).
Could someone tell me how to achieve this?
* PS *
I'm on the way to capturing an image when the camera sees a number, but I don't know how to create that trigger.
This is the code so far, and I want to know when to break out of the while True loop.
cap = cv2.VideoCapture(0)
while True:
    ret, image = cap.read()
    cv2.imshow('image', image)
    results = tes.image_to_string(image, boxes=True)
    if results:
        break
cap.release()
cv2.destroyAllWindows()
print(results)
but this one is too slow
How about using pytesseract 0.2.4 for that purpose? It looks like the best solution, according to Dr. Google.
I have a (fairly cheap) webcam which produces images that are far lighter than they should be. The camera does have brightness correction - the adjustments are obvious when moving from light to dark - but it is consistently far too bright.
I am looking for a way to reduce the brightness without iterating over the entire frame (OpenCV Python bindings on a Raspberry Pi). Does that exist? Or better, is there a standard way of sending hints to a webcam to reduce the brightness?
import cv2

# create video capture
cap = cv2.VideoCapture(0)
window = cv2.namedWindow("output", 1)
while True:
    # read the frames
    _, frame = cap.read()
    cv2.imshow("output", frame)
    if cv2.waitKey(33) == 27:  # Esc key
        break
# Clean up everything before leaving
cv2.destroyAllWindows()
cap.release()
I forgot the Raspberry Pi is just running a regular OS. What an awesome machine. Thanks for the code, which confirms that you just have a regular cv2 image.
Simple vectorized scaling (without touching each pixel in Python) is straightforward. The snippet below just scales every pixel; it would be easy to add a few lines to normalize the image if it has a major offset.
import numpy
#...
scale = 0.5 # whatever scale you want
frame_darker = (frame * scale).astype(numpy.uint8)
#...
Does that look like the start of what you want?
The standard way to adjust webcam parameters is the VideoCapture set() method (providing your camera supports the interface; most do, in my experience). This avoids the performance overhead of processing the image yourself.
VideoCapture::set
CAP_PROP_BRIGHTNESS or CAP_PROP_SATURATION (named CV_CAP_PROP_... in older OpenCV versions) would appear to be what you want.