I'm trying to read frames from the webcam on a Macbook Pro with OpenCV.
Opening cv2.VideoCapture(0) turns the camera indicator green (and all other indexes are out of range), and .isOpened() returns True. However, calling .read() on the VideoCapture object returns (False, None).
I've tried both the pip and brew versions of OpenCV, and I've tried using both Terminal.app and Kitty, yet the same issue occurs (both have camera perms in System Preferences).
Not really sure what's going on here. Delaying doesn't help either.
Example:
>>> import cv2
>>> vc = cv2.VideoCapture(0)
>>> vc.isOpened()
True
>>> vc.read()
(False, None)
It turns out the camera was not working at all, which I hadn't realized.
The solution ended up being to reset the SMC.
I'm trying to use the OpenCV Stitcher class for putting two images together. I ran the simple example provided in the answer to this question with the same koala images, but it returns (1, None) every time. I've tried this on opencv-python version 3.4, 4.2, and 4.4, and all have the same result.
I've tried replacing the stitcher initializer with something else, (cv2.Stitcher.create, cv2.Stitcher_create, cv2.createStitcher), but nothing seems to work. If it helps, I'm on Mac Catalina, using Python 3.7. Thanks!
Try changing the default pano confidence threshold using setPanoConfidenceThresh(). By default it is 1.0, and with these images it apparently makes the stitcher decide that stitching has failed.
Here is the full example that works for me. I used that pair of koala images as well, and I am on opencv 4.2.0:
stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
stitcher.setPanoConfidenceThresh(0.0) # might be too aggressive for real examples
foo = cv2.imread("/path/to/image1.jpg")
bar = cv2.imread("/path/to/image2.jpg")
status, result = stitcher.stitch((foo, bar))
assert status == 0 # Verify returned status is 'success'
cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
I think in this particular case cv2.Stitcher_SCANS is a better mode (transformation between images is just a translation), but either SCANS or PANORAMA works.
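If you want to try scan mode instead, here is a minimal sketch (reusing foo and bar loaded above):
# Scan mode assumes a mostly planar/translational relationship between the inputs
scan_stitcher = cv2.Stitcher.create(cv2.Stitcher_SCANS)
scan_stitcher.setPanoConfidenceThresh(0.0)  # same permissive threshold as above
status, result = scan_stitcher.stitch((foo, bar))
print("status:", status)  # 0 means success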
Try rescaling both images by a factor of 0.6 using cv2.resize(). For some reason I only got a result at that scale.
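For example, a rough sketch (foo, bar, and stitcher as in the answer above):
# Downscale both inputs by a factor of 0.6 before stitching
foo_small = cv2.resize(foo, None, fx=0.6, fy=0.6, interpolation=cv2.INTER_AREA)
bar_small = cv2.resize(bar, None, fx=0.6, fy=0.6, interpolation=cv2.INTER_AREA)
status, result = stitcher.stitch((foo_small, bar_small))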
I am an "advanced beginner" in Python, but a relative newbie with the Raspberry Pi...
What I'm trying to do:
I'm trying to capture a frame from the RTSP stream from a Wyze Cam V2 and save the image to a file. My code works - most of the time. But sometimes it fails for long periods of time. After much experimentation and trial and error I have determined that it is more likely to fail when the camera is in the dark! This seems very consistent.
My Code:
This is not the code from my actual project - it is the code I have been using to troubleshoot.
import cv2
import imageio

class Camera:
    def __init__(self, ipaddress):
        self.ipaddress = ipaddress
        print("About to create VideoStream")
        self.vs = cv2.VideoCapture(ipaddress, cv2.CAP_FFMPEG)
        self.vs.set(cv2.CAP_PROP_BUFFERSIZE, 3)
        if self.vs.isOpened():
            print("Successfully created")
            self.vs.release()
        else:
            print("Unable to create")

    def capture(self):
        self.vs.open(self.ipaddress)
        success, frame = self.vs.read()
        self.vs.release()
        if success:
            print("Capture Success")
            return frame
        else:
            print("Failed to capture")
            print("VideoCapture isOpen is " + str(self.vs.isOpened()))
            return None

    def is_opened(self):
        return self.vs.isOpened()

# In actual code CAMNAME is the camera's name, PASSWORD is the password
# and XXX.XXX.X.XXX is the ip address
camera = Camera("rtsp://CAMNAME:PASSWORD@XXX.XXX.X.XXX/live")

leave = False
while not leave:
    frame = camera.capture()
    if frame is None:
        print("Frame is none")
        print("VideoCapture isOpen is " + str(camera.is_opened()))
    else:
        print("Successful capture - writing to file")
        frame_color = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
        imageio.imwrite("test.jpg", frame_color)
    response = input("Capture again? ")
    if len(response) == 0:
        response = "y"
    if response[0] == 'n':
        leave = True
What I Have Tried:
The code works fine when run on Windows 10, so it is not directly a problem with the code. In fact, when the Pi is having trouble capturing, running it at the same time on Windows works. So the issue is definitely in how the Pi interacts with OpenCV or the camera.
Since I had a lot of trouble installing OpenCV:
I have tried re-installing it on a fresh install of the OS with PIP (sudo pip install opencv-contrib-python==4.1.0.25).
I tried to build OpenCV from scratch - took 2.5 days and failed miserably - likely I screwed up somewhere in the process, but don't feel like spending another 2 days doing this.
Finally I downloaded a Raspbian image with OpenCV pre-compiled (https://medium.com/@aadeshshah/pre-installed-and-pre-configured-raspbian-with-opencv-4-1-0-for-raspberry-pi-3-model-b-b-9c307b9a993a). All these install methods resulted in the same issues...
I have tried opening the VideoCapture without specifying cv2.CAP_FFMPEG. I feel like it was more reliable with this option.
I have tried leaving out the change in BUFFERSIZE. I'm not sure this line of code has any effect.
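For reference, the variant without the FFMPEG hint that I tried looks roughly like this (same placeholder credentials as above):
# Let OpenCV pick the backend; no CAP_FFMPEG hint and no buffer-size tweak
vs = cv2.VideoCapture("rtsp://CAMNAME:PASSWORD@XXX.XXX.X.XXX/live")
ok, frame = vs.read()
vs.release()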
What I am Using:
Raspberry Pi Model B, Rev 2, 512 MB
Raspbian Stretch - though I have had the same issues with Buster.
Wyze Cam II "beta" firmware that provides RTSP support.
Python3
OpenCV 4.1.0 (from cv2.__version__)
What happens:
I have been troubleshooting this intermittent problem for some time, and just today realized that it always works when the garage (where the camera is located) is lit, and fails when it is dark. (Which made the late-night troubleshooting sessions so frustrating!)
I have had many problems in the past, but now the issue seems to be that if the garage is dark, either the VideoCapture object will not be created (.isOpened() == False) or the read() method will return (False, None).
I used to have a problem with read() returning an old image. I can tell it is old because the camera timestamps the captures. This is why I am always opening and closing the VideoCapture - I would rather it not return an image than return the wrong/old image.
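For what it's worth, the usual alternative to reopening (just a sketch, not what the code above does) would be to keep the capture open and discard a few buffered frames before each read:
# Hypothetical helper: grab() past stale buffered frames, then decode one fresh frame
def fresh_frame(vs, flush=5):
    for _ in range(flush):
        vs.grab()  # advance the stream without decoding
    ok, frame = vs.retrieve()
    return frame if ok else None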
In the past, with slightly different settings, I would get warnings on the screen either during the creation of the VideoCapture object or during the read() call. These are usually along the lines of "[h264 @ 0x1ea1780] error while decoding MB 78 67, bytestream -15". I have gotten different warnings but I don't have examples right now. If I get a warning, I often get a bad image.
I have also gotten distorted images - the bottom of the image (sometimes a few lines, sometimes more than half of the image) looks like the same line of data repeated over and over.
I am currently trying to do some video classification and am using Anaconda along with Jupyter Notebook to train on my data. However, I am encountering an error in Jupyter Notebook where I can't read my video frames using cv2.VideoCapture, yet the same code works in my conda environment's terminal.
This is my file structure,
This is the error I'm currently encountering,
Terminal in the same anaconda environment works fine,
I did read somewhere that it might be due to an issue with conda and ffmpeg, but I have tried many of the solutions suggested by others, including downloading OpenCV from opencv.org itself and setting the environment path variables instead of using conda install, and it still doesn't work.
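As a sanity check (a sketch of what I mean), this shows which cv2 the notebook kernel imports and whether that build reports FFMPEG support:
import cv2
print(cv2.__file__)  # confirm which installation the kernel is importing
print([line for line in cv2.getBuildInformation().splitlines() if "FFMPEG" in line])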
Does anyone have any idea on how to solve this issue?
Forgive me if I'm wrong, but I noticed that you are not using the same filename in your two tests. I was stuck at the same point, until I realized the path and filename were not the same between my "terminal test" and the Jupyter notebook test.
I confirmed jupyter could access the file.
Windows attribute test:
!attrib data/TownCentreXVID.avi
Bash file test. See using bash commands in jupyter notebook for details:
!file data/TownCentreXVID.avi
Then tried again, and had no issues getting the same results from jupyter.
OpenCV in Python lets you grab frames from the webcam or from a video file (as in your case) as a NumPy array, modify them, and then display them using OpenCV's cv2.imshow(). To do this, OpenCV creates a window and pushes the frames into it. However, this will not work in an IPython notebook.
To display frames in a Jupyter notebook (or any other IPython notebook), you will have to use the function
IPython.display.Image(data)
and not OpenCV's imshow().
Here is a chunk of code you can use:
import time
from io import BytesIO

import cv2
import PIL.Image
import IPython.display

def get_frame(cam):
    # Capture frame-by-frame
    ret, frame = cam.read()
    # Flip image for natural viewing
    frame = cv2.flip(frame, 1)
    return frame

# Use 'jpeg' instead of 'png' (~5 times faster)
def array_to_image(a, fmt='jpeg'):
    # Create binary stream object
    f = BytesIO()
    # Convert array to binary stream object
    PIL.Image.fromarray(a).save(f, fmt)
    return IPython.display.Image(data=f.getvalue())

cam = cv2.VideoCapture(0)
d = IPython.display.display("", display_id=1)
d2 = IPython.display.display("", display_id=2)
while True:
    try:
        t1 = time.time()
        frame = get_frame(cam)
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        im = array_to_image(frame)
        d.update(im)
        t2 = time.time()
        s = f"""{int(1/(t2-t1))} FPS"""
        d2.update(IPython.display.HTML(s))
    except KeyboardInterrupt:
        print()
        cam.release()
        IPython.display.clear_output()
        print("Stream stopped")
        break
I use WinPython to write my python programs. I need to solve the task of detecting faces in a video stream. I have installed opencv-python to WinPython using this command:
pip install opencv-python==3.4.0.12
When I run the following code, I get a False:
import numpy as np
import cv2
cap = cv2.VideoCapture(0)
ret, img = cap.read()
print(ret)
What am I doing wrong?
That looks like a legitimate function result. As the documentation for VideoCapture::read says, the function returns retval and image, with an image only when there was one to return. A False value of the ret variable in your code means there was no image.
Edit:
I looked up the documentation and here's what I've found:
"If no frames has been grabbed (camera has been disconnected, or there are no more frames in video file), the methods return false and the functions return NULL pointer."
I wrapped OpenCV today with the SimpleCV Python interface. After going through the official SimpleCV Cookbook I was able to successfully load, save, and manipulate images, so I know the library is being loaded properly.
However, under the Using a Camera, Kinect, or Virtual Camera heading I was unsuccessful in running some commands. In particular, mycam = Camera() worked, but img = mycam.getImage() produced the following error:
In [35]: img = mycam.getImage().save()
OpenCV Error: Bad argument (Array should be CvMat or IplImage) in cvGetSize, file /home/jordan/OpenCV-2.2.0/modules/core/src/array.cpp, line 1237
---------------------------------------------------------------------------
error Traceback (most recent call last)
/home/simplecv/<ipython console> in <module>()
/usr/local/lib/python2.7/dist-packages/SimpleCV-1.1-py2.7.egg/SimpleCV/Camera.pyc in getImage(self)
332
333 frame = cv.RetrieveFrame(self.capture)
--> 334 newimg = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_8U, 3)
335 cv.Copy(frame, newimg)
336 return Image(newimg, self)
error: Array should be CvMat or IplImage
I'm running Ubuntu Natty on an HP TX2500 tablet. It has a built-in webcam (CyberLink YouCam?). Has anybody seen this error before? I've been all over the web today looking for a solution, but nothing seems to do the trick.
Update 1: I tested cv.QueryFrame(capture) using the code found here in a separate Stack Overflow question and it worked; so I've pretty much nailed this down to a webcam issue.
Update 2: In fact, I get the exact same errors on a machine that doesn't even have a webcam! It's looking like the TX2500 is not compatible...
Since the error is raised from Camera.py in SimpleCV, you need to debug the getImage() method. If you can edit it:
def getImage(self):
    if (not self.threaded):
        cv.GrabFrame(self.capture)
        frame = cv.RetrieveFrame(self.capture)
        import pdb  # <-- add this line
        pdb.set_trace()  # <-- add this line
        newimg = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_8U, 3)
        cv.Copy(frame, newimg)
        return Image(newimg, self)
Then run your program; it will pause at pdb.set_trace(), where you can inspect the type of frame and try to figure out how to get its size.
Or you can do the capture in your code, and inspect the frame object:
mycam = Camera()
cv.GrabFrame(mycam.capture)
frame = cv.RetrieveFrame(mycam.capture)
To answer my own question...
I bought a Logitech C210 today and the problem disappeared.
I'm now getting warnings:
Corrupt JPEG data: X extraneous bytes before marker 0xYY.
However, I am able to successfully push a video stream to my web-browser via JpegStreamer(). If I cannot solve this error, I'll open a new thread.
Thus, for now, I'll blame the TX2500.
If anybody finds a fix in the future, please post.
Props to @HYRY for the investigation. Thanks.
I'm getting the camera with OpenCV like this:
from opencv import cv
from opencv import highgui
from opencv import adaptors
def get_image():
    cam = highgui.cvCreateCameraCapture(0)
    im = highgui.cvQueryFrame(cam)
    # Add the line below if you need it (Ubuntu 8.04+)
    # im = opencv.cvGetMat(im)
    return im
Anthony, one of the SimpleCV developers here.
Also, instead of using image.save(), which writes the file/video to disk, you probably want image.show(). You can still save if you want, but you need to specify a file path, e.g. image.save("/tmp/blah.png").
So you want to do:
img = mycam.getImage()
img.show()
As for that model of camera, I'm not sure whether it works or not. I should note that we also wrap camera classes other than OpenCV's, because OpenCV has a problem with webcams above 640x480; we can now handle high-resolution cameras.
I should also mention something I hadn't realized: OpenCV below 2.3 is broken with webcams on Ubuntu 11.04 and up. I didn't notice because I was running Ubuntu 10.10 before; by the looks of your output you are using Python 2.7, which makes me think you are on Ubuntu 11.04 or higher. Anyway, we have a fix for this problem, now pushed up into master: it basically checks whether OpenCV is working and, if not, falls back to pygame.
This fix will also be in the 1.2 release of SimpleCV (It's in the master branch now)