I'm working on Ubuntu 16.04 LTS and trying to read an external camera (Leopard Imaging Autonomous Camera).
My code is simple:
import cv2
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cv2.imwrite("output.jpg",frame)
cap.release()
When I run this with Python 2.7 and OpenCV 3.4.1, I get the correct output:
When I run this with Python 3.7.10 and OpenCV 4.5.3, I get the following (incorrect) output:
I've also run the same code with OpenCV 4.5.3 but capturing frames from a stored video (mp4), and the first frame of the video shows correctly.
I keep running into this error and can't fix it. I've spoken with many people and they are not sure what to do. My code is below; it's a very simple program that should open my webcam and display the live video. I am using Python 3.8.0 on a 64-bit M1 Mac running Ventura 13.2, in VS Code, with the latest versions of OpenCV, Mediapipe, and NumPy. I have tried different IDEs with no luck.
import cv2
import mediapipe as mp
import numpy as np

mp_drawing = mp.solutions.drawing_utils
mp_pose = mp.solutions.pose

# VIDEO FEED
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('Mediapipe Feed', frame)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
I got this code to work the first time I ran it. After running it a few times, I connected my laptop via HDMI to a TV that has a camera built into it; since disconnecting the laptop from the TV, the code no longer works. I think it has something to do with not being able to find the camera on my laptop, but I can't figure it out. Any help would be great! I have tried changing the argument inside .VideoCapture() from -10 to 10 and still no luck.
I had conflicting packages. I had installed a bunch of packages via Homebrew and pip3; I uninstalled everything I am not using or going to use, and that solved the issue. OpenCV works now, but I can't install mediapipe or mediapipe-silicon for my M1 Mac. If I fix this, I will let everyone know.
This seems to be a problem caused by the M1 CPU with macOS. You need to compile OpenCV yourself, or download an OpenCV build compiled by others for the M1, and then do the same for mediapipe. Welcome to the world of Mx CPUs.
Windows 10, Python 3.6, OpenCV 4.3.0, Spyder 4.0.1
I'm using the code from this post to take snapshots from my Intel(R) RealSense(TM) 3D Camera (Front F200).
The problem is that it is defaulting to Depth mode and not RGB.
Is there a way of switching between the two modes from my code?
How to trigger the Coral Dev Board camera from OpenCV
cv.VideoCapture(0)
I am using this command to trigger the camera in OpenCV Python. Unfortunately it gives an error and does not trigger the camera.
Please let me know the correct OpenCV Python code for the Coral Dev Board.
If you are using a USB camera, you should use:
cv.VideoCapture(1)
I'm from the Coral team; just wanted to give an update on the issue here so that others can reference it.
This is not caused by OpenCV; I have the full code with the errors and it looks fine. The usage of the library is correct.
The error is due to cv2 not being able to read any data from /dev/video0. At this stage I believe it's just a bad connection between the camera sensor and the camera board: a command to take a picture from the camera failed.
I have installed PyOpenNI on my computer, and I want to record RGB videos with my camera.
This example, https://github.com/jmendeth/PyOpenNI/blob/master/examples/record.py, shows how to record depth video.
But I don't need depth video; I need image (RGB) video, and I couldn't find any API tutorial for that.
How can I record an image video with this damn OpenNI?
Thanks,
If you only need RGB video, why go for PyOpenNI or OpenNI at all? Even if you haven't installed the OpenNI drivers, the computer treats a Kinect as a standard USB camera. I tried that after a fresh install of OpenCV and it worked; the code was just an image-display program for a standard USB camera.
It's possible that my problem is simply a Python 3 OpenCV bug, but I don't know. I have 32 bit Python version 3.4.3 installed in Windows 10. I have OpenCV 3.0.0 32 bit installed from this website http://www.lfd.uci.edu/~gohlke/pythonlibs/ (opencv_python‑3.0.0‑cp34‑none‑win32.whl).
I also have numpy 1.10.0b1 beta installed from that site.
I've tested the same basic program flow below using OpenCV with Java, and it works. For that reason, I figure this may just be a Python-specific issue. The call to drawContours in the code below produces this error:
OpenCV Error: Image step is wrong (Step must be a multiple of esz1) in cv::setSize, file ......\opencv-3.0.0\modules\core\src\matrix.cpp, line 300
The test image I am using is 1168 x 1400 pixels.
Here is the code:
import cv2
import numpy as np

img = cv2.imread('test.jpg')
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, threshImg = cv2.threshold(imgray, 127, 255, cv2.THRESH_BINARY)
can = cv2.Canny(threshImg, 100, 200)
contImg, contours, hierarchy = cv2.findContours(can, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, contours, -1, (0, 255, 0))
cv2.imwrite('test write.jpg', img)
EDIT: I just solved the problem by installing NumPy 1.9.2 instead of the 1.10 beta.
This has to do with development and beta releases of NumPy using relaxed strides, which is done to force the detection of subtle bugs in third-party libraries that make unnecessary assumptions about the strides of arrays.
Thanks to that, the issue was detected a while back and is now fixed in the development version of OpenCV (see the relevant PR), but it will likely take some time until it makes it into a proper OpenCV release.
Regardless of that fix, as soon as the final version of NumPy 1.10 is released you should be able to switch to it safely, even with the buggy current OpenCV version, since relaxed strides will be deactivated in the release.
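The stride assumption can be seen directly in NumPy, and there is a generic workaround for libraries that insist on compact strides: copy the array into C-contiguous layout before handing it over. This sketch demonstrates the concept only, not the specific OpenCV code path that was buggy.

```python
import numpy as np

img = np.zeros((1400, 1168, 3), dtype=np.uint8)

# A sliced view is a perfectly valid array, but its strides are no
# longer "compact": each row skips over the dropped columns.
view = img[:, ::2]
print(view.flags['C_CONTIGUOUS'])   # False

# np.ascontiguousarray makes a compact copy with conventional strides,
# which stride-assuming libraries accept.
fixed = np.ascontiguousarray(view)
print(fixed.flags['C_CONTIGUOUS'])  # True
```

Downgrading NumPy (as in the accepted fix) avoids the relaxed-strides arrays entirely; the copy above is the in-code alternative when pinning versions is not an option.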
I solved the issue by installing NumPy 1.9.2 instead of the new 1.10 beta version.