W10, Python 3.6, OpenCV 4.3.0, Spyder 4.0.1
I'm using the code from this post to take snapshots from my Intel(R) RealSense(TM) 3D Camera (Front F200).
The problem is that it is defaulting to Depth mode and not RGB.
Is there a way of switching between the two modes from my code?
Related
How trigger the coral dev board camera from opencv
cv.VideoCapture(0)
I am using this command to open the camera in OpenCV Python. Unfortunately it gives an error and does not open the camera.
Please let me know the code for OpenCV Python on the Coral Dev Board.
If you are using a USB camera, you should use:
cv.VideoCapture(1)
I'm from the Coral team; just wanted to give an update on the issue here so that others can reference it.
This is not caused by OpenCV. I reviewed the full code along with the errors, and it looks fine; the usage of the library is correct.
The error is due to cv2 not being able to read any data off /dev/video0. At this stage I believe it's just a bad connection between the camera sensor and the camera board: a command to take a picture from the camera failed.
I am trying OpenCV + YOLOv3. I am using a Mac with this config:
MacBook Pro (Retina, 15-inch, Mid-2015)
Graphics Intel Iris Pro 1536 MB
Update - OS info:
macOS Catalina version 10.15.2
I checked the Apple website and it says this MacBook supports OpenCL 1.2: https://support.apple.com/en-ca/HT202823
My program uses opencv-contrib-python 4.1.2. And the code snippet is:
net = cv2.dnn.readNetFromDarknet(model_configuration, model_weights)
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_OPENCL)
I also tried DNN_TARGET_OPENCL_FP16. BTW, I use the common pre-trained yolo3 cfg and weights and coco.names.
The problem is that my program cannot use the GPU on my Mac. When I run a video through it, the inference time is 300+ ms per frame, and Activity Monitor shows GPU usage at 0.0% while CPU is at 70%+. I don't know why I can't use the GPU via OpenCL on the Mac. Is there any trick I'm missing?
When I convert an image from PIL to OpenCV, colors change slightly.
from PIL import Image
import cv2
import numpy as np
image = cv2.imread('1.jpg')
image1=Image.open('1.jpg')
image1 = cv2.cvtColor(np.array(image1), cv2.COLOR_RGB2BGR)
print(image[79])
print(image1[79])
The first four pixels (from row 79 of each array) are:
[144 151 148]
[101 108 105]
[121 128 125]
[108 118 112]
and
[140 152 146]
[ 97 109 103]
[117 129 123]
[104 118 112]
I thought the indexing may be off by one but it isn't. Is there a way to fix this?
This is the image (but it's the same on others too):
That suggests PIL and OpenCV are using different versions of libjpeg, or are using it with different parameters. Also, it seems OpenCV uses libjpeg-turbo unless you explicitly ask it not to; see the code here: https://github.com/opencv/opencv/blob/master/cmake/OpenCVFindLibsGrfmt.cmake#L39
This behaviour is dependent on different incompatible versions of libjpeg used by cv2 and PIL/Pillow, as #fireant already pointed out.
For example, when I run this code with older versions of Pillow (like 3.4.2), it generates the same output. In my tests, Pillow 3.4.2 and older (the oldest version tried was 2.2.0) all use libjpeg 8, while Pillow 4.0.0 and newer use libjpeg 9.2.
OpenCV, on the other hand, might use different versions on different systems:
On Microsoft Windows* OS and MacOSX*, the codecs shipped with an OpenCV image (libjpeg, libpng, libtiff, and libjasper) are used by default.
On Linux*, BSD flavors and other Unix-like open-source operating systems, OpenCV looks for codecs supplied with an OS image. Install the relevant packages (do not forget the development files, for example, "libjpeg-dev", in Debian* and Ubuntu*) to get the codec support or turn on the OPENCV_BUILD_3RDPARTY_LIBS flag in CMake.
So on Debian/Ubuntu systems opencv might use libjpeg-turbo which comes with the OS. (My machine, specifically, had version 8 installed.)
The way to fix this is to ensure that both Pillow and OpenCV use the same libjpeg version.
You could try this:
if you have a relatively new PIL/Pillow, downgrade it to version <= 3.4.2 (this is what worked for me):
pip install Pillow==3.4.2
if you have an old Pillow version, try upgrading it to version >= 4.0.0
If that doesn't help, your solution could be either of the two:
recompiling your OpenCV with the same libjpeg flavor as used by Pillow, or
reinstalling Pillow from source using the same libjpeg version as used by OpenCV.
The libjpeg versions may be different.
Convert the original .jpg image to .bmp image
ffmpeg -i 1.jpg 1.bmp
Then the OpenCV output and the PIL output will be the same.
Since OpenCV reads the image in as BGR format with cv2.imread() we need to convert it back to RGB before giving it to PIL.
Here is an example of reading an image with OpenCV and saving it unchanged with PIL:
import cv2
import PIL.Image

image = cv2.imread('test.jpg')
pil_img = PIL.Image.fromarray(image)
pil_img.save('pil_img.jpg', 'JPEG')
The saved 'pil_img.jpg' comes out with its red and blue channels swapped relative to 'test.jpg'.
To correct this, we need to use cv2.cvtColor to change the image to RGB:
correct_img = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
pil_img = PIL.Image.fromarray(correct_img)
pil_img.save('pil_correct_img.jpg', 'JPEG')
Result: the saved image now matches the original colors.
I have installed PyOpenNI on my computer, and I want to record RGB videos with my camera.
On this link, https://github.com/jmendeth/PyOpenNI/blob/master/examples/record.py, it shows how to record depth video.
But I don't need a depth video; I need an RGB (image) video, and I couldn't find any API tutorial for that.
How can I record an image video with this damn OpenNI?
Thanks,
If you need only RGB video, why are you going for PyOpenNI or OpenNI at all? Even if you haven't installed the OpenNI drivers, the computer treats a Kinect as a standard USB camera. I tried that after a fresh install of OpenCV and it worked; the code contained an image display program for a standard USB camera.
It's possible that my problem is simply a Python 3 OpenCV bug, but I don't know. I have 32 bit Python version 3.4.3 installed in Windows 10. I have OpenCV 3.0.0 32 bit installed from this website http://www.lfd.uci.edu/~gohlke/pythonlibs/ (opencv_python‑3.0.0‑cp34‑none‑win32.whl).
I also have numpy 1.10.0b1 beta installed from that site.
I've tested out the same basic program flow below using OpenCV with Java and it works. For that reason I figure this may just be a Python bug issue. What happens is that the call to drawContours in the code below produces this error:
OpenCV Error: Image step is wrong (Step must be a multiple of esz1) in cv::setSize, file ......\opencv-3.0.0\modules\core\src\matrix.cpp, line 300
The test image I am using is 1168 x 1400 pixels.
Here is the code:
import cv2
import numpy as np
img = cv2.imread('test.jpg')
imgray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret, threshImg = cv2.threshold(imgray,127, 255,cv2.THRESH_BINARY)
can = cv2.Canny(threshImg,100,200)
contImg, contours, hierarchy = cv2.findContours(can,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, contours,-1,(0,255,0))
cv2.imwrite('test write.jpg', img)
EDIT: I just solved the problem by installing numpy version 1.9.2 instead of the 1.10 beta.
This has to do with development and beta releases of NumPy using relaxed strides. This is done to force detection of subtle bugs in third party libraries that make unnecessary assumptions about the strides of arrays.
Thanks to that the issue was detected a while back and is now fixed in the development version of OpenCV, see the relevant PR, but it will likely take some time until it makes it to a proper OpenCV release.
Regardless of that being fixed, as soon as the final version of NumPy 1.10 is released, you should be able to safely switch to it, even with the buggy current OpenCV version, as relaxed strides will be deactivated.
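To see what "strides" are and why a library that assumes standard strides can break, here is a small NumPy illustration. (This uses ordinary slicing to produce non-standard strides; the 1.10 betas went further and reported unusual strides even for contiguous arrays, which is what tripped OpenCV's "Step must be a multiple of esz1" check.)

```python
import numpy as np

# A C-contiguous 4x4 uint8 array: moving down one row advances 4 bytes,
# moving right one column advances 1 byte.
a = np.zeros((4, 4), dtype=np.uint8)
print(a.strides)  # (4, 1)

# A single-column slice is a view with the same strides, but its row
# "step" (4 bytes) no longer equals its row width (1 byte), so the
# data is not laid out contiguously.
col = a[:, 1:2]
print(col.strides)                 # (4, 1)
print(col.flags['C_CONTIGUOUS'])   # False
```

A C/C++ library that computes a row step from strides and assumes it is always a clean multiple of the element layout will misbehave on such arrays, which is exactly the class of bug relaxed strides was designed to expose.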
I solved the issue by installing numpy 1.9.2 instead of the new 1.10 beta version.