I have a problem with fire detection.
My code is:
ret, frame = cap.read()
lab_image = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
L, a, b = cv2.split(lab_image)
ret, thresh_L = cv2.threshold(L, 70, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
ret, thresh_a = cv2.threshold(a, 70, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
ret, thresh_b = cv2.threshold(b, 70, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
thresh_image = cv2.merge((thresh_L, thresh_a, thresh_b))
dilation = cv2.dilate(thresh_image, None, iterations=2)
gray = cv2.cvtColor(thresh_image, cv2.COLOR_BGR2GRAY)
(cnts, _) = cv2.findContours(dilation.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in cnts:
    if cv2.contourArea(c) < args["min_area"]:
        continue
    (x, y, w, h) = cv2.boundingRect(c)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imshow('frame1', frame)
When I run this program, I see this error:
FindContours support only 8uC1 and 32sC1 images in function cvStartFindContours
Please help me.
Thanks!
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
Use this line on your image to convert it from BGR to grayscale (8UC1) format before finding contours; the findContours function only supports single-channel (grayscale) images.
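Applied to the fire-detection code above, where the merged thresh_image (and therefore dilation) is three-channel, a minimal sketch of the fix would be:

# Convert the 3-channel dilated mask to single-channel (8UC1) first;
# the (cnts, _) unpacking matches OpenCV 2.x/4.x as in the question's code
gray_dilation = cv2.cvtColor(dilation, cv2.COLOR_BGR2GRAY)
(cnts, _) = cv2.findContours(gray_dilation, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)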
In my solution I had to convert the dtype to uint8.
Yes, my image was a binary (single-channel) image, but somewhere in my code thresh_image had been changed to the float32 data type, and cv2.findContours() cannot handle float32.
So I had to explicitly convert float32 --> uint8:
thresh_image = thresh_image.astype(np.uint8)
For completeness, the 8UC1 format is 8-bit, unsigned, single channel.
In addition to cv2 grayscale output, any single-channel uint8 image will also be valid, in case anyone is building the image outside of cv2 functions and encounters this error.
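For example, a mask built purely with NumPy works, as long as it is single-channel uint8 (a minimal sketch; the shape and coordinates are arbitrary):

import numpy as np
import cv2

mask = np.zeros((100, 100), dtype=np.uint8)       # single channel, 8-bit
cv2.rectangle(mask, (20, 20), (80, 80), 255, -1)  # draw a filled white square
# OpenCV >= 4 returns (contours, hierarchy); OpenCV 3 returns three values
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)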
The documentation of findContours clearly says that it only accepts single-channel images (i.e., 8UC1 or 32SC1) as input, but you are passing a 3-channel image. Here is the documentation of findContours: http://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html#findcontours
I got this error and found the cause:
I had created: gray_img = np.zeros((width, height, 3), np.uint8)
With depth = 3, gray_img doesn't match what findContours expects.
Then I recreated it: gray_img = np.zeros((width, height), np.uint8)
and it worked.
Related
I've run into very strange behavior with cv2.findContours, where it both fails and succeeds on two arrays that are identical.
First, I do some arbitrary image processing on an image and then receive my output, which is in grayscale. We assign the output to the variable estimate; estimate.shape = (512, 640), as it is a grayscale image. I then save this image to disk with cv2.imwrite('estimate.png', estimate), and this is where my code starts misbehaving.
First of all, I try to read the image from disk and process it using cv2.findContours() according to the OpenCV documentation. The code looks as follows, and it executes successfully:
im = cv2.imread('estimate.png')
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(imgray, 127, 255, 0)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
This is working as expected. But now, let's try cv2.findContours() directly on the variable estimate, without needlessly saving to disk and reading from it:
ret, thresh = cv2.threshold(estimate, 127, 255, 0)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
>>> error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\thresh.cpp:1659: error: (-210:Unsupported format or combination of formats) in function 'cv::threshold'
Okay, the natural assumption here is that imgray is different from estimate. Let us check the data:
type(imgray)
>>> numpy.ndarray
type(estimate)
>>> numpy.ndarray
imgray.shape
>>> (512, 640)
estimate.shape
>>> (512, 640)
np.all(estimate == imgray)
>>> True
Okay, so the arrays are identical in shape and values. What is happening here? We're applying cv2.findContours() to the exact same numpy.ndarray. It fails when the array is created directly, and it succeeds if it is created via cv2.imread?
I'm using opencv-python=4.6.0.66 and python=3.9.12
Finally solved this. It turns out that, despite looking the same, the arrays differed in their actual dtypes.
cv2.findContours() takes an unsigned 8-bit integer array, so estimate needs to be converted: estimate = estimate.astype(np.uint8).
This can be checked by comparing estimate.dtype and imgray.dtype; note that np.all(estimate == imgray) compares values only and ignores dtype, which is why the arrays looked identical.
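A minimal sketch of the check and the fix (assuming estimate came out of the earlier processing as a float array):

import numpy as np

print(imgray.dtype)    # uint8 -- cv2.imread always yields uint8
print(estimate.dtype)  # e.g. float32 or float64 from the earlier processing
estimate = estimate.astype(np.uint8)  # now cv2.threshold/findContours accept it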
I have the following function to pre-process an image for Tesseract OCR. In most of the image the text is white, but there can be green, red, and purple text too. I want to be able to read all of it, but when I apply thresholding during the pre-processing, the red text is gone. Is there a way to avoid this? It doesn't happen with the green text unless it's dark green.
def pre_process_img(img):
    open_cv_image = numpy.array(img)
    # Convert RGB to BGR
    open_cv_image = open_cv_image[:, :, ::-1].copy()
    img_gray = cv2.cvtColor(open_cv_image, cv2.COLOR_BGR2GRAY)
    img_gray = cv2.resize(img_gray, None, fx=3, fy=3, interpolation=cv2.INTER_CUBIC)
    img_inverted = 255 - img_gray
    ret, thresh1 = cv2.threshold(img_inverted, 127, 255, cv2.THRESH_BINARY)
    # [DEBUG] show pre-processed image
    # cv2.imshow("inverted", thresh1)
    # cv2.waitKey(0)
    return thresh1
In this function, img is a PIL.Image.Image. I convert it to an OpenCV image and apply preprocessing (conversion to grayscale, resizing, inverting, and binary thresholding). With psm 11 in Tesseract it has given a good enough result.
By the way, if you have any suggestions to improve my pre_process_img function, I'm open to hearing them. I'm new to OpenCV, and I just stuck with whatever gave me the best result out of everything I've tried.
This is my image here
Convert from BGR to HSV colorspace in Python/OpenCV. Then simply threshold the value channel. Here is the value channel. You will see that all text is white (in this case).
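A minimal sketch of that approach (the file name and the threshold value 127 are assumptions):

import cv2

img = cv2.imread('text.png')                # hypothetical input file
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
v = hsv[:, :, 2]                            # value channel
_, binary = cv2.threshold(v, 127, 255, cv2.THRESH_BINARY)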
I have the following image, which was obtained after performing clustering on the original image. I have tried to define a threshold as shown in the following script:
import cv2
import numpy as np

if __name__ == '__main__':
    img_bgr = cv2.imread('./data/frame.png')
    img_hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    lower_white = np.array([50, 50, 150], dtype=np.uint8)
    upper_white = np.array([255, 255, 255], dtype=np.uint8)
    mask_white = cv2.inRange(img_hsv, lower_white, upper_white)
The obtained result is not very satisfactory. How can it be further improved? Any suggestions and comments would be highly appreciated. The image can be found here.
I'm trying to detect a hand with OpenCV in Python.
I am working on this thresholded image:
And this is the state with the contours drawn:
I am trying to detect the hand, but the contour is too big; it captures my whole body. I need it like this:
My code:
import cv2

orImage = cv2.imread("f.png")
image = cv2.cvtColor(orImage, cv2.COLOR_BGR2GRAY)
image = cv2.blur(image, (15, 15))
(_, img_th) = cv2.threshold(image, 96, 255, 1)
(contours, _) = cv2.findContours(img_th, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 15:
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(image, (x - 20, y - 20), (x + w + 20, y + h + 20), (0, 255, 0), 2)
cv2.drawContours(image, contours, -1, (255, 0, 0), 2)
cv2.imwrite("hi.jpg", image)
Thanks!
I have a solution (I got some help from HERE, which has many other wonderful tutorials on image processing exclusively for OpenCV users).
I first converted the image you uploaded to the HSV color space:
HSV = cv2.cvtColor(orImage, cv2.COLOR_BGR2HSV)
I then set an approximate range for skin detection once the image is converted to HSV color space:
l = np.array([0, 48, 80], dtype = "uint8")
u = np.array([20, 255, 255], dtype = "uint8")
I then applied this range to the HSV image:
skinDetect = cv2.inRange(HSV, l, u)
This is what I obtained (I also resized the image to make it smaller):
Now you can apply morphological operations and find the biggest contour in this image to obtain the hand, as in the sketch below.
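A minimal sketch of that last step (the kernel size is an assumption, and the two-value findContours unpacking assumes OpenCV >= 4):

import cv2

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
cleaned = cv2.morphologyEx(skinDetect, cv2.MORPH_OPEN, kernel)   # remove speckles
cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)     # fill small holes
contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)                        # biggest blob
x, y, w, h = cv2.boundingRect(hand)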
Hope this helps.
I am working in Python with OpenCV 3.0. In order to find the largest white pixel region, I first thresholded the grayscale image to a binary image:
import cv2
import numpy as np

img = cv2.imread('graimage.png')
img = cv2.resize(img, (400, 500))
gray = img.copy()
(thresh, im_bw) = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY)
derp, contours, hierarchy = cv2.findContours(im_bw, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts = max(contours, key=cv2.contourArea)
But it shows error as follows.
cv2.error: ..../opencv/modules/imgproc/src/contours.cpp:198: error: (-210) [Start]FindContours supports only CV_8UC1 images when mode != CV_RETR_FLOODFILL otherwise supports CV_32SC1 images only in function cvStartFindContours.
It looks like this was answered in the comments, but just to mark the question as answered:
CV_8UC1 means 8-bit pixels, unsigned, and only one channel, so grayscale. It looks like you're reading it in with 3 color channels, or CV_8UC3. You can check the image type by printing img.dtype and img.shape. The dtype should be uint8, and the shape should be (#, #), indicating two dimensions. I'm guessing you'll see that shape prints (#, #, 3) for your image as-is, indicating three color channels.
As @user3515225 said, you can fix that by reading the image in as grayscale using cv2.imread('img.png', cv2.IMREAD_GRAYSCALE). That assumes you have no use for color anywhere else, though. If you want a separate grayscale copy of the image, then replace gray = img.copy() with gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) instead (cv2.imread loads images in BGR channel order, so BGR2GRAY is the right conversion code).
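Putting it together, a corrected sketch of the original snippet (keeping the question's threshold parameters and the three-value findContours return of OpenCV 3):

import cv2

img = cv2.imread('graimage.png')
img = cv2.resize(img, (400, 500))
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # single-channel uint8
(thresh, im_bw) = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
derp, contours, hierarchy = cv2.findContours(im_bw, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)  # largest white region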