I have an image like this:
After I applied some processing, e.g. cv2.Canny(), it looks like this:
As you can see, the black lines have become hollow.
I have tried erosion and dilation, but if I apply them too many times, the two entrances get closed off (they become connected lines or a closed contour).
How can I make those lines solid, like in the image below, while keeping the two entrances unaffected?
Update 1
I have tested the following answers with a few photos, but the code seems tailored to handle only this one particular picture. Due to Stack Overflow's restriction, I cannot upload photos larger than 2MB, so I uploaded them to my Microsoft OneDrive folder for your convenience to test:
https://1drv.ms/u/s!Asflam6BEzhjgbIhgkL4rt1NLSjsZg?e=OXXKBK
Update 2
I accepted fmw42's post as the answer, as it is the most detailed one. It doesn't directly answer my question, but it points out the correct way to process a maze, which is my ultimate goal. I like his approach to answering: first he tells you what each step should do, so you have a clear idea of how to carry out the task, then he provides a full code example from beginning to end. Very helpful.
Due to Stack Overflow's limitations, I can only accept one answer. If multiple answers were allowed, I would also accept Shamshirsaz.Navid's answer. It not only points in the correct direction to solve the issue, but the explanation with visualization really works well for me! I guess it works equally well for anyone trying to understand why each line of code is needed. He also followed up on my questions in the comments, which makes Stack Overflow a bit more interactive. :)
The threshold trackbar in Ann Zen's answer is also a very useful tip for quickly finding an optimal value.
Here is one way to process the maze and rectify it in Python/OpenCV.
Read the input
Convert to gray
Threshold
Use morphology close to remove the thinnest (extraneous) black lines
Invert the threshold
Get the external contours
Keep only those contours that are larger than 1/4 of both the width and height of the input
Draw those contours as white lines on black background
Get the convex hull from the white contour lines image
Draw the convex hull as white lines on black background
Use GoodFeaturesToTrack to get the 4 corners from the white hull lines image
Sort the 4 corners by angle relative to the centroid so that they are ordered clockwise: top-left, top-right, bottom-right, bottom-left
Set these points as the array of conjugate control points for the input
Use 1/2 the dimensions of the input to define the array of conjugate control points for the output
Compute the perspective transformation matrix
Warp the input image using the perspective matrix
Save the results
Input:
import cv2
import numpy as np
import math
# load image
img = cv2.imread('maze.jpg')
hh, ww = img.shape[:2]
# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# threshold
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
# use morphology to remove the thin lines
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,1))
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# invert so that lines are white so that we can get contours for them
thresh_inv = 255 - thresh
# get external contours
contours = cv2.findContours(thresh_inv, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
# keep contours whose bounding boxes are greater than 1/4 in each dimension
# draw them as white on black background
contour = np.zeros((hh,ww), dtype=np.uint8)
for cntr in contours:
    x,y,w,h = cv2.boundingRect(cntr)
    if w > ww/4 and h > hh/4:
        cv2.drawContours(contour, [cntr], 0, 255, 1)
# get convex hull from contour image white pixels
points = np.column_stack(np.where(contour.transpose() > 0))
hull_pts = cv2.convexHull(points)
# draw hull on copy of input and on black background
hull = img.copy()
cv2.drawContours(hull, [hull_pts], 0, (0,255,0), 2)
hull2 = np.zeros((hh,ww), dtype=np.uint8)
cv2.drawContours(hull2, [hull_pts], 0, 255, 2)
# get 4 corners from white hull points on black background
num = 4
quality = 0.001
mindist = max(ww,hh) // 4
corners = cv2.goodFeaturesToTrack(hull2, num, quality, mindist)
corners = np.intp(corners)   # np.int0 is a deprecated alias for np.intp
for corner in corners:
    px,py = corner.ravel()
    cv2.circle(hull, (px,py), 5, (0,0,255), -1)
# get angles to each corner relative to centroid and store with x,y values in list
# angles are clockwise between -180 and +180 with zero along positive X axis (to right)
corner_info = []
center = np.mean(corners, axis=0)
centx = center.ravel()[0]
centy = center.ravel()[1]
for corner in corners:
    px,py = corner.ravel()
    dx = px - centx
    dy = py - centy
    angle = (180/math.pi) * math.atan2(dy,dx)
    corner_info.append([px,py,angle])
# function to define sort key as element 2 (i.e. angle)
def takeThird(elem):
    return elem[2]
# sort corner_info on angle so result will be TL, TR, BR, BL order
corner_info.sort(key=takeThird)
# make conjugate control points
# get input points from corners
corner_list = []
for x, y, angle in corner_info:
    corner_list.append([x,y])
print(corner_list)
# define input points from (sorted) corner_list
input = np.float32(corner_list)
# define output points from dimensions of image, say half of input image
width = ww // 2
height = hh // 2
output = np.float32([[0,0], [width-1,0], [width-1,height-1], [0,height-1]])
# compute perspective matrix
matrix = cv2.getPerspectiveTransform(input,output)
# do perspective transformation setting area outside input to black
result = cv2.warpPerspective(img, matrix, (width,height), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT, borderValue=(0,0,0))
# save output
cv2.imwrite('maze_thresh.jpg', thresh)
cv2.imwrite('maze_contour.jpg', contour)
cv2.imwrite('maze_hull.jpg', hull)
cv2.imwrite('maze_rectified.jpg', result)
# Display various images to see the steps
cv2.imshow('thresh', thresh)
cv2.imshow('contour', contour)
cv2.imshow('hull', hull)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Thresholded Image after morphology:
Filtered Contours on black background:
Convex hull and 4 corners on input image:
Result from perspective warp:
You can try a simple threshold to detect the lines of the maze, as they are conveniently black:
import cv2
img = cv2.imread("maze.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)
cv2.imshow("Image", thresh)
cv2.waitKey(0)
Output:
You can adjust the threshold yourself with trackbars:
import cv2
cv2.namedWindow("threshold")
cv2.createTrackbar("", "threshold", 0, 255, id) # the built-in id works as a do-nothing one-argument callback
img = cv2.imread("maze.jpg")
while True:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    t = cv2.getTrackbarPos("", "threshold")
    _, thresh = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)
    cv2.imshow("Image", thresh)
    if cv2.waitKey(1) & 0xFF == ord("q"): # quit when the q key is pressed
        break
Canny is an edge detector. It detects the lines along which color changes. A line in your input image has two such transitions, one on each side, so you see two parallel edges for each line in the image. This answer of mine explains the difference between edges and lines.
So you shouldn't be using an edge detector to detect lines in an image.
If a simple threshold doesn't properly binarize this image, try a local threshold ("adaptive threshold" in OpenCV). Another thing that works well for images like these is a top-hat filter (for this image it would be the black top-hat, closing(img) - img), where the structuring element is matched to the width of the lines you want to find. This yields an image that is easy to threshold and preserves all lines thinner than the structuring element. A minimal sketch of both ideas follows below.
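Here is a minimal sketch of both suggestions; the file name maze.jpg, the 51-pixel block size, and the 15x15 structuring element are assumptions you would tune to the line widths in your own image:
import cv2
# load the maze image directly as grayscale
gray = cv2.imread("maze.jpg", cv2.IMREAD_GRAYSCALE)
# local (adaptive) threshold: each pixel is compared to a Gaussian-weighted
# mean of its neighborhood, which copes with uneven lighting
adaptive = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 51, 10)
# black top-hat: closing(img) - img highlights dark lines thinner than the
# structuring element; Otsu then binarizes the result
se = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, se)
lines = cv2.threshold(blackhat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]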
Check this:
import cv2
import numpy as np
im=cv2.imread("test2.jpg",1)
# convert to gray
mask=cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
# convert to black and white
mask=cv2.threshold(mask,127,255,cv2.THRESH_BINARY)[1]
# remove thin lines and text, then rebuild the main lines
mask=cv2.dilate(mask,np.ones((5, 5), 'uint8'))
mask=cv2.erode(mask,np.ones((4, 4), 'uint8'))
# smooth the lines
mask=cv2.medianBlur(mask,3)
#write output mask
cv2.imwrite("mask2.jpg",mask)
From here on, everything is doable. You can delete extra blobs, you can extract the lines from the original image according to the mask, and things like that.
Median:
The median blur does not change much for this project, and it can be safely removed. But I prefer it because it slightly rounds the ends of the lines; you have to zoom in a lot to see the pixels. This technique is usually used to remove salt-and-pepper noise.
Erode Kernel:
For the kernel, the larger the size, the thicker the lines. This is not always good, because it causes the path lines to stick to the arrow, and later it becomes difficult to separate the paths from the arrow.
Update:
It does not matter if part of the maze is erased. The important thing is that from this mask you can draw a rectangle around the shape and create a new mask for this image.
Make a white rectangle around these paths in a new mask, and completely whiten the inside of it with floodFill or any other technique. Now you have a new mask that can take the whole shape out of the original image, and in the next step you can correct the perspective. A rough sketch follows below.
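A rough sketch of that idea, assuming the mask2.jpg saved above (black paths on a white background):
import cv2
import numpy as np
# reload the saved mask; paths are black (0), background white (255)
mask = cv2.imread("mask2.jpg", cv2.IMREAD_GRAYSCALE)
mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)[1]
# bounding rectangle around all dark (path) pixels
ys, xs = np.where(mask == 0)
pts = np.column_stack((xs, ys)).astype(np.int32)
x, y, w, h = cv2.boundingRect(pts)
# new mask: a filled white rectangle covering the whole shape
new_mask = np.zeros_like(mask)
cv2.rectangle(new_mask, (x, y), (x + w, y + h), 255, -1)
# take the whole shape out of the original image
im = cv2.imread("test2.jpg")
cut = cv2.bitwise_and(im, im, mask=new_mask)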
I am new to OpenCV and Python. I made a program that finds contours with an area above 500 and saves them into a new image. I used boundingRect as advised on the internet; it runs and does the job well, but I have a problem with the output of one image: it seems that noise beside the region of interest is also saved. As you can see in the image below, there are some tiny shapes beside the ROI. The output is good for other images; I just want to get rid of noise like this. Is there a way to remove this kind of noise from the output?
Here is the output of the program I made:
Here is the input image:
Hide with contouring
This solution uses cv2.drawContours() to simply draw black contours over the noise. I ran the black-and-white sample image through a few iterations of dilation, filtered contours by area, and then drew black contour lines over the noise. I used thresholding because there turned out to be a good bit of minuscule noise in what initially appeared to be a simple black-and-white image.
Input:
Code:
import cv2
import numpy as np
thresh_value = 10
img = cv2.imread("cells_BW.jpg")
img = cv2.medianBlur(img, 5)
# a bare tuple is not a valid kernel; use a small ones array instead
dilation = cv2.dilate(img, np.ones((3,3), np.uint8), iterations=3)
img_gray = cv2.cvtColor(dilation, cv2.COLOR_BGR2GRAY)
(T, thresh) = cv2.threshold(img_gray, thresh_value, 255, cv2.THRESH_BINARY)
res = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours = res[0] if len(res) == 2 else res[1]  # handle OpenCV 3 vs 4 return values
contours = [i for i in contours if cv2.contourArea(i) < 5000]
cv2.drawContours(img, contours, -1, (0,0,0), 10, lineType=8)
cv2.imwrite("cells_BW_CLEAN.jpg", img)
Output:
There could be several solutions, depending on the assumptions about the input data.
Probable Methods
If the ROI has a significantly different color than the rest,
1-1. You can threshold the input image using RGB before finding the contours.
If the area of the object you want to find is significantly bigger than the others,
2-1. Fill the holes, as in this example
2-2. Calculate the size of the blobs and exclude all blobs except the largest one (example of calculating blob sizes); a sketch of this follows below.
If there is an intersection point between the contours of multiple objects, Method 2 will surely fail to segment the region of a single cell.
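A minimal sketch of method 2-2 (the filename is hypothetical), keeping only the largest blob via connectedComponentsWithStats:
import cv2
import numpy as np
img = cv2.imread("cells.png", cv2.IMREAD_GRAYSCALE)
binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
# label connected white blobs; stats[i, cv2.CC_STAT_AREA] is blob i's area
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
# label 0 is the background, so search labels 1..n-1 for the largest blob
largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
result = np.where(labels == largest, 255, 0).astype(np.uint8)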
I'm trying to find the contours of this image, but findContours only returns one contour, highlighted in image 2. I'm trying to find all the external contours, like the circles the numbers are inside. What am I doing wrong? What can I do to accomplish this?
image 1:
image 2:
Below is the relevant portion of my code.
thresh = cv2.threshold(image, 0, 255,
cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)
When I change cv2.RETR_EXTERNAL to cv2.RETR_LIST, it seems to detect the same contour twice, or something like that: image 3 shows the border of a circle detected first, and then it is detected again, as shown in image 4. I'm trying to find only the outer borders of these circles. How can I accomplish that?
image 3
image 4
The problem is the flag cv2.RETR_EXTERNAL that you used in the function call. As described in the OpenCV documentation, this returns only the outermost external contour.
Using the flag cv2.RETR_LIST you get all contours in the image. Since you are trying to detect rings, this list will contain both the inner and the outer contour of each ring.
To keep only the outer boundary of each circle, you could use cv2.contourArea() to find the larger of two overlapping contours, as in the sketch below.
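A minimal sketch of such filtering, under the assumption that each ring's inner contour lies inside its outer contour's bounding box (the filename is hypothetical):
import cv2
img = cv2.imread("keypad.png", cv2.IMREAD_GRAYSCALE)
thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
res = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts = res[0] if len(res) == 2 else res[1]
def box_inside(inner, outer):
    # True if bounding box inner (x, y, w, h) lies entirely within outer
    ix, iy, iw, ih = inner
    ox, oy, ow, oh = outer
    return ix >= ox and iy >= oy and ix + iw <= ox + ow and iy + ih <= oy + oh
# walk contours from largest to smallest area; a ring's inner contour falls
# inside an already-kept outer contour's box and is discarded
kept = []
for c in sorted(cnts, key=cv2.contourArea, reverse=True):
    b = cv2.boundingRect(c)
    if not any(box_inside(b, cv2.boundingRect(k)) for k in kept):
        kept.append(c)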
I am not sure this is really what you expect; nevertheless, in cases like this there are many ways to help findContours do its job.
Here is a way I use frequently.
Convert your image to gray
Ig = cv2.cvtColor(I,cv2.COLOR_BGR2GRAY)
Thresholding
The background and foreground values look quite uniform in terms of colour, but locally they are not, so I apply a thresholding based on Otsu's method in order to binarise the intensities.
_,It = cv2.threshold(Ig,0,255,cv2.THRESH_OTSU)
Sobel magnitude
In order to extract only the contours, I compute the magnitude of the Sobel edge detector.
sx = cv2.Sobel(It,cv2.CV_32F,1,0)
sy = cv2.Sobel(It,cv2.CV_32F,0,1)
m = cv2.magnitude(sx,sy)
m = cv2.normalize(m,None,0.,255.,cv2.NORM_MINMAX,cv2.CV_8U)
Thinning (optional)
I use the thinning function implemented in ximgproc.
The point of thinning is to reduce the contour thickness to as few pixels as possible.
m = cv2.ximgproc.thinning(m,None,cv2.ximgproc.THINNING_GUOHALL)
Final step: findContours
_,contours,hierarchy = cv2.findContours(m,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
disp = cv2.merge((m,m,m))
disp = cv2.drawContours(disp,contours,-1,hierarchy=hierarchy,color=(255,0,0))
Hope it helps.
I think an approach based on SVM or a CNN might be more robust.
You can find an example here.
This one may also be interesting.
-EDIT-
I found a, let's say, easier way to reach your goal.
As before, after loading the image, apply a threshold to ensure the image is binary.
By inverting the image with a bitwise-not operation, the contours become white on a black background.
Applying cv2.connectedComponentsWithStats returns (among other things) a label matrix in which each connected white region in the source is assigned a unique label.
Then, applying findContours to each label's mask, it is possible to obtain the external contour of every region.
import numpy as np
import cv2
from matplotlib import pyplot as plt
I = cv2.imread('/home/smile/Downloads/ext_contours.png',cv2.IMREAD_GRAYSCALE)
_,I = cv2.threshold(I,0.,255.,cv2.THRESH_OTSU)
I = cv2.bitwise_not(I)
_,labels,stats,centroid = cv2.connectedComponentsWithStats(I)
result = np.zeros((I.shape[0],I.shape[1],3),np.uint8)
for i in range(0,labels.max()+1):
    mask = cv2.compare(labels,i,cv2.CMP_EQ)
    _,ctrs,_ = cv2.findContours(mask,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
    result = cv2.drawContours(result,ctrs,-1,(0xFF,0,0))
plt.figure()
plt.imshow(result)
P.S. Among the outputs returned by findContours there is a hierarchy matrix.
It is possible to reach the same result by analyzing that matrix, although it is a little more complex, as explained here. A minimal sketch follows below.
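For completeness, a minimal sketch of the hierarchy route, reusing the inverted binary image I from the snippet above; with cv2.RETR_CCOMP, top-level contours have no parent, i.e. hierarchy[0][i][3] == -1 (the unpacking below covers both the 2- and 3-value findContours signatures):
res = cv2.findContours(I, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
contours, hierarchy = (res[0], res[1]) if len(res) == 2 else (res[1], res[2])
# keep only contours whose parent index is -1, i.e. the external ones
external = [c for c, h in zip(contours, hierarchy[0]) if h[3] == -1]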
Instead of finding contours, I would suggest applying the Hough circle transform using the appropriate parameters.
Finding contours poses a challenge: once you invert the binary image, the circles are white, and OpenCV finds contours both along the outside and the inside of each circle. Moreover, since there are letters such as 'A' and 'B', contours will again be found along the outside of the letters and within their holes. You can find the right contours using the appropriate hierarchy criterion, but it is still tedious. A minimal sketch of the Hough approach is below.
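A minimal sketch of the Hough circle approach; the parameter values (minDist, param2, and the radius bounds) are guesses that would need tuning to the actual image:
import cv2
import numpy as np
img = cv2.imread('keypad.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)  # suppress noise before the transform
# param2 is the accumulator threshold: lower values detect more (possibly
# spurious) circles; minRadius/maxRadius bound the expected key size
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=100, param2=30, minRadius=15, maxRadius=40)
if circles is not None:
    for x, y, r in np.uint16(np.around(circles))[0]:
        cv2.circle(img, (int(x), int(y)), int(r), (0, 255, 0), 2)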
Here is what I tried by finding contours and using hierarchy:
Code:
#--- read the image, convert to gray and obtain inverse binary image ---
img = cv2.imread('keypad.png', 1)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV|cv2.THRESH_OTSU)
#--- find contours ---
_, contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
#--- copy of original image ---
img2 = img.copy()
#--- select contours having a parent contour and append them to a list ---
l = []
for h in hierarchy[0]:
    if h[0] > -1 and h[2] > -1:
        l.append(h[2])
#--- draw those contours ---
for cnt in l:
    if cnt > 0:
        cv2.drawContours(img2, [contours[cnt]], 0, (0,255,0), 2)
cv2.imshow('img2', img2)
For more info on contours and their hierarchical relationships, please refer to this
UPDATE
I have a rather crude way to ignore unwanted contours. Find the average area of all the contours in list l and draw those that are above the average:
Code:
img3 = img.copy()
a = 0
for j, i in enumerate(l):
    a = a + cv2.contourArea(contours[i])
mean_area = int(a/len(l))
for cnt in l:
    if (cnt > 0) & (cv2.contourArea(contours[cnt]) > mean_area):
        cv2.drawContours(img3, [contours[cnt]], 0, (0,255,0), 2)
cv2.imshow('img3', img3)
You can select only the outer borders with this function:
def _select_contours(contours, hierarchy):
    """select contours of the second level"""
    # find the border of the image, which has no father
    father_i = None
    for i, h in enumerate(hierarchy):
        if h[3] == -1:
            father_i = i
            break
    # collect its sons
    new_contours = []
    for c, h in zip(contours, hierarchy):
        if h[3] == father_i:
            new_contours.append(c)
    return new_contours
Note that you should use cv2.RETR_TREE in cv2.findContours() to get the contours and hierarchy; a usage sketch follows below.
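A hedged usage sketch, reusing thresh and image from the question's snippet (OpenCV returns the hierarchy with shape (1, N, 4), hence hierarchy[0]; the unpacking covers both the 2- and 3-value findContours signatures):
res = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours, hierarchy = (res[0], res[1]) if len(res) == 2 else (res[1], res[2])
outer = _select_contours(contours, hierarchy[0])
cv2.drawContours(image, outer, -1, (0, 255, 0), 2)  # draw the kept outer borders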
I want to detect the text areas of images using Python 2.7 and OpenCV 2.4.9,
and draw a rectangle around each area, like in the example image below.
I am new to image processing, so any idea how to do this would be appreciated.
There are multiple ways to go about detecting text in an image.
I recommend looking at this question here, as it may answer your case as well. Although it is not in Python, the code can easily be translated from C++ to Python (just look at the API and convert the methods; not hard, I did it myself when I tried their code for my own separate problem). The solutions there may not work for your case, but I recommend trying them out.
If I were to go about this I would do the following process:
Prep your image:
If all of the images you want to edit are roughly like the one you provided, where the actual design consists of a range of gray colors and the text is always black, I would first white out all content that is not black (or already white). Doing so will leave only the black text.
# must import if working with opencv in python
import numpy as np
import cv2
# removes pixels in image that are between the range of
# [lower_val,upper_val]
def remove_gray(img,lower_val,upper_val):
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    lower_bound = np.array([0,0,lower_val])
    upper_bound = np.array([255,255,upper_val])
    mask = cv2.inRange(hsv, lower_bound, upper_bound) # the original used an undefined `gray` here
    return cv2.bitwise_and(img, img, mask = mask)
Now that all you have is the black text, the goal is to get those boxes. As stated before, there are different ways of going about this.
Stroke Width Transform (SWT)
The typical way to find text areas: you can find text regions by using the stroke width transform, as depicted in "Detecting Text in Natural Scenes with Stroke Width Transform" by Boris Epshtein, Eyal Ofek, and Yonatan Wexler. To be honest, if this is as fast and reliable as I believe it is, then this method is more efficient than my code below. You can still use the code above to remove the blueprint design, though, and that may help the overall performance of the SWT algorithm.
Here is a C library that implements their algorithm, but it is stated to be very raw and its documentation incomplete. Obviously, a wrapper will be needed in order to use this library with Python, and at the moment I do not see an official one offered.
The library I linked is CCV. It is a library that is meant to be used in your applications, not to recreate algorithms. So this is a tool to be used, which goes against the OP's wish to build it from "first principles", as stated in the comments. Still, it is useful to know it exists if you don't want to code the algorithm yourself.
Home Brewed Non-SWT Method
If you have metadata for each image, say in an XML file, that states how many rooms are labeled in each image, you can read that file, get the number of labels, and store it in a variable, say num_of_labels. Now take your image and put it through a while loop that erodes at a rate you specify, finding external contours on each iteration and stopping once you have the same number of external contours as num_of_labels. Then simply find each contour's bounding box and you are done.
# erodes image based on given kernel size (erosion = expands black areas)
def erode( img, kern_size = 3 ):
    retval, img = cv2.threshold(img, 254.0, 255.0, cv2.THRESH_BINARY) # threshold to deal with only black and white.
    kern = np.ones((kern_size,kern_size),np.uint8) # make a kernel for erosion based on given kernel size.
    eroded = cv2.erode(img, kern, iterations=1) # erode your image to blobbify black areas
    y,x = eroded.shape # get shape of image to make a white border around image of 1px, to avoid problems with find contours.
    return cv2.rectangle(eroded, (0,0), (x,y), (255,255,255), 1)
# finds contours of eroded image
def prep( img, kern_size = 3 ):
    img = erode( img, kern_size )
    retval, img = cv2.threshold(img, 200.0, 255.0, cv2.THRESH_BINARY_INV) # invert colors for findContours
    return cv2.findContours(img,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE) # find contours of image
# given img & number of desired blobs, returns contours of blobs.
# note: the 3-value unpacking below matches OpenCV 3's findContours signature.
def blobbify(img, num_of_labels, kern_size = 3, dilation_rate = 10):
    prep_img, contours, hierarchy = prep( img.copy(), kern_size ) # erode img and check current contour count.
    while len(contours) > num_of_labels:
        kern_size += dilation_rate # grow kern_size to enlarge the blobs. Remember kern_size must always be odd.
        previous = (prep_img, contours, hierarchy)
        prep_img, contours, hierarchy = prep( img.copy(), kern_size ) # erode img and check the contour count again.
        if len(contours) < num_of_labels: # overshot the target count; fall back to the previous iteration
            return previous
    return (prep_img, contours, hierarchy)
# finds bounding boxes of all contours
def bounding_box(contours):
    bBox = []
    for curve in contours:
        box = cv2.boundingRect(curve)
        bBox.append(box)
    return bBox
The resulting boxes from the above method will have space around the labels, and this may include part of the original design if the boxes are applied to the original image. To avoid this, make regions of interest via your newfound boxes and trim the white space (a sketch of this trimming is below). Then save that ROI's shape as your new box.
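A hedged sketch of that trimming step; trim_whitespace is a hypothetical helper, and 255 is assumed to be the white background value:
import numpy as np
# hypothetical helper: shrink a box (x, y, w, h) to the non-white content inside it
def trim_whitespace(img_gray, box):
    x, y, w, h = box
    roi = img_gray[y:y+h, x:x+w]
    ys, xs = np.where(roi < 255) # non-white pixels
    if len(xs) == 0:
        return box # roi is entirely white; leave the box unchanged
    return (x + xs.min(), y + ys.min(), xs.max() - xs.min() + 1, ys.max() - ys.min() + 1)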
Perhaps you have no way of knowing how many labels will be in the image. If this is the case, then I recommend playing around with erosion values until you find the best one for your case and get the desired blobs.
Or you could try finding contours on the remaining content after removing the design, and combine bounding boxes into one rectangle based on their distance from each other (see the merge sketch below).
After you have found your boxes, simply use those boxes with respect to the original image and you will be done.
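And a hedged sketch of the box-merging idea mentioned above; merge_boxes, boxes_close, and the gap threshold are hypothetical:
# merge two bounding boxes (x, y, w, h) into their union
def merge_boxes(a, b):
    x = min(a[0], b[0])
    y = min(a[1], b[1])
    X = max(a[0] + a[2], b[0] + b[2])
    Y = max(a[1] + a[3], b[1] + b[3])
    return (x, y, X - x, Y - y)
# boxes are "close" if expanding one by gap pixels makes them overlap
def boxes_close(a, b, gap=20):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return not (ax > bx + bw + gap or bx > ax + aw + gap or
                ay > by + bh + gap or by > ay + ah + gap)
# repeatedly merge any pair of close boxes until none remain
def merge_close_boxes(boxes, gap=20):
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if boxes_close(boxes[i], boxes[j], gap):
                    boxes[i] = merge_boxes(boxes[i], boxes[j])
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes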
Scene Text Detection Module in OpenCV 3
As mentioned in the comments to your question, there already exists a means of scene text detection (not document text detection) in OpenCV 3. I understand you do not have the ability to switch versions, but for those with the same question who are not limited to an older OpenCV version, I decided to include this at the end. Documentation for the scene text detection can be found with a simple Google search.
The OpenCV module for text detection also comes with text recognition that uses Tesseract, a free open-source text recognition engine. The downfall of Tesseract, and therefore of OpenCV's scene text recognition module, is that it is not as refined as commercial applications and is time-consuming to use. That decreases its appeal, but it's free, so it's the best we have without paying money if you want text recognition as well.
Links:
Documentation OpenCv
Older Documentation
The source code is located here, for analysis and understanding
Honestly, I lack the experience and expertise in both OpenCV and image processing to provide a detailed way of implementing their text detection module. The same goes for the SWT algorithm. I only got into this stuff in the past few months, but as I learn more I will edit this answer.
Here's a simple image processing approach using only thresholding and contour filtering:
Obtain binary image. Load image, convert to grayscale, Gaussian blur, and adaptive threshold
Combine adjacent text. We create a rectangular structuring kernel then dilate to form a single contour
Filter for text contours. We find contours and filter using contour area. From here we can draw the bounding box with cv2.rectangle()
Using this original input image (removed red lines)
After converting the image to grayscale and Gaussian blurring, we adaptively threshold to obtain a binary image
Next we dilate to combine the text into a single contour
From here we find contours and filter using a minimum threshold area (in case there was small noise). Here's the result
If we wanted to, we could also extract and save each ROI using Numpy slicing
Code
import cv2
# Load image, grayscale, Gaussian blur, adaptive threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (9,9), 0)
thresh = cv2.adaptiveThreshold(blur,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,11,30)
# Dilate to combine adjacent text contours
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9,9))
dilate = cv2.dilate(thresh, kernel, iterations=4)
# Find contours, highlight text areas, and extract ROIs
cnts = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
ROI_number = 0
for c in cnts:
    area = cv2.contourArea(c)
    if area > 10000:
        x,y,w,h = cv2.boundingRect(c)
        cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 3)
        # ROI = image[y:y+h, x:x+w]
        # cv2.imwrite('ROI_{}.png'.format(ROI_number), ROI)
        # ROI_number += 1
cv2.imshow('thresh', thresh)
cv2.imshow('dilate', dilate)
cv2.imshow('image', image)
cv2.waitKey()
EDIT: I bypassed the problem by adding a 2-pixel frame to the image, then running my code, and finally cropping the image to remove the extra frame. It is an ugly solution, but it works!
I encountered a problem which I'm not sure is a bug or just my lack of experience. I'll try to summarize it as clearly as possible:
I get a binary image which contains the contours of the color image I want to analyse (white pixels are the perimeters of the contours detected by my algorithm; the rest is black). The image is pretty complex, as the objects I want to detect fill it entirely (there is no "background").
I use findContours with that image:
contours, hierarchy = cv2.findContours(image,cv2.RETR_CCOMP,cv2.CHAIN_APPROX_NONE)
Then a "for" loop detects the contours with an area lesser than "X" pixels and hierarchy[0][x][3] >= 0 (lets call the new array "contours_2")
I draw "contours_2" in a new image:
cv2.drawContours(image=image2, contours=contours_2, contourIdx=-1, color=(255,255,255), thickness=-1)
The problem is that drawContours draws all the contours well, BUT it doesn't fill the contours that are on the border of the image (that is, the ones that have one boundary on the edge of the image). I tried setting the border pixels of the image (like a frame) to True, but it doesn't work, as findContours automatically sets those pixels to zero (it's in the function description).
Using cv2.contourArea in the previous loop detects those contours fine and returns normal values, so there is no way to know in advance which contours will be ignored by drawContours and which will be correctly filled. cv2.isContourConvex doesn't work at all, as it returns false for every contour.
I'm able to draw those "edge" contours by applying cv2.convexHull to them before drawContours, but I only want to use it on the edge contours (because it deforms the contours and I want to avoid that as much as possible). The problem is that I have no way to detect which contours are on the edge of the image and which ones are not and work fine with drawContours.
So I'd like to ask why drawContours behaves like that, and whether there is some way to make it fill the contours at the edge. Alternatively, how can I detect which contours touch the borders of the image (so I can apply convexHull only where needed)?
If you have only one contour and you use contourIdx=-1, drawContours will treat each point in your contour as a separate contour. You need to wrap your single contour in a list (i.e. contours=[contours_2] if there is only one contour in contours_2).
I have no problem filling the contour at the edge, aside from a small white line showing up at the edge:
import cv2
import numpy as np
rows=512
cols=512
# Create image with new colour for replacement
img= np.zeros((rows,cols,3), np.uint8)
img[:,:]= (0,0,0)
#img=cv2.copyMakeBorder(img,1,1,1,1, cv2.BORDER_CONSTANT, value=[0,0,0])
# Draw rectangle
img = cv2.rectangle(img,(400,0),(512,100),(255,255,255),-1)
imggray=cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Find Contour
_, contours, hierarchy = cv2.findContours( imggray.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
cv2.drawContours(img, contours, 0, (0,0,255), -1)
cv2.imshow('image',img)
cv2.waitKey(0)
Change
cv2.drawContours(image=image2, contours=contours_2, contourIdx=-1, color=(255,255,255), thickness=-1)
to
cv2.drawContours(image=image2, contours=contours_2, contourIdx=-1, color=(255,255,255), thickness=2)
Drawing with a positive thickness traces the outlines instead of filling them, which sidesteps the fill problem for contours cut off by the image border.