I am looking to fill the area under a boundary with white color. I basically have an image with the red boundary detected through the findContours method.
I am now looking at filling the area below this detected red boundary with white color. This would allow me to distinguish between the area below the red boundary and the area above for a histogram computation.
Can someone help me with this? Open to suggestions outside OpenCV as well, if it's easier to implement.
You are drawing after the findContours operation, so you can get the (x, y) points you are painting and, for each column, fill every row below the boundary point with white.
I've got the following image:
I want to fill in the black hole and make it white. How can I fill in this hole?
Thank you
You could flood-fill with white starting at the top-left corner, which will leave you with this, and should allow you to locate the "hole".
I have bordered artificially with red so you can see the extent of the image.
Apply the method from "Anyone knows an algorithm for finding "shapes" in 2d arrays?".
Images are basically arrays, and you can apply this algorithm with a little modification to find the holes (the enclosed black regions) and set every black pixel inside them to white.
My task is to detect aeroplane condensation trails in a blue sky and delete everything else from the picture, but I have to leave a 10-pixel wide area around the trails.
I have managed to draw the contours of the condensation trails based on colour using a mask and cv2.drawContours, but I'm stuck with creating that 10-pixel sky-blue area around them; basically I have to scale up the contour line.
Is it possible to scale up contours drawn by the cv2.drawContours command?
Since you already have a list of points on the contour, you can easily draw a 10-pixel thick line on it by using the line function between consecutive points (look at the thickness parameter). To fill the rest of the area inside the contour, look at the fillPoly function.
I am trying to count all the shaded and un-shaded rectangles in this grid using python. I tried contour detection in OpenCV and was not able to achieve this. I also tried the hough line transform and detected the lines in the image, but I am not able to figure out how to proceed further. Is there a better way of doing it? Can someone suggest a way to proceed?
As your image looks very clean, I would
threshold the image to select white regions: gray regions and black lines will be black
use findContours() to count white blobs
do another threshold to select black lines. Only black lines will be black, everything else white
XOR the two images: this way you should have the gray regions
use findContours() to count the gray blobs
EDIT:
The ellipse cuts some rectangles and this will affect your count. If you want to remove it, the thresholds are not enough (both the ellipse and the rectangle lines are black). A possible way to do it:
With Hough Lines you can detect the lines,
draw the vertical and horizontal lines in a new image (ignore diagonal lines as they may be part of the ellipse)
with boolean operations (AND, XOR, OR) between the thresholded images and the lines image you should be able to keep only the lines and remove the ellipse
I have this image:
Here I have an image on a green background with an area marked by a red line inside it. I want to calculate the area of the marked portion with respect to the image.
I am cropping the image to remove the green background and calculating the area of the cropped image. From here I don't know how to proceed.
I have noticed that contours can be used for this, but the problem is how to draw the contour in this case.
I guess if I can create the contour and fill the marked area with some colour, I can subtract it from the whole (cropped) image and get both areas.
In your link, they use the threshold method with a colour value as a parameter. Basically, it takes your source image and sets as white all pixels greater than this value, and black otherwise (this means that your source image needs to be a greyscale image). This threshold is what enables you to "fill the marked area" in order to make contour detection possible.
However, I think you should try to use the method inRange on your cropped picture. It is pretty much the same as threshold, but instead of having one threshold, you have a minimum and a maximum boundary. If your pixel is in the range of colours given by your boundaries, then it will be set as white. If it isn't, then it will be set as black. I don't know if this will work, but if you try to isolate the "most green" colours in your range, then you might get your big white area on the top right.
Then you apply the method findContours on your binarized image. It will give you all the contours it found, so if you have small white dots on other places in your image it doesn't matter, you'll only have to select the biggest contour found by the method.
Be careful, if the range of inRange isn't appropriate, the big white zone you should find on top right might contain some noise, and it could mess with the detection of contours. To avoid that, you could blur your image and do some stuff like erosion/dilation. This way you might get a better detection.
EDIT
I'll add some code here, but it can't be used as is. As I said, I have no knowledge in Python so all I can do here is provide you the OpenCV methods with the parameters to provide.
Let's make also a review of the steps:
Binarize your image with inRange. You need to find appropriate values for your minimum and maximum boundaries. What you want to do here is isolate the green colours since it is mostly what composes the area inside your contour. I can't really suggest you something better than trial and error to find the best thresholds. Let's start with those min and max values : (0, 125, 0) and (255, 250, 255)
inRange(source_image, Scalar(0, 125, 0), Scalar(255, 250, 255), binarized_image)
Check your result with imshow
imshow("bin", binarized_image)
If your binarization is OK (you can detect the area you want quite well), apply findContours. I'm sorry, I don't understand the syntax used in your tutorial or in the documentation, but here are the parameters:
binarized_mat: your binarized image
contours: an array of arrays of Point which will contain all the contours detected. Each contour is stored as an array of points.
mode: you can choose whatever you want, but I'd suggest RETR_EXTERNAL in your case.
Get the contour array with the biggest size, since the contour with the most points is likely the largest one.
Calculate the area inside
Hope this helps!
I have a black image with a big white spot on it and I want to calculate the area of this white spot. Which is the best way to calculate this ? I'm using OpenCV in Python.
To find the area, follow these steps:
Apply thresholding to binarize the input image.
Find the contours.
Find the area of the contours using cv2.contourArea().
Refer to this example for further reference.