I'm trying to build robust coin detection on images using HoughCircles with these parameters:
blured_roi = cv2.GaussianBlur(region_of_interest, (23, 23), cv2.BORDER_DEFAULT)  # note: GaussianBlur's third positional argument is sigmaX, so cv2.BORDER_DEFAULT (== 4) is passed here as the sigma, not as a border type
rows = blured_roi.shape[0]
circles = cv2.HoughCircles(blured_roi, cv2.HOUGH_GRADIENT, 0.9, rows/8,
param1=30, param2=50, minRadius=30, maxRadius=90)
The region of interest can be (300x300) or (1080x1920), so minRadius and maxRadius aren't really helping here, although they are approximately the size of my coin in each type of image shape.
To achieve that I tried many things. First of all, I used a simple grayscale image with a GaussianBlur filter.
It works in most cases, but if the coin's border is a similar tint to the background, the grayscale image doesn't really help to detect the right radius of my circle. Take a look at this example:
Second, I tried to use the edges of the coin to detect circles with a Canny transform, but as you can see above, the Canny filter doesn't work as I hoped, so I applied a GaussianBlur of (13, 13).
I also know that there is another Canny call inside the HoughCircles method, but I wanted to be sure I'd get the edges of the coin, because I'm losing them while blurring with GaussianBlur.
Finally, I tried using a thresholded image, but I can't understand why it doesn't work as well as expected, because on this image there should hardly be any noise (it's only black and white), and the circle is almost perfect. I also applied a GaussianBlur of (9, 9).
Here you can see it failed to detect the coin on the thresholded image, but it works on the Canny edge image. In many other cases, though, the Hough transform on the edge image gives an imperfect result, and I feel more confident about the thresholded image, which as you can see reveals a nice circle.
I would like to understand why it doesn't work on the thresholded image (like the example above), and what I could do to make it work.
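For reference, here is a minimal sketch of the thresholded-image attempt described above; the filename is illustrative and I'm assuming Otsu thresholding here, since the actual threshold call isn't shown:

import cv2

roi = cv2.imread("coin_roi.png", cv2.IMREAD_GRAYSCALE)  # illustrative filename

# thresholded image (assuming Otsu here)
_, threshed = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

# blur the binary image before HoughCircles, as described above
blured_roi = cv2.GaussianBlur(threshed, (9, 9), 0)

rows = blured_roi.shape[0]
circles = cv2.HoughCircles(blured_roi, cv2.HOUGH_GRADIENT, 0.9, rows / 8,
                           param1=30, param2=50, minRadius=30, maxRadius=90)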
EDIT 1: I discovered the different BORDER types you can specify in the GaussianBlur method. I thought they would be really useful to improve the Hough circle detection on the thresholded image, but it didn't go as well as expected.
Related
Is there a way with OpenCV to smooth the edges as shown in this small black and white image?
I tried cv2.blur and cv2.GaussianBlur but that just blurs the image. I want to smooth out the lines (black lines as shown). How could I do this?
You don't have to run GaussianBlur over the whole image, just over the parts you need. You may create your own masks, or use something like this as a starting point:
blur = cv2.GaussianBlur(img, (5, 5), 0)
smooth = cv2.addWeighted(blur, 1.5, img, -0.5, 0)  # weighted blend: 1.5 * blurred - 0.5 * original
Feel free to experiment with the parameters.
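If you go the mask route, here is a minimal sketch, assuming a single-channel image where the dark lines are what you want to smooth; the filename, mask construction and kernel sizes are only illustrative:

import cv2
import numpy as np

img = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)  # illustrative filename

# illustrative mask: white where the image is dark (the black lines)
_, mask = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)
mask = cv2.dilate(mask, np.ones((7, 7), np.uint8))  # widen the region around the lines

blur = cv2.GaussianBlur(img, (5, 5), 0)

# blurred pixels inside the mask, untouched pixels elsewhere
smooth = np.where(mask > 0, blur, img)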
Use this, I find it very effective:
median = cv2.medianBlur(imgBinAll, 5)
You can find the external contour of the region, then find its convex hull and draw it with the antialiased line flag on a new image. See the fillConvexPoly method for Python.
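A minimal sketch of that idea, assuming OpenCV 4's findContours signature and a binary image with the region shown white on a black background (invert first if it's the other way around); the filename is illustrative:

import cv2
import numpy as np

img = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)  # illustrative filename

# external contour of the (largest) white region
contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hull = cv2.convexHull(max(contours, key=cv2.contourArea))

# draw the filled hull with antialiasing on a new image
out = np.zeros_like(img)
cv2.fillConvexPoly(out, hull, 255, lineType=cv2.LINE_AA)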
I'm facing some general problems regarding edge detection in an image (the specific image should be irrelevant to my question).
I want the Canny edge detector to ignore a certain pixel value. For example, it should only look for edges where the gray value is not 0; otherwise "false edges" get detected.
I usually use the cv2.Canny function, which works quite fast and well. The problem is that it is not customizable, so I took this code for a custom Canny edge detector (https://rosettacode.org/wiki/Canny_edge_detector#Python) in order to customize it. It works, but it calculates the edges far too slowly (it takes several minutes, whereas cv2.Canny takes a fraction of a second).
This is my first problem.
Is there another way to make the cv2.Canny function "ignore" pixels of a certain value? Imagine that somewhere in the picture there is an area filled with black (see the image below). I don't want the edge detector to detect the edge of this black area.
Once I have some clear edges detected in my image, I want to create masks based on those edges. I couldn't find any examples for this online, so if anyone knows a good tutorial on how to create masks from edges, it would be great if you could help me out.
Thanks in advance
Here's an approach:
Calculate your Canny as usual using the fast OpenCV function.
Now locate all the black pixels in the image. You can do that with _, thr = cv2.threshold(im, 1, 255, cv2.THRESH_BINARY), which gives a mask that is zero in the black areas, then grow those black areas by a pixel with morphology to allow for edges being offset a little, as they often are.
Multiply the normal Canny image with the mask you created so that anything it found in the black areas gets multiplied by zero, i.e. lost.
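A minimal sketch of that approach; the filename, Canny thresholds and kernel size are illustrative, and a bitwise AND with the 0/255 mask has the same effect as the multiplication:

import cv2
import numpy as np

im = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # illustrative filename

# step 1: the normal, fast Canny
edges = cv2.Canny(im, 50, 150)

# step 2: mask that is zero in the black areas, 255 elsewhere
_, thr = cv2.threshold(im, 1, 255, cv2.THRESH_BINARY)
# grow the black areas slightly (eroding the mask grows its zero regions)
thr = cv2.erode(thr, np.ones((3, 3), np.uint8))

# step 3: keep only the edges that fall outside the black areas
edges = cv2.bitwise_and(edges, thr)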
I am very new to OpenCV (and to Stack Overflow). I'm writing a program with OpenCV that takes a picture with an object (e.g. a pen, rice, or a phone placed on paper) and calculates what percentage of the picture the object covers.
The problem I'm facing is that when I threshold the image (I tried adaptive and Otsu), the photo is a little bit shadowed around the edges:
Original image
Resulting picture
And here's my code:
import cv2
img = cv2.imread("image.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
b,g,r = cv2.split(img)
th, thresh = cv2.threshold(b, 100, 255, cv2.THRESH_BINARY|cv2.THRESH_OTSU)
cv2.imwrite("image_bl_wh.png", thresh)
I tried blurring and morphology, but couldn't make it work.
How can I make my program count those black parts around the picture as background, and is there a better and easier way to do it?
P.S. Sorry for my English grammar mistakes.
This is not a programmatic solution, but when you do automatic visual inspection it is the first thing you should try: improve your setup. The image is simply darker around the edges, so increasing the brightness when recording the images should help.
If that's not an option, you could consider having an empty image for comparison. What you are trying to do is background segmentation, and there are better ways than simple color thresholding; they do, however, usually require at least one image of the background, or multiple images.
If you want a software-only solution, you should try an edge detector combined with morphological operators.
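A minimal sketch of that idea, assuming OpenCV 4's findContours signature; the Canny thresholds and kernel size are illustrative and will need tuning:

import cv2
import numpy as np

img = cv2.imread("image.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# outline the object instead of thresholding on intensity
edges = cv2.Canny(gray, 50, 150)

# close gaps in the outline, then fill it via its external contour
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((9, 9), np.uint8))
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
mask = np.zeros_like(gray)
cv2.drawContours(mask, contours, -1, 255, cv2.FILLED)

# fraction of the picture covered by the object
coverage = cv2.countNonZero(mask) / mask.size
print(coverage)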
I need to identify the pixels where there is a change in colour. I googled for edge detection and line detection techniques, but am not sure how or in what way these can be applied.
Here are my very naive attempts:
Applying Canny Edge Detection
edges = cv2.Canny(img, 0, 10)
with various parameters, but it didn't work.
Applying Hough Line Transform to detect lines in the document
The intent behind this exercise is that I have an ill-formed table of values in a PDF document with the background I have attached. If I am able to identify the row boundaries using colour matching as in this question, my problem will be reduced to identifying the columns in the data.
Welcome to image processing. What you're trying to do here is basically find the places where the change in color between neighboring pixels is big, i.e. where the derivative of pixel intensities in the y direction is substantial. In signal processing, those are called high frequencies. The most common detector for high frequencies in images is the Canny edge detector, and you can find a very nice tutorial here, on the OpenCV website.
The algorithm is very easy to implement and requires just a few simple steps:
import cv2
# load the image
img = cv2.imread("sample.png")
# convert to grayscale
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# resize for the visualization purposes
img = cv2.resize(img, None, fx=0.4, fy=0.4)
# find edges with Canny
edges = cv2.Canny(img, 10, 20, apertureSize=3)
# show and save the result
cv2.imshow("edges", edges)
cv2.waitKey(0)
cv2.imwrite("result.png", edges)
Since your case is very straightforward, you don't have to worry about the parameters in the Canny() function call. But if you want to find out what they do, I recommend implementing a trackbar and using it to experiment. The result:
Good luck.
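In case it helps with the experimenting mentioned above, here is a minimal trackbar sketch; the window and trackbar names are arbitrary:

import cv2

img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)

def update(_):
    # re-run Canny whenever a slider moves
    lo = cv2.getTrackbarPos("low", "edges")
    hi = cv2.getTrackbarPos("high", "edges")
    cv2.imshow("edges", cv2.Canny(img, lo, hi))

cv2.namedWindow("edges")
cv2.createTrackbar("low", "edges", 10, 255, update)
cv2.createTrackbar("high", "edges", 20, 255, update)
update(0)
cv2.waitKey(0)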
I have an image (Original Image), and I would like to find the contour that encloses the box in the image. The reason for doing this is that I would like to crop the image to that bounding box and then perform further image processing on the cropped image.
I have tried detecting Canny edges; however, they don't seem to connect as I want them to. Attached is an image of how the Canny edges look (Canny edges).
gray = img[:, :, 1]  # note: this takes the green channel, not a true grayscale conversion
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edged = cv2.Canny(blurred, 20, 60)
What is the best way to find the bounding box from the original image?
Many thanks.
Let me know how I can make this question clearer if possible too!
I assume the following (if this is not the case, you should specify such things in your question):
You know the size of the box
The size is always the same
The perspective is always the same
The box is always completely within the field of view
The box is not rotated
Use a few scan lines across the image to find the transition from black background to box (in x and y)
Use a threshold being exceeded, the maximum gradient, or whatever suits you best as the transition criterion.
Discard outliers, then use the min and max coordinates to position the fixed-size ROI over your box.
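A minimal sketch of the scan-line idea; the filename, intensity threshold, number of scan lines and box size are illustrative, and outlier rejection is not shown:

import cv2
import numpy as np

img = cv2.imread("box.png", cv2.IMREAD_GRAYSCALE)  # illustrative filename
h, w = img.shape
level = 60               # illustrative: intensity above which we are "on the box"
box_w, box_h = 400, 300  # the known, fixed box size assumed above

xs, ys = [], []
# horizontal scan lines give x transitions, vertical scan lines give y transitions
for y in np.linspace(0, h - 1, 9, dtype=int):
    cols = np.where(img[y, :] > level)[0]
    if cols.size:
        xs.append(cols[0])
for x in np.linspace(0, w - 1, 9, dtype=int):
    rows = np.where(img[:, x] > level)[0]
    if rows.size:
        ys.append(rows[0])

# the smallest transitions give the top-left corner of the fixed-size ROI
x0, y0 = min(xs), min(ys)
roi = img[y0:y0 + box_h, x0:x0 + box_w]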
There are many other ways to find the center position of that fixed ROI, like
threshold, distance transform, maximum
or
threshold, blob search, centroid/contour
You could also do some contour matching.
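For the threshold/contour variant, a minimal sketch that returns a bounding box directly; the filename and threshold value are illustrative, and the OpenCV 4 findContours signature is assumed:

import cv2

img = cv2.imread("box.png", cv2.IMREAD_GRAYSCALE)  # illustrative filename

# separate the bright box from the dark background
_, binary = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY)

# the largest external contour should be the box
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
box = max(contours, key=cv2.contourArea)

# bounding box of that contour, then crop
x, y, bw, bh = cv2.boundingRect(box)
roi = img[y:y + bh, x:x + bw]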
I recommend you improve your setup so the background illumination does not exceed the box border (left/right is better than top/bottom). Then everything becomes easy.
Your edge image looks terrible btw. Check other methods or improve your Canny parameters.