Is there a way with OpenCV to smooth the edges as shown in this small black and white image?
I tried cv2.blur and cv2.GaussianBlur but that just blurs the image. I want to smooth out the lines (black lines as shown). How could I do this?
You don't have to run GaussianBlur over the whole image, just over the parts you need. You can create your own masks, or use something like this as a starting point:
blur = cv2.GaussianBlur(img, (5, 5), 0)
# blend: 1.5*blur - 0.5*img (the weights sum to 1, preserving overall brightness)
smooth = cv2.addWeighted(blur, 1.5, img, -0.5, 0)
Feel free to experiment with the parameters.
Use this, I find it very effective:
median = cv2.medianBlur(imgBinAll, 5)
You can find the external contour of the region, then compute its convex hull and draw it with the antialiased line flag on a new image. See the cv2.fillConvexPoly method for Python.
I would like to extract logos from golf balls for further image processing.
I have already tried different methods.
I wanted to use the grayscale values of the images to locate the logo and then cut it out. Due to the many different logos and a black border around the images, this method unfortunately failed.
As my next approach, I thought I would first remove the black background and then repeat the procedure from 1., but this also failed, because there is a dark shadow in the lower left corner and it is also recognized as the "logo" by the grayscale method. Masking the border further toward the outside is not a solution either, because logos that sit on the border would then be cut away or only half detected.
I used the Canny edge detection algorithm from the OpenCV library. The detection looked very promising, but I was not able to extract only the logo, because the edge of the golf ball was also detected.
Any solution is welcome. Please forgive my English; I am also quite a beginner in programming. There is probably a very simple solution to my problem, but I thank you in advance for your help.
Here are two example images: first the type of image from which the logos should be extracted, and then how the image should look after extraction.
Thank you very much. Best regards T
This is essentially "adaptive" thresholding, except this approach doesn't need to threshold. It adapts to the illumination, leaving you with a perfectly fine grayscale image (or color, if extended to do that).
median blur (large kernel size) to estimate ball/illumination
division to normalize
illumination:
normalized (and scaled a bit):
thresholded with Otsu:
import cv2 as cv
import numpy as np

def process(im, r=80):
    # a large median blur estimates the ball/illumination
    med = cv.medianBlur(im, 2*r + 1)
    # divide by the illumination estimate; guard the near-zero pixels
    with np.errstate(divide='ignore', invalid='ignore'):
        normalized = np.where(med <= 1, 1, im.astype(np.float32) / med.astype(np.float32))
    return (normalized, med)

normalized, med = process(ball1, 80)  # ball1: grayscale image of the ball
# imshow(med)
# imshow(normalized * 0.8)

ret, thresh = cv.threshold((normalized.clip(0, 1) * 255).astype('u1'),
                           0, 255, cv.THRESH_BINARY + cv.THRESH_OTSU)
# imshow(thresh)
Adaptive thresholding can do the trick.
I'm trying to build robust coin detection on images, using HoughCircles with these params:
blured_roi = cv2.GaussianBlur(region_of_interest, (23, 23), cv2.BORDER_DEFAULT)
rows = blured_roi.shape[0]
circles = cv2.HoughCircles(blured_roi, cv2.HOUGH_GRADIENT, 0.9, rows/8,
param1=30, param2=50, minRadius=30, maxRadius=90)
The region of interest could be 300x300 or 1080x1920, so fixed minRadius and maxRadius values aren't really helping here, even though they approximately match the size of my coin in each type of image shape.
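One way around hard-coded radii (my own sketch, not from the original post) is to derive minRadius/maxRadius from the ROI size; the fractions below are made-up placeholders that would need tuning to the real coin-to-frame ratio:

```python
import numpy as np

def coin_radius_bounds(roi, rel_min=0.08, rel_max=0.25):
    """Derive HoughCircles radius bounds from the ROI size instead of
    hard-coding pixels. rel_min/rel_max are guessed fractions of the
    shorter side -- tune them to the actual coin-to-frame ratio."""
    short_side = min(roi.shape[:2])
    return max(1, int(rel_min * short_side)), int(rel_max * short_side)

roi = np.zeros((300, 300), np.uint8)
min_r, max_r = coin_radius_bounds(roi)
# min_r and max_r now scale automatically between the 300x300 and 1080x1920 cases.
```

The returned bounds would then be passed as minRadius/maxRadius to cv2.HoughCircles in place of the fixed 30 and 90.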
So to achieve that I tried many things. First of all, I used a simple grayscale image with a GaussianBlur filter.
It works in most cases, but if the coin's border is a similar tint to the background, the grayscale image will not really help to detect the right radius of the circle. Take a look at this example:
Next, I tried to use the edges of the coin to detect circles with a Canny transform, but as you can see above, the Canny filter does not work as I hoped, so I applied a GaussianBlur of (13, 13).
I also know that there is another Canny call inside the HoughCircles method, but I wanted to be sure I would keep the edges of the coin, because I was losing them while blurring with GaussianBlur.
Finally, I tried using a thresholded image, but I can't understand why it didn't work as well as expected: on this image one would hope there is hardly any noise, because it is only black and white, and the circle is almost perfect. There I applied a GaussianBlur of (9, 9).
Here you can see it failed to detect the coin on the thresholded image, but it works on the Canny edges image. In many other cases, though, the Hough transform on the edge image gives an imperfect result, and I feel more confident about the thresholded image, which as you can see reveals a nice circle.
I would like to understand why it doesn't work on the thresholded image (like the example above), and what I could do to make it work.
EDIT 1: I discovered the different BORDER types that can be passed to the GaussianBlur method. I thought this would be really useful to improve the Hough circle detection on the thresholded image, but it didn't go as well as expected.
So I have an image processing task at hand that requires me to crop a certain portion of an image. I have no prior experience with OpenCV and would like to know which approach I should take.
Sample Input Image:
Sample Output Image:
What I initially thought was to convert the image to a bitmap and remove pixels that are below or above a certain threshold. Since I am free to use OpenCV and Python, I would like to know of any automated algorithm that does this and, if not, what the right approach for such a problem would be. Thank you.
Applying a simple threshold should get rid of the background, provided it's always darker than the foreground. If you use the Otsu thresholding algorithm, it should choose a good partition for you. Using your example as input, this gives:
Next you could compute the bounding box to select the region of the foreground. Provided the background is distinct enough and there are no holes, this gives you the resulting rect:
[619 x 96 from (0, 113)]
You can then use this rect to crop the original, to produce the desired result:
I wrote the code to solve this in C++. A rough translation into Python would look something like this:
import sys
import cv2 as cv

img = cv.imread(sys.argv[1])
grayscale = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
# threshold returns (retval, image); Otsu picks the threshold automatically
_, thresholded = cv.threshold(grayscale, 0, 255, cv.THRESH_OTSU)
cv.imwrite("otsu.png", thresholded)

bbox = cv.boundingRect(thresholded)
x, y, w, h = bbox
print(bbox)

foreground = img[y:y+h, x:x+w]
cv.imwrite("foreground.png", foreground)
This method is fast and simple. If you find you have some white holes in your background which enlarge the bounding box, try applying an erosion operator.
FWIW I very much doubt you would get results like this as predictably or reliably using NNs.
The thresholding seems like a good approach. A neural network would be overkill, and you probably don't have enough data to train one anyway :D. Check out this link.
You should be able to do something like:
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt

img = cv.imread('img.png')
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
ret, thresh = cv.threshold(gray, 0, 255, cv.THRESH_BINARY_INV + cv.THRESH_OTSU)
An NN would be overkill! You can do edge detection and take the extreme horizontal lines as boundaries, then crop only the ROI between these two lines.
I have an image Original Image, and I would like to find the contour that encloses the box in the image. The reason for doing this, is I would like to then crop the image to the bounding box, and then perform further image processing on this cropped image.
I have tried detecting Canny edges, however they seem not to be connecting as I want them to. Attached is an image of how the canny edges look. Canny edges
gray = img[:, :, 1]  # use the green channel as grayscale
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edged = cv2.Canny(blurred, 20, 60)
What is the best way to find the bounding box from the original image?
Many thanks.
Let me know how I can make this question clearer if possible too!
I assume the following: (if this is not the case you should specify such things in your question)
You know the size of the box
The size is always the same
The perspective is always the same
The box is always completely within the field of view
The box is not rotated
Use a few scan lines across the image to find the transition from black background to box (in x and y)
Threshold exceeded, max gradient or whatever suits you best.
Discard outliers, use min and max coordinates to position the fixed size ROI over your box.
There are many other ways to find the center position of that fixed ROI, like
threshold, distance transform, maximum
or
threshold, blob search, centroid/contour
You could also do some contour matching.
I recommend you improve your setup so the background illumination does not exceed the box border (left/right is better than top/bottom). Then everything becomes easy.
Your edge image looks terrible, by the way. Check other methods or improve your Canny parameters.
I'm testing some image processing to obtain minutiae from digital fingerprints. I'm doing so far:
Equalize histogram
Binarize
Apply Zhang-Suen algorithm for lines thinning (this is not working properly).
Try to determine corners in thinned image and show them.
So, the modifications I'm obtaining are:
However, I can't obtain the possible corners from the last image, which is the thinned instance of the Mat object.
This is code for trying to get corners:
corners_image = cv2.cornerHarris(thinned, 1, 1, 0.04)
corners_image = cv2.dilate(corners_image, None)
But trying imshow on the resulting matrix shows just a black image.
How should I determine corners then?
Actually, cv2.cornerHarris returns corner responses, not the corners themselves. It looks like the responses in your image are too small.
If you want to visualize the corners, you can take the responses that are larger than some threshold parameter and mark those points on the original image as follows:
corners = cv2.cvtColor(thinned, cv2.COLOR_GRAY2BGR)
threshold = 0.1*corners_image.max()
corners[corners_image > threshold] = [0, 0, 255]
cv2.imshow('corners', corners)
Then you can call imshow, and the red points will correspond to corner points. Most likely you will need to tune the threshold parameter to get the results you need.
See more details in tutorial.