Smoothing extremal points in handwritten digits - python

I am trying to recognize hand written digits. Say that I have the following image:
My target is to smooth the extremal features of the contours, and keep only the shape of the white trace like below:
I first applied a binary threshold (cv2.THRESH_BINARY_INV) to remove the noise.
Now I tried applying cv2.erode() with np.ones((5,5)) as the kernel, but the resulting figure still had the extremal points.
I think applying cv2.findContours() may help to get the desired shape, but I am going to end up with two contours, one for the inner and another for the outer part. Any ideas will be much appreciated!
Edit:
Thanks to @stateMachine, I managed to get a skeleton of the digit. I applied cv2.ximgproc.thinning(), followed by cv2.GaussianBlur() and a morphological closing (cv2.MORPH_CLOSE). If the extremal points of this image can be smoothed a bit, it would be perfect. I am still open to any ideas :)
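For reference, that pipeline looks roughly like the sketch below (the file name and the kernel sizes are placeholders, not necessarily the exact values I used):
import cv2
import numpy as np

# Rough sketch of the pipeline above; "digit.png" and the kernel sizes are placeholders.
gray = cv2.imread("digit.png", cv2.IMREAD_GRAYSCALE)

# Thin the trace down to a 1-pixel skeleton:
skeleton = cv2.ximgproc.thinning(gray)

# Blur and re-threshold to round off sharp spurs on the skeleton:
blurred = cv2.GaussianBlur(skeleton, (5, 5), 0)
_, smooth = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological closing to fill small gaps left by the thinning:
smooth = cv2.morphologyEx(smooth, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))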

Maybe what you are looking for is the shape's skeleton. Skeletonization (thinning) is part of OpenCV's extended image processing module (pip install opencv-contrib-python). You can compute the skeleton of your image like this:
# Imports:
import cv2
# Image path
path = "D://opencvImages//"
fileName = "OKwfZ.png"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# To Grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Compute the skeleton:
skeleton = cv2.ximgproc.thinning(grayscaleImage, None, 1)
cv2.imshow("Skeleton", skeleton)
cv2.waitKey(0)
This is the result:
The skeleton normalizes the thickness of the image to 1 pixel. If you need a thicker line you can apply some dilations.
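For example, a couple of dilation iterations on the skeleton from the snippet above will thicken the trace (the kernel size and iteration count are just example values):
import numpy as np

# Thicken the 1-pixel skeleton produced above:
kernel = np.ones((3, 3), np.uint8)
thickSkeleton = cv2.dilate(skeleton, kernel, iterations=2)

cv2.imshow("Thick Skeleton", thickSkeleton)
cv2.waitKey(0)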

Related

How to crop images using OpenCV without knowing the exact coordinates?

I am trying to crop an image of a piece of card/paper or such so that the card/paper is in focus. I tried the code below, but the problem is that it works only when the object in question is alone in the picture. If it is on a blank background with nothing else in it, the cropping is flawless; otherwise it does not work as expected.
I am attempting to create a system which crops different kinds of images, puts them through a classifier, and then extracts text from them.
import cv2
import numpy as np
filenames = "img.jpg"
img = cv2.imread(filenames)
blurred = cv2.blur(img, (3,3))
canny = cv2.Canny(blurred, 50, 200)
## find the non-zero min-max coords of canny
pts = np.argwhere(canny>0)
y1,x1 = pts.min(axis=0)
y2,x2 = pts.max(axis=0)
## crop the region
cropped = img[y1:y2, x1:x2]
filename_cropped = filenames.split('.')
filename_cropped[0] = filename_cropped[0] + '_cropped'
filename_cropped = '.'.join(filename_cropped)
cv2.imwrite(filename_cropped, cropped)
A sample image that works is
Something that does not work is
Can anyone help with this?
The first image works because the entire image besides your target is empty. Canny will give different results when there is more in the image.
If you are looking for those specific cards, I suggest you try some colour filtering first. You can try to filter for the blue/purple hue of the card; a rough sketch of that idea is given below.
Increasing the Canny threshold could also work, but in this image you will still pick up the hand unless you add some colour filtering.
You can also try Sobel edge detection. This will probably highlight the edges of the card pretty well. But then again, it will also show the hand, so you can't just take the full Sobel/Canny output. You need to add processing before it that isolates the card, or after it that finds the rectangular shape of the card in the Sobel/Canny result.
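As a rough illustration of the colour-filtering idea, an HSV mask for the blue/purple hue might look like the sketch below; the hue bounds and the file name are guesses that would need tuning for the actual picture:
import cv2
import numpy as np

img = cv2.imread("card.jpg")  # placeholder file name

# Convert to HSV and keep only blue/purple hues; these bounds are rough guesses.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower = np.array([100, 50, 50])
upper = np.array([160, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Clean up the mask and take the largest blob as the card.
# (OpenCV 4.x returns two values from findContours; 3.x returns three.)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    card = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(card)
    cropped = img[y:y + h, x:x + w]
    cv2.imwrite("card_cropped.jpg", cropped)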

How does cv2.findContours() edit the image?

I have two questions.
I am working with OpenCV and Python and I am trying to get an image's contours. I am successful at that, but I am trying to see the difference between using cv2.drawContours() and letting cv2.findContours() edit the image directly by not passing a copy of the original image as the source parameter. I have tried this on some images, but I couldn't see anything happening.
I am trying to get the contours of a square I created with Paint's square tool. But when I use the cv2.CHAIN_APPROX_SIMPLE method, it gives me the coordinates of 6 points, and no combination of them fits my square. Why does it do that?
Can someone explain?
Here is my code for both problems:
import cv2
import numpy as np

# Read the image and build an inverted binary edge map:
image = cv2.imread(r"C:\Users\fazil\Desktop\12.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.Canny(gray, 75, 200)
gray = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)[1]
cv2.imshow("s", gray)

# Find contours directly on 'gray' (no copy passed in):
contours, hierarchy = cv2.findContours(gray, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
print(contours[1])

# Draw contour 1 on the original image:
cv2.drawContours(image, contours, 1, (45, 67, 89), 5)
cv2.imshow("k", gray)
cv2.imshow("j", image)
cv2.waitKey(0)
cv2.destroyAllWindows()

OpenCV - RETR_EXTERNAL not working after Otsu's binarization

I have applied Otsu's binarization to one image and got this result
After that, I use this code to get boxes around the four main shapes:
import cv2 as cv

img = cv.imread('test_bin.jpg', 0)
_, cnts, _ = cv.findContours(img.copy(), cv.RETR_EXTERNAL, cv.CHAIN_APPROX_NONE)
for cnt in cnts:
    x, y, w, h = cv.boundingRect(cnt)
    cv.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv.imwrite('test_cnt.jpg', img)
However, I'm not getting anything. It returns just one contour, which I imagine could be the full image itself. I saw that it works for RETR_TREE, but I need it to work with RETR_EXTERNAL for the next operations. What is failing here?
As per the OpenCV contours documentation:
In OpenCV, finding contours is like finding white object from black
background. So remember, object to be found should be white and
background should be black.
But in your case, it is clearly the opposite of the requirements, so you just need to invert your image and it can be simply done as:
img = cv2.bitwise_not(img)
Also, note that:
For better accuracy, use binary images. So before finding contours,
apply threshold or canny edge detection.
I used your image and got the following results after inverting the image. If you want to remove the small boxes, then simply use cv2.threshold to get a binary image.
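For reference, a minimal sketch of the whole pipeline after the inversion might look like this (the file names are the placeholders from the question, and the two-value findContours return assumes OpenCV 4.x):
import cv2

# Read the thresholded image as grayscale (placeholder file name).
img = cv2.imread('test_bin.jpg', cv2.IMREAD_GRAYSCALE)

# Invert so the shapes are white on a black background.
inverted = cv2.bitwise_not(img)

# RETR_EXTERNAL now returns one contour per outer shape.
contours, _ = cv2.findContours(inverted, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Draw the bounding boxes on a colour copy so the green rectangles are visible.
output = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    cv2.rectangle(output, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite('test_cnt.jpg', output)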

Aligning and stitching images based on defined feature using OpenCV

I would like to create a panoramic image by combining 2 images that contain the same feature, a plus sign.
I've used cv2.xfeatures2d.SIFT_create() to find keypoints in the image however it doesn't find the plus symbol very well. Is there some way I can improve this by making it search specifically for a plus-shaped feature?
import cv2

image1 = cv2.imread('example_image.png')
grey_image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)

sift = cv2.xfeatures2d.SIFT_create()
kp = sift.detect(grey_image1, None)
kp_image = cv2.drawKeypoints(grey_image1, kp, None)

def showimage(image, name="No name given"):
    cv2.imshow(name, image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    return

showimage(kp_image)
The source image is here, and the second image to align is here. Here is the resulting image from the code above. This is an example of the desired output, made using GIMP by manually aligning the two images (the second image will need to be transformed to fit properly).
NB I'm open to using other approaches outside of OpenCV/Python to solve this problem.

Detect object with openCV and python

I'm trying to detect the white dots in the following image using OpenCV and Python.
I tried using the function cv2.HoughCircles but without any success.
Do I need to use a different method?
This is my code:
import cv2, cv
import numpy as np
import sys

if len(sys.argv) > 1:
    filename = sys.argv[1]
else:
    filename = 'p.png'

img_gray = cv2.imread(filename, cv2.CV_LOAD_IMAGE_GRAYSCALE)
if img_gray == None:
    print "cannot open ", filename
else:
    img = cv2.GaussianBlur(img_gray, (0, 0), 2)
    cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    circles = cv2.HoughCircles(img, cv2.cv.CV_HOUGH_GRADIENT, 4, 10, param1=200, param2=100, minRadius=3, maxRadius=100)
    if circles:
        circles = np.uint16(np.around(circles))
        for i in circles[0, :]:
            cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 1)
            cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
    cv2.imshow('detected circles', cimg)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
If you can reproduce a morphological reconstruction in OpenCV, you can easily build an h-dome transform, which simplifies the task significantly. Otherwise, a simple threshold on a Gaussian-filtered image might be enough too. (The snippets below use Mathematica notation.)
Binarize[FillingTransform[GaussianFilter[f, 2], 0.4, Padding -> 1]]
The Gaussian filtering in the code above effectively suppresses the noise around the border of the input, which would otherwise remain after the h-dome transform.
Next is the result of a simple threshold after Gaussian filtering (Binarize[GaussianFilter[f, 2], 0.5]), as well as another result given by direct binarization using Kapur's thresholding method (see the paper "A new method for gray-level picture thresholding using the entropy of the histogram"; no longer a new method, it is from 1985):
The right image above has a lot of small points all over the border (which cannot be seen at this image resolution), but it is fully automatic. Of these three options, only the second one is already available in OpenCV.
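For reference, a rough OpenCV equivalent of that second option (Gaussian filtering followed by a simple threshold) could look like this; the sigma and threshold values only approximate the Mathematica call above, and 'p.png' is the file name from the question:
import cv2

# Gaussian filtering followed by a fixed threshold, roughly mirroring
# Binarize[GaussianFilter[f, 2], 0.5].
f = cv2.imread('p.png', cv2.IMREAD_GRAYSCALE)
smoothed = cv2.GaussianBlur(f, (0, 0), 2)                          # sigma = 2
_, binary = cv2.threshold(smoothed, 127, 255, cv2.THRESH_BINARY)   # ~0.5 of the 0-255 range

cv2.imshow('binary', binary)
cv2.waitKey(0)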
I think a median filter will improve your image. Try experimenting with some kernels, 3x3 or 7x7. After that, some (local) thresholding algorithm will get you the shapes. You can then either use HoughCircles, or just find contours and check them for roundness.
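A rough sketch of that idea (the kernel size, adaptive-threshold parameters and roundness cut-off are arbitrary example values, not tested on this image):
import cv2
import numpy as np

gray = cv2.imread('p.png', cv2.IMREAD_GRAYSCALE)

# Median filter to suppress speckle noise; try 3x3 or 7x7 kernels.
filtered = cv2.medianBlur(gray, 5)

# Local (adaptive) threshold; block size and offset are example values.
binary = cv2.adaptiveThreshold(filtered, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY, 21, -5)

# Keep only reasonably round contours.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    area = cv2.contourArea(cnt)
    perimeter = cv2.arcLength(cnt, True)
    if perimeter == 0:
        continue
    roundness = 4 * np.pi * area / (perimeter ** 2)  # 1.0 for a perfect circle
    if roundness > 0.7:
        (x, y), r = cv2.minEnclosingCircle(cnt)
        print("dot at", (int(x), int(y)), "radius", int(r))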
Convert the image to a binary image using a suitable thresholding technique (Otsu might help). Then use morphological operations like erosion to make the circles smaller, and then you can easily find their centers.
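A minimal sketch of that approach (the erosion kernel and the use of image moments for the centres are assumptions):
import cv2
import numpy as np

gray = cv2.imread('p.png', cv2.IMREAD_GRAYSCALE)

# Otsu threshold to get a binary image of the bright dots.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Erode to shrink the blobs and separate touching ones.
binary = cv2.erode(binary, np.ones((3, 3), np.uint8))

# Centres via image moments of each remaining blob.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    m = cv2.moments(cnt)
    if m["m00"] > 0:
        print("centre:", (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))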
