OpenCV (Python): Construct Rectangle from thresholded image - python

The image below shows an aerial photo of a house block (re-oriented with the longest side vertical), and the same image subjected to Adaptive Thresholding and Difference of Gaussians.
Images: Base; Adaptive Thresholding; Difference of Gaussians
The roof-print of the house is obvious (to the human eye) in the AdThresh image: it's a matter of connecting some obvious dots. In the sample image, that means finding the blue-bounded box below -
Image with desired rectangle marked in blue
I've had a crack at implementing HoughLinesP() and findContours(), but get nothing sensible (probably because there's some nuance that I'm missing). The python script-chunk that fails to find anything remotely like the blue box, is as follows:
import cv2
import numpy as np
from matplotlib import pyplot as plt

# read in full (RGBA) image - to get alpha layer to use as mask
img = cv2.imread('rotated_12.png', cv2.IMREAD_UNCHANGED)
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu's thresholding after Gaussian filtering
blur_base = cv2.GaussianBlur(grey, (9, 9), 0)
blur_diff = cv2.GaussianBlur(grey, (15, 15), 0)
_, thresh1 = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
thresh = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 11, 2)
# note: uint8 subtraction wraps around; cv2.subtract would clamp instead
DoG_01 = blur_base - blur_diff
edges_blur = cv2.Canny(blur_base, 70, 210)

# Find Contours (note: findContours treats its input as binary, with any
# non-zero pixel as foreground, so passing raw greyscale gives poor results;
# 'thresh' would be a better input here)
(ed, cnts, h) = cv2.findContours(grey, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:4]
for c in cnts:
    approx = cv2.approxPolyDP(c, 0.1 * cv2.arcLength(c, True), True)
    cv2.drawContours(grey, [approx], -1, (0, 255, 0), 1)

# Hough Lines (minLineLength/maxLineGap must be passed as keyword arguments,
# otherwise they land in the wrong positional slots)
minLineLength = 30
maxLineGap = 5
lines = cv2.HoughLinesP(edges_blur, 1, np.pi / 180, 20,
                        minLineLength=minLineLength, maxLineGap=maxLineGap)
print("lines found:", len(lines))
for line in lines:
    cv2.line(grey, (line[0][0], line[0][1]), (line[0][2], line[0][3]), (255, 0, 0), 2)

# plot all the images
images = [img, thresh, DoG_01]
titles = ['Base', 'AdThresh', 'DoG01']
for i in range(len(images)):
    plt.subplot(1, len(images), i + 1), plt.imshow(images[i], 'gray')
    plt.title(titles[i]), plt.xticks([]), plt.yticks([])
plt.savefig('a_edgedetect_12.png')
cv2.destroyAllWindows()
I am trying to set things up without excessive parameterisation. I'm wary of 'tailoring' an algorithm for just this one image, since this process will be run on hundreds of thousands of images (with roofs of different colours that may be less distinguishable from the background). That said, I would love to see a solution that 'hit' the blue-box target; that way I could at the very least work out what I've done wrong.
If anyone has a quick-and-dirty way to do this sort of thing, it would be awesome to get a Python code snippet to work with.
The 'base' image ->
Base Image

You should apply the following:
1. Contrast Limited Adaptive Histogram Equalization (CLAHE), and convert to grayscale.
2. Gaussian blur & morphological transforms (dilation, erosion, etc.) as mentioned by @bad_keypoints. This will help you get rid of the background noise. This is the trickiest step, as the results depend on the order in which you apply them (first Gaussian blur and then morphological transforms, or vice versa) and the window sizes you choose.
3. Apply adaptive thresholding
4. Apply Canny edge detection
5. Find a contour having four corner points
As said earlier, you need to tweak the input parameters of these functions and validate them against other images, since settings that work for this case may not work for others; you will need to fix the parameter values by trial and error. A rough sketch of the whole pipeline is given below.
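A minimal, hedged sketch of steps 1-5, assuming OpenCV 4 (two-value findContours return); the CLAHE settings, kernel sizes, adaptive-threshold window, and Canny limits below are illustrative starting guesses, not tuned values:

import cv2
import numpy as np

# 'rotated_12.png' is the questioner's image
img = cv2.imread('rotated_12.png')

# 1. convert to grayscale and apply CLAHE to boost local contrast
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
grey = clahe.apply(grey)

# 2. Gaussian blur, then morphological opening to suppress background noise
blur = cv2.GaussianBlur(grey, (5, 5), 0)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
clean = cv2.morphologyEx(blur, cv2.MORPH_OPEN, kernel)

# 3. adaptive thresholding
thresh = cv2.adaptiveThreshold(clean, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 11, 2)

# 4. Canny edge detection
edges = cv2.Canny(thresh, 70, 210)

# 5. look for the largest contour that approximates to four corner points
cnts, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in sorted(cnts, key=cv2.contourArea, reverse=True):
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4:
        cv2.drawContours(img, [approx], -1, (255, 0, 0), 2)
        break
cv2.imwrite('candidate_box.png', img)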

Related

handwritten circular annotation removal from scanned image

I have these images containing handwritten circular annotations on printed-text images. I want to remove the annotations from the input image. I have tried applying some of the thresholding methods discussed in many threads on Stack Overflow, but my results are not what I expected.
The method I am using works really well when the annotation is made with a blue pen, but when it is made with a black pen, thresholding and erosion don't produce the expected output.
Here is a sample image of my achieved results on blue annotations with the thresholding and erosion method
Image (input on the left and output on the right)
Code
import cv2
import numpy as np
from google.colab.patches import cv2_imshow

img = cv2.imread("/content/Scan_0101.jpg")
cv2_imshow(img)

# blue channel of the BGR image: blue ink appears bright here,
# so it can be thresholded away more easily than the black text
wimg = img[:, :, 0]
ret, thresh = cv2.threshold(wimg, 120, 255, cv2.THRESH_BINARY)
cv2_imshow(thresh)

kernel = np.ones((3, 3), np.uint8)
erosion = cv2.erode(thresh, kernel, iterations=1)
mask = cv2.bitwise_or(erosion, thresh)
#cv2_imshow(erosion)

# replicate the mask into all three channels, then OR with the original:
# where the mask is white (paper and blue ink) the output becomes white,
# leaving only the dark printed text
white = np.ones(img.shape, np.uint8) * 255
white[:, :, 0] = mask
white[:, :, 1] = mask
white[:, :, 2] = mask
result = cv2.bitwise_or(img, white)
erosion = cv2.erode(result, kernel, iterations=1)
Here is a sample image of my achieved results on black annotations with the thresholding and erosion method
Image (input on the left and output on the right)
Any suggested approach for this problem? Or how can this code be modified to produce the required results?
You must understand that since the grey values of the text and those of the handwriting are in the same range, no thresholding method in the world can separate them.
In fact, no algorithm at all can succeed without "hints" about what characters do or don't look like. Even the stroke thickness is not distinctive enough.
The only possible indication is that the circles are made of a smooth and long stroke, and removing them where they cross the characters is just impossible.
Parts of the handwritten circles (in the line-spacing regions) may be extractable, under the assumption that many letters align on the same line. In your image, I think the upper and lower parts of the circle would be extracted.
Then, if you track the black line starting from the extracted part (assuming smooth curvature), you may be able to detect the connected handwritten circle.
In practice, however, I think such a process will run into many difficulties, especially the fact that characters will be cut off where the curve is removed.
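A hedged sketch of the first step described above (isolating ink that falls between text lines using a horizontal projection profile); the file name and the 5% row-coverage cut-off are illustrative assumptions:

import cv2
import numpy as np

# 'scan.jpg' is a placeholder path; assumes dark ink on a light page
img = cv2.imread('scan.jpg', cv2.IMREAD_GRAYSCALE)
_, ink = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# horizontal projection profile: rows with lots of ink are text lines
row_ink = ink.sum(axis=1) / 255
is_text_row = row_ink > 0.05 * ink.shape[1]   # 5% coverage is a guess

# keep only ink in the line-spacing rows: candidate circle fragments
between = ink.copy()
between[is_text_row, :] = 0
cv2.imwrite('circle_fragments.png', between)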

Detecting a horizontal line in an image

Problem:
I'm working with a dataset that contains many images that look something like this:
Now I need all these images to be oriented horizontally or vertically, such that the color palette is either at the bottom or the right side of the image. This can be done by simply rotating the image, but the tricky part is figuring out which images should be rotated and which shouldn't.
What I have tried:
I thought that the best way to do this is by detecting the white line that separates the color palette from the image. I decided to rotate all images that have the palette at the bottom so that they have it on the right side.
import cv2
import numpy as np
import PIL.Image

# yes I am mixing PIL and OpenCV (I like the PIL resizing more);
# img is a PIL image loaded elsewhere
# resize image to be 128 by 128 pixels
img = img.resize((128, 128), PIL.Image.BILINEAR)
img = np.array(img)
# perform edge detection, not sure if these are the best parameters for Canny
edges = cv2.Canny(img, 30, 50, apertureSize=3)
has_line = False
# take a numpy slice of the area where the white line usually is
# (not always exactly in the same spot, which probably has to do with the way I resize the image)
for line in edges[75:80]:
    # check if most of one of the lines contains white pixels
    counts = np.bincount(line)
    if np.argmax(counts) == 255:
        has_line = True
# rotate if we found such a line
if has_line:
    img = np.rot90(img)
An example of it working correctly:
An example of it working incorrectly:
This works on maybe 98% of images, but there are some cases where it rotates images that shouldn't be rotated or fails to rotate images that should be. Maybe there is an easier way to do this, or maybe a more elaborate way that is more consistent? I could do it manually, but I'm dealing with a lot of images. Thanks for any help and/or comments.
Here are some images where my code fails for testing purposes:
You can start by thresholding your image by setting a very high threshold like 250 to take advantage of the property that your lines are white. This will make all the background black. Now create a special horizontal kernel with a shape like (1, 15) and erode your image with it. What this will do is remove the vertical lines from the image and only the horizontal lines will be left.
import cv2
import numpy as np
img = cv2.imread('horizontal2.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 250, 255, cv2.THRESH_BINARY)
kernel_hor = np.ones((1, 15), dtype=np.uint8)
erode = cv2.erode(thresh, kernel_hor)
As stated in the question, the color palettes can only be on the right or at the bottom, so we can check how many contours the right region has. For this, just divide the image in half and take the right part. Before finding contours, dilate the result with a normal (3, 3) kernel to fill in any gaps. Find the contours using cv2.RETR_EXTERNAL and count them; if the count is greater than a certain number, the image is right side up and there is no need to rotate.
right = erode[:, erode.shape[1]//2:]
kernel = np.ones((3, 3), dtype=np.uint8)
right = cv2.dilate(right, kernel)
cnts, _ = cv2.findContours(right, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if len(cnts) > 3:
    print('No need to rotate')
else:
    print('rotate')
    # ADD YOUR ROTATE CODE HERE
P.S. I tested all four images you provided and it worked well. If it does not work for some image, let me know.

How can I smoothly detect the welding joint using OpenCV-Python?

I have tried to detect the welding joint (bead) using the code attached in the last part of this question. I aim to draw a contour on the joint as shown in the third image, but my results are poor and don't look similar to the expected output. Here is a summary of what I did (the code itself is below):
1. Reading the image .jpg format
2. Image blurring
3. Thresholding
4. Morphological operations
5. Creating mask
6. Finding contour
But the results are not promising; how can I proceed from here?
import cv2
import numpy as np

# read the image ('weld.jpg' is a placeholder path)
img = cv2.imread('weld.jpg')

img_new = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur_image = cv2.bilateralFilter(img_new, 5, 21, 21)
thresh = cv2.adaptiveThreshold(blur_image, 255,
                               cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 15, 3)
kernel = np.ones((5, 5), dtype='uint8')
thresh_dilated = cv2.dilate(thresh, kernel, iterations=1)

# keep only connected components larger than min_size
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(thresh_dilated, connectivity=4)
sizes = stats[1:, -1]
min_size = 5000
num_labels = num_labels - 1
img2 = np.zeros((labels.shape))
for i in range(0, num_labels):
    if sizes[i] >= min_size:
        img2[labels == i + 1] = 255

closing = cv2.morphologyEx(img2, cv2.MORPH_CLOSE, kernel)
opening = cv2.morphologyEx(closing, cv2.MORPH_OPEN, kernel)
result = opening.copy()
new_result = result.astype(dtype=np.uint8)

# elliptical mask around the joint region
black = np.full((new_result.shape[0], new_result.shape[1], 3), (0, 0, 0), np.uint8)
black1 = cv2.ellipse(black, (700, 750), (300, 140), 0, 0, 360, (255, 255, 255), -1)
grayscale = cv2.cvtColor(black1, cv2.COLOR_BGR2GRAY)
ret, b_mask = cv2.threshold(grayscale, 127, 255, 0)
img_result = cv2.bitwise_and(new_result, b_mask)

contours, hierarchy = cv2.findContours(img_result, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, contours, -1, (0, 255, 0), 2)
cv2.imshow('result', img)
cv2.waitKey(0)
Have you read any papers?
Is this for automatic processing of many welding joints on an assembly line? Does it have to work for many other similar images? If so, it does not make sense to fine-tune an algorithm for a single picture.
If yes, then you need to use more images to develop the algorithm. Maybe use a different lighting setup, maybe take images with an IR camera right after welding to get a "hot" mask, or light the part from the left and the right side separately and combine the two images into a single result.
Another thing that would be very helpful is to get a "before" image of the parts, without the welding joint. In that case it could get easy: you would just take the difference between the two images, then do some filtering to remove the welding beads and the reddish layer.
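A hedged sketch of that differencing idea; the file names and the threshold value are illustrative assumptions:

import cv2

# 'before.jpg' and 'after.jpg' are placeholder paths for the two shots
before = cv2.imread('before.jpg', cv2.IMREAD_GRAYSCALE)
after = cv2.imread('after.jpg', cv2.IMREAD_GRAYSCALE)

# pixels that changed between the two shots: the weld and its surroundings
diff = cv2.absdiff(after, before)
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)  # 40 is a guess
mask = cv2.medianBlur(mask, 5)   # suppress speckle from beads
cv2.imwrite('weld_mask.png', mask)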
Edit 1:
Another thing I forgot to mention: please look at the RGB layers separately. This is something you should always try early on, as often there is something useful to see; in your case the blue layer might be interesting. Please add the layers to your question.
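A minimal sketch for inspecting the three channels separately (the path is a placeholder):

import cv2

img = cv2.imread('weld.jpg')
b, g, r = cv2.split(img)   # OpenCV loads images as BGR
for name, channel in (('blue', b), ('green', g), ('red', r)):
    cv2.imshow(name, channel)
cv2.waitKey(0)
cv2.destroyAllWindows()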

OpenCV Detect scratches on fruits

For a little experiment I'm doing in Python, I want to find small scratches on fruit. The scratches are very small and hard to detect with the human eye.
I'm using a high resolution camera for that experiment.
Here is the defect I want to detect:
Original Image:
This is my result with very few lines of code:
So I found the contours of my fruit. How can I proceed to finding the scratch? The RGB value is similar to other parts of the fruit, so how can I differentiate between a scratch and a normal part of the fruit?
My code:
# Imports
import numpy as np
import cv2
import time
# Read Image & Convert
img = cv2.imread('IMG_0441.jpg')
result = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Filtering
lower = np.array([1, 60, 50])
upper = np.array([255, 255, 255])
result = cv2.inRange(result, lower, upper)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
result = cv2.dilate(result, kernel)
# Contours
im2, contours, hierarchy = cv2.findContours(result.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
result = cv2.cvtColor(result, cv2.COLOR_GRAY2BGR)
if len(contours) != 0:
    for (i, c) in enumerate(contours):
        area = cv2.contourArea(c)
        if area > 100000:
            print(area)
            cv2.drawContours(img, c, -1, (255, 255, 0), 12)
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 12)
# Stack results
result = np.vstack((result, img))
resultOrig = result.copy()
# Save image to file before resizing
cv2.imwrite(str(time.time()) + '_0_result.jpg', resultOrig)
# Resize
max_dimension = float(max(result.shape))
scale = 900 / max_dimension
result = cv2.resize(result, None, fx=scale, fy=scale)
# Show results
cv2.imshow('res', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.destroyAllWindows()
I converted your image to the HSL colour space.
I can't see the scratch in the L channel, so the greyscale approach suggested earlier is going to be difficult.
But the scratch is quite noticeable in the hue plane.
You could use an edge detector to find the blemish in the hue channel. Here I used a difference-of-gaussians detector (with sizes 20 and 4).
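A minimal sketch of that idea, assuming the sizes 20 and 4 are Gaussian sigmas applied with cv2.GaussianBlur (the exact filter used in the answer is not specified):

import cv2

img = cv2.imread('IMG_0441.jpg')
hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
hue = hls[:, :, 0]                      # hue is channel 0 in HLS

# difference of gaussians: subtract a wide blur from a narrow one;
# the sigmas 20 and 4 follow the sizes mentioned above
wide = cv2.GaussianBlur(hue, (0, 0), 20)
narrow = cv2.GaussianBlur(hue, (0, 0), 4)
dog = cv2.absdiff(narrow, wide)         # blemish edges respond strongly

cv2.imwrite('dog_hue.png', dog)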
My personal guess is to use some algorithm to detect the greyscale change: the greyscale variation around the scratch should be bigger than the variation in other areas. Sobel and Scharr derivatives could be an option; see the python-OpenCV tutorial on image gradients. You could first crop out the fruit with the contour application.
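A hedged sketch of the gradient idea using Sobel derivatives (the kernel size and file name are illustrative):

import cv2
import numpy as np

gray = cv2.cvtColor(cv2.imread('IMG_0441.jpg'), cv2.COLOR_BGR2GRAY)

# first-order derivatives in x and y; CV_64F keeps negative slopes
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)

# gradient magnitude: large where intensity changes quickly (scratch edges)
mag = cv2.magnitude(gx, gy)
mag = np.uint8(np.clip(mag, 0, 255))
cv2.imwrite('gradient.png', mag)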
If you really want to use conventional computer vision techniques, you should start with the edges that can be detected on the fruit. Some of the edges are caused by bumps on the fruit, so you have to look at various features of the area around each edge to tell scratches from bumps. After you look at about a hundred scratches, you should be able to come up with some rules.
But this process is going to be very tiring, and my guess is you will not have much luck. A better way to approach this problem is to train a deep neural network by manually annotating scratches on about 100 images, and letting the network find out by itself how to distinguish scratches from the rest of the fruit.
If you are a beginner to this stuff, search for PyImageSearch and LearnOpenCV. Both are very resourceful sites where you can learn quickly.

Finding the boundary points of a contour in opencv numpy

Complete noob at OpenCV and numpy here. Here is the image, and here is my code:
import numpy as np
import cv2
im = cv2.imread('test.jpg')
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
imgray = cv2.medianBlur(imgray, ksize=7)
ret, thresh = cv2.threshold(imgray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
_, contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
print("number of contours detected before filtering %d -> " % len(contours))
new = np.zeros(imgray.shape)
new = cv2.drawContours(im,contours,len(contours)-1,(0,0,255),18)
cv2.namedWindow('Display',cv2.WINDOW_NORMAL)
cv2.imshow('Display',new)
cv2.waitKey()
mask = np.zeros(imgray.shape,np.uint8)
cv2.drawContours(mask,[contours[len(contours)-1]],0,255,-1)
pixelpoints = cv2.findNonZero(mask)
cv2.imwrite("masked_image.jpg",mask)
print(len(pixelpoints))
print("type of pixelpoints is %s" %type(pixelpoints))
The length of pixelpoints is nearly 2 million, since it contains all the points covered by the contours, but I only require the bordering points of the contour. How do I do it? I have tried several methods from the OpenCV documentation but always get errors with tuples and sorting operations. Please... help?
I only require the border points of the contour :(
Is this what you mean by border points of a contour?
The white lines you see are points that I have marked out in white against the blue drawn contours. There's a little spot at the bottom right because your black background probably isn't really black, so when I did thresholding and a floodfill to get this,
there was a tiny white speck at the same spot. But if you play around with the parameters and do a more careful thresholding and floodfill, it shouldn't be an issue.
In OpenCV's drawContours function, cnts contains a list of contours, and each contour is an array of points; each point is also of type numpy.ndarray. If you want to gather all points of all contours in one place, so you end up with a set of boundary points (like the white-dot outline in the image above), you can append them all into a single list. You can try this:
# rgb is bgr instead
contoured = cv2.drawContours(black, cnts, -1, (255, 0, 0), 3)
# list of ALL points of ALL contours
all_pixels = []
for i in range(0, len(cnts)):
    for j in range(0, len(cnts[i])):
        all_pixels.append(cnts[i][j])
When I tried to
print(len(all_pixels))
it returned 2139 points.
Do this if you want to mark out the points for visualization purposes (e.g. like my white points):
# contouredC is a copy of the contoured image above;
# note that numpy indexes images as [row, col], i.e. [y, x]
contouredC[y_val, x_val] = [255, 255, 255]
If you want fewer points, just use a step when iterating through to draw the white points out. Something like this:
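(A hedged reconstruction of the snippet that was missing here, assuming the step simply samples every k-th point:)

# sample every 10th boundary point; the step of 10 is an arbitrary example
for i in range(0, len(all_pixels), 10):
    x_val, y_val = all_pixels[i][0]
    contouredC[y_val, x_val] = [255, 255, 255]   # numpy indexes [row, col]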
In Python, for loops are slow, so I think there are better ways to replace the nested for loops, with an np.where() function or something instead; will update this if/when I figure it out (one vectorised option is sketched below). Also, this needs better thresholding and binarisation techniques. Floodfill technique referenced from: Python 2.7: Area opening and closing binary image in Python not so accurate.
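A hedged vectorised alternative to the nested loops above, assuming the goal is simply to flatten every contour into one array of (x, y) points:

import numpy as np

# stack every contour's points into a single (N, 2) array of (x, y) pairs,
# replacing the nested Python loops
all_pixels = np.vstack([c.reshape(-1, 2) for c in cnts])
print(len(all_pixels))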
Hope it helps.
