Increase accuracy of detecting lines using OpenCV - python

I am implementing a program to detect lines in images from a camera. The problem is that when the photo is blurry, my line detection algorithm misses a few lines. Is there a way to increase the accuracy of the cv.HoughLines() function without editing the parameters?
Example input image:
Desired image:
My current implementation:
import cv2 as cv
import numpy as np

def find_lines(img):
    gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    edges = cv.dilate(gray, np.ones((3, 3), np.uint8), iterations=5)
    edges = cv.Canny(gray, 50, 150, apertureSize=3)
    lines = cv.HoughLines(edges, 1, np.pi/180, 350)
    return lines

It would be a good idea to preprocess the image before giving it to cv2.HoughLines(). I also think cv2.HoughLinesP() would work better. Here's a simple approach:
Convert image to grayscale
Apply a sharpening kernel
Threshold image
Perform morphological operations to smooth/filter image
We apply a sharpening kernel using cv2.filter2D(), which gives us the general shape of the line and removes the blurred sections. Other filters can be found here.
Now we threshold the image to get solid lines.
There are small imperfections, so we can use morphological operations with a cv2.MORPH_ELLIPSE kernel to get clean diamond shapes.
Finally, to get the desired result, we dilate using the same kernel. Depending on the number of iterations, we can obtain thinner or wider lines.
Left (iterations=2), Right (iterations=3)
import cv2
import numpy as np

# Load image as grayscale
image = cv2.imread('1.png', 0)

# Sharpen to recover the general shape of the blurred lines
sharpen_kernel = np.array([[-1,-1,-1], [-1,9,-1], [-1,-1,-1]])
sharpen = cv2.filter2D(image, -1, sharpen_kernel)

# Threshold to get solid lines
thresh = cv2.threshold(sharpen, 220, 255, cv2.THRESH_BINARY)[1]

# Morphological opening to remove small imperfections, then dilate
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3,3))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=3)
result = cv2.dilate(opening, kernel, iterations=3)

cv2.imshow('sharpen', sharpen)
cv2.imshow('thresh', thresh)
cv2.imshow('opening', opening)
cv2.imshow('result', result)
cv2.waitKey()
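Since the goal is line detection, a possible next step (my addition, not part of the original answer, and the parameters are illustrative guesses) is to feed the cleaned result into cv2.HoughLinesP():

# Hedged follow-up: probabilistic Hough transform on the cleaned image.
# threshold, minLineLength and maxLineGap are guesses to tune.
lines = cv2.HoughLinesP(result, 1, np.pi / 180, threshold=100,
                        minLineLength=50, maxLineGap=10)
vis = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(vis, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imshow('lines', vis)
cv2.waitKey()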

You're looking for image sharpening techniques. You'll find suggestions here.
You can use different kernel operations to achieve this. OpenCV lists this C++ code here:
// sharpen image using "unsharp mask" algorithm
Mat blurred;
double sigma = 1, threshold = 5, amount = 1;
GaussianBlur(img, blurred, Size(), sigma, sigma);
Mat lowContrastMask = abs(img - blurred) < threshold;
Mat sharpened = img*(1+amount) + blurred*(-amount);
img.copyTo(sharpened, lowContrastMask);
which should be fairly easy to convert to Python.
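For example, here is a rough Python sketch of that snippet (my translation, not from the OpenCV docs, so treat the details as assumptions; the int16 cast guards against uint8 wrap-around on subtraction):

import cv2
import numpy as np

img = cv2.imread('input.png')  # hypothetical filename
sigma, threshold, amount = 1, 5, 1

# sharpen image using the "unsharp mask" algorithm
blurred = cv2.GaussianBlur(img, (0, 0), sigma)
low_contrast_mask = np.abs(img.astype(np.int16) - blurred.astype(np.int16)) < threshold
sharpened = cv2.addWeighted(img, 1 + amount, blurred, -amount, 0)
# keep the original pixels wherever local contrast is low
np.copyto(sharpened, img, where=low_contrast_mask)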

Related

Compare two different images and find the differences

I have a webcam which takes pictures of a concrete slab. Now I want to check whether there are objects on the slab or not. The objects could be anything and therefore cannot be enumerated in a class. Unfortunately I cannot compare the webcam image directly with an image without objects on the concrete slab, because the camera image can shift slightly in the x and y directions and the lighting is also not always the same. So I cannot use cv2.subtract.
I would prefer foreground/background subtraction, where the background is just my concrete slab and the foreground is the objects. But since the objects don't move and lie still on the slab, I can't use cv2.createBackgroundSubtractorMOG2() either.
The pictures look like this:
The concrete slab without any objects:
The slab with objects:
In Python/OpenCV, you could do division normalization to even out the illumination and make the background white. Then do your subtraction. Then use morphology to clean up small regions. Then find contours and discard any small regions that are due to noise left after the division normalization and morphology.
Here is how to do division normalization.
Input 1:
Input 2:
import cv2
import numpy as np
# load image
img1 = cv2.imread("img1.jpg")
img2 = cv2.imread("img2.jpg")
# convert to grayscale
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
# blur
blur1 = cv2.GaussianBlur(gray1, (0,0), sigmaX=13, sigmaY=13)
blur2 = cv2.GaussianBlur(gray2, (0,0), sigmaX=13, sigmaY=13)
# divide
divide1 = cv2.divide(gray1, blur1, scale=255)
divide2 = cv2.divide(gray2, blur2, scale=255)
# threshold
thresh1 = cv2.threshold(divide1, 200, 255, cv2.THRESH_BINARY)[1]
thresh2 = cv2.threshold(divide2, 200, 255, cv2.THRESH_BINARY)[1]
# morphology
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
morph1 = cv2.morphologyEx(thresh1, cv2.MORPH_OPEN, kernel)
morph2 = cv2.morphologyEx(thresh2, cv2.MORPH_OPEN, kernel)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
morph1 = cv2.morphologyEx(morph1, cv2.MORPH_CLOSE, kernel)
morph2 = cv2.morphologyEx(morph2, cv2.MORPH_CLOSE, kernel)
# write result to disk
cv2.imwrite("img1_division_normalize.jpg", divide1)
cv2.imwrite("img2_division_normalize.jpg", divide2)
cv2.imwrite("img1_division_morph1.jpg", morph1)
cv2.imwrite("img1_division_morph2.jpg", morph2)
# display it
cv2.imshow("img1_norm", divide1)
cv2.imshow("img2_norm", divide2)
cv2.imshow("img1_thresh", thresh1)
cv2.imshow("img2_thresh", thresh2)
cv2.imshow("img1_morph", morph1)
cv2.imshow("img2_morph", morph2)
cv2.waitKey(0)
cv2.destroyAllWindows()
Image 1 Normalized:
Image 2 Normalized:
Image 1 thresholded and morphology cleaned:
Image 2 thresholded and morphology cleaned:
In this case, Image 1 becomes completely white. So it (and subtraction) is not really needed. You just need to find contours for the second image result and if necessary discard tiny regions by area. The rest are your objects.
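A hedged sketch of that contour step, reusing morph2 from the code above (the 100-pixel area cutoff is an illustrative guess to tune):

# objects are black on a white background after thresholding, so invert first
inverted = 255 - morph2
contours = cv2.findContours(inverted, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
# discard tiny noise regions by area
objects = [c for c in contours if cv2.contourArea(c) > 100]
print("found", len(objects), "objects")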

Grouping Nearby Contours/Bounding Rectangles

I have an image containing obscure rectangular shapes:
Using OpenCV, I would like to group nearby rectangles to get the expected output:
I've used the Dilate Morphological Transformation to enlarge the shapes so that they would be joined to create a larger shape which produces:
It doesn't join the larger rectangles on the right very well; with a kernel size of (40,40) or larger, the smaller rectangles join into one big one instead of staying separate.
Would it be possible to use cv2.minAreaRect(c) and group by similar angles of the rectangles? Or some feature-based detection to get the number of rectangles in a certain area?
A thin vertical kernel should do what you want. Just make it taller than the largest vertical gap you want to bridge between objects; about 65 pixels looks right here. Here is the morphology close result in Python/OpenCV, which seems to connect the parts you want.
Input:
import cv2
import numpy as np
# read image as grayscale
img = cv2.imread('lines.png', cv2.IMREAD_GRAYSCALE)
# threshold to binary
thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY)[1]
# apply morphology
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,65))
morph = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# write results
cv2.imwrite("lines_morphology.png", morph)
# show results
cv2.imshow("thresh", thresh)
cv2.imshow("morph", morph)
cv2.waitKey(0)
Result:
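If you then want one bounding box per merged group, a possible follow-up (my addition, not part of the original answer) is to take the external contours of the closed image:

# one bounding box per merged group, drawn on the grayscale input
contours = cv2.findContours(morph, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(img, (x, y), (x + w, y + h), 255, 2)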

How to remove spotted background from numbers using cv2?

I'm using py-tesseract for OCR on images like the ones below, but I'm unable to get consistent output from the unprocessed images. How can the spotted background be reduced and the numbers highlighted using cv2 to increase accuracy? I'm also interested in keeping the separators in the output string.
The pre-processing below seems to work with some accuracy:
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (7, 7), 0)
(T, threshInv) = cv2.threshold(blurred, 0, 255,
                               cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
Output using --psm 6: 6.903.722,99
Here's one solution, based on the ideas from a similar post. The main idea is to apply a hit-or-miss operation looking for the pattern you want to eliminate. In this case the pattern is one black (or white, if you invert the image) pixel surrounded by pixels of the complementary color. I've also included a thresholding operation with some bias, because some of the characters are easily destroyed (you would really benefit from a higher-resolution image). These are the steps:
Get grayscale image via color conversion
Threshold with bias to get a binary image
Apply the Hit-or-Miss with one central pixel target kernel
Use the result from the prior operation to suppress the noise in the original image
Let's see the code:
# Imports:
import numpy as np
import cv2
# Image path:
path = "D://opencvImages//"
fileName = "8WFNvsZ.jpg"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Convert RGB to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Threshold via Otsu:
thresh, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# Use Otsu's threshold value and add some bias:
thresh = 1.05 * thresh
_, binaryImage = cv2.threshold(grayscaleImage, thresh, 255, cv2.THRESH_BINARY_INV)
The first bit of code gets the binary image of the input. Note that I've added some bias to the threshold obtained via Otsu to avoid degrading the characters. This is the result:
Ok, let's apply the Hit-or-Miss operation to get the dot mask:
# Perform the morphological hit-or-miss operation; the kernel targets a
# single foreground pixel surrounded by background
kernel = np.array([[-1,-1,-1], [-1,1,-1], [-1,-1,-1]], dtype="int")
dotMask = cv2.morphologyEx(binaryImage, cv2.MORPH_HITMISS, kernel)
# Bitwise-XOR the mask with the binary image to remove the dots
result = cv2.bitwise_xor(binaryImage, dotMask)
The dot mask is this:
And the result of XORing (effectively subtracting) this mask from the original binary image is this:
If I run the inverted (black text on white background) result image through PyOCR, I get this string output:
Text is: 6.003.722,09
The other image produces this final result:
And its OCR returns this:
Text is: 4.705.640,00
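For completeness, a hedged sketch of the OCR call itself; the answer used PyOCR, but pytesseract is assumed here as a stand-in, with result taken from the snippet above:

import pytesseract

# Tesseract prefers black text on a white background, so invert first
inverted = 255 - result
text = pytesseract.image_to_string(inverted, config='--psm 6')
print("Text is:", text)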

OpenCV: Detect squares in dark background

Currently I am trying to calculate the optical flow of moving objects. The objects in particular are the squares around the circular knobs:
Here is the vanilla image I am trying to process:
My concern is about the bottom-right-most strip. The two squares there are usually not detected when I try Canny edge detection or GoodFeaturesToTrack. I am currently trying a sharpening kernel and a threshold, then a morphological transformation to find the contour areas. However, when I threshold I get the following results:
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
filename = 'images/Test21_1.tif'
image = cv.imread(filename)
kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])  # sharpen kernel from Wikipedia
dst = cv.filter2D(image, -1, kernel)
ret, thresh = cv.threshold(dst, 80, 150, cv.THRESH_BINARY_INV)
plt.subplot(121),plt.imshow(image),plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(thresh),plt.title('Threshold')
plt.xticks([]), plt.yticks([])
plt.show()
I was wondering what I could do in OpenCV to be able to recognize that square. These squares are the objects that move in the videos, and I wish to use them to calculate their optical flow. I am currently considering resorting to a PyTorch CNN to detect the features. I would manually label the images for training/test datasets, but I believe that may be a bit overkill. Thank you for your time.
I am not sure if this is any better, but you can try using the division normalization technique in Python/OpenCV.
Read the input
Convert to grayscale
Apply morphology
Divide the input by the result from the morphology
Adaptive Threshold
Save the results
import cv2
import numpy as np
# read the image
img = cv2.imread('rods.png')
# convert to gray
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# apply morphology
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
smooth = cv2.morphologyEx(gray, cv2.MORPH_DILATE, kernel)
# divide gray by morphology image
division = cv2.divide(gray, smooth, scale=255)
# threshold
thresh = cv2.adaptiveThreshold(division, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 7, 4)
# save results
cv2.imwrite('rods.division.jpg',division)
cv2.imwrite('rods.thresh.jpg',thresh)
# show results
cv2.imshow('smooth', smooth)
cv2.imshow('division', division)
cv2.imshow('thresh', thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()
Division image:
Threshold image:
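From there, one hedged way to pick out the squares (my assumption, not part of the original answer; the size and aspect bounds are guesses to tune) is to filter contours of the threshold image:

contours = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
squares = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    # keep roughly square blobs of a plausible size
    if 10 < w < 100 and 0.8 < w / float(h) < 1.25:
        squares.append((x, y, w, h))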
The problem is that the local contrast is bad near the bottom-right square. You can try using CLAHE (Contrast Limited Adaptive Histogram Equalization).
# improve local contrast; CLAHE expects a single-channel (grayscale) image
GRID_SIZE = 20
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(GRID_SIZE, GRID_SIZE))
image = clahe.apply(image)
Then try using your algorithm to detect the squares.

How to remove the white border/edge around a figure in an image using python?

I want to remove the white border between the black mask and the body image
Image input examples:
Image output with thickness 1:
Image output with thickness 2:
I tried some experiments with blurring and thresholds that I found here.
I also used this code to find and draw the contour:
thickness = 3
image = cv2.imread('../finetune/22.png')
blank_mask = np.zeros(image.shape, dtype=np.uint8)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cnts = cv2.findContours(gray, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cv2.drawContours(image, cnts, -1, (255,0,0), thickness)
cv2.imshow('image', image)
cv2.imwrite('../finetune/22-'+str(thickness)+'r.png',image)
cv2.waitKey()
However, the contour I've found is the edge of the black mask and not the white line.
I played with the thickness and it works nicely, but the contour is different on each image, and the thickness is not uniform throughout the figure.
What is the most precise way to remove it?
Here are two methods:
Method #1: cv2.erode()
You can use erosion to erode away the boundaries of the white foreground object. Essentially the idea is to perform 2D convolution with a kernel. A kernel can be created using cv2.getStructuringElement(), where you pass the shape and size of the desired kernel. Typical kernels are cv2.MORPH_RECT, cv2.MORPH_ELLIPSE, or cv2.MORPH_CROSS. The kernel slides through the image; a pixel is kept as 1 only if all the pixels under the kernel are 1, otherwise it is eroded to 0. The net effect is that pixels on the boundaries are discarded, depending on the shape and size of the kernel. The thickness of the foreground decreases, which is useful for removing small white noise or for detaching objects. You can adjust the strength of the erosion with the number of iterations.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
erode = cv2.erode(image, kernel, iterations=1)
Method #2: Opening with cv2.morphologyEx()
The opposite of erosion is dilation, which expands the foreground. Typically, dilation is performed after erosion to "normalize" the effect of the morphological operation. OpenCV combines these steps into a single operation called morphological opening. Opening is just another name for erosion followed by dilation, and it will typically give you smoother results than eroding alone.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
opening = cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel)
Result
You can experiment with the kernel shape and the number of iterations. To remove more noise, increase the kernel size and the number of iterations while to remove less, decrease the kernel size and the number of iterations.
import cv2
image = cv2.imread('1.png')
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
opening = cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel, iterations=3)
cv2.imshow('opening', opening)
cv2.waitKey()
The answer above from nathancy shows exactly the result I want to achieve, but it does not help me with my root problem.
When I use drawContours I can draw the contours on my mask and improve it.
So here is more information about my problem:
After I've obtained a mask using an image segmentation method, I want to change the background (in the example I use a black background), but I still get a white contour between the new background and the figure.
Original image:
https://drive.google.com/open?id=1P39VCEe2FTqkD6JbdM4ueMr_71C6nI42
Mask:
https://drive.google.com/open?id=1LTHaclsDOxRJCI9t5bLg3PeanseRR9bc
Output:
https://drive.google.com/open?id=1-uQx77-fmMf_9qFSgNMBvo77Q6Ag8qEZ
This is the code I use:
image = cv2.imread('../finetune/1.png')
mask = cv2.imread('../finetune/1mask.png')
output = np.zeros(image.shape, dtype=np.uint8)
output[np.where(mask == 255)] = image[np.where(mask == 255)]
cv2.imshow("output",output)
cv2.imwrite('../finetune/1.output.png',output)
Using the answer above, I can find the contours again and create a new mask accordingly, but I'm sure there is a more elegant way to do so.
To clarify: I want to improve the mask in order to prevent the white border when I put the figure on a new background.
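One hedged way to attack that root problem (my sketch, under the assumption that the white fringe sits just inside the mask boundary): erode the mask a few pixels before compositing, so the fringe is excluded.

import cv2
import numpy as np

image = cv2.imread('../finetune/1.png')
mask = cv2.imread('../finetune/1mask.png', cv2.IMREAD_GRAYSCALE)

# shrink the mask slightly; kernel size and iterations are guesses to tune
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
eroded = cv2.erode(mask, kernel, iterations=1)

# composite the figure onto a black background using the tightened mask
output = np.zeros(image.shape, dtype=np.uint8)
output[eroded == 255] = image[eroded == 255]
cv2.imwrite('../finetune/1.output.png', output)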
