Counting the number of circular pipes in an image - python

I'm trying to build a program to count the number of pipes from a given image,
https://drive.google.com/drive/folders/1iw2W7dUg3ICGRt3hxOynUJCf-KQj2Pka?usp=sharing are some example test images.
I've tried Canny edge detection and the Hough circle transform, but neither comes even close to counting the pipes properly. What approach should I go for?

There are several things to consider:
First, locate the region of the image that contains the pipes, using a suitable method (search the net for options).
Second, use a perspective transform to remove the rotation and skew of the image, and pass only the pipe region on to the next step (a minimal sketch of this step appears after the main listing below).
After that, you can use the following algorithm.
This is not a complete algorithm and will not work for all test cases. You will have to spend time reading about and combining different image processing methods, or maybe even machine learning, and adjusting the parameters to get a better result.
Another point: try to keep the environmental conditions constant.
import sys
import cv2
import numpy as np
# Load image
im = cv2.imread(sys.path[0]+'/im.jpeg')
H, W = im.shape[:2]
# Make a copy from image
out = im.copy()
# Make a grayscale version
gry = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
# Make a clean black and white version of that picture
bw = cv2.adaptiveThreshold(
    gry, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 12)
bw = cv2.medianBlur(bw, 3)
bw = cv2.erode(bw, np.ones((5, 5)))
bw = cv2.medianBlur(bw, 9)
bw = cv2.dilate(bw, np.ones((5, 5)))
# Draw a rectangle around image to eliminate errors
cv2.rectangle(bw, (0, 0), (W, H), 0, thickness=17)
# Count number of pipes
cnts, _ = cv2.findContours(bw, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
# Change channels of black/white image
bw = cv2.cvtColor(bw, cv2.COLOR_GRAY2BGR)
# Draw position of pipes
c = 0
for cnt in cnts:
    c += 1
    x, y, w, h = cv2.boundingRect(cnt)
    cv2.circle(out, (x+w//2, y+h//2), max(w, h)//2, (c, 220, 255-c), 2)
    if c >= 255:
        c = 0
# Count and print number of pipes
print(len(cnts))
# Save output images
cv2.imwrite(sys.path[0]+'/im_out.jpg', np.hstack((bw, out)))
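For the perspective step mentioned above, a minimal sketch might look like the following; the four source corner coordinates are placeholders that you must determine for your own image (for example from a contour around the pipe bundle):
import cv2
import numpy as np

im = cv2.imread('im.jpeg')

# Placeholder corners of the pipe region in the source image, ordered
# top-left, top-right, bottom-right, bottom-left (find these yourself)
src = np.float32([[120, 80], [900, 60], [940, 700], [100, 720]])
w, h = 800, 640  # desired output size
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Warp the pipe region into an axis-aligned rectangle
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(im, M, (w, h))
cv2.imwrite('im_warped.jpg', warped)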

Although you may get by with some ad-hoc algorithm, I think you would spend more time tuning it than just training and tuning a model:
The pipes do not seem to have a very complicated texture, so I would create a set of synthetic images by cropping some of the pipes from the images and then applying multiple transformations to them. For instance, you can use the imgaug library (https://imgaug.readthedocs.io/en/latest/) in Python. Then you can use HOG, Haar features, LBP, etc. to get features and train a model (AdaBoost with trees, a linear SVM, etc.). The object detection pipeline from dlib should be good enough and is easy to use. In general, you need a framework with sliding windows for the detection part.
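For the augmentation part, a small imgaug sketch; the particular augmenters and their ranges here are arbitrary examples, and `crops` is a placeholder for your list of cropped pipe images:
import imgaug.augmenters as iaa

# crops: list of numpy arrays holding the cropped pipe images (placeholder)
seq = iaa.Sequential([
    iaa.Affine(rotate=(-25, 25), scale=(0.8, 1.2)),    # random rotation/scale
    iaa.GaussianBlur(sigma=(0.0, 1.0)),                # slight defocus
    iaa.AdditiveGaussianNoise(scale=(0, 0.05 * 255)),  # sensor noise
])
augmented = seq(images=crops)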
Another option is to get a bunch of backgrounds as well as transformations over the cropping images and create multiple images with bounding boxes annotations, you could use a tool like this one for that: https://github.com/tylerhutcherson/synthetic-images.
Then, you can use a deep learning model like YOLOv4, RetinaNet, or Faster-RCNN. Notice that you may need to change the strides of the output layers since the objects may be too small.
All in all, if I were you I would start with the object detection from dlib; it should not take you too long to have everything ready.
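As a rough sketch of the dlib route (the annotation file name and the option values are assumptions; the XML would come from an annotation tool such as imglab):
import dlib

# Training options for dlib's HOG + linear SVM detector
options = dlib.simple_object_detector_training_options()
options.add_left_right_image_flips = True  # pipes look the same mirrored
options.C = 5                              # SVM regularization, tune this
options.num_threads = 4

# 'pipes.xml' is a hypothetical annotation file in imglab format
dlib.train_simple_object_detector('pipes.xml', 'pipe_detector.svm', options)

# Run the trained detector on a new image
detector = dlib.simple_object_detector('pipe_detector.svm')
img = dlib.load_rgb_image('im.jpeg')
boxes = detector(img)
print(len(boxes))  # number of detected pipes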
One thing: given the type of features used for training, the model may detect circles that are not really pipes, but if you expect to have many pipes, those false positives should be easy to remove.
Hope it helps and good luck.

Related

How to remove the background of a noisy image and extract transparent objects?

I have an image processing problem that I can't solve. I have a set of 375 images like the one below (1). I'm trying to remove the background, i.e. to perform "background subtraction" (or "foreground extraction") and keep only the waste on a plain background (black/white/...).
(1) Image example
I tried many things, including createBackgroundSubtractorMOG2 from OpenCV and simple thresholding. I also tried to remove the background pixel by pixel by subtracting it from the foreground, because I have a set of 237 background images (2) (the carpet without the waste, but which is slightly offset from the images with the objects). There are also variations in brightness across the background images.
(2) Example of a background image
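In code, that per-pixel differencing idea amounts to something like this rough sketch (filenames and the threshold value are placeholders):
import cv2
import numpy as np

bg = cv2.imread('background.png')  # carpet without the waste
fg = cv2.imread('frame.png')       # carpet with the waste

# Absolute per-pixel difference between the two frames
diff = cv2.absdiff(fg, bg)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)

# Keep only pixels that changed noticeably, then clean up the mask
_, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

result = cv2.bitwise_and(fg, fg, mask=mask)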
Here is a code example that I was able to test and that gives me the results below (3) and (4). I use Python 3.8.3.
# Function to remove the sides of the images
def delete_side(img, x_left, x_right):
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if j <= x_left or j >= x_right:
                img[i, j] = (0, 0, 0)
    return img

# Initialize the background model
backSub = cv2.createBackgroundSubtractorMOG2(history=250, varThreshold=2, detectShadows=True)

# Read the frames and update the background model
for frame in frames:
    if frame.endswith(".png"):
        filepath = FRAMES_FOLDER + '/' + frame
        img = cv2.imread(filepath)
        img_cut = delete_side(img, x_left=190, x_right=1280)
        gray = cv2.cvtColor(img_cut, cv2.COLOR_BGR2GRAY)
        mask = backSub.apply(gray)
        newimage = cv2.bitwise_or(img, img, mask=mask)
        img_blurred = cv2.GaussianBlur(newimage, (5, 5), 0)
        gray2 = cv2.cvtColor(img_blurred, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray2, 10, 255, cv2.THRESH_BINARY)
        final = cv2.bitwise_or(img, img, mask=binary)
        newpath = RESULT_FOLDER + '/' + frame
        cv2.imwrite(newpath, final)
I was inspired by many other cases found on Stack Overflow and elsewhere (example: removing pixels less than n size(noise) in an image - open CV python).
(3) The result obtained with the code above
(4) Result when increasing the varThreshold argument to 10
Unfortunately, there is still a lot of noise on the resulting pictures.
As a beginner in "background subtraction", I don't have all the keys to get an optimal solution. If someone has an idea of how to do this task in a more efficient and clean way (Is there a special method to handle transparent objects? Can noise on the objects be eliminated more effectively? etc.), I'm interested :)
Thanks
Thanks for your answers. For information, I simply changed methodology and used a segmentation model (U-Net) with 2 labels (foreground, background) to identify the background. It works quite well.

How can I smoothly detect the welding joint using OpenCV-Python?

I have tried to detect the welding joint (bead) using the code attached at the end of this question. I aim to draw a contour around the joint as shown in the third image, but my results are poor and don't look like the expected result. Here is a summary of what I did (the full code is below):
1. Reading the image .jpg format
2. Image blurring
3. Thresholding
4. Morphological operations
5. Creating mask
6. Finding contour
But the results are not promising; how can I improve from here?
img_new = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur_image = cv2.bilateralFilter(img_new, 5, 21, 21)
thresh = cv2.adaptiveThreshold(blur_image, 255,
                               cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 15, 3)
kernel = np.ones((5, 5), dtype='uint8')
thresh_dilated = cv2.dilate(thresh, kernel, iterations=1)
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(thresh_dilated, connectivity=4)
sizes = stats[1:, -1]
min_size = 5000
num_labels = num_labels - 1
img2 = np.zeros(labels.shape)
for i in range(0, num_labels):
    if sizes[i] >= min_size:
        img2[labels == i + 1] = 255
closing = cv2.morphologyEx(img2, cv2.MORPH_CLOSE, kernel)
opening = cv2.morphologyEx(closing, cv2.MORPH_OPEN, kernel)
result = opening.copy()
new_result = result.astype(dtype=np.uint8)
black = np.full((new_result.shape[0], new_result.shape[1], 3), (0, 0, 0), np.uint8)
black1 = cv2.ellipse(black, (700, 750), (300, 140), 0, 0, 360, (255, 255, 255), -1)
grayscale = cv2.cvtColor(black1, cv2.COLOR_BGR2GRAY)
ret, b_mask = cv2.threshold(grayscale, 127, 255, 0)
img_result = cv2.bitwise_and(new_result, b_mask)
contours, hierarchy = cv2.findContours(img_result, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, contours, -1, (0, 255, 0), 2)
cv2.imshow('result', img)
cv2.waitKey(0)
Read any papers?
Is this for automatic processing of many welding joints on an assembly line? Does this have to work for many other similar images? If so, it does not make sense to fine-tune an algorithm for a single picture.
If yes, then you need to use more images to develop the algorithm. Maybe use a different lighting setup, or take images with an IR camera right after welding to get a "hot" mask. Or illuminate from the left and the right side separately and combine the two images into a single mask.
Another thing that would be very helpful is a "before" image of the parts, without the welding joint. In that case it could get easy: you would just compute the difference between the two images and do some filtering to remove the welding beads and the reddish layer.
Edit 1:
Another thing I forgot to mention: please look at the R, G and B channels separately. This is something you should always try early on; often there is something useful to see. In your case, the blue channel might be interesting. Please add the separate channels to your question.
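Inspecting the channels is a one-liner with cv2.split; a minimal sketch (the filenames are placeholders):
import cv2

img = cv2.imread('weld.jpg')  # placeholder filename
b, g, r = cv2.split(img)      # OpenCV stores channels in BGR order

# Save each channel as its own grayscale image for inspection
cv2.imwrite('weld_b.png', b)
cv2.imwrite('weld_g.png', g)
cv2.imwrite('weld_r.png', r)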

Methods for detecting a known shape/object in an image using OpenCV

My task is to detect an object in a given image using OpenCV (I do not care whether it is the Python or C++ implementation). The object, shown below in three examples, is a black rectangle with five white rectangles within. All dimensions are known.
However, the rotation, scale, distance, perspective, lighting conditions, camera focus/lens, and background of the image are not known. The edge of the black rectangle is not guaranteed to be fully visible; however, there will never be anything in front of the five white rectangles - they will always be fully visible. The end goal is to be able to detect the presence of this object within an image, and rotate, scale, and crop to show the object with the perspective removed. I am fairly confident that I can adjust the image to crop to just the object, given its four corners. However, I am not so confident that I can reliably find those four corners. In ambiguous cases, not finding the object is preferred to misidentifying some other feature of the image as the object.
Using OpenCV I have come up with the following methods, however I feel I might be missing something obvious. Are there any more methods available, or is one of these the optimal solution?
Edge based outline
First idea was to look for the outside edge of the object.
Using Canny edge detection (after scaling to known size, grayscaling and gaussian blurring), finding a contour which best matches the outer shape of the object.
This deals with perspective, colour, size issues, but fails when there is a complicated background for example, or if there is something of similar shape to the object elsewhere in the image. Maybe this could be improved by a better set of rules for finding the correct contour - perhaps involving the five white rectangles as well as the outer edge.
Feature detection
The next idea was to match to a known template using feature detecting.
Using ORB feature detection, descriptor matching and homography (from this tutorial) fails, I believe because the features it is detecting are very similar to other features within the object (lots of corners which are precisely one-quarter white and three-quarters black). However, I do like the idea of matching to a known template - this idea makes sense to me. I suppose, though, that because the object is quite basic geometrically, it's likely to find a lot of false positives in the feature matching step.
Parallel Lines
Using HoughLines or HoughLinesP, looking for evenly spaced parallel lines. I have just started down this road, so I need to investigate the best methods for thresholding etc. While it looks messy for images with complex backgrounds, I think it may work well, as I can rely on the fact that the white rectangles within the black object should always be high contrast, giving a good indication of where the lines are.
'Barcode Scan'
My final idea is to scan the image by line, looking for the white to black pattern.
I have not started on this method, but the idea is to take a strip of the image (at some angle), convert it to HSV colour space, and look for the regular black-to-white pattern appearing five times sequentially in the Value channel. This idea sounds promising to me, as I believe it should ignore many of the unknown variables.
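As a rough sketch of what I have in mind (untested; the binarization threshold and scan line position are placeholders):
import cv2
import numpy as np

img = cv2.imread('object.png')  # placeholder filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
v = hsv[:, :, 2]  # Value channel

# Take one horizontal scan line through the middle of the image
row = v[v.shape[0] // 2]

# Binarize the scan line and find black/white transitions
binary = (row > 128).astype(np.int8)
transitions = np.flatnonzero(np.diff(binary))

# Five white rectangles on a black background should produce
# roughly ten evenly spaced transitions along the strip
print(len(transitions), transitions)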
Thoughts
I have looked at a number of OpenCV tutorials, as well as SO questions such as this one, however because my object is quite geometrically simple I am having issues implementing the ideas given.
I feel like this is an achievable task, however my struggle is knowing which method to pursue further. I have experimented with the first two ideas quite a bit, and while I haven't achieved anything very reliable, maybe there is something I am missing. Is there a standard way of achieving this task which I have not thought of, or is one of my suggested methods the most sensible?
EDIT: Once the corners are found using one of the above methods (or some other method), I am thinking of using Hu Moments or OpenCV's matchShapes() function to remove any false positives.
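For reference, the matchShapes() comparison might look something like this minimal sketch, where the template contour is built from a synthetic drawing with placeholder dimensions:
import cv2
import numpy as np

# Build a template contour from a synthetic drawing of the object's
# outer rectangle (dimensions are placeholders)
template_img = np.zeros((200, 300), np.uint8)
cv2.rectangle(template_img, (20, 20), (280, 180), 255, -1)
template, _ = cv2.findContours(template_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Compare a detected candidate contour against the template;
# lower scores mean more similar shapes
candidate = template[0]  # stand-in; use a real detected contour here
score = cv2.matchShapes(candidate, template[0], cv2.CONTOURS_MATCH_I1, 0.0)
print(score)  # reject the candidate if the score exceeds a tuned threshold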
EDIT2: Added some more input image examples as requested by @Timo
Orig1
Orig2
Orig3
Extra image 1
Extra image 2
Extra image 3
Extra image 4
I had some time to look into the problem and made a little Python script. I'm detecting the white rectangles inside your shape. Paste the code into a .py file and copy all input images into an input subfolder. The final result of the image is just a dummy at the moment and the script isn't complete yet. I'll try to continue it in the next couple of days. The script will create a debug subfolder where it saves some images that show the current detection state.
import numpy as np
import cv2
import os

INPUT_DIR = 'input'
DEBUG_DIR = 'debug'
OUTPUT_DIR = 'output'
IMG_TARGET_SIZE = 1000

# each algorithm must return a rotated rect and a confidence value [0..1]: (((x, y), (w, h), angle), confidence)

def main():
    # a list of all used algorithms
    algorithms = [rectangle_detection]

    # load and prepare images
    files = list(os.listdir(INPUT_DIR))
    images = [cv2.imread(os.path.join(INPUT_DIR, f), cv2.IMREAD_GRAYSCALE) for f in files]
    images = [scale_image(img) for img in images]

    for img, filename in zip(images, files):
        results = [alg(img, filename) for alg in algorithms]
        roi, confidence = merge_results(results)

        display = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
        display = cv2.drawContours(display, [cv2.boxPoints(roi).astype('int32')], -1, (0, 230, 0))

        cv2.imshow('img', display)
        cv2.waitKey()

def merge_results(results):
    '''Merges all results into a single result.'''
    return max(results, key=lambda x: x[1])

def scale_image(img):
    '''Scales the image so that the biggest side is IMG_TARGET_SIZE.'''
    scale = IMG_TARGET_SIZE / np.max(img.shape)
    return cv2.resize(img, (0, 0), fx=scale, fy=scale)

def rectangle_detection(img, filename):
    debug_img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    _, binarized = cv2.threshold(img, 50, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binarized, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    # detect all rectangles
    rois = []
    for contour in contours:
        if len(contour) < 4:
            continue
        cont_area = cv2.contourArea(contour)
        if not 1000 < cont_area < 15000:  # roughly filter by the area of the detected rectangles
            continue
        cont_perimeter = cv2.arcLength(contour, True)
        (x, y), (w, h), angle = rect = cv2.minAreaRect(contour)
        rect_area = w * h
        if cont_area / rect_area < 0.8:  # check the 'rectangularity'
            continue
        rois.append(rect)

    # save intermediate results in the debug folder
    rois_img = cv2.drawContours(debug_img, contours, -1, (0, 0, 230))
    rois_img = cv2.drawContours(rois_img, [cv2.boxPoints(rect).astype('int32') for rect in rois], -1, (0, 230, 0))
    save_dbg_img(rois_img, 'rectangle_detection', filename, 1)

    # todo: detect pattern
    return rois[0], 1.0  # dummy values

def save_dbg_img(img, folder, filename, index=0):
    '''Writes the given image to DEBUG_DIR/folder/filename_index.png.'''
    folder = os.path.join(DEBUG_DIR, folder)
    if not os.path.exists(folder):
        os.makedirs(folder)
    cv2.imwrite(os.path.join(folder, '{}_{:02}.png'.format(os.path.splitext(filename)[0], index)), img)

if __name__ == "__main__":
    main()
Here is an example image of the current WIP
The next step is to detect the pattern / relation between multiple rectangles. I'll update this answer when I make progress.

OpenCV Detect scratches on fruits

For a little experiment I'm doing in Python, I want to find small scratches on fruits. The scratches are very small and hard to detect by the human eye.
I'm using a high resolution camera for that experiment.
Here is the defect I want to detect:
Original Image:
This is my result with very few lines of code:
So I found the contours of my fruit. How can I proceed to finding the scratch? The RGB values are similar to other parts of the fruit. So how can I differentiate between a scratch and a normal part of the fruit?
My code:
# Imports
import numpy as np
import cv2
import time

# Read Image & Convert
img = cv2.imread('IMG_0441.jpg')
result = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Filtering
lower = np.array([1, 60, 50])
upper = np.array([255, 255, 255])
result = cv2.inRange(result, lower, upper)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
result = cv2.dilate(result, kernel)

# Contours
im2, contours, hierarchy = cv2.findContours(result.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
result = cv2.cvtColor(result, cv2.COLOR_GRAY2BGR)
if len(contours) != 0:
    for (i, c) in enumerate(contours):
        area = cv2.contourArea(c)
        if area > 100000:
            print(area)
            cv2.drawContours(img, c, -1, (255, 255, 0), 12)
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 12)

# Stack results
result = np.vstack((result, img))
resultOrig = result.copy()

# Save image to file before resizing
cv2.imwrite(str(time.time()) + '_0_result.jpg', resultOrig)

# Resize
max_dimension = float(max(result.shape))
scale = 900 / max_dimension
result = cv2.resize(result, None, fx=scale, fy=scale)

# Show results
cv2.imshow('res', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
I changed your image to HSL colour space.
I can't see the scratch in the L channel, so the greyscale approach suggested earlier is going to be difficult.
But the scratch is quite noticeable in the hue plane.
You could use an edge detector to find the blemish in the hue channel. Here I use a difference of gaussians detector (with sizes 20 and 4).
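A minimal sketch of that difference-of-Gaussians step on the hue channel, assuming the two sizes are Gaussian sigmas and using the image from the question:
import cv2
import numpy as np

img = cv2.imread('IMG_0441.jpg')
hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
hue = hls[:, :, 0].astype(np.float32)

# Difference of Gaussians: subtract a heavily blurred copy from a
# lightly blurred one to keep mid-frequency detail such as the scratch
blur_small = cv2.GaussianBlur(hue, (0, 0), 4)
blur_big = cv2.GaussianBlur(hue, (0, 0), 20)
dog = cv2.convertScaleAbs(blur_small - blur_big)

cv2.imwrite('dog_hue.png', dog)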
My personal guess is to use some algorithm to detect the grayscale change. The grayscale variation around the scratch should be bigger than the variation in other areas. Sobel and Scharr derivatives could be an option; see the python-OpenCV tutorial on image gradients. You can first crop out the fruit with a contour application.
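A minimal sketch of that gradient idea using Sobel derivatives (the filename is a placeholder):
import cv2
import numpy as np

gray = cv2.imread('fruit.jpg', cv2.IMREAD_GRAYSCALE)  # placeholder filename

# Horizontal and vertical derivatives; the gradient magnitude should
# spike around the scratch more than over the smooth fruit surface
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
magnitude = cv2.magnitude(gx, gy)

cv2.imwrite('gradient.png', cv2.convertScaleAbs(magnitude))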
If you really want to use conventional computer vision techniques, you should start with the edges that can be detected on the fruit. Some of the edges are caused by bumps on the fruit, so you have to look at various features of the area around the edges to find the difference between scratches and bumps. After looking at about a hundred scratches, you should be able to come up with some rules.
But this process is going to be very tiring, and my guess is you will not have much luck. A better way to approach this problem is to train a deep neural network by manually annotating scratches on about 100 images, and letting the network find out by itself how to distinguish scratches from the rest of the fruit.
If you are a beginner at this stuff, search for PyImageSearch and LearnOpenCV. Both are very resourceful sites where you can learn quickly.

OpenCV (Python): Construct Rectangle from thresholded image

The image below shows an aerial photo of a house block (re-oriented with the longest side vertical), and the same image subjected to Adaptive Thresholding and Difference of Gaussians.
Images: Base; Adaptive Thresholding; Difference of Gaussians
The roof-print of the house is obvious (to the human eye) on the AdThresh image: it's a matter of connecting some obvious dots. In the sample image, finding the blue-bounded box below -
Image with desired rectangle marked in blue
I've had a crack at implementing HoughLinesP() and findContours(), but get nothing sensible (probably because there's some nuance that I'm missing). The Python script-chunk that fails to find anything remotely like the blue box is as follows:
import cv2
import numpy as np
from matplotlib import pyplot as plt

# read in full (RGBA) image - to get alpha layer to use as mask
img = cv2.imread('rotated_12.png', cv2.IMREAD_UNCHANGED)
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu's thresholding after Gaussian filtering
blur_base = cv2.GaussianBlur(grey, (9, 9), 0)
blur_diff = cv2.GaussianBlur(grey, (15, 15), 0)
_, thresh1 = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
thresh = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)
DoG_01 = blur_base - blur_diff
edges_blur = cv2.Canny(blur_base, 70, 210)

# Find Contours
(ed, cnts, h) = cv2.findContours(grey, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:4]
for c in cnts:
    approx = cv2.approxPolyDP(c, 0.1 * cv2.arcLength(c, True), True)
    cv2.drawContours(grey, [approx], -1, (0, 255, 0), 1)

# Hough Lines
minLineLength = 30
maxLineGap = 5
lines = cv2.HoughLinesP(edges_blur, 1, np.pi / 180, 20, minLineLength, maxLineGap)
print "lines found:", len(lines)
for line in lines:
    cv2.line(grey, (line[0][0], line[0][1]), (line[0][2], line[0][3]), (255, 0, 0), 2)

# plot all the images
images = [img, thresh, DoG_01]
titles = ['Base', 'AdThresh', 'DoG01']
for i in xrange(len(images)):
    plt.subplot(1, len(images), i + 1), plt.imshow(images[i], 'gray')
    plt.title(titles[i]), plt.xticks([]), plt.yticks([])
plt.savefig('a_edgedetect_12.png')
cv2.destroyAllWindows()
I am trying to set things up without excessive parameterisation. I'm wary of 'tailoring' an algorithm for just this one image since this process will be run on hundreds of thousands of images (with roofs/rooves of different colours which may be less distinguishable from background). That said, I would love to see a solution that 'hit' the blue-box target - that way I could at the very least work out what I've done wrong.
If anyone has a quick-and-dirty way to do this sort of thing, it would be awesome to get a Python code snippet to work with.
The 'base' image ->
Base Image
You should apply the following:
1. Contrast Limited Adaptive Histogram Equalization (CLAHE), and convert to grayscale.
2. Gaussian blur & morphological transforms (dilation, erosion, etc.) as mentioned by @bad_keypoints. This will help you get rid of the background noise. This is the trickiest step, as the results will depend on the order in which you apply the operations (first Gaussian blur and then morphological transforms, or vice versa) and the window sizes you choose.
3. Apply adaptive thresholding
4. Apply Canny edge detection
5. Find a contour having four corner points
As said earlier, you need to tweak the input parameters of these functions and validate them with other images, since an approach that works for this image might fail for others. You need to fix the parameter values based on trial and error.
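A minimal sketch of steps 1-5 (every parameter value below is a placeholder to tune; the filename comes from the question):
import cv2
import numpy as np

img = cv2.imread('rotated_12.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1. CLAHE
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
eq = clahe.apply(gray)

# 2. Gaussian blur + morphological opening
blur = cv2.GaussianBlur(eq, (5, 5), 0)
opened = cv2.morphologyEx(blur, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

# 3. Adaptive thresholding
thresh = cv2.adaptiveThreshold(opened, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 11, 2)

# 4. Canny edge detection
edges = cv2.Canny(thresh, 70, 210)

# 5. Largest contour that approximates to four corner points (OpenCV 4.x signature)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in sorted(contours, key=cv2.contourArea, reverse=True):
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4:
        cv2.drawContours(img, [approx], -1, (255, 0, 0), 2)
        break
cv2.imwrite('rectangle_candidate.png', img)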
