Robust and automatable droplet fitting - python

I'm trying to perform image analysis on a set of grayscale images like the one below:
The main goal is to be able to measure the dimensions of the elliptical droplets and to identify their center coordinates.
I've tried the Hough Circle Transform in OpenCV and scikit-image. All the scikit-image examples I've seen so far run quite slowly compared to OpenCV.
I've had moderate success with this code (taken from the example):
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = read_img[600:, :]  # read_img: the grayscale source image (defined elsewhere)
img = cv2.medianBlur(img, 5)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 30,
                           param1=45, param2=20, minRadius=1, maxRadius=45)
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    # draw the outer circle
    cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
    # draw the center of the circle
    cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
cv2.imshow('detected circles', cimg)
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(20, 20))
ax.imshow(cimg)
which detects the main droplets, but fails to catch the three smaller ones.
The best threshold I was able to construct uses these parameters in OpenCV:
th2 = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 15, 5)
However, I'm still unable to find the smaller droplets using the code above.
I have a couple of thousand images to process, so I would need the algorithm to find the optimal parameters for the transforms or thresholding automatically. So far I have no idea how to implement something like this.
Any suggestions for proper implementation would be greatly appreciated!

This is only a possible pre-processing step, not a complete solution. You can perform adaptive thresholding on the green channel of the image.
Code:
import cv2

img = cv2.imread('C:/Users/Jackson/Desktop/droplet.jpg', 1)

#--- Resize the image to half of its original dimensions ---
img = cv2.resize(img, (0, 0), fx = 0.5, fy = 0.5)

#--- I narrowed down to these values after some rigorous trial-and-error ---
th3 = cv2.adaptiveThreshold(img[:,:,1], 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                            cv2.THRESH_BINARY, 63, 5)

cv2.namedWindow('Original', 0)
cv2.imshow('Original', img)
cv2.namedWindow('th3', 0)
cv2.imshow('th3', th3)
cv2.waitKey()
cv2.destroyAllWindows()
Result:
You can carry on from here.
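For instance, one way to carry on towards the original goal (droplet centres and dimensions) would be a minimal sketch like the following, which finds contours in the thresholded image and fits an ellipse to each one. It assumes the th3 image from above; depending on whether the droplets come out dark or bright, you may need to invert the mask first, and the 20-pixel area cutoff is just a guess to skip noise. Note the measurements are in the resized image's pixel units.
import cv2

# Assumes `th3` from the snippet above; invert if droplets are dark on white:
# th3 = cv2.bitwise_not(th3)
contours, _ = cv2.findContours(th3, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for cnt in contours:
    # fitEllipse needs at least 5 points; the area cutoff is a hypothetical noise filter
    if len(cnt) < 5 or cv2.contourArea(cnt) < 20:
        continue
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(cnt)
    print(f"centre=({cx:.1f}, {cy:.1f}), axes=({d1:.1f}, {d2:.1f}), angle={angle:.1f}")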


OpenCV contour bumps detection

I have the following problem that I want to solve:
A paper cup is photographed from its side, and its edges and contours are found with Canny and the findContours function (please see IMG1 attached below). I then have to test whether that contour is straight (without any bumps or other imperfections) and return whether it passed or not.
EDIT: as per Christoph Rackwitz's recommendation, I'm posting the original, unaltered image.
I have successfully implemented edge detection as shown in IMG1 below, but now I want to remove the parts of the edges that are not used, to get the result shown in IMG2.
What would be the best way of doing it?
A few things that are on my mind:
Masks (mask off other paths)
Somehow delete paths that are radically different in x coordinates (a rough sketch of this follows right after this list).
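Roughly what I have in mind for that second point (a sketch only; the x_min/x_max bounds are hypothetical values picked from where the cup edge is expected to sit):
import numpy as np

def keep_edges_in_band(contours, x_min, x_max):
    # Keep only contours whose points all stay inside a hypothetical x band
    kept = []
    for cnt in contours:
        xs = cnt.reshape(-1, 2)[:, 0]
        if x_min <= xs.min() and xs.max() <= x_max:
            kept.append(cnt)
    return kept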
Yet another issue is detecting bumps on that curvy path. I'm thinking about a few possible solutions:
Loop through contours and look for spikes in x coordinates (radical changes, from 20 to 40, for example); a rough sketch of this idea follows this list.
Draw a similar-radius arc and compare it with that contour.
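For the spike-detection idea, something like this sketch could work (tol is a hypothetical pixel tolerance to tune on real images):
import numpy as np
import cv2

def find_bumps(cnt, tol=3.0):
    # Fit a straight line through the contour points (least squares)
    vx, vy, x0, y0 = cv2.fitLine(cnt, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    pts = cnt.reshape(-1, 2).astype(np.float64)
    # Perpendicular distance of every point from the fitted line
    dists = np.abs((pts[:, 0] - x0) * vy - (pts[:, 1] - y0) * vx)
    return pts[dists > tol], dists.max()

# The contour passes if no point strays more than `tol` pixels from the line:
# bumps, worst = find_bumps(cnt); passed = len(bumps) == 0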
The ideal result is shown in the IMG3 image; that would be the perfect way of solving this problem for me.
Maybe there is someone who has more experience with OpenCV and could light up my path to solving this problem. Thank you!
My code is as follows:
import numpy as np
import cv2
# Read the original image
img = cv2.imread('test2.jpg')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_blur = cv2.GaussianBlur(img_gray, (3,3), 0)
edges = cv2.Canny(image=img_blur, threshold1=110, threshold2=180) # Canny edge detection
ret, thresh = cv2.threshold(edges, 110, 180, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(image=thresh, mode=cv2.RETR_TREE, method=cv2.CHAIN_APPROX_NONE)
# print(contours)
for cnt in contours:
    print(cv2.arcLength(cnt, False))
image_copy = img.copy()
cv2.drawContours(image=image_copy, contours=contours, contourIdx=-1, color=(255, 0, 0), thickness=1, lineType=cv2.LINE_AA)
cv2.imshow('Binary image', image_copy)
cv2.waitKey(0)
cv2.destroyAllWindows()
IMG1 This is what is currently detected by OpenCV
IMG2 Good edge detection that I strive to achieve
IMG3 Perfect result with combined expected path and defect in one image
IMG4 Original image for testing and so on

Identify region of image python

I have a microscopy image and need to calculate the area shown in red. The idea is to build a function that returns the area inside the red line on the right photo (float value, X mm²).
Since I have almost no experience in image processing, I don't know how to approach this (maybe silly) problem, so I'm looking for help. Other image examples are pretty similar, with just one agglomerated "interest area" close to the center.
I'm comfortable coding in python and have used the software ImageJ for some time.
Any python package, software, bibliography, etc. should help.
Thanks!
EDIT:
I made the red example manually, just to show what I want. Detecting the "interest area" must be done inside the code.
Canny, morphological transformations, and contours can provide a decent result, although it might need some fine-tuning depending on the input images.
import numpy as np
import cv2
# Change this with your filename
image = cv2.imread('test.png', cv2.IMREAD_GRAYSCALE)
# You can fine-tune this, or try with simple threshold
canny = cv2.Canny(image, 50, 580)
# Morphological Transformations
se = np.ones((7,7), dtype='uint8')
image_close = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, se)
contours, _ = cv2.findContours(image_close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Create black canvas to draw area on
mask = np.zeros(image.shape[:2], np.uint8)
biggest_contour = max(contours, key = cv2.contourArea)
cv2.drawContours(mask, [biggest_contour], -1, 255, -1)  # note the list: fills the single contour
area = cv2.contourArea(biggest_contour)
print(f"Total area image: {image.shape[0] * image.shape[1]} pixels")
print(f"Total area contour: {area} pixels")
cv2.imwrite('mask.png', mask)
cv2.imshow('img', mask)
cv2.waitKey(0)
# Draw the contour on the original image for demonstration purposes
original = cv2.imread('test.png')
cv2.drawContours(original, [biggest_contour], -1, (0, 0, 255), -1)
cv2.imwrite('result.png', original)
cv2.imshow('result', original)
cv2.waitKey(0)
The code produces the following output:
Total area image: 332628 pixels
Total area contour: 85894.5 pixels
The only thing left to do is convert the pixels to your preferred unit of measurement.
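For example, if you know the physical size of one pixel from the microscope calibration (the mm_per_px value below is a made-up placeholder), the conversion is just the square of that scale factor:
# Hypothetical calibration: side length of one pixel in mm
mm_per_px = 0.005  # placeholder; take this from your microscope metadata
area_mm2 = area * mm_per_px ** 2
print(f"Contour area: {area_mm2:.3f} mm²")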
Two images for demonstration below.
Result
Mask
I don't know much about this topic, but it seems like a very similar question to this one, which has what look like two great answers:
Python: calculate an area within an irregular contour line

OpenCV - How to draw a line inside contour?

I'm working on a DIY 3D scanner project. I'll use a pretty common algorithm for it; see here: https://lesagegp.wordpress.com/2013/12/04/laser-scanning-explained/
I've totally understood the algorithm and wrote code for it. All I have to do now is process the images. I've captured a couple of images for testing. Here is one of them:
And I've managed to find the contours of the laser with very simple code:
import cv2
import numpy as np

image = cv2.imread("frame/1.png")
image = cv2.flip(image, 1)
hsv_frame = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# HSV range covering the red of the laser line
low_red = np.array([161, 155, 84])
high_red = np.array([179, 255, 255])
red_mask = cv2.inRange(hsv_frame, low_red, high_red)
contour = cv2.findContours(red_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[0]
draw_it = cv2.drawContours(image, contour, -1, (0, 255, 0), 3)
cv2.imshow("contour", draw_it)
cv2.waitKey(0)  # needed for the window to actually render
Result:
And right now all I want to do is draw a polyline, or something like that, inside the contour, along its inner edge, like the blue line in this example:
Is there a way to do that and take that line's coordinates? Thanks in advance.
Let's start with a slightly trimmed version of your contour image - which I happen to have generated by other means because your code didn't run on my OpenCV version:
I would then read this as greyscale and use the skimage function medial_axis() to find the medial axis, like this:
import cv2
import numpy as np
from skimage.morphology import medial_axis

# Load your trimmed image as greyscale
image = cv2.imread("a.png", cv2.IMREAD_GRAYSCALE)
# Find the medial axis (a boolean skeleton image)
skeleton = medial_axis(image).astype(np.uint8)
# Save
cv2.imwrite("result.png", skeleton*255)
Keywords: Image processing, Python, OpenCV, skimage, scikit-image, medial axis, skeleton, skeletonisation.

OpenCV: Fit ellipse with most points on contour (instead of least squares)

I have a binarized image, which I've already used open/close morphology operations on (this is as clean as I can get it, trust me on this) that looks like so:
As you can see, there is an obvious ellipse with some distortion on the top. NOTE: I do not have prior info as to the size of the circle, and this has to run very quickly (HoughCircles is too slow, I've found). I'm trying to figure out how to fit an ellipse to it, such that it maximizes the number of points on the fitted ellipse that correspond to edges on the shape. That is, I want a result like this:
However, I can't seem to find a way in OpenCV to do this. Using the common tools of fitEllipse (blue line) and minAreaRect (green line), I get these results:
Which obviously do not represent the actual ellipse I'm trying to detect. Any thoughts as to how I could accomplish this? Happy to see examples in Python or C++.
Given the shown example image, I was very skeptical of the following statement:
which I've already used open/close morphology operations on (this is as clean as I can get it, trust me on this)
And, after reading your comment,
For precision, I need it to be fit within about 2 pixels accuracy
I was pretty sure there might be a good approximation using morphological operations.
Please have a look at the following code:
import cv2
# Load image (as BGR for later drawing the circle)
image = cv2.imread('images/hvFJF.jpg', cv2.IMREAD_COLOR)
# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Get rid of possible JPG artifacts (when do people learn to use PNG?...)
_, gray = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
# Downsize image (by factor 4) to speed up morphological operations
gray = cv2.resize(gray, dsize=(0, 0), fx=0.25, fy=0.25)
# Morphological Closing: Get rid of the hole
gray = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
# Morphological opening: Get rid of the stuff at the top of the circle
gray = cv2.morphologyEx(gray, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (121, 121)))
# Resize image to original size
gray = cv2.resize(gray, dsize=(image.shape[1], image.shape[0]))
# Find contours (only most external)
cnts, _ = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Draw found contour(s) in input image
image = cv2.drawContours(image, cnts, -1, (0, 0, 255), 2)
cv2.imwrite('images/intermediate.png', gray)
cv2.imwrite('images/result.png', image)
The intermediate image looks like this:
And, the final result looks like this:
Since your image is quite large, I think no harm is done by downsizing it. The following morphological operations are (heavily) sped up, which might be of interest for your setting.
According to your statement:
NOTE: I do not have prior info as to the size of the circle[...]
You can mostly find an appropriate approximation for the above kernel sizes from your inputs. Since there is only one example image given, we can't know the variability on that issue.
Hope that helps!
Hough-Circle is perfect for this. If you know the diameter, you can get a better solution. If you only know a range, this might fit best:
EDIT: The reason this works better than the fitted ellipse is that, if you are looking for a circle, you should use a circle as the model. The wiki article explains this beautiful idea.
By the way, you could have done this with opening and closing as well (given you know exactly how big your circle is).
import numpy as np
import matplotlib.pyplot as plt
from skimage import io, color
from skimage.feature import canny
from skimage.draw import circle_perimeter
from skimage.transform import hough_circle, hough_circle_peaks

# Load picture and detect edges
image = io.imread("hvFJF.jpg")
edges = canny(image, sigma=3, low_threshold=10, high_threshold=50)

# Test a range of radii
hough_radii = np.arange(250, 300, 10)
hough_res = hough_circle(edges, hough_radii)

# Select the three most prominent circles
accums, cx, cy, radii = hough_circle_peaks(hough_res, hough_radii,
                                           total_num_peaks=3)

# Draw them
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 4))
image = color.gray2rgb(image)
for center_y, center_x, radius in zip(cy, cx, radii):
    circy, circx = circle_perimeter(center_y, center_x, radius)
    image[circy, circx] = (220, 20, 20)
ax.imshow(image, cmap=plt.cm.gray)
plt.show()

Extracting text from inside a circular border

I'm trying to develop a script using Python and OpenCV to detect some highlighted regions on a scanned instrumentation diagram and output text using Tesseract's OCR function. My workflow is first to detect the general vicinity of the region of interest, and then apply processing steps to remove everything aside from the blocks of text (lines, borders, noise). The processed image is then fed into Tesseract's OCR engine.
This workflow works on about half of the images, but fails on the rest due to the text touching the borders. I'll show some examples of what I mean below:
Step 1: Find regions of interest by creating a mask using InRange with the color range of the highlighter.
Step 2: Contour regions of interest, crop and save to file.
--- Referenced code begins here ---
Step 3: Threshold image and apply Canny Edge Detection
Step 4: Contour the edges and filter them down to circular shapes using cv2.approxPolyDP, keeping the ones with more than 8 vertices. The first or second largest contour usually corresponds to the inner edge.
Step 5: Using masks and bitwise operations, everything inside the contour is transferred to a white background image. Dilation and erosion are applied to de-noise the image and create the final image that gets fed into the OCR engine.
import cv2
import numpy as np
import pytesseract
pytesseract.pytesseract.tesseract_cmd = 'C:/Program Files (x86)/Tesseract-OCR/tesseract'
d_path = "Test images\\"
img_name = "cropped_12.jpg"
img = cv2.imread(d_path + img_name) # Reads the image
## Resize image before calculating contour
height, width = img.shape[:2]
img = cv2.resize(img,(2*width,2*height),interpolation = cv2.INTER_CUBIC)
img_orig = img.copy() # Makes copy of original image
img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) # Convert to grayscale
# Apply threshold to get binary image and write to file
_, img = cv2.threshold(img,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Edge detection
edges = cv2.Canny(img,100,200)
# Find contours of mask threshold
_, contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Find contours associated w/ polygons with 8 sides or more
cnt_list = []
area_list = [cv2.contourArea(c) for c in contours]
for j in contours:
    poly_pts = cv2.approxPolyDP(j, 0.01*cv2.arcLength(j, True), True)
    area = cv2.contourArea(j)
    if (len(poly_pts) > 8) & (area == max(area_list)):
        cnt_list.append(j)
cv2.drawContours(img_orig, cnt_list, -1, (255,0,0), 2)
# Show contours
cv2.namedWindow('Show',cv2.WINDOW_NORMAL)
cv2.imshow("Show",img_orig)
cv2.waitKey()
cv2.destroyAllWindows()
# Zero pixels outside circle
mask = np.zeros(img.shape).astype(img.dtype)
cv2.fillPoly(mask, cnt_list, (255,255,255))
mask_inv = cv2.bitwise_not(mask)
a = cv2.bitwise_and(img,img,mask = mask)
wh_back = np.ones(img.shape).astype(img.dtype)*255
b = cv2.bitwise_and(wh_back,wh_back,mask = mask_inv)
res = cv2.add(a,b)
# Get rid of noise
kernel = np.ones((2, 2), np.uint8)
res = cv2.dilate(res, kernel, iterations=1)
res = cv2.erode(res, kernel, iterations=1)
# Show final image
cv2.namedWindow('result',cv2.WINDOW_NORMAL)
cv2.imshow("result",res)
cv2.waitKey()
cv2.destroyAllWindows()
When the code works, these are the images that get output:
Working
However, in the instances where the text touches the circular border, the code assumes part of the text is part of the larger contour and ignores the last letter. For example:
Not working
Are there any processing steps that can help me bypass this problem? Or perhaps a different approach? I've tried using Hough Circle Transforms to detect the borders, but they're quite finicky and don't work as well as contouring.
I'm quite new to OpenCV and Python so any help would be appreciated.
If the Hough circle transform didn't work for you, I think your best option will be to approximate the border shape. The best method I know for that is the Douglas-Peucker algorithm, which will simplify your contour by reducing the number of points on its perimeter.
You can check this reference file from OpenCV to see the type of post-processing you can apply to your border. They also mention Douglas-Peucker:
OpenCV border processing
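In OpenCV, Douglas-Peucker is exposed as cv2.approxPolyDP. A minimal sketch of applying it (border_cnt is a hypothetical name for the circular border contour you already find in step 4, and the 1% epsilon is only a starting point to tune):
import cv2

# `border_cnt`: the circular border contour found earlier (hypothetical name)
perimeter = cv2.arcLength(border_cnt, True)
# epsilon = the maximum allowed deviation from the original contour
approx = cv2.approxPolyDP(border_cnt, 0.01 * perimeter, True)
print(f"{len(border_cnt)} points reduced to {len(approx)}")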
Just a hunch: after Otsu thresholding, erode and dilate the image. This will make very thin joints vanish. The code for this is below.
kernel = np.ones((5,5),np.uint8)
th3 = cv2.erode(th3, kernel,iterations=1)
th3 = cv2.dilate(th3, kernel,iterations=1)
Let me know how it goes. I have a couple more ideas if this does not work.
