Getting cleaner blobs for counting - python

Still on my journey of learning image masking.
I'm trying to count the number of red dots in an image.
Here is the input image:
After masking red, I get this image:
The problem is that some of the blobs aren't full, so not all of them are counted. In this specific image, for example, blobs 6 and 9 are missed (assuming the top left is 1).
How do I refine the masking process to get more accurate blobs?
Masking Code:
import cv2, os
import numpy as np
# Use a raw string so the backslashes are not treated as escape sequences
os.chdir(r'C:\Program Files\Python\projects\Blob')
# Get image input
image_input = cv2.imread('realbutwithacrylic.png')
image_input = np.copy(image_input)
rgb = cv2.cvtColor(image_input, cv2.COLOR_BGR2RGB)
# Range of color wanted
lower_red = np.array([125, 1, 0])
upper_red = np.array([200, 110, 110])
# Masking the image
first_mask = cv2.inRange(rgb, lower_red, upper_red)
# Output
cv2.imshow('first_mask', first_mask)
cv2.waitKey()
Masking Code with Blob Counter
import cv2, os
import numpy as np
# Set the working directory explicitly, because Visual Studio Code otherwise can't find the image
os.chdir(r'C:\Program Files\Python\projects\Blob')
# Get image input
image_input = cv2.imread('realbutwithacrylic.png')
image_input = np.copy(image_input)
rgb = cv2.cvtColor(image_input, cv2.COLOR_BGR2RGB)
# Range of color wanted
lower_red = np.array([125, 1, 0])
upper_red = np.array([200, 110, 110])
# Masking the image
first_mask = cv2.inRange(rgb, lower_red, upper_red)
# Initial masking output
cv2.imshow('first_mask', first_mask)
cv2.waitKey()
# Blob counter
thresh = cv2.threshold(first_mask, 0, 255, cv2.THRESH_OTSU + cv2.THRESH_BINARY)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=5)
cnts = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
# Counting the blobs; a very large area is assumed to be two merged blobs
blobs = 0
for c in cnts:
    area = cv2.contourArea(c)
    cv2.drawContours(first_mask, [c], -1, (36, 255, 12), -1)
    if area > 13000:
        blobs += 2
    else:
        blobs += 1
# Blob number output
print('blobs:', blobs)
# Masking output
cv2.imshow('thresh', thresh)
cv2.imshow('opening', opening)
cv2.imshow('image', image_input)
cv2.imshow('mask', first_mask)
cv2.waitKey()

Since you're looking for bright enough reds, you might have a better time masking things in HSV space:
import cv2
import numpy as np

orig_image = cv2.imread("realbutwithacrylic.jpg")
image = orig_image.copy()
# Blur image to get rid of noise
image = cv2.GaussianBlur(image, (3, 3), cv2.BORDER_DEFAULT)
# Convert to hue-saturation-value
h, s, v = cv2.split(cv2.cvtColor(image, cv2.COLOR_BGR2HSV))
# "Roll" the hue channel so reds (which sit at both ends of OpenCV's 0-179 hue range)
# end up in one contiguous band. This makes it easier to use `inRange` without
# needing to AND two masks together.
h = ((h.astype(np.int16) + 128) % 256).astype(np.uint8)
image = cv2.merge((h, s, v))
# Select the correct hues with saturated-enough, bright-enough colors.
image = cv2.inRange(image, np.array([40, 128, 100]), np.array([140, 255, 255]))
For your image, the resulting mask should be more straightforward to work with.
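To get a count from that mask, one option is connected-component labeling. A minimal sketch, assuming the binary mask stored in `image` by the snippet above:
import cv2
# `image` is the binary mask produced by cv2.inRange above
num_labels, labels = cv2.connectedComponents(image)
# Label 0 is the background, so exclude it from the count
print('blobs:', num_labels - 1)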

@AKX has a good suggestion, but I would prefer HSI (as described in A. Hanbury and J. Serra, "Colour image analysis in 3D-polar coordinates", Joint Pattern Recognition Symposium, 2003), which is typically better suited for image analysis than HSV. Note that this is not the same as another common conversion also referred to as HSI, which involves an arc cosine operation; this HSI does not involve trigonometry. For details, if you don't have access to the paper above, see an implementation in C++.
Also, the Gaussian blur should be quite a bit stronger. You have a JPEG-compressed image, with pretty strong compression. JPEG destroys colors, because we're not good at seeing color edges. Our best solution for this image is to apply a lot of smoothing. The better solution would be to improve the imaging, of course.
A proper threshold on the hue channel should allow us to exclude all the orange, which has a different hue than red (which is, by definition, close to 0 degrees). We also must exclude pixels with a low saturation, as some of the dark areas could have a red hue.
I'm showing how to do this with DIPlib because I'm familiar with it (disclosure: I'm an author). I'm sure you can do the same things with OpenCV, though you might need to implement the HSI color space conversion from scratch.
import diplib as dip
img = dip.ImageRead('aAvJj.jpg')
img = dip.Gauss(img, 2) # sigma = 2
hsi = dip.ColorSpaceManager.Convert(img,'hsi')
h = hsi(0)
s = hsi(1)
h = (h + 180) % 360 - 180 # turn range [180,360] into [-180,0]
dots = (dip.Abs(h) < 5) & (s > 45)
To count the dots you can now simply:
lab = dip.Label(dots)
print(dip.MaximumAndMinimum(lab)[1])
...which says 10.

While both answers provide a proper solution, it might be important to mention the following:
Cris Luengo managed to provide a noise-free mask, which is much easier to deal with.
Both solutions implicitly rely on a color difference; making that explicit with a delta_E metric is worth mentioning (it might require an additional dependency, but it will definitely simplify everything).
It might not be that critical to have those red markers (see the example below), but they are good to have in terms of reliability.
Just a small PoC (no code, using a custom segmentation pipeline):
and mask:
If you think delta_E is overkill, simply check several examples with dynamic scenes and changing light conditions. Any attempt to achieve that by hardcoding specific colors is likely to fail.
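For reference, a minimal delta_E masking sketch, assuming scikit-image is available; the reference red and the threshold of 25 are placeholders you would sample and tune for your own setup:
import cv2
import numpy as np
from skimage import color
img = cv2.imread('realbutwithacrylic.jpg')
# Convert BGR -> RGB in [0, 1], then to CIELAB
lab = color.rgb2lab(cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0)
# Hypothetical reference red; in practice, sample it from an actual marker pixel
ref = color.rgb2lab(np.array([[[0.8, 0.1, 0.1]]]))
# Perceptual distance from every pixel to the reference color
dE = color.deltaE_ciede2000(lab, ref)
# Placeholder threshold; perceptual distance holds up to lighting changes far better than raw RGB bounds
mask = (dE < 25).astype(np.uint8) * 255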

Related

How to accurately detect the positions of hollow circles (rings) with OpenCV Python?

I would like to determine the center positions of the tips of the syringes in this (video still) image. The tips are nominally round and of known size and quantity.
I am currently putting red ink on the tips to make them easier to detect. It would be nice not to have to do this, but I think detection would be very difficult without it. Anyone like a challenge?
I started off trying SimpleBlobDetector since it has some nice filtering. One thing I couldn't figure out was how to get SimpleBlobDetector to detect the hollow circles (rings).
I then tried Canny + Hough, but the circle detection was too unstable; the positions jumped around.
I am currently using findContours + minEnclosingCircle, which works OK but is still quite unstable.
The mask looks like this, and in the result you can see the accuracy is not great:
I briefly looked at RANSAC, but I couldn't find a Python example that would detect multiple circles, and the edge detection is tricky; a single-circle sketch is shown below.
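For what it's worth, scikit-image does ship RANSAC primitives. A minimal single-circle sketch, assuming `edges` is a Canny edge map cropped around one tip (multiple circles would mean cropping per tip, or iteratively removing inliers and refitting):
import numpy as np
from skimage.measure import ransac, CircleModel
# `edges` is a binary edge image of a single cropped tip (assumption)
ys, xs = np.nonzero(edges)
points = np.column_stack([xs, ys]).astype(float)
# Fit a circle robustly; residual_threshold is in pixels and needs tuning
model, inliers = ransac(points, CircleModel, min_samples=3,
                        residual_threshold=1.0, max_trials=1000)
cx, cy, r = model.params
print(f'center=({cx:.1f}, {cy:.1f}), radius={r:.1f}')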
My current code:
# https://stackoverflow.com/questions/32522989/opencv-better-detection-of-red-color
frame_inv = ~frame0
# Convert BGR to HSV
hsv = cv2.cvtColor(frame_inv, cv2.COLOR_BGR2HSV)
blur = cv2.GaussianBlur(hsv, (5, 5), 0)
# define range of color in HSV
lower_red = np.array([90 - 10, 70, 50])
upper_red = np.array([90 + 10, 255, 255])
# Threshold the HSV image to get only red colors
mask = cv2.inRange(hsv, lower_red, upper_red)
# cv2.imshow('Mask', mask)
kernel = np.ones((5, 5), np.uint8)
dilate = cv2.dilate(mask, kernel)
# cv2.imshow('Dilate', dilate)
contours = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0]
tipXY = []
for c in contours:
area = cv2.contourArea(c)
if area > 200:
(x, y), r = cv2.minEnclosingCircle(c)
center = (int(x), int(y))
r = int(r)
shift = 2
factor = 2 ** shift
cv2.circle(frame0, (int(round((x) * factor)), int(round((y) * factor))),
int(round(10 * factor)), (0, 255, 0), 2, shift=shift)
tipXY.append(center)
Any suggestions to improve the position detection accuracy/stability?
Here is a better way to segment red color using the second image as input.
Idea:
Since the red color is prominent, I tried converting to other known color spaces (LAB and YCrCb) and viewing their individual channels. The Cr channel from YCrCb expressed the red color most prominently. According to this link, the Cr channel represents the difference between red and luminance, enabling the color to stand out.
Code:
import cv2

img = cv2.imread('stacked_rings.jpg')
ycc = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
# Cr is the second channel in YCrCb
cr_channel = ycc[:,:,1]
Though the hollow rings can be seen, the pixel intensity range is limited to [109, 194]. Let's stretch the range:
dst = cv2.normalize(cr_channel, dst=None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
The circles are now more prominent. Hope this pre-processing step helps you.
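From here, a plain global threshold on the stretched channel should give a clean binary mask. A minimal sketch continuing from `dst` above (the Otsu step is an assumption, not part of the original answer):
# Otsu picks the threshold automatically on the stretched channel
_, ring_mask = cv2.threshold(dst, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imshow('ring mask', ring_mask)
cv2.waitKey()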

How to find rectangles in a fully transparent object?

I have an input image of a fully transparent object:
I need to detect the 42 rectangles in this image. This is an example of the output image I need (I marked 6 rectangles for better understanding):
The problem is that the rectangles look really different. I have to use this input image.
How can I achieve this?
Edit 1: Here is the input image as PNG:
If you calculate the variance down the rows and across the columns, using:
import cv2
import numpy as np
im = cv2.imread('YOURIMAGE', cv2.IMREAD_GRAYSCALE)
# Calculate horizontal and vertical variance
h = np.var(im, axis=1)
v = np.var(im, axis=0)
You can plot them and hopefully locate the peaks of variance, which should be your objects:
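A minimal plotting sketch, assuming matplotlib; the peaks in the two variance profiles should line up with the rectangle rows and columns:
import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(2, 1)
ax1.plot(h)
ax1.set_title('Per-row variance (from axis=1)')
ax2.plot(v)
ax2.set_title('Per-column variance (from axis=0)')
plt.tight_layout()
plt.show()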
Mark Setchell's idea is out-of-the-box. Here is a more traditional approach.
Approach:
The image contains boxes whose intensity fades away in the lower rows. Global equalization would fail here, since the intensity changes of the entire image are taken into account. I opted for a local equalization approach; in OpenCV this is available as CLAHE (Contrast Limited Adaptive Histogram Equalization).
Using CLAHE:
Equalization is applied to individual regions of the image, whose size can be predefined.
To avoid over-amplification, contrast limiting is applied (hence the name).
Let's see how to use it in our problem:
Code:
import cv2

# Read image (filename is a placeholder for your input) and store the green channel
img = cv2.imread('transparent_object.png')
green_channel = img[:,:,1]
# Grid size for CLAHE
ts = 8
# Initialize CLAHE function with parameters
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(ts, ts))
# Apply the function
cl = clahe.apply(green_channel)
Notice in the image above that the boxes in the lower regions appear slightly darker, as expected. This will help us later on.
# Apply Otsu threshold
r, th_cl = cv2.threshold(cl, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# Dilation performed using a vertical kernel to connect disjoined boxes
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 3))
dilate = cv2.dilate(th_cl, vertical_kernel, iterations=1)
# Find contours and draw bounding boxes
contours, hierarchy = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
img2 = img.copy()
for c in contours:
    area = cv2.contourArea(c)
    if area > 100:
        x, y, w, h = cv2.boundingRect(c)
        img2 = cv2.rectangle(img2, (x, y), (x + w, y + h), (0, 255, 255), 1)
(The top-rightmost box isn't covered properly; you would need to tweak the various parameters to get an accurate result.)
Other pre-processing approaches you can try (sketched below):
Global equalization
Contrast stretching
Normalization
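A minimal sketch of those three alternatives on the same input, assuming the `green_channel` array from the CLAHE snippet above:
import cv2
import numpy as np
# Global histogram equalization (uses the whole image's histogram)
equalized = cv2.equalizeHist(green_channel)
# Contrast stretching: map the observed [min, max] linearly onto [0, 255]
lo, hi = green_channel.min(), green_channel.max()
stretched = ((green_channel - lo) * (255.0 / (hi - lo))).astype(np.uint8)
# Min-max normalization via OpenCV (same idea, different API)
normalized = cv2.normalize(green_channel, None, alpha=0, beta=255,
                           norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)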

Identify region of image python

I have a microscopy image and need to calculate the area shown in red. The idea is to build a function that returns the area inside the red line on the right photo (float value, X mm²).
Since I have almost no experience in image processing, I don't know how to approach this (maybe silly) problem, so I'm looking for help. Other image examples are pretty similar, with just one agglomerated "interest area" close to the center.
I'm comfortable coding in Python and have used the software ImageJ for some time.
Any python package, software, bibliography, etc. should help.
Thanks!
EDIT:
I made the red example manually, just to show what I want. Detecting the "interest area" must be done inside the code.
Canny, morphological transformations, and contours can provide a decent result, although it might need some fine-tuning depending on the input images.
import numpy as np
import cv2
# Change this to your filename
image = cv2.imread('test.png', cv2.IMREAD_GRAYSCALE)
# You can fine-tune this, or try with a simple threshold
canny = cv2.Canny(image, 50, 580)
# Morphological transformations: close gaps in the edges
se = np.ones((7, 7), dtype='uint8')
image_close = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, se)
contours, _ = cv2.findContours(image_close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Create a black canvas to draw the area on
mask = np.zeros(image.shape[:2], np.uint8)
biggest_contour = max(contours, key=cv2.contourArea)
# Wrap the contour in a list so drawContours fills it instead of plotting its points
cv2.drawContours(mask, [biggest_contour], -1, 255, -1)
area = cv2.contourArea(biggest_contour)
print(f"Total area image: {image.shape[0] * image.shape[1]} pixels")
print(f"Total area contour: {area} pixels")
cv2.imwrite('mask.png', mask)
cv2.imshow('img', mask)
cv2.waitKey(0)
# Draw the contour on the original image for demonstration purposes
original = cv2.imread('test.png')
cv2.drawContours(original, [biggest_contour], -1, (0, 0, 255), -1)
cv2.imwrite('result.png', original)
cv2.imshow('result', original)
cv2.waitKey(0)
The code produces the following output:
Total area image: 332628 pixels
Total area contour: 85894.5 pixels
The only thing left to do is convert the pixels to your preferred measurement, as sketched below.
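A minimal conversion sketch; `mm_per_pixel` is a hypothetical calibration value you would read off the microscope's scale bar:
# Hypothetical calibration: physical width of one pixel in millimetres
mm_per_pixel = 0.01
# Area scales with the square of the linear calibration factor
area_mm2 = area * mm_per_pixel ** 2
print(f"Contour area: {area_mm2:.3f} mm^2")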
Two images for demonstration below.
Result
Mask
I don't know much about this topic, but it seems like a very similar question to this one, which has what look like two great answers:
Python: calculate an area within an irregular contour line

cv2.inRange(): make it work for all inputs

I'm using OpenCV (4.x) on the Anime Sketch dataset from Kaggle to get each image's silhouette. The hardest part has been detecting the empty areas inside the silhouette: the gaps between arm and body, between the legs, and in the hair. The tutorials I followed always use "fully filled" objects, like a ball, a head, or a car; I ended up tuning that code to make it work, but the tuning is so specific that it only works on one image.
Playing around on online-image-editor.com, I noticed that I can use the tool called Transparency to change one color, just like cv2.inRange() does.
Original image
The code:
image = cv2.imread("2.png",cv2.IMREAD_UNCHANGED)
crop_img = image[:, 0:512]
fuzz_factor = 0.97
maxColor = (crop_img[1,1] * 1).astype(int)
minColor = (maxColor * fuzz_factor).astype(int)
mask = cv2.inRange(crop_img, minColor, maxColor)
cv2.imshow("mask", mask)
cv2.waitKey()
and it outputs this (not that bad...):
BUT when trying it on another image, it doesn't work anymore; output:
So, question(s):
Is there some "magic rule" by which I can extract a specific fuzz_factor for each image?
How could I use the image's right half to get that silhouette/contour?
Thanks guys
I'm posting this to close the question.
Thanks to Micka I made some progress. There are two variables that have a high impact on the output's quality:
fuzz_factor: sets the color range for cv2.inRange()
max_contours: the number of contours to draw (sorted by size)
Higher numbers are better until white zones appear that are not background; the next step could be to discard those.
import numpy as np
import cv2
# Constants
fuzz_factor = 1
max_contours = -10
image_path = "9.png"
image = cv2.imread(image_path)
image = image[:, 0:512]
# Background color boundaries, sampled from a corner pixel
color = image[3, 3]
upper = color.astype(int)
lower = (color * (100 - fuzz_factor / 2.0) / 100).astype(int)
# Create mask with specific colors
mask = cv2.inRange(image, lower, upper)
# Get all contours
contours, _ = cv2.findContours(mask, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_NONE)
if len(contours) > 1:
    # Keep the [max_contours] biggest areas
    contours = sorted(contours, key=cv2.contourArea)[max_contours:]
# Mask where contours are filled
mask = np.zeros_like(image)
# Draw contours and fill them
cv2.drawContours(mask, contours, -1, color=[255, 255, 255], thickness=-1)
cv2.drawContours(image, contours, -1, 255, 2)
cv2.imshow("Result", np.hstack([image, mask]))
cv2.waitKey(0)

How can I completely remove the internal noise of the gear in this image?

gear
I want a generalized method, so that any type of noise inside the gear can be removed. I am using OpenCV with Python.
I have already tried lots of filters and noise-removal methods, but I am not getting proper output. Here is my code:
import cv2
import numpy as np

img1 = cv2.imread("5cam.png")
img = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
rows, cols = img.shape
# Pass None for dst, so that h, templateWindowSize and searchWindowSize line up correctly
dst = cv2.fastNlMeansDenoising(img, None, h=15, templateWindowSize=7, searchWindowSize=21)
gaussian_blurred_images = cv2.GaussianBlur(dst, (9, 9), 0)
_, thresh = cv2.threshold(gaussian_blurred_images, 200, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = np.ones((7, 7), np.uint8)
dilation = cv2.dilate(thresh, kernel)
canny = cv2.Canny(dilation, 200, 255)
contours = cv2.findContours(canny, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_NONE)[0]
# Keep only the largest contour (the gear outline)
areas1 = []
for ctr in contours:
    areas1.append(cv2.contourArea(ctr))
amax = max(areas1)
max_contour = [contours[areas1.index(amax)]]
cv2.drawContours(img1, max_contour, -1, (0, 255, 255), 2)
cv2.imshow("g", dst)
cv2.imshow("thresh", thresh)
cv2.imshow("c", canny)
cv2.imshow("img", img1)
cv2.waitKey(0)
cv2.destroyAllWindows()
First, you'll need to improve the lighting and scene:
It needs to be more diffuse and not straight on, to prevent reflections on the gear. Place lights to the side, all around. Don't use the camera's built-in flash. Use or build a "softbox", i.e. a white sheet of paper or fabric that diffuses the light before it hits the object (either translucent or used like a "mirror").
It needs to be more uniform. Your picture shows a "vignette", darkening near the edges of the frame. The previous step will probably fix that.
Be careful about smudges and dirt on the background.
Move the camera further away and use zoom if possible. That will improve the overall sharpness of the picture (more depth of focus) and reduce lens distortion (if you care about that).
Then you'll need a different approach. I would suggest trying segmentation based on hue and saturation (select the uniformly blue background):
Use cv.cvtColor to transform the image into the HSV color space.
Then use numpy indexing/masking (or cv.inRange) to select a small range of hues (somewhere around green-blue, which is probably a hue of around 180 degrees, or 90 in cvtColor's CV_8U hue values) and saturations (medium to full). For example: mask = ((hsv_img >= (90, 170, 0)) & (hsv_img <= (100, 255, 255))).all(axis=2)
That approach, on the unimproved lighting, gets me this far; a full sketch follows. On better lighting it should be even better.
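A self-contained sketch of that suggestion, assuming the same "5cam.png" input as the question; the hue/saturation bounds are the ones quoted above and will need tuning per scene:
import cv2
import numpy as np
img = cv2.imread('5cam.png')
hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Select the uniformly blue background: hue ~90-100 on the CV_8U scale, medium-to-full saturation
background = ((hsv_img >= (90, 170, 0)) & (hsv_img <= (100, 255, 255))).all(axis=2)
# Everything that is not background is gear
gear_mask = (~background).astype(np.uint8) * 255
# Optional cleanup: close small holes left by noise inside the gear
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
gear_mask = cv2.morphologyEx(gear_mask, cv2.MORPH_CLOSE, kernel)
cv2.imshow('gear mask', gear_mask)
cv2.waitKey(0)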
