I am trying to build a script capable of counting how many Euros (for now just with coins) are in a picture. To accomplish this I am thinking of first locating the coins and then comparing their relative sizes to determine the value of each one, as I've seen done elsewhere. My difficulty lies in the first step: the preprocessing of the image.
Note that this problem arises only when the contrast between the background and certain coins is very low.
I've tried various preprocessing methods combined with different detection methods such as connectedComponentsWithStats(), findContours() and SimpleBlobDetector, but the most successful combination I've achieved is:
import numpy as np
import cv2
import os
path = 'GenericImages/TP2/'
path_coins_highlighted = 'GenericImages/Highlights'
path_gaussian_blurs = 'GenericImages/Gaussian_Blurs'
dirs = os.listdir(path)
i = 0
for file in dirs:
    path2img = os.path.join(path, file)
    img = cv2.imread(path2img)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # clahe = cv2.createCLAHE(clipLimit=40, tileGridSize=(8, 8))
    # equalized = clahe.apply(gray)
    gray_blur = cv2.GaussianBlur(gray, (15, 15), 0)
    # gray_blur = cv2.bilateralFilter(gray, 9, 65, 9)
    circles = cv2.HoughCircles(gray_blur, cv2.HOUGH_GRADIENT, 1, 15, param1=50, param2=30, minRadius=0, maxRadius=0)
    circles = np.uint16(np.around(circles))
    for x in circles[0, :]:
        cv2.circle(img, (x[0], x[1]), x[2], (0, 255, 0), 2)
        cv2.circle(img, (x[0], x[1]), 2, (0, 0, 255), 3)
    cv2.imshow('Gray', gray)
    cv2.imshow('Gaussian Blur', gray_blur)
    path_save_gaussian_blur = os.path.join(path_gaussian_blurs, str(i) + '_gaussian_blur.jpg')
    cv2.imwrite(path_save_gaussian_blur, gray_blur)
    # cv2.imshow('equalized', equalized)
    cv2.imshow('Highlights', img)
    path_save_highlights = os.path.join(path_coins_highlighted, str(i) + '_highlight.jpg')
    cv2.imwrite(path_save_highlights, img)
    i += 1
    cv2.waitKey(0)
The problem lies in the consistency of the detection. I believe that when it fails, it is because there is little to no contrast between the background and the coins that HoughCircles misses. The sets of images below show the cases in which the algorithm fails.
SET 0:
SET 1:
I've tried tweaking equalization and a bilateral filter with different parameters to remove noise while keeping the transition zones (the coins' contours), but I haven't found significant improvement.
I would appreciate some direction or ideas of what I should be looking for to solve this issue.
The lighting is non-uniform, and your images are small and heavily compressed. These are the two factors that hinder good detection. It might be difficult to control the lighting, but at least make sure you use lossless image formats (such as PNG) to avoid compression artifacts.
Anyway, your non-uniform lighting makes this a good case for a lighting-normalization method called Gain Division. The idea is to build a model of the background and then weight each input pixel by that model. The output gain should be relatively constant across most of the image. This is very useful: if we eliminate the non-uniform lighting, we can create a foreground mask for the coins and then simply fit circles to the coins' contours.
Let's give it a try:
# imports:
import cv2
import numpy as np
# image path
path = "D://opencvImages//"
fileName = "FHlbm.jpg"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Deep copy for results:
inputImageCopy = inputImage.copy()
# Get local maximum:
kernelSize = 30
maxKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
localMax = cv2.morphologyEx(inputImage, cv2.MORPH_CLOSE, maxKernel, None, None, 1, cv2.BORDER_REFLECT101)
# Perform gain division
gainDivision = np.where(localMax == 0, 0, (inputImage / localMax))
# Clip the values to [0,255]
gainDivision = np.clip((255 * gainDivision), 0, 255)
# Convert the mat type from float to uint8:
gainDivision = gainDivision.astype("uint8")
cv2.imshow("Gain Division", gainDivision)
cv2.waitKey(0)
Which yields:
This is the result of applying gain division to the first image. Note that now the background is almost uniform. This is excellent, because we can apply a simple auto threshold to create a binary mask containing just the foreground objects, like this:
# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(gainDivision, cv2.COLOR_BGR2GRAY)
# Get binary image via Otsu:
_, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
This is the binary image:
Now, we have a problem here. The compression artifacts make this mask noisy. We could apply a little bit of morphology to improve the binary blobs, but your image is really small, so I have skipped this step. If you have access to larger, lossless images, you might want to include a cleaning step.
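For reference, a minimal sketch of what that skipped cleaning step could look like (the kernel size and iteration counts are assumptions you would tune to your own images):
# Hypothetical cleanup of the binary mask (parameters are guesses):
cleaningKernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
# Opening removes small white specks, closing fills small holes in the blobs:
binaryClean = cv2.morphologyEx(binaryImage, cv2.MORPH_OPEN, cleaningKernel, iterations=1)
binaryClean = cv2.morphologyEx(binaryClean, cv2.MORPH_CLOSE, cleaningKernel, iterations=2)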
For now I'll simply try to compute the Minimum Enclosing Circle of each blob larger than a threshold, and I should get a detection a little bit more robust than Hough's. Let's see:
# Find the circle blobs on the binary mask:
contours, hierarchy = cv2.findContours(binaryImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contoursPoly = [None] * len(contours)
# Store the circles here:
detectedCircles = []
# Alright, just look for the outer bounding boxes:
for i, c in enumerate(contours):
    # Get blob area:
    blobArea = cv2.contourArea(c)
    print(blobArea)
    # Set min area:
    minArea = 100
    # Process only big blobs:
    if blobArea > minArea:
        # Approximate the contour to a circle:
        (x, y), radius = cv2.minEnclosingCircle(c)
        # Compute the center and radius:
        center = (int(x), int(y))
        radius = int(radius)
        # Draw the circles:
        cv2.circle(inputImageCopy, center, radius, (0, 0, 255), 1)
        cv2.line(inputImageCopy, center, center, (0, 255, 0), 2)
        # Store the center and radius:
        detectedCircles.append([center, radius])
cv2.imshow("Circles", inputImageCopy)
cv2.waitKey(0)
Let's see the results drawn onto a deep copy of the original image:
Not bad. All the circles' data (center and radius) are stored in the detectedCircles list. We can print the info like this:
# Check out the detected circles:
for i in range(len(detectedCircles)):
    center, r = detectedCircles[i]
    print("Circle #: "+str(i)+" x: "+str(center[0])+" y: "+str(center[1])+" r: "+str(r))
This is the picture I have, and I want to detect the red ball:
However, I simply cannot get the code to work. I've tried experimenting with different param1 and param2 values, larger dp values, and even rescaling the image.
Any help on this (or even an alternate method for detecting the ball) would be much appreciated.
CODE:
import cv2 as cv   # imports added; the snippet uses the cv and np aliases
import numpy as np

frame = cv.imread("cricket_ball.png")
# Convert frame to grayscale
gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
# cv.HoughCircles returns a 3-element floating-point vector (x, y, radius) for each circle detected
circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, minDist=100, minRadius=2, maxRadius=10)  # Cricket balls in these videos are approximately 10 pixels in diameter. minRadius changed from 2.5 to 2, since OpenCV expects an integer here.
print(circles)
# Ensure at least one circle was found
if circles is not None:
    # Convert (x, y, radius) to integers; uint16 so coordinates above 255 don't wrap
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        cv.circle(frame, (i[0], i[1]), i[2], (0, 255, 0), 20)  # Draw circle outline
cv.imshow("Ball", frame)
cv.waitKey(0)
Here's my attempt. The idea is to find the ball assuming it is (one of) the most saturated objects in the scene. This should cover all bright objects, independent of their color.
I don't use Hough's circles because it's a little difficult to parametrize and often doesn't generalize well to other images. Instead, I just detect blobs on a binary image and calculate blob circularity, assuming the thing I'm looking for is close to a circle (so its circularity should be close to 1.0).
This is the code:
# imports:
import cv2
import numpy as np
# image path
path = "D://opencvImages//"
fileName = "fv8w3.png"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Deep copy for results:
inputImageCopy = inputImage.copy()
# Convert the image to the HSV color space:
hsvImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2HSV)
# Set the HSV values:
lowRange = np.array([0, 120, 0])
uppRange = np.array([179, 255, 255])
# Create the HSV mask
binaryMask = cv2.inRange(hsvImage, lowRange, uppRange)
Let's check out what kind of HSV mask we get looking only for high Saturation values:
It's all right, the object of interest is there, but the mask is noisy. Let's try some morphology to define those blobs a little better:
# Apply a dilation to strengthen the blobs:
kernel = np.ones((3, 3), np.uint8)
binaryMask = cv2.morphologyEx(binaryMask, cv2.MORPH_DILATE, kernel, iterations=1)
This is the filtered image:
Now, let me detect contours and compute contour properties to filter the noise. I'll store the blobs of interest in a list called detectedCircles:
# Find the circle blobs on the binary mask:
contours, hierarchy = cv2.findContours(binaryMask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Store the circles here:
detectedCircles = []
# Alright, just look for the outer bounding boxes:
for i, c in enumerate(contours):
    # Get blob area:
    blobArea = cv2.contourArea(c)
    print(blobArea)
    # Get blob perimeter:
    blobPerimeter = cv2.arcLength(c, True)
    print(blobPerimeter)
    # Compute circularity:
    blobCircularity = (4 * 3.1416 * blobArea) / (blobPerimeter**2)
    print(blobCircularity)
    # Set min circularity:
    minCircularity = 0.8
    # Set min area:
    minArea = 35
    # Approximate the contour to a circle:
    (x, y), radius = cv2.minEnclosingCircle(c)
    # Compute the center and radius:
    center = (int(x), int(y))
    radius = int(radius)
    # Set red color (unfiltered blob):
    color = (0, 0, 255)
    # Process only big, circular blobs:
    if blobCircularity > minCircularity and blobArea > minArea:
        # Set blue color (filtered blob):
        color = (255, 0, 0)
        # Store the center and radius:
        detectedCircles.append([center, radius])
    # Draw the circles:
    cv2.circle(inputImageCopy, center, radius, color, 2)

cv2.imshow("Circles", inputImageCopy)
cv2.waitKey(0)
I've set a circularity and minimum area test to filter out the noisy blobs. All the relevant blobs are stored in the detectedCircles list as fitted circles. Let's see the result:
Looks good. The blob of interest is enclosed by a blue circle and the noise by a red one. Now, let's try another color for the ball. I created a version of the image with a blue ball instead of a red one; this is the result:
I have an irregular shape like the one below:
I need to get the centre of that white area. I tried using contours in OpenCV like below:
import cv2 as cv   # import added

img = cv.imread('mypic', 0)
ret, thresh = cv.threshold(img, 127, 255, cv.THRESH_BINARY_INV)
cnts, _ = cv.findContours(thresh.copy(), cv.RETR_EXTERNAL,
                          cv.CHAIN_APPROX_SIMPLE)
cnt = cnts[0]  # was contours[0], an undefined name
x, y, w, h = cv.boundingRect(cnt)
res_img = cv.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv.imwrite('output.png', res_img)
But those cnts don't give me very good results, as you can see from the original image and the two small black points below the shape. Can someone point me to a good solution for getting the centre of an irregular shape like the one above?
As I suggested, perform an erosion followed by dilation (an opening operation) on the binary image, then compute central moments and use this information to calculate the centroid. These are the steps:
Get a binary image from the input via Otsu's Thresholding
Compute central moments using cv2.moments
Compute the blob's centroid using the previous information
Let's see the code:
import cv2
import numpy as np
# Set image path
path = "C:/opencvImages/"
fileName = "pn43H.png"
# Read Input image
inputImage = cv2.imread(path+fileName)
# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Threshold via Otsu + bias adjustment:
threshValue, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
This is the binary image you get. Notice the small noise:
The opening operation will get rid of the small blobs. A rectangular structuring element will suffice; let's use 3 iterations:
# Apply an erosion + dilation to get rid of small noise:
# Set kernel (structuring element) size:
kernelSize = 3
# Set operation iterations:
opIterations = 3
# Get the structuring element:
maxKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
# Perform opening:
openingImage = cv2.morphologyEx(binaryImage, cv2.MORPH_OPEN, maxKernel, None, None, opIterations, cv2.BORDER_REFLECT101)
This is the filtered image:
Now, compute the central moments and then the blob's centroid:
# Calculate the moments
imageMoments = cv2.moments(openingImage)
# Compute centroid
cx = int(imageMoments['m10']/imageMoments['m00'])
cy = int(imageMoments['m01']/imageMoments['m00'])
# Print the point:
print("Cx: "+str(cx))
print("Cy: "+str(cy))
Additionally, let's draw this point onto the binary image to check out the results:
# Draw centroid onto BGR image:
bgrImage = cv2.cvtColor(binaryImage, cv2.COLOR_GRAY2BGR)
bgrImage = cv2.line(bgrImage, (cx,cy), (cx,cy), (0,255,0), 10)
This is the result:
One can think of the centroid calculated using the image moments as the "mass" center of the object in relation to the pixel intensity. Depending on the actual shape of the object it may not even be inside the object.
An alternative would be calculating the center of the bounding circle:
# 'thresh' is the binary image from the question's code; a dilation cleans it up first:
thresh = cv2.morphologyEx(thresh, cv2.MORPH_DILATE, np.ones((3, 3), np.uint8))
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = [c for c in contours if cv2.contourArea(c) > 100]
(x, y), r = cv2.minEnclosingCircle(contours[0])
output = thresh.copy()
cv2.circle(output, (int(x), int(y)), 3, (0, 0, 0), -1)
cv2.putText(output, f"{(int(x), int(y))}", (int(x-50), int(y-10)), cv2.FONT_HERSHEY_PLAIN, 1, (0, 0, 0), 1)
cv2.circle(output, (int(x), int(y)), int(r), (255, 0, 0), 2)
The output of that code looks like this:
You may want to try the connectedComponentsWithStats function. It returns centroids, areas, and bounding-box parameters. Also, blur and morphology (dilate/erode) help a lot with noise, as noted above. If you're generous enough with erosion, you're going to get almost no stray pixels after processing.
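A minimal sketch of that suggestion (here binary stands for whatever thresholded mask you produce, and the area cutoff is an assumption):
# Hypothetical usage of connectedComponentsWithStats on a binary mask:
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
for label in range(1, num_labels):  # label 0 is the background
    area = stats[label, cv2.CC_STAT_AREA]
    if area > 100:  # drop the stray pixels left after erosion
        cx, cy = centroids[label]
        print("Blob", label, "area:", area, "centroid:", (int(cx), int(cy)))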
I am trying to do some image segmentation with watershed to detect all the balls in an image. I have followed a PyImageSearch tutorial, but I am getting very poor results. My guess is that the reflections are the problem. Still, the image is pretty clean and the instances look quite separable.
Am I using the correct approach here? Did I miss something?
I tested Cellpose and I get almost perfect results. It's not the same approach, of course, and I was hoping to get something with "classical" computer-vision techniques.
Following is the code I have, the original image, and the current result. I have tried changing the parameters, but I am not sure what I am doing here. I also looked at inRange, but I am afraid the balls are never of the same color.
original image: https://i.stack.imgur.com/7595R.jpg
import numpy as np
from scipy import ndimage
import cv2
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
import imutils
from matplotlib import pyplot as plt
img = cv2.imread('balls.jpg')
gray = -cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # unary minus on a uint8 array wraps around, effectively inverting the image
plt.imshow(gray)
# Things I tried...
# gray = cv2.dilate(gray,kernel,iterations = 1)
# hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# h, s, v = cv2.split(hsv)
thresh = cv2.threshold(gray, 250, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
# compute the exact Euclidean distance from every binary
# pixel to the nearest zero pixel, then find peaks in this
# distance map
D = ndimage.distance_transform_edt(thresh)
localMax = peak_local_max(D, indices=False, min_distance=30, labels=thresh)
# perform a connected component analysis on the local peaks,
# using 8-connectivity, then apply the Watershed algorithm
markers = ndimage.label(localMax, structure=np.ones((3, 3)))[0]
labels = watershed(-D, markers, mask=thresh)
# draw on mask
for label in np.unique(labels):
    # if the label is zero -> 'background'
    if label == 0:
        continue
    mask = np.zeros(gray.shape, dtype="uint8")
    mask[labels == label] = 255
    # detect contours in the mask and grab the largest one
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)
    c = max(cnts, key=cv2.contourArea)
    # draw a circle enclosing the object
    ((x, y), r) = cv2.minEnclosingCircle(c)
    cv2.circle(img, (int(x), int(y)), int(r), (0, 255, 0), 2)
    cv2.putText(img, "#{}".format(label), (int(x) - 10, int(y)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
plt.imshow(img)
The labels : https://i.stack.imgur.com/M6hZb.png
matchTemplate "solution" with opencv/samples/mouse_and_match.py
use whatever you like to find the peaks.
yes, with this approach you gotta pick a template manually.
to fix that, there could be approaches exploiting self-similarity (auto-correlation). exercise to the reader.
you can't pick whole balls because their sizes vary, so that's already a huge downside for template matching. also, a rectangle around a circle contains a significant number of non-object pixels, which drives down the correlation score wherever that part varies.
picking the reflection works (off of a medium ball) because the reflection shows an environment with nice strong contrast.
notice the one small ball near the top, slightly to the right? that's not doing so well for a bunch of reasons.
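a minimal sketch of the idea (the template crop coordinates are made-up placeholders; in practice you'd pick the patch with the mouse, as in the sample):
import cv2
import numpy as np

img = cv2.imread("balls.jpg")
# hypothetical hand-picked reflection patch:
template = img[100:130, 200:230]
th, tw = template.shape[:2]
scores = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
# crude peak picking via a score threshold; overlapping hits would need
# non-maximum suppression on top of this:
ys, xs = np.where(scores > 0.8)
for x, y in zip(xs, ys):
    cv2.rectangle(img, (int(x), int(y)), (int(x) + tw, int(y) + th), (0, 255, 0), 1)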
I am currently working on developing an algorithm to determine centroid positions from (Brightfield) microscopy images of bacterial clusters. This is currently a major open problem in image processing.
This question is a follow-up to: Python/OpenCV — Matching Centroid Points of Bacteria in Two Images.
Currently, the algorithm is effective for sparse, spaced-out bacteria. However, it becomes totally ineffective when the bacteria become clustered together.
In these images, notice how the bacterial centroids are located effectively.
Bright-Field Image #1
Bright-Field Image #2
Bright-Field Image #3
However, the algorithm fails when the bacteria cluster at varying levels.
Bright-Field Image #4
Bright-Field Image #5
Bright-Field Image #6
Bright-Field Image #7
Bright-Field Image #8
Original Images
Bright-Field Image #1
Bright-Field Image #2
Bright-Field Image #3
Bright-Field Image #4
Bright-Field Image #5
Bright-Field Image #6
Bright-Field Image #7
Bright-Field Image #8
I'd like to optimize my current algorithm so it's more robust for these types of images. This is the program I'm running:
import cv2
import numpy as np
import os
kernel = np.array([[0, 0, 1, 0, 0],
                   [0, 1, 1, 1, 0],
                   [1, 1, 1, 1, 1],
                   [0, 1, 1, 1, 0],
                   [0, 0, 1, 0, 0]], dtype=np.uint8)

def e_d(image, it):
    image = cv2.erode(image, kernel, iterations=it)
    image = cv2.dilate(image, kernel, iterations=it)
    return image
path = r"(INSERT IMAGE DIRECTORY HERE)"
img_files = [file for file in os.listdir(path)]
def segment_index(index: int):
    segment_file(img_files[index])

def segment_file(img_file: str):
    img_path = path + "\\" + img_file
    print(img_path)
    img = cv2.imread(img_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Applying adaptive mean thresholding
    th = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 11, 2)
    # Removing small noise
    th = e_d(th.copy(), 1)
    # Finding contours with RETR_EXTERNAL flag, removing undesired contours and
    # drawing them on a new image.
    cnt, hie = cv2.findContours(th, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cntImg = th.copy()
    for contour in cnt:
        x, y, w, h = cv2.boundingRect(contour)
        # Eliminating the contour if its width is more than half of image width
        # (bacteria will not be that big).
        if w > img.shape[1] / 2:
            continue
        cntImg = cv2.drawContours(cntImg, [cv2.convexHull(contour)], -1, 255, -1)
    # Removing almost all the remaining noise.
    # (Some big circular noise will remain along with bacteria contours)
    cntImg = e_d(cntImg, 3)
    # Finding new filtered contours again
    cnt2, hie2 = cv2.findContours(cntImg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    # Now eliminating circular type noise contours by comparing each contour's
    # extent of overlap with its enclosing circle.
    finalContours = []  # This will contain the final bacteria contours
    for contour in cnt2:
        # Finding the minimum enclosing circle
        (x, y), radius = cv2.minEnclosingCircle(contour)
        center = (int(x), int(y))
        radius = int(radius)
        # Creating an image with only this circle drawn on it (filled with white)
        circleImg = np.zeros(img.shape, dtype=np.uint8)
        circleImg = cv2.circle(circleImg, center, radius, 255, -1)
        # Creating an image with only the contour drawn on it (filled with white)
        contourImg = np.zeros(img.shape, dtype=np.uint8)
        contourImg = cv2.drawContours(contourImg, [contour], -1, 255, -1)
        # White pixels not common to both contour and circle will remain white,
        # else will become black.
        union_inter = cv2.bitwise_xor(circleImg, contourImg)
        # Finding the ratio of the extent of overlap of contour to its enclosing circle.
        # The smaller the ratio, the more circular the contour.
        ratio = np.sum(union_inter == 255) / np.sum(circleImg == 255)
        # Storing only non-circular contours (bacteria)
        if ratio > 0.55:
            finalContours.append(contour)
    finalContours = np.asarray(finalContours)
    # Finding the center of each bacterium and showing it.
    bacteriaImg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    for bacteria in finalContours:
        M = cv2.moments(bacteria)
        cx = int(M['m10'] / M['m00'])
        cy = int(M['m01'] / M['m00'])
        bacteriaImg = cv2.circle(bacteriaImg, (cx, cy), 5, (0, 0, 255), -1)
    cv2.imshow("bacteriaImg", bacteriaImg)
    cv2.waitKey(0)
# Segment Each Image
for i in range(len(img_files)):
    segment_index(i)
Ideally, I would like to at least improve on a couple of the posted images.
The mask is always the weak point in identifying objects, and the most important step. Improving it will help with identifying images containing high numbers of bacteria. I have modified your e_d function by adding an OPEN and another ERODE pass with the kernel, and changed the it (number of iterations) variable (to 1, 2 instead of 1, 3) in your code. This is by no means a finished effort, but I hope it gives you an idea of what you might try to enhance it further. I used the images you provided, and since they already have a red dot, that may be interfering with my result images... but you can see it is able to identify more bacteria on most of them. Some of my results show two dots, and in the image with only one bacterium I missed it, each quite possibly because it was already marked. Try it with the raw images and see how it does.
Also, since the bacteria are relatively uniform in both size and shape, I think you could work with the ratio and/or average of the height to width of each bacterium to filter out the extreme shapes (small or large) and the skinny, long shapes too (see the sketch below). You can measure enough bacteria to see what the average contour length, height and width, or height/width ratio, etc. is, to find reasonable tolerances rather than using a proportion of the image size itself. Another suggestion would be to rethink how you are masking the images altogether, possibly trying it in two steps: one to find the boundary of the long shape containing the bacteria, and then to find the bacteria within it. This assumes all of your images will be similar to these, and if that is so, it may help to eliminate the stray hits outside of this boundary, which are never bacteria.
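A minimal sketch of that height/width filtering idea (all the bounds here are guesses you would calibrate against measured bacteria):
# Hypothetical shape filter based on the bounding-box aspect ratio and area:
def plausible_bacterium(contour, max_aspect=5.0, min_area=30, max_area=500):
    x, y, w, h = cv2.boundingRect(contour)
    if min(w, h) == 0:
        return False
    aspect = max(w, h) / min(w, h)  # >= 1.0; large values mean skinny, long shapes
    return aspect <= max_aspect and min_area <= w * h <= max_area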
#!/usr/bin/env python
# https://stackoverflow.com/questions/63182075/python-opencv-centroid-determination-in-bacterial-clusters
import cv2
import numpy as np
import os

kernel = np.array([[0, 0, 1, 0, 0],
                   [0, 1, 1, 1, 0],
                   [1, 1, 1, 1, 1],
                   [0, 1, 1, 1, 0],
                   [0, 0, 1, 0, 0]], dtype=np.uint8)

def e_d(image, it):
    print(it)
    image = cv2.erode(image, kernel, iterations=it)
    image = cv2.dilate(image, kernel, iterations=it)
    image = cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel, iterations=1)
    image = cv2.morphologyEx(image, cv2.MORPH_ERODE, kernel, iterations=1)
    return image

# path = r"(INSERT IMAGE DIRECTORY HERE)"
path = r"E:\stackimages"
img_files = [file for file in os.listdir(path)]

def segment_index(index: int):
    segment_file(img_files[index])

def segment_file(img_file: str):
    img_path = path + "\\" + img_file
    print(img_path)
    head, tail = os.path.split(img_path)
    img = cv2.imread(img_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cv2.imshow("bacteriaImg-1", img)
    cv2.waitKey(0)
    # Applying adaptive mean thresholding
    th = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 11, 2)
    # Removing small noise
    th = e_d(th.copy(), 1)
    # Finding contours with RETR_EXTERNAL flag, removing undesired contours and
    # drawing them on a new image.
    cnt, hie = cv2.findContours(th, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cntImg = th.copy()
    for contour in cnt:
        x, y, w, h = cv2.boundingRect(contour)
        # Eliminating the contour if its width is more than half of image width
        # (bacteria will not be that big).
        if w > img.shape[1] / 2:
            continue
        else:
            cntImg = cv2.drawContours(cntImg, [cv2.convexHull(contour)], -1, 255, -1)
    # Removing almost all the remaining noise.
    # (Some big circular noise will remain along with bacteria contours)
    cntImg = e_d(cntImg, 2)
    cv2.imshow("bacteriaImg-2", cntImg)
    cv2.waitKey(0)
    # Finding new filtered contours again
    cnt2, hie2 = cv2.findContours(cntImg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    # Now eliminating circular type noise contours by comparing each contour's
    # extent of overlap with its enclosing circle.
    finalContours = []  # This will contain the final bacteria contours
    for contour in cnt2:
        # Finding the minimum enclosing circle
        (x, y), radius = cv2.minEnclosingCircle(contour)
        center = (int(x), int(y))
        radius = int(radius)
        # Creating an image with only this circle drawn on it (filled with white)
        circleImg = np.zeros(img.shape, dtype=np.uint8)
        circleImg = cv2.circle(circleImg, center, radius, 255, -1)
        # Creating an image with only the contour drawn on it (filled with white)
        contourImg = np.zeros(img.shape, dtype=np.uint8)
        contourImg = cv2.drawContours(contourImg, [contour], -1, 255, -1)
        # White pixels not common to both contour and circle will remain white,
        # else will become black.
        union_inter = cv2.bitwise_xor(circleImg, contourImg)
        # Finding the ratio of the extent of overlap of contour to its enclosing circle.
        # The smaller the ratio, the more circular the contour.
        ratio = np.sum(union_inter == 255) / np.sum(circleImg == 255)
        # Storing only non-circular contours (bacteria)
        if ratio > 0.55:
            finalContours.append(contour)
    finalContours = np.asarray(finalContours)
    # Finding the center of each bacterium and showing it.
    bacteriaImg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    for bacteria in finalContours:
        M = cv2.moments(bacteria)
        cx = int(M['m10'] / M['m00'])
        cy = int(M['m01'] / M['m00'])
        bacteriaImg = cv2.circle(bacteriaImg, (cx, cy), 5, (0, 0, 255), -1)
    cv2.imshow("bacteriaImg", bacteriaImg)
    cv2.waitKey(0)

# Segment Each Image
for i in range(len(img_files)):
    segment_index(i)
Here's some code that you can try and see if it works for you. It uses an alternative approach to segmenting images. You can fiddle around with the parameters to see what combination gives you the most acceptable results.
import numpy as np
import cv2
import matplotlib.pyplot as plt

# Adaptive threshold params
gw = 11
bs = 7
offset = 5
bact_aspect_min = 2.0
bact_aspect_max = 10.0
bact_area_min = 20  # in pixels
bact_area_max = 1000

url = "/path/to/image"
img_color = cv2.imread(url)
img = cv2.cvtColor(img_color, cv2.COLOR_BGR2GRAY)
rows, cols = img.shape

img_eq = img.copy()
cv2.equalizeHist(img, img_eq)
img_blur = cv2.medianBlur(img_eq, gw)
th = cv2.adaptiveThreshold(img_blur, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, bs, offset)
# Note: the three-value unpacking below is the OpenCV 3.x findContours
# signature; OpenCV 4.x returns only (contours, hierarchy).
_, contours, hier = cv2.findContours(th.copy(), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
for i in range(len(contours)):
    # Filter closed contours
    rect = cv2.minAreaRect(contours[i])
    area = cv2.contourArea(contours[i])
    (x, y), (width, height), angle = rect
    if min(width, height) == 0:
        continue
    aspect_ratio = max(width, height) / min(width, height)
    if hier[0][i][3] != -1 and \
       bact_aspect_min < aspect_ratio < bact_aspect_max and \
       bact_area_min < area < bact_area_max:
        M = cv2.moments(contours[i])
        cx = int(M['m10'] / M['m00'])
        cy = int(M['m01'] / M['m00'])
        img_color = cv2.circle(img_color, (cx, cy), 3, (255, 0, 0), cv2.FILLED)

plt.imshow(img_color)
It seems that your bacteria are fused/overlapped in most of the images, and it is extremely hard to gauge their size when they are fused, let alone separate them. The best way is to run this code snippet in Jupyter/ipywidgets over a range of parameter values and see what works best. Good luck!
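A minimal sketch of that parameter-sweep idea in a notebook (the widget ranges are arbitrary starting points; img_eq and plt come from the code above):
# Hypothetical interactive sweep with ipywidgets; only odd values are valid
# for the block size and the median-blur aperture, hence the step of 2.
from ipywidgets import interact

@interact(bs=(3, 31, 2), offset=(0, 15), gw=(3, 31, 2))
def preview(bs=7, offset=5, gw=11):
    blur = cv2.medianBlur(img_eq, gw)
    th = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, bs, offset)
    plt.imshow(th, cmap="gray")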
EDIT 1
I have updated the code to use a slightly different technique and idea: basically, using level-2 contours (holes) to ascertain bacteria, which is much more in line with the bacteria's shape. You can, again, fiddle around with the parameters to see what works best. The set of parameters in the code gave me satisfactory results. You may want to filter the image a bit more to remove false positives.
Couple of other tricks can be used in addition to the one in the latest code:
Try out ADAPTIVE_THRESH_GAUSSIAN_C
Try equalized image without blurring
Use level 1 contours along with level 2
Use different size constraints for l1 and l2 contours.
I think a combination of all of these should give you a pretty decent result; a sketch of the first two follows.
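A minimal sketch of the first two tricks, reusing the names from the code above (bs and offset are the same assumed parameters):
# Trick 1: Gaussian-weighted neighborhood instead of the plain mean.
# Trick 2: threshold the equalized image (img_eq) directly, skipping the median blur.
th_gauss = cv2.adaptiveThreshold(img_eq, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY_INV, bs, offset)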
I'm working on a shooting-simulator project where I have to detect bullet holes in images. I'm trying to differentiate two images so I can detect the new hole between them, but it's not working as expected. Between the two images, there are minor changes in the previous bullet holes because of slight camera movement between frames.
My first image is here
before.png
and the second one is here
after.png
I tried this code to check for differences:
import cv2
import numpy as np

before = cv2.imread("before.png")
after = cv2.imread("after.png")

result = after - before
cv2.imwrite("result.png", result)
The result I'm getting in result.png is the image below:
result.png
But this is not what I expected. I only want to detect the new hole, but the diff is showing some pixels of the previous image as well.
The result I'm expecting is
expected.png
Please help me figure out how to detect only the big differences.
Thanks in advance.
Any new idea will be appreciated.
In order to find the differences between two images, you can utilize the Structural Similarity Index (SSIM) which was introduced in Image Quality Assessment: From Error Visibility to Structural Similarity. This method is already implemented in the scikit-image library for image processing. You can install scikit-image with pip install scikit-image.
Using the skimage.metrics.structural_similarity() function from scikit-image, it returns a score and a difference image, diff. The score represents the structural similarity index between the two input images and can fall in the range [-1, 1], with values closer to one representing higher similarity. But since you're only interested in where the two images differ, the diff image is what you're looking for: it contains the actual differences between the two images.
Next, we find all contours using cv2.findContours() and filter for the largest one. The largest contour should represent the newly detected difference, since the slight frame-to-frame differences should be smaller than the added bullet hole.
Here is the largest detected difference between the two images.
Here are the actual differences between the two images. Notice how all of the differences were captured, but since a new bullet is most likely the largest contour, we can filter out all the other slight movements between camera frames.
Note: this method works pretty well if we assume that the new bullet will have the largest contour in the diff image. If the newest hole were smaller, you might have to mask out the existing regions, and whatever new contours remain in the new image would be the new hole (assuming the image has a uniform black background with white holes); a sketch of that masking idea follows the example below.
from skimage.metrics import structural_similarity
import cv2
# Load images
image1 = cv2.imread('1.png')
image2 = cv2.imread('2.png')
# Convert to grayscale
image1_gray = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
image2_gray = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
# Compute SSIM between the two images
(score, diff) = structural_similarity(image1_gray, image2_gray, full=True)
# The diff image contains the actual image differences between the two images
# and is represented as a floating point data type in the range [0,1]
# so we must convert the array to 8-bit unsigned integers in the range
# [0,255] so we can use it with OpenCV
diff = (diff * 255).astype("uint8")
print("Image Similarity: {:.4f}%".format(score * 100))
# Threshold the difference image, followed by finding contours to
# obtain the regions of the two input images that differ
thresh = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
contour_sizes = [(cv2.contourArea(contour), contour) for contour in contours]
result = image2.copy()
# The largest contour should be the new detected difference
if len(contour_sizes) > 0:
    largest_contour = max(contour_sizes, key=lambda x: x[0])[1]
    x, y, w, h = cv2.boundingRect(largest_contour)
    cv2.rectangle(result, (x, y), (x + w, y + h), (36, 255, 12), 2)
cv2.imshow('result', result)
cv2.imshow('diff', diff)
cv2.waitKey()
Here's another example with different input images. SSIM is pretty good for detecting differences between images.
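And a rough sketch of the masking alternative from the note above (the fixed threshold and dilation count are assumptions, valid only for a dark background with bright holes):
# Hypothetical masking approach: suppress everything that was already a hole
# in the first image; any remaining white blob in the second image is new.
_, before_mask = cv2.threshold(image1_gray, 127, 255, cv2.THRESH_BINARY)
before_mask = cv2.dilate(before_mask, None, iterations=3)  # tolerate small camera shifts
_, after_mask = cv2.threshold(image2_gray, 127, 255, cv2.THRESH_BINARY)
new_holes = cv2.bitwise_and(after_mask, cv2.bitwise_not(before_mask))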
This is my approach: after we subtract one image from the other, some noise remains, so I try to remove it. I divide the image into sections whose size is a percentage of the image's dimensions and, for each small section, compare before and after, so that only significant chunks of white pixels remain. This algorithm lacks precision when there is occlusion, that is, whenever the new shot overlaps an existing one.
import cv2
import numpy as np

# This is the percentage of the width/height we're going to cut
# 0.01 <= percent < 1
percent = 0.01

before = cv2.imread("before.png")
after = cv2.imread("after.png")

result = after - before  # Here, we eliminate the biggest differences between before and after

h, w, _ = result.shape
hPercent = percent * h
wPercent = percent * w

def isBlack(crop):  # Function that tells if the crop is black
    mask = np.zeros(crop.shape, dtype=int)
    return not (np.bitwise_or(crop, mask)).any()

for wFrom in range(0, w, int(wPercent)):  # Here we are going to remove that noise
    for hFrom in range(0, h, int(hPercent)):
        wTo = int(wFrom + wPercent)
        hTo = int(hFrom + hPercent)
        crop = result[hFrom:hTo, wFrom:wTo]  # Crop the section (rows first, then columns)
        if isBlack(crop):  # If it is black, there is no shot in it
            continue  # We don't need to continue with the algorithm
        beforeCrop = before[hFrom:hTo, wFrom:wTo]  # Crop the same section from 'before'
        if not isBlack(beforeCrop):  # If the 'before' section is not black, there was a shot already there
            result[hFrom:hTo, wFrom:wTo] = [0, 0, 0]  # So, we erase it from the result

cv2.imshow("result", result)
cv2.imshow("before", before)
cv2.imshow("after", after)
cv2.waitKey(0)
As you can see, it worked for the use case you provided. A good next step is to keep an array of the positions of previous shots, so that you can ignore those regions in later frames and better handle a new shot overlapping an old one.
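A minimal sketch of that bookkeeping (the distance tolerance is an assumption):
# Hypothetical running list of shot positions; a detection counts as new
# only if it is far enough from every shot seen so far.
knownShots = []  # (x, y) centers detected in earlier frames

def isNewShot(x, y, minDist=15):
    for (px, py) in knownShots:
        if (x - px) ** 2 + (y - py) ** 2 < minDist ** 2:
            return False
    knownShots.append((x, y))
    return True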
My code:
from skimage.measure import compare_ssim  # note: newer scikit-image versions expose this as skimage.metrics.structural_similarity
import argparse
import imutils
import cv2
import numpy as np
# load the two input images
imageA = cv2.imread('./Input_1.png')
cv2.imwrite("./org.jpg", imageA)
# imageA = cv2.medianBlur(imageA,29)
imageB = cv2.imread('./Input_2.png')
cv2.imwrite("./test.jpg", imageB)
# imageB = cv2.medianBlur(imageB,29)
# convert the images to grayscale
grayA = cv2.cvtColor(imageA, cv2.COLOR_BGR2GRAY)
grayB = cv2.cvtColor(imageB, cv2.COLOR_BGR2GRAY)
##########################################################################################################
difference = cv2.subtract(grayA,grayB)
result = not np.any(difference)
if result:
    print("Pictures are the same")
else:
    cv2.imwrite("./open_cv_subtract.jpg", difference)
    print("Pictures are different, the difference is stored.")
##########################################################################################################
diff = cv2.absdiff(grayA, grayB)
cv2.imwrite("./tabsdiff.png", diff)
##########################################################################################################
grayB=cv2.resize(grayB,(grayA.shape[1],grayA.shape[0]))
(score, diff) = compare_ssim(grayA, grayB, full=True)
diff = (diff * 255).astype("uint8")
print("SSIM: {}".format(score))
#########################################################################################################
thresh = cv2.threshold(diff, 25, 255,cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
#s = imutils.grab_contours(cnts)
count = 0
# loop over the contours
for c in cnts:
    # images differ
    count = count + 1
    (x, y, w, h) = cv2.boundingRect(c)
    cv2.rectangle(imageA, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.rectangle(imageB, (x, y), (x + w, y + h), (0, 0, 255), 2)
##########################################################################################################
print (count)
cv2.imwrite("./original.jpg", imageA)
# cv2.imshow("Modified", imageB)
cv2.imwrite("./test_image.jpg", imageB)
cv2.imwrite("./compare_ssim.jpg", diff)
cv2.imwrite("./thresh.jpg", thresh)
cv2.waitKey(0)
Another approach, using ImageMagick's compare command:
import subprocess
# -fuzz 5% # ignore minor difference between two images
# -density 300
# miff:- | display
# -metric phash
# -highlight-color White # by default its RED
# -lowlight-color Black
# -compose difference # src
# -threshold 0
# -separate -evaluate-sequence Add
cmd = 'compare -highlight-color black -fuzz 5% -metric AE Input_1.png ./Input_2.png -compose src ./result.png x: '
a = subprocess.call(cmd, shell=True)
The code above shows various image-comparison approaches for finding differences between images, using OpenCV, ImageMagick, NumPy, scikit-image, etc.
I hope you find this helpful.