I'm working on a shooting simulator project where I have to detect bullet holes from images. I'm trying to difference two images so I can detect the new hole that appeared between them, but it's not working as expected. Between the two images, there are minor changes in the previous bullet holes because of slight camera movement between frames.
My first image is here
before.png
and the second one is here
after.png
I tried this code for checking differences
import cv2
import numpy as np
before = cv2.imread("before.png")
after = cv2.imread("after.png")
result = after - before
cv2.imwrite("result.png", result)
The result I'm getting in result.png is the image below:
result.png
but this is not what I expected. I only want to detect the new hole,
yet it is also showing differences from some pixels of the previous image.
The result I'm expecting is
expected.png
Please help me figure out how to detect only the big differences.
Thanks in advance.
Any new idea will be appreciated.
In order to find the differences between two images, you can utilize the Structural Similarity Index (SSIM) which was introduced in Image Quality Assessment: From Error Visibility to Structural Similarity. This method is already implemented in the scikit-image library for image processing. You can install scikit-image with pip install scikit-image.
The skimage.metrics.structural_similarity() function from scikit-image returns a score and a difference image, diff. The score represents the structural similarity index between the two input images and falls in the range [-1, 1], with values closer to one representing higher similarity. But since you're only interested in where the two images differ, the diff image is what you're looking for. The diff image contains the actual image differences between the two images.
Next, we find all contours using cv2.findContours() and filter for the largest contour. The largest contour should represent the newly detected difference, as slight movements should produce smaller contours than the added bullet hole.
Here is the largest detected difference between the two images
Here are the actual differences between the two images. Notice how all of the differences were captured, but since a new bullet is most likely the largest contour, we can filter out all the other slight movements between camera frames.
Note: this method works pretty well if we assume that the new bullet will have the largest contour in the diff image. If the newest hole were smaller, you may have to mask out the existing regions, and whatever new contours appear in the new image would be the new hole (assuming the image is a uniform black background with white holes); a sketch of that masking idea follows the code below.
from skimage.metrics import structural_similarity
import cv2
# Load images
image1 = cv2.imread('1.png')
image2 = cv2.imread('2.png')
# Convert to grayscale
image1_gray = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
image2_gray = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
# Compute SSIM between the two images
(score, diff) = structural_similarity(image1_gray, image2_gray, full=True)
# The diff image contains the actual image differences between the two images
# and is represented as a floating point data type in the range [0,1]
# so we must convert the array to 8-bit unsigned integers in the range
# [0,255] so we can use it with OpenCV
diff = (diff * 255).astype("uint8")
print("Image Similarity: {:.4f}%".format(score * 100))
# Threshold the difference image, followed by finding contours to
# obtain the regions of the two input images that differ
thresh = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
contour_sizes = [(cv2.contourArea(contour), contour) for contour in contours]
result = image2.copy()
# The largest contour should be the new detected difference
if len(contour_sizes) > 0:
    largest_contour = max(contour_sizes, key=lambda x: x[0])[1]
    x, y, w, h = cv2.boundingRect(largest_contour)
    cv2.rectangle(result, (x, y), (x + w, y + h), (36, 255, 12), 2)
cv2.imshow('result', result)
cv2.imshow('diff', diff)
cv2.waitKey()
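Following up on the note above about masking out existing regions: here is a rough sketch of that idea (my own addition, not part of the main approach). It assumes white holes on a dark, fairly uniform background, OpenCV 4.x's two-value findContours signature, and placeholder file names.
import cv2
import numpy as np
before = cv2.imread("before.png", cv2.IMREAD_GRAYSCALE)
after = cv2.imread("after.png", cv2.IMREAD_GRAYSCALE)
# Binarize both frames so the holes become white blobs
_, before_holes = cv2.threshold(before, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
_, after_holes = cv2.threshold(after, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
# Grow the old holes a little so slight camera movement is absorbed by the mask
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
old_mask = cv2.dilate(before_holes, kernel)
# Keep only the white pixels of "after" that are not covered by the (dilated) old holes
new_holes = cv2.bitwise_and(after_holes, cv2.bitwise_not(old_mask))
# Any remaining contour is a candidate for the new hole (OpenCV 4.x signature assumed)
contours, _ = cv2.findContours(new_holes, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
    result = cv2.cvtColor(after, cv2.COLOR_GRAY2BGR)
    cv2.rectangle(result, (x, y), (x + w, y + h), (36, 255, 12), 2)
    cv2.imwrite("new_hole.png", result)
The 15x15 dilation kernel is a guess; it should be large enough to absorb frame-to-frame jitter but smaller than a hole's diameter.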
Here's another example with different input images. SSIM is pretty good for detecting differences between images
This is my approach: after we subtract one image from the other, there is still some noise remaining, so I try to remove that noise. I divide the image into tiles that are a small percentage of its size, and, for each tile, compare before and after so that only significant chunks of white pixels remain. This algorithm lacks precision when there is occlusion, that is, whenever the new shot overlaps an existing one.
import cv2
import numpy as np
# This is the fraction of the width/height we're going to cut into tiles
# (0 < percent < 1)
percent = 0.01
before = cv2.imread("before.png")
after = cv2.imread("after.png")
result = after - before # Subtracting leaves only the differences between before and after (plus some noise)
h, w, _ = result.shape
hPercent = percent * h
wPercent = percent * w
def isBlack(crop): # Function that tells if the crop is black
    mask = np.zeros(crop.shape, dtype=int)
    return not (np.bitwise_or(crop, mask)).any()
for wFrom in range(0, w, int(wPercent)): # Here we are going to remove that noise
    for hFrom in range(0, h, int(hPercent)):
        wTo = int(wFrom + wPercent)
        hTo = int(hFrom + hPercent)
        crop = result[hFrom:hTo, wFrom:wTo] # Crop the result (rows first, then columns)
        if isBlack(crop): # If it is black, there is no shot in it
            continue # We don't need to continue with the algorithm
        beforeCrop = before[hFrom:hTo, wFrom:wTo] # Crop the same tile from the "before" image
        if not isBlack(beforeCrop): # If the "before" tile is not black, there was a shot there already
            result[hFrom:hTo, wFrom:wTo] = [0, 0, 0] # So we erase it from the result
cv2.imshow("result",result )
cv2.imshow("before", before)
cv2.imshow("after", after)
cv2.waitKey(0)
As you can see, it worked for the use case you provided. A good next step is to keep an array of the positions of previous shots, so that you can ignore holes that have already been detected (which also helps with the occlusion case).
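For example (a minimal sketch of that idea, not part of the code above), you could record the centre of every confirmed shot and only accept a detection when it is far enough from all known centres; the 20-pixel separation is an assumed value:
import numpy as np
known_shots = []  # (x, y) centres of shots confirmed so far
def register_if_new(x, y, w, h, min_separation=20):
    """Return True (and remember the shot) if the box centre is far from all known shots."""
    cx, cy = x + w / 2.0, y + h / 2.0
    for px, py in known_shots:
        if np.hypot(cx - px, cy - py) < min_separation:
            return False  # probably an old hole that shifted slightly between frames
    known_shots.append((cx, cy))
    return True
Each frame's largest contour would then only be accepted as a new hole when register_if_new returns True.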
My code:
from skimage.metrics import structural_similarity as compare_ssim
import argparse
import imutils
import cv2
import numpy as np
# load the two input images
imageA = cv2.imread('./Input_1.png')
cv2.imwrite("./org.jpg", imageA)
# imageA = cv2.medianBlur(imageA,29)
imageB = cv2.imread('./Input_2.png')
cv2.imwrite("./test.jpg", imageB)
# imageB = cv2.medianBlur(imageB,29)
# convert the images to grayscale
grayA = cv2.cvtColor(imageA, cv2.COLOR_BGR2GRAY)
grayB = cv2.cvtColor(imageB, cv2.COLOR_BGR2GRAY)
##########################################################################################################
difference = cv2.subtract(grayA,grayB)
result = not np.any(difference)
if result is True:
    print("Pictures are the same")
else:
    cv2.imwrite("./open_cv_subtract.jpg", difference)
    print("Pictures are different, the difference is stored.")
##########################################################################################################
diff = cv2.absdiff(grayA, grayB)
cv2.imwrite("./tabsdiff.png", diff)
##########################################################################################################
grayB=cv2.resize(grayB,(grayA.shape[1],grayA.shape[0]))
(score, diff) = compare_ssim(grayA, grayB, full=True)
diff = (diff * 255).astype("uint8")
print("SSIM: {}".format(score))
#########################################################################################################
thresh = cv2.threshold(diff, 25, 255,cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
#s = imutils.grab_contours(cnts)
count = 0
# loop over the contours
for c in cnts:
    # images differ
    count = count + 1
    (x, y, w, h) = cv2.boundingRect(c)
    cv2.rectangle(imageA, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.rectangle(imageB, (x, y), (x + w, y + h), (0, 0, 255), 2)
##########################################################################################################
print (count)
cv2.imwrite("./original.jpg", imageA)
# cv2.imshow("Modified", imageB)
cv2.imwrite("./test_image.jpg", imageB)
cv2.imwrite("./compare_ssim.jpg", diff)
cv2.imwrite("./thresh.jpg", thresh)
cv2.waitKey(0)
Another option, calling ImageMagick's compare tool from Python:
import subprocess
# -fuzz 5% # ignore minor difference between two images
# -density 300
# miff:- | display
# -metric phash
# -highlight-color White # by default its RED
# -lowlight-color Black
# -compose difference # src
# -threshold 0
# -separate -evaluate-sequence Add
cmd = 'compare -highlight-color black -fuzz 5% -metric AE Input_1.png ./Input_2.png -compose src ./result.png x: '
a = subprocess.call(cmd, shell=True)
The code above shows various image comparison approaches for finding image differences using OpenCV, ImageMagick, NumPy, scikit-image, etc.
Hope you find this helpful.
Related
For a project I want to detect braille dots on a plate. I take a picture on which I run my detection using the connectedComponentsWithStats function. Despite my attempts, I can never find a threshold value where all the dots, and only the dots, are detected; I have the same problem if I try circle detection. I'm trying to use template matching on the advice of a teacher, but I'm also having problems with that detection, since the only factor that influences it is the threshold.
import cv2 as cv
import matplotlib.pyplot as plt
img1 = cv.imread(r"traitement\prod.png")
plt.figure(figsize=(40,40))
plt.subplot(3,1,1)
gray_img = cv.cvtColor(img1, cv.COLOR_BGR2GRAY)
test = cv.adaptiveThreshold(gray_img, 255, cv.ADAPTIVE_THRESH_MEAN_C, cv.THRESH_BINARY_INV, 11, 6)
_, _, boxes, _ = cv.connectedComponentsWithStats(test)
boxes = boxes[1:]
filtered_boxes = []
for x, y, w, h, pixels in boxes:
    if pixels < 1000 and h < 35 and w < 35 and h > 14 and w > 14 and x > 15 and y > 15:
        filtered_boxes.append((x, y, w, h))
for x, y, w, h in filtered_boxes:
    W = int(w)/2
    H = int(h)/2
    #print(w)
    cv.circle(img1, (x+int(W), y+int(H)), 2, (0,255,0), 20)
cv.imwrite("gray.png",gray_img)
cv.imwrite("test.png",test)
plt.imshow(test)
plt.subplot(3,1,2)
plt.imshow(img1)
import cv2 as cv
import numpy as np
from imutils.object_detection import non_max_suppression
import matplotlib.pyplot as plt
img = cv.imread('traitement/prod.png')
temp_gray = cv.imread('dot.png',0)
W, H = temp_gray.shape[:2]
thresh = 0.6
img_gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
match = cv.matchTemplate(image=img_gray, templ=temp_gray, method=cv.TM_CCOEFF_NORMED)
(y_points, x_points) = np.where(match >= thresh)
boxes = list()
for (x, y) in zip(x_points, y_points):
    # update our list of rectangles
    boxes.append((x, y, x + W, y + H))
boxes = non_max_suppression(np.array(boxes))
# loop over the final bounding boxes
for (x1, y1, x2, y2) in boxes:
    cv.circle(img, (x1+int(W/2), y1+int(H/2)), 2, (255,0,0), 15)
plt.figure(figsize=(40,40))
plt.subplot(3,1,1)
plt.imshow(img)
Image with adaptive threshold:
Image with template detection:
I found a solution that may not be better than your solutions, because I had to overfit a few parameters for the given input...
The problem is challenging because the input image was taken under non-uniform illumination conditions (the center part is brighter than the top). Consider taking a better snapshot...
Point of thought:
The dots are ordered in rows, and we are not using that information; we might get better results if we did.
For overcoming the brightness differences we may subtract the median of the surrounding pixels from each pixel (using large filter radius), and compute the absolute difference:
bg = cv2.medianBlur(gray, 151) # Background
fg = cv2.absdiff(gray, bg) # Foreground (use absdiff because the dots are dark but bright at the center).
Apply binary threshold (use THRESH_OTSU for automatic threshold level):
_, thresh = cv2.threshold(fg, 0, 255, cv2.THRESH_OTSU)
The result of thresh is not good enough for finding the dots.
We may use the fact that the dots are dark with bright center.
That fact produces strong edges around and inside the dots.
Apply Canny edge detection:
edges = cv2.Canny(gray, threshold1=50, threshold2=100)
Merge edges with thresh (use binary or):
thresh = cv2.bitwise_or(thresh, edges)
Find connected components and continue (filter the components by area).
Code sample:
import numpy as np
import cv2
img1 = cv2.imread('prod.jpg')
gray = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY) # Convert to grayscale
bg = cv2.medianBlur(gray, 151) # Compute the background (use a large filter radius for excluding the dots)
fg = cv2.absdiff(gray, bg) # Compute absolute difference
_, thresh = cv2.threshold(fg, 0, 255, cv2.THRESH_OTSU) # Apply binary threshold (THRESH_OTSU applies automatic threshold level)
edges = cv2.Canny(gray, threshold1=50, threshold2=100) # Apply Canny edge detection.
thresh = cv2.bitwise_or(thresh, edges) # Merge edges with thresh
_, _, boxes, _ = cv2.connectedComponentsWithStats(thresh)
boxes = boxes[1:]
filtered_boxes = []
for x, y, w, h, pixels in boxes:
    #if pixels < 1000 and h < 35 and w < 35 and h > 14 and w > 14 and x > 15 and y > 15 and pixels > 100:
    if pixels < 1000 and x > 15 and y > 15 and pixels > 200:
        filtered_boxes.append((x, y, w, h))
for x, y, w, h in filtered_boxes:
    W = int(w)/2
    H = int(h)/2
    cv2.circle(img1, (x+int(W), y+int(H)), 2, (0, 255, 0), 20)
# Show images for testing
cv2.imshow('bg', bg)
cv2.imshow('fg', fg)
cv2.imshow('gray', gray)
cv2.imshow('edges', edges)
cv2.imshow('thresh', thresh)
cv2.imshow('img1', img1)
cv2.waitKey()
cv2.destroyAllWindows()
Result:
There are a few dots that are marked twice.
It is relatively simple to merge the overlapping circles into one circle.
Intermediate results:
thresh (before merging with edges):
edges:
thresh merged with edges:
Update:
As Jeru Luke commented, we may use non-maximum suppression, as done in the question.
Here is a code sample:
import numpy as np
import cv2
from imutils.object_detection import non_max_suppression
img1 = cv2.imread('prod.jpg')
gray = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY) # Convert to grayscale
bg = cv2.medianBlur(gray, 151) # Compute the background (use a large filter radius for excluding the dots)
fg = cv2.absdiff(gray, bg) # Compute absolute difference
_, thresh = cv2.threshold(fg, 0, 255, cv2.THRESH_OTSU) # Apply binary threshold (THRESH_OTSU applies automatic threshold level)
edges = cv2.Canny(gray, threshold1=50, threshold2=100) # Apply Canny edge detection.
thresh = cv2.bitwise_or(thresh, edges) # Merge edges with thresh
_, _, boxes, _ = cv2.connectedComponentsWithStats(thresh)
boxes = boxes[1:]
filtered_boxes = []
for x, y, w, h, pixels in boxes:
    if pixels < 1000 and x > 15 and y > 15 and pixels > 200:
        filtered_boxes.append((x, y, x+w, y+h))
filtered_boxes = non_max_suppression(np.array(filtered_boxes), overlapThresh=0.2)
for x1, y1, x2, y2 in filtered_boxes:
    cv2.circle(img1, ((x1+x2)//2, (y1+y2)//2), 2, (0, 255, 0), 20)
Result:
Approach: template matching, because the appearance of these dots can't be caught by just thresholding on brightness. These dots have both brighter and darker pixels than the flat surface. At best you'd get fractured components, and given how close these dots are, any morphology operations to fix up the fractured components would risk joining adjacent dots.
So here I do template matching. Works well enough, even though the appearance of these dots changes across the image. Brightness is uneven but that's not too much of a problem. TM_CCOEFF subtracts the mean for both patches before correlating.
imutils requires you to come up with bounding boxes for its NMS. Instead I'm using a simple and effective NMS formulation for raster data. Comparing floats for equality is okay here because dilate simply replicates the maximum value across the kernel size. It uses L-inf distance; for euclidean distance or an approximation, the kernel needs to be round. One downside: if there are multiple peaks of equal value, this will not remove either of them. To catch adjacent equal peaks, connectedComponents would work instead of findNonZero. For non-adjacent but equal peaks... let's just assume that doesn't happen because the data would make that situation impossible.
import numpy as np
import cv2 as cv

# Non-maximum suppression for raster data
def non_maximum_suppression(im, radius):
    dilated = cv.dilate(im, kernel=None, iterations=radius)
    return (im == dilated)
# Read image
im = cv.imread("fK2WOX.jpeg", cv.IMREAD_GRAYSCALE)
# Select template
# (x,y,w,h) = (880, 247, 44, 44)
(x,y,w,h) = cv.selectROI("ROI", 255-im, showCrosshair=False, fromCenter=True) # inverted to see the white rectangle...
cv.destroyWindow("ROI")
print((x,y,w,h))
template = im[y:y+h, x:x+w]
# find instances
scores = cv.matchTemplate(im, template, method=cv.TM_CCOEFF)
scores = scores / scores.max()
You see that the template creates additional peaks to the top and bottom of each dot. That can't be helped because that's the appearance of these dots, they consist of a bright dash surrounded by two darker blobs. These false peaks can be suppressed with NMS quite easily.
# find peaks of sufficient strength and NMS
threshold = 0.2
nmsradius = 20 # proportional to size of template/dot
nmsmask = (scores >= threshold) & non_maximum_suppression(scores, radius=nmsradius)
coords = cv.findNonZero(nmsmask.astype(np.uint8)).reshape((-1, 2))
coords += (w//2, h//2) # shift coordinates to be center of template
# draw result
canvas = cv.cvtColor(im, cv.COLOR_GRAY2BGR)
for pt in coords:
    cv.circle(canvas, pt, radius=5, color=(0,0,255), thickness=cv.FILLED)
There are no overlapping/split detections. You'll have to excuse the false detections around the punched holes near the top. Just crop those out.
As for associating those dots into symbols, that's a new problem. I'd recommend using nearest-neighbor queries. Once the spacing (in the N/S/E/W directions) to other dots has been estimated, you can "probe" in those directions from each dot and check whether there is another dot there... and assemble symbols like that, marking dots as "associated" until there's nothing left. You would also want to associate symbols into strings, assuming a certain spacing between symbols. The same approach applies there, and that gives you the added information of a baseline for the "text".
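As a rough illustration of that nearest-neighbour probing (my own sketch; the spacing and tolerance values are assumptions, and scipy is an extra dependency not used elsewhere in this answer), a KD-tree over the detected centres makes the directional lookups cheap:
import numpy as np
from scipy.spatial import cKDTree
# coords: (N, 2) array of dot centres, e.g. from the template-matching step above
coords = np.asarray(coords, dtype=float)
tree = cKDTree(coords)
spacing = np.array([44.0, 0.0])   # assumed E/W dot spacing in pixels; use [0, 44] for N/S
tolerance = 10.0                  # how far a neighbour may deviate from the ideal position
def probe(i, direction):
    """Index of the dot roughly one spacing away from dot i in the given direction, or None."""
    dist, j = tree.query(coords[i] + direction)
    return j if dist <= tolerance else None
# Example: does dot 0 have a neighbour directly to the east?
east = probe(0, spacing)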
I have an input image of a fully transparent object:
I need to detect the 42 rectangles in this image. This is an example of the output image I need (I marked 6 rectangles for better understanding):
The problem is that the rectangles look really different. I have to use this input image.
How can I achieve this?
Edit 1: Here is the input image as png:
If you calculate the variance down the rows and across the columns, using:
import cv2
import numpy as np
im = cv2.imread('YOURIMAGE', cv2.IMREAD_GRAYSCALE)
# Calculate horizontal and vertical variance
h = np.var(im, axis=1)
v = np.var(im, axis=0)
You can plot them and hopefully locate the peaks of variance which should be your objects:
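A small sketch of that plotting and peak-finding step (scipy.signal.find_peaks and the prominence heuristic are my own additions, so treat them as a starting point):
import cv2
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import find_peaks
im = cv2.imread('YOURIMAGE', cv2.IMREAD_GRAYSCALE)
h = np.var(im, axis=1)   # variance down the rows
v = np.var(im, axis=0)   # variance across the columns
# Plot both profiles so the rows/columns containing objects show up as peaks
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
ax1.plot(h)
ax1.set_title('row variance')
ax2.plot(v)
ax2.set_title('column variance')
plt.show()
# Peaks in the profiles roughly mark the rows/columns that contain rectangles
row_peaks, _ = find_peaks(h, prominence=h.mean())
col_peaks, _ = find_peaks(v, prominence=v.mean())
print(row_peaks, col_peaks)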
Mark Setchell's idea is out-of-the-box. Here is a more traditional approach.
Approach:
The image contains boxes whose intensity fades away in the lower rows. Using global equalization would fail here since the intensity changes across the entire image are taken into account. I opted for a local equalization approach; in OpenCV this is available as CLAHE (Contrast Limited Adaptive Histogram Equalization).
Using CLAHE:
Equalization is applied on individual regions of the image whose size can be predefined.
To avoid over amplification, contrast limiting is applied, (hence the name).
Let's see how to use it in our problem:
Code:
import cv2
# read image and store green channel
img = cv2.imread('input.png')  # placeholder path to the input image
green_channel = img[:,:,1]
# grid-size for CLAHE
ts = 8
# initialize CLAHE function with parameters
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(ts, ts))
# apply the function
cl = clahe.apply(green_channel)
Notice in the image above that the boxes in the lower regions appear slightly darker, as expected. This will help us later on.
# apply Otsu threshold
r,th_cl = cv2.threshold(cl, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# dilation performed using vertical kernels to connect disjoined boxes
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 3))
dilate = cv2.dilate(th_cl, vertical_kernel, iterations=1)
# find contours and draw bounding boxes
contours, hierarchy = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
img2 = img.copy()
for c in contours:
    area = cv2.contourArea(c)
    if area > 100:
        x, y, w, h = cv2.boundingRect(c)
        img2 = cv2.rectangle(img2, (x, y), (x + w, y + h), (0,255,255), 1)
(The top-rightmost box isn't covered properly. You would need to tweak the various parameters to get an accurate result)
Other pre-processing approaches you can try (a small sketch of these follows the list):
Global equalization
Contrast stretching
Normalization
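As a small sketch of those ideas (my own addition; the file name is a placeholder): cv2.equalizeHist performs global equalization, and cv2.normalize with NORM_MINMAX performs a min-max contrast stretch / normalization on the same green channel used above.
import cv2
img = cv2.imread('input.png')          # placeholder path
green_channel = img[:, :, 1]
# Global histogram equalization (for comparison with the CLAHE result above)
equalized = cv2.equalizeHist(green_channel)
# Min-max contrast stretching / normalization: darkest pixel -> 0, brightest -> 255
stretched = cv2.normalize(green_channel, None, 0, 255, cv2.NORM_MINMAX)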
I would like to find the empty spaces (black regions) in images similar to the one I've posted below, where I have randomly sized blocks scattered in it.
By empty spaces, I refer to such possible open fields (I have no particular lower bound on the area, but I would like to extract the top 3-4 largest ones present in the image). There is also no restriction on the geometric shape they can take, but these empty spaces must not contain any of the blue blocks.
What is the best way to go about this?
What I've done till now:
My original image actually looks like this. I identified all the points, grouped them based on a certain distance threshold and applied a convex hull around them. I'm unsure how to proceed further. Any help would be greatly appreciated. Thank you!
Here is one way in Python/OpenCV using the distance transform to find the largest Euclidean distance between the Xs.
Input:
import cv2
import numpy as np
import skimage.exposure
# read image
img = cv2.imread('xxx.png')
# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# threshold to binary and invert so background is white and xxx are black
thresh = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)[1]
thresh = 255 - thresh
# add black border around threshold image to avoid corner being largest distance
thresh2 = cv2.copyMakeBorder(thresh, 1,1,1,1, cv2.BORDER_CONSTANT, (0))
h, w = thresh2.shape
# create zeros mask 2 pixels larger in each dimension
mask = np.zeros([h + 2, w + 2], np.uint8)
# apply distance transform
distimg = thresh2.copy()
distimg = cv2.distanceTransform(distimg, cv2.DIST_L2, 5)
# remove excess border
distimg = distimg[1:h-1, 1:w-1]
# get max value and location in distance image
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(distimg)
# scale distance image for viewing
distimg = skimage.exposure.rescale_intensity(distimg, in_range='image', out_range=(0,255))
distimg = distimg.astype(np.uint8)
# draw circle on input
result = img.copy()
centx = max_loc[0]
centy = max_loc[1]
radius = int(max_val)
cv2.circle(result, (centx, centy), radius, (0,0,255), 1)
print('center x,y:', max_loc,'center radius:', max_val)
# save image
cv2.imwrite('xxx_distance.png',distimg)
cv2.imwrite('xxx_radius.png',result)
# show the images
cv2.imshow("thresh", thresh)
cv2.imshow("thresh2", thresh2)
cv2.imshow("distance", distimg)
cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Distance Transform Image:
Region of Largest Distance to Xs:
Textual Information:
center x,y: (179, 352) radius: 92.5286865234375
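Since the question asked for the top 3-4 empty spaces rather than just the largest one, a possible extension (my own sketch, not part of the answer above) is to repeat the max search, zeroing out a disc around each maximum in the distance image before looking for the next one:
import cv2
import numpy as np
img = cv2.imread('xxx.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = 255 - cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)[1]
thresh = cv2.copyMakeBorder(thresh, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=0)
dist = cv2.distanceTransform(thresh, cv2.DIST_L2, 5)[1:-1, 1:-1]
yy, xx = np.indices(dist.shape)
result = img.copy()
for _ in range(4):                       # top 4 empty regions
    _, max_val, _, max_loc = cv2.minMaxLoc(dist)
    cv2.circle(result, max_loc, int(max_val), (0, 0, 255), 1)
    # zero out the disc we just found so the next iteration finds a different region
    dist[(xx - max_loc[0]) ** 2 + (yy - max_loc[1]) ** 2 <= max_val ** 2] = 0
cv2.imwrite('xxx_top4.png', result)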
I have many x-ray scans and need to crop the scanned object from its background noise.
The files are in .png format and I am planning to use OpenCV-Python for this task. I have seen some work with findContours() but am unsure whether thresholding will work for this case.
Before Image:
After/Cropped Image:
Any suggested solution/code is appreciated.
Here is one way to do that in Python/OpenCV. It assumes you have the same excess border in all your images so that one can sort contours by area and skip the largest contour to get the second largest one.
Input:
import cv2
import numpy as np
# load image
img = cv2.imread("table_xray.jpg")
hh, ww = img.shape[:2]
# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# median filter
filt = cv2.medianBlur(gray, 15)
# threshold the filtered image and invert
thresh = cv2.threshold(filt, 64, 255, cv2.THRESH_BINARY)[1]
thresh = 255 - thresh
# find contours and store index with area in list
cntrs_info = []
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
index=0
for cntr in contours:
    area = cv2.contourArea(cntr)
    print(index, area)
    cntrs_info.append((index, area))
    index = index + 1
# sort contours by area
def takeSecond(elem):
    return elem[1]
cntrs_info.sort(key=takeSecond, reverse=True)
# get bounding box of second largest contour skipping large border
index_second = cntrs_info[1][0]
x,y,w,h = cv2.boundingRect(contours[index_second])
print(index_second,x,y,w,h)
# crop input image
results = img[y:y+h,x:x+w]
# write result to disk
cv2.imwrite("table_xray_thresholded.png", thresh)
cv2.imwrite("table_xray_extracted.png", results)
cv2.imshow("THRESH", thresh)
cv2.imshow("RESULTS", results)
cv2.waitKey(0)
cv2.destroyAllWindows()
Filtered and Thresholded Image:
Cropped Result:
This is another possible solution. It uses the K-Channel of your input image, once converted to the CMYK color-space. The K (or Key) channel has most of the information of the black color, so it should be useful for segmenting the input image. After that, you can apply a heavy morphological chain to produce a good mask of the object. After that, cropping the object is very straightforward. Let's see the code:
# Imports
import cv2
import numpy as np
# Read image
imagePath = "D://opencvImages//"
inputImage = cv2.imread(imagePath+"jU6QA.jpg")
# Convert to float and divide by 255:
imgFloat = inputImage.astype(np.float64) / 255.
# Calculate channel K:
kChannel = 1 - np.max(imgFloat, axis=2)
# Convert back to uint 8:
kChannel = (255*kChannel).astype(np.uint8)
The first bit of the program converts your image to the CMYK color-space and extracts the K channel. OpenCV has no direct conversion to this color-space, so a manual conversion is necessary. We need to be careful with the data types because there are float operations involved. The resulting image is this:
Pixels with black information are assigned an intensity close to 255. Now, let's threshold this image to get a binary mask. The threshold level is fixed:
# Threshold the image with a fixed thresh level
thresholdLevel = 200
_, binaryImage = cv2.threshold(kChannel, thresholdLevel, 255, cv2.THRESH_BINARY)
This produces the following binary image:
Alright. We need to isolate the object; however, we have both the lines of the background and the "frame" around the image. Let's get rid of the lines first by applying a morphological Erosion. Then, we will remove the frame by Flood-Filling with black at two locations: the upper left and bottom right of the image. After that, we will apply a Dilation to restore the object's original size. I wrapped these OpenCV functions inside custom functions that save me the typing of a couple of lines - these helper functions are presented at the end of the post. This is the approach:
# Perform Small Erosion:
binaryImage = morphoOperation(binaryImage, 3, 5, "Erode")
# Flood-Fill at two locations: Top left corner and bottom right:
(imageHeight, imageWidth) = binaryImage.shape[:2]
floodPositions = [(0, 0),(imageWidth-1, imageHeight-1)]
binaryImage = floodFill(binaryImage, floodPositions, 0)
# Perform Small Dilate:
binaryImage = morphoOperation(binaryImage, 3, 5, "Dilate")
This is the result:
Nice. We can improve the mask by applying a second morphological chain, this time with more iterations. Let's apply a Dilation to try and join the "holes" of the object, followed by an Erosion to, once again, restore the object's original size:
# Perform Big Dilate:
binaryImage = morphoOperation(binaryImage, 3, 10, "Dilate")
# Perform Big Erode:
binaryImage = morphoOperation(binaryImage, 3, 10, "Erode")
This yields the following result:
The gaps inside the object have been filled. Now, let's retrieve the contours on this mask to find the object's contour. I've additionally included an area filter. The mask is pretty clean by this point, so maybe this filter is not too necessary. Once the contour is located, we can crop the object from the original image:
# Find the contours on the binary image:
contours, hierarchy = cv2.findContours(binaryImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# BGR image for drawing results:
binaryBGR = cv2.cvtColor(binaryImage, cv2.COLOR_GRAY2BGR)
# Look for the outer bounding boxes (no children):
for _, c in enumerate(contours):
    # Get blob area:
    currentArea = cv2.contourArea(c)
    # Set a min area value:
    minArea = 10000
    if minArea < currentArea:
        # Get the contour's bounding rectangle:
        boundRect = cv2.boundingRect(c)
        # Get the dimensions of the bounding rect:
        rectX = boundRect[0]
        rectY = boundRect[1]
        rectWidth = boundRect[2]
        rectHeight = boundRect[3]
        # Set bounding rect:
        color = (0, 255, 0)
        cv2.rectangle(binaryBGR, (int(rectX), int(rectY)),
                      (int(rectX + rectWidth), int(rectY + rectHeight)), color, 5)
        cv2.imshow("Rects", binaryBGR)
        # Crop original input:
        currentCrop = inputImage[rectY:rectY + rectHeight, rectX:rectX + rectWidth]
        cv2.imshow("Cropped", currentCrop)
        cv2.waitKey(0)
The last step produces the following two images. The first is the object enclosed by a rectangle, the second one is the actual crop:
I also tested the algorithm with your second image, these are the final results:
Wow. Somebody brought a gun to the airport? That's not OK. These are the helper functions used earlier. This first function performs the morphological operations:
def morphoOperation(binaryImage, kernelSize, opIterations, opString):
    # Get the structuring element:
    morphKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
    # Perform Operation:
    if opString == "Dilate":
        op = cv2.MORPH_DILATE
    else:
        if opString == "Erode":
            op = cv2.MORPH_ERODE
    outImage = cv2.morphologyEx(binaryImage, op, morphKernel, None, None, opIterations,
                                cv2.BORDER_REFLECT101)
    return outImage
The second function performs Flood-Filling given a list of seed-points:
def floodFill(binaryImage, positions, color):
    # Loop through the positions list of tuples:
    for p in range(len(positions)):
        currentSeed = positions[p]
        x = int(currentSeed[0])
        y = int(currentSeed[1])
        # Apply flood-fill:
        cv2.floodFill(binaryImage, mask=None, seedPoint=(x, y), newVal=(color))
    return binaryImage
I'm trying to identify a list of objects that newly appeared in a photo. The plan is to get multiple cropped images from the original image and feed them to a neural network for object detection. Right now, I'm having trouble extracting the objects that newly appeared in a frame.
import cv2 as cv
import matplotlib.pyplot as plt
def mdisp(image):
    plt.imshow(image)
    plt.show()
im1 = cv.imread('images/litter-before.jpg')
mdisp(im1)
print(im1.shape)
im2 = cv.imread('images/litter-after.jpg')
mdisp(im2)
print(im2.shape)
backsub1=cv.createBackgroundSubtractorMOG2()
backsub2=cv.createBackgroundSubtractorKNN()
fgmask = backsub1.apply(im1)
fgmask = backsub1.apply(im2)
print(fgmask.shape)
mdisp(fgmask)
new_image = im2 * (fgmask[:,:,None].astype(im2.dtype))
mdisp(new_image)
Ideally, I would like to get a cropped picture of the item within red circle. How can I do it with OpenCv
Here's an approach, subtracting the two frames directly. The idea is that you first convert your images to grayscale, then blur a little bit to ignore the noise. Subtract the two frames, threshold the difference and look for the largest blob that is above a certain area threshold value.
Let's see:
import cv2
import numpy as np
# image path
path = "C:/opencvImages/"
fileName01 = "01.jpg"
fileName02 = "02.jpg"
# Read the2 images in default mode:
image01 = cv2.imread(path + fileName01)
image02 = cv2.imread(path + fileName02)
# Store a copy of the last frame for results drawing:
inputCopy = image02.copy()
# Convert RGB images to grayscale:
grayscaleImage01 = cv2.cvtColor(image01, cv2.COLOR_BGR2GRAY)
grayscaleImage02 = cv2.cvtColor(image02, cv2.COLOR_BGR2GRAY)
# Apply a median blur to reduce noise:
filterSize = 5
imageMedian01 = cv2.medianBlur(grayscaleImage01, filterSize)
imageMedian02 = cv2.medianBlur(grayscaleImage02, filterSize)
Now you have the grayscale, blurred frames. Next, we need to calculate the difference between these frames. I don't want to lose data, so I have to be careful with the data type here. Remember that these are grayscale, uint8 matrices, but the difference could potentially yield negative values. Let's convert the matrices to floats, take the difference, and convert the result back to uint8:
# uint8 to float32 conversion:
imageMedian01 = imageMedian01.astype('float32')
imageMedian02 = imageMedian02.astype('float32')
# Take the difference and convert back to uint8
imageDifference = np.clip(imageMedian01 - imageMedian02, 0, 255)
imageDifference = imageDifference.astype('uint8')
This gives you the frames difference:
Let's threshold this to get a binary image. I'm using a threshold value of 127, as it is the center of the 8-bit range:
threshValue = 127
_, binaryImage = cv2.threshold(imageDifference, threshValue, 255, cv2.THRESH_BINARY)
This is the binary image:
We are looking for the biggest blob here, so let's find the blobs/contours and filter out the small ones. Let's set a minimum area of 10 pixels:
# Perform an area filter on the binary blobs:
componentsNumber, labeledImage, componentStats, componentCentroids = \
cv2.connectedComponentsWithStats(binaryImage, connectivity=4)
# Set the minimum pixels for the area filter:
minArea = 10
# Get the indices/labels of the remaining components based on the area stat
# (skip the background component at index 0)
remainingComponentLabels = [i for i in range(1, componentsNumber) if componentStats[i][4] >= minArea]
# Filter the labeled pixels based on the remaining labels,
# assign pixel intensity to 255 (uint8) for the remaining pixels
filteredImage = np.where(np.isin(labeledImage, remainingComponentLabels) == True, 255, 0).astype('uint8')
# Find the big contours/blobs on the filtered image:
contours, hierarchy = cv2.findContours(filteredImage, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
contours_poly = [None] * len(contours)
boundRect = []
# Alright, just look for the outer bounding boxes:
for i, c in enumerate(contours):
    if hierarchy[0][i][3] == -1:
        contours_poly[i] = cv2.approxPolyDP(c, 3, True)
        boundRect.append(cv2.boundingRect(contours_poly[i]))
# Draw the bounding boxes on the (copied) input image:
for i in range(len(boundRect)):
    print(boundRect[i])
    color = (0, 255, 0)
    cv2.rectangle(inputCopy, (int(boundRect[i][0]), int(boundRect[i][1])),
                  (int(boundRect[i][0] + boundRect[i][2]), int(boundRect[i][1] + boundRect[i][3])), color, 1)
Check out the results:
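Finally, since the goal in the question is to feed cropped patches to a network, a short follow-up (my own sketch, continuing from the variables above; the output file naming is arbitrary) could save each detected box as its own image:
# Crop each detected bounding box out of the second frame so the patches can be
# passed on to the object-detection network mentioned in the question
for i, (x, y, w, h) in enumerate(boundRect):
    crop = image02[y:y + h, x:x + w]
    cv2.imwrite(path + "crop_{:02d}.png".format(i), crop)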