Counting drops in an image - Python

I have an image of drops and I want to count them.
Here is the original image:
And here it is after thresholding:
I tried a lot of functions in OpenCV and the count is never right.
Do you have any ideas on how to do this?
Thanks
The best I got was by using the following (img_morph is my binarized image):
import matplotlib.pyplot as plt
from skimage.measure import label, regionprops

rbc_bw = label(img_morph)
rbc_props = regionprops(rbc_bw)
fig, ax = plt.subplots(figsize=(18, 8))
ax.imshow(img_morph)
rbc_count = 0
for i, prop in enumerate(filter(lambda x: x.area > 250, rbc_props)):
    # regionprops bbox is (min_row, min_col, max_row, max_col)
    y1, x1, y2, x2 = prop.bbox
    width = x2 - x1
    height = y2 - y1
    r = plt.Rectangle((x1, y1), width=width, height=height,
                      color='b', fill=False)
    ax.add_patch(r)
    rbc_count += 1
print('Red Blood Cell Count:', rbc_count)
plt.show()
All my circles are detected here, but so are the gaps in between them.
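One idea to keep the circles but drop the gaps is to filter the regions by shape as well as area: a drop is close to a disk, while the gaps between touching circles are concave. A minimal sketch, assuming scikit-image and a circularity cutoff that would need tuning on your data:

import numpy as np
from skimage.measure import label, regionprops

drops = 0
for prop in regionprops(label(img_morph)):
    if prop.area <= 250:
        continue
    # 4*pi*area / perimeter**2 is ~1 for a disk, much lower for concave gaps
    circularity = 4 * np.pi * prop.area / prop.perimeter ** 2
    if circularity > 0.7:  # assumed cutoff, tune on your images
        drops += 1
print('Drop count:', drops)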
A more difficult image :

Core idea: matchTemplate.
Approach:
pick a template manually from the picture
histogram equalization for badly lit inputs (or always)
matchTemplate with suitable matching mode
also using copyMakeBorder to catch instances clipping the border
thresholding and non-maximum suppression
I'll skip the boring parts and use the first example input.
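For completeness, a minimal sketch of those skipped steps (the file name, padding size, and template coordinates are all assumptions):

import cv2 as cv

haystack = cv.imread('drops.png', cv.IMREAD_GRAYSCALE)  # hypothetical file name
haystack = cv.equalizeHist(haystack)  # histogram equalization for badly lit inputs
# pad the image so instances clipped by the border can still score a maximum
pad = 20  # assumption: on the order of half the template size
haystack = cv.copyMakeBorder(haystack, pad, pad, pad, pad, cv.BORDER_REPLICATE)
# template picked by hand from the picture; these coordinates are placeholders
template = haystack[100:140, 100:140]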
Manually picked template:
scores = cv.matchTemplate(haystack, template, cv.TM_CCOEFF_NORMED)
Thresholding and NMS:
levelmask = (scores >= 0.3)
localmax = cv.dilate(scores, None, iterations=26)
localmax = (scores == localmax)
candidates = levelmask & localmax
(nlabels, labels, stats, centroids) = cv.connectedComponentsWithStats(candidates.astype(np.uint8), connectivity=8)
print(nlabels-1, "found") # background counted too
# and then draw a circle for each centroid except label 0
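# a minimal sketch of that drawing step (radius, color, and file name are assumptions):
out = cv.cvtColor(haystack, cv.COLOR_GRAY2BGR)
for (cx, cy) in centroids[1:]:  # label 0 is the background
    cv.circle(out, (int(round(cx)), int(round(cy))), 10, (0, 255, 0), 2)
cv.imwrite('found.png', out)  # hypothetical output file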
And that finds 766 instances. I see a few false negatives (missed drops), and I saw a false positive once too, but that looks like less than 1%.

Related

How to change pixels in a bounding box within an image to black?

I have a set of binary masks that contain some noise (image below). I want to write a piece of code to change the pixels in the areas that contain that noise to black. I have tried the code below, but it does not change any pixel of the original array to black. Does anyone know how I can do this?
Mask with noise
I have tried the following code:
ht, wt = array.shape
area = ht * wt
for region in sme.regionprops(array):
    if 0.1 > (region.area / area) > 0.001:
        x1 = math.ceil(region.bbox[0])
        x2 = math.ceil(region.bbox[1])
        y1 = math.ceil(region.bbox[2])
        y2 = math.ceil(region.bbox[3])
        for item in array[y1:y2, x1:x2]:
            array = np.where(array, item > 0.1, 0)
plt.imshow(array, cmap='gray')
plt.show()
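A minimal sketch of one way to get there, assuming sme is skimage.measure: regionprops returns bbox as (min_row, min_col, max_row, max_col), so the assignments above mix up rows and columns, and np.where is given its arguments in the wrong order. Labelling the mask first and zeroing each qualifying box directly avoids both problems:

import skimage.measure as sme
import matplotlib.pyplot as plt

labels = sme.label(array > 0)        # label connected blobs in the mask
ht, wt = array.shape
area = ht * wt
for region in sme.regionprops(labels):
    if 0.001 < region.area / area < 0.1:
        # bbox is (min_row, min_col, max_row, max_col)
        r1, c1, r2, c2 = region.bbox
        array[r1:r2, c1:c2] = 0      # paint the noisy box black
plt.imshow(array, cmap='gray')
plt.show()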

Is there a way to slice an image using either numpy or opencv such that the sliced image has at least one instance of the objects of interest?

Essentially, my original image has N instances of a certain object. I have the bounding box coordinates and the class for all of them in a text file. This is basically a dataset for YoloV3 and darknet. I want to generate additional images by slicing the original one in such a way that each slice contains at least one instance of one of those objects; if it does, I save the image and the new bounding box coordinates of the objects in that image.
The following is the code for slicing the image:
x1 = random.randint(0, 1200)
width = random.randint(0, 800)
y1 = random.randint(0, 1200)
height = random.randint(30, 800)
slice_img = img[x1:x1+width, y1:y1+height]
plt.imshow(slice_img)
plt.show()
My next step is to use template matching to find if my sliced image is in the original one:
w, h = slice_img.shape[:-1]
res = cv2.matchTemplate(img, slice_img, cv2.TM_CCOEFF_NORMED)
threshold = 0.6
loc = np.where(res >= threshold)
for pt in zip(*loc[::-1]):  # switch columns and rows
    cv2.rectangle(img, pt, (pt[0] + w, pt[1] + h), (0, 0, 255), 5)
cv2.imwrite('result.png', img)
At this stage, I am quite lost and not sure how to proceed any further.
Ultimately, I need many new images with corresponding text files containing the class and coordinates. Any advice would be appreciated. Thank you.
P.S. I cannot share my images with you, unfortunately.
Template matching is way overkill for this. Template matching essentially slides a kernel image over your main image and compares the pixels of each, performing many, many computations. There's no need to search the image, because you already know where the objects are within it. Essentially, you are trying to determine whether one rectangle (the bounding box of an object) overlaps sufficiently with the slice, and you know the exact coordinates of each rectangle. Thus, it's a geometry problem rather than a computer vision problem.
(As an aside: the correct term for what you are calling a slice would probably be crop; slice generally means taking an N-dimensional array (say 3 x 4 x 5) and selecting an (N-1)-dimensional subset by fixing a single index of one dimension, i.e. taking index 0 on dimension 0 to get a 4 x 5 array.)
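For instance:

import numpy as np

a = np.zeros((3, 4, 5))
a[0].shape         # (4, 5): a slice, one dimension removed
a[0:2, 1:3].shape  # (2, 2, 5): a crop-like sub-array, same number of dimensions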
Here's a brief example of how you might do this. Let x1, x2, y1, y2 be the min and max x and y coordinates of the crop you generate, and ox1, ox2, oy1, oy2 the min and max x and y coordinates of an object:
import random
import cv2

# image (a numpy array) and all_objects (a list of (ox1, ox2, oy1, oy2) boxes)
# are assumed to be defined as in the question

NO_SUCCESSFUL_CROPS = True
while NO_SUCCESSFUL_CROPS:
    # Generate a candidate crop
    x1 = random.randint(0, 1200)
    width = random.randint(0, 800)
    y1 = random.randint(0, 1200)
    height = random.randint(30, 800)
    x2 = x1 + width
    y2 = y1 + height
    # For each bounding box, check whether at least (nominally) 70%
    # of the object lies within the crop
    threshold = 0.7
    for bbox in all_objects:
        ox1, ox2, oy1, oy2 = bbox
        # Compute the fraction of the bbox that falls inside the crop;
        # clamp to zero so non-overlapping boxes don't give negative areas
        minx = max(ox1, x1)
        miny = max(oy1, y1)
        maxx = min(ox2, x2)
        maxy = min(oy2, y2)
        area_in_crop = max(0, maxx - minx) * max(0, maxy - miny)
        area_of_bbox = (ox2 - ox1) * (oy2 - oy1)
        ratio = area_in_crop / area_of_bbox
        if ratio > threshold:
            NO_SUCCESSFUL_CROPS = False
            # Crop the image; numpy indexes rows (y) first, then columns (x)
            crop_image = image[y1:y2, x1:x2]
            cv2.imwrite("output_file.png", crop_image)
            # Shift bbox coords, since (x1, y1) is the new (0, 0) in crop_image
            ox1 -= x1
            ox2 -= x1
            oy1 -= y1
            oy2 -= y1
            break  # no need to continue (although you could keep going until you
                   # have N crops, or even make sure you get one crop per object)

Orienting an Object in an image horizontally

I have images of food trays oriented at various angles, and I would like to make all the trays horizontal. To do this, I tried finding the longest edge of the tray with the Hough transform, calculating its orientation with respect to the image border, and rotating the image accordingly. It works for only a few cases, and I would like to make it work for all the images I have. Can anyone help me with this? I have attached some sample images in the link below, along with the code I am currently using.
Link for images
import math
import numpy as np
import cv2

def Enquiry(lis1):
    return np.array(lis1)

img = cv2.imread('path/to/image')
canny = cv2.Canny(img, 100, 200)
minLineLength = 200
maxLineGap = 10
# pass these as keyword arguments; positionally they would fill the
# optional `lines` output parameter instead
lines = cv2.HoughLinesP(canny, 1, np.pi / 180, 100,
                        minLineLength=minLineLength, maxLineGap=maxLineGap)
if Enquiry(lines).size >= 4:
    lines1 = lines[:, 0, :]
    max_length = 0
    index = 0
    i = 0
    for x1, y1, x2, y2 in lines1:
        length = (x1 - x2) ** 2 + (y1 - y2) ** 2
        if length > max_length:
            max_length = length
            index = i
        i += 1
    x1, y1, x2, y2 = lines1[index]
    degree = math.atan(abs(y1 - y2) / abs(x1 - x2))
    angle = degree * 180 / np.pi
    H, W = img.shape[:2]
    rotation_matrix = cv2.getRotationMatrix2D((W / 2, H / 2), -angle, 1)
    img_rotation = cv2.warpAffine(img, rotation_matrix, (W, H))
    cv2.imwrite('rotated_image.jpg', img_rotation)
Rotation alone will not help: in your test images the tray has some shear and skew too.
What I would suggest is to find the corners of the tray from the intersections of the lines, at least 3 of them. Then find the affine transformation between those corners and the expected actual corners of the tray.
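A minimal sketch of that last step, assuming img from the code above and three detected corners (all coordinates here are placeholders):

import numpy as np
import cv2

# three detected tray corners, ordered to correspond to the target points
src = np.float32([[112, 83], [541, 107], [96, 372]])   # placeholders
# where those corners should sit on a straightened, horizontal tray
dst = np.float32([[50, 50], [550, 50], [50, 350]])     # placeholders

M = cv2.getAffineTransform(src, dst)  # 2x3 matrix: rotation, shear, scale, shift
H, W = img.shape[:2]
straightened = cv2.warpAffine(img, M, (W, H))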

How to remove variations in the y axis of template matching (opencv)

With the function below, the coordinates returned will be slightly different each time, because the template matching finds the same template a few pixels up or down. Increasing the threshold does not help: if it is any higher, nothing is found at all.
How could I make it always return the same coordinates after finding the template, without this little variation in the y axis (I don't care about the precision of the x axis)?
The function, which runs inside an infinite loop:
def fishing_region(img_gray, region_template_gray, w, h):  # w, h: template width and height
    region_detected = False
    coords_list = []  # avoid a NameError when nothing is detected
    green_bar_region = img_gray[y-5:470+y, 347+x:488+x]  # x, y defined elsewhere in the caller
    res = cv2.matchTemplate(img_gray, region_template_gray, cv2.TM_CCOEFF_NORMED)
    threshold = 0.65
    loc = np.where(res >= threshold)
    for pt in zip(*loc[::-1]):
        x1, y1 = pt[0], pt[1]          # top-left corner
        x2, y2 = pt[0] + w, pt[1] + h  # bottom-right corner
        coords_list = [y1, y2, x1 + 55, x2 - 35]
        region_detected = True
        print("Region detected")
        break  # only find the template once per function call
    if not region_detected:
        print("No region")
    return region_detected, coords_list
EDIT: Here is the rectangle drawn with the coordinates, and the template: album.
Also, would masking the template image, removing the parts that change colors, be possible?
If you require only one matching region per image, using the cv2.minMaxLoc() function on res will find the global maximum. This will be stable for a given image and template.
To replicate the threshold you have, you could use the following:
_, maxVal, _, maxLoc = cv2.minMaxLoc(res)
if maxVal > threshold:
    rest_of_function()
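Applied to the function from the question, a minimal sketch (the 0.65 threshold and the +55/-35 offsets are carried over from above):

def fishing_region(img_gray, region_template_gray, w, h):
    res = cv2.matchTemplate(img_gray, region_template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)  # single global best match
    if max_val < 0.65:
        print("No region")
        return False, []
    x1, y1 = max_loc           # top-left corner of the best match
    x2, y2 = x1 + w, y1 + h    # bottom-right corner
    print("Region detected")
    return True, [y1, y2, x1 + 55, x2 - 35]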

Automatically remove hot/dead pixels from an image in python

I am using numpy and scipy to process a number of images taken with a CCD camera. These images have a number of hot (and dead) pixels with very large (or small) values. These interfere with other image processing, so they need to be removed. Unfortunately, though a few of the pixels are stuck at either 0 or 255 and are always at the same value in all of the images, there are some pixels that are temporarily stuck at other values for a period of a few minutes (the data spans many hours).
I am wondering if there is a method for identifying (and removing) the hot pixels already implemented in Python. If not, I am wondering what would be an efficient method for doing so. The hot/dead pixels are relatively easy to identify by comparing them with neighboring pixels. I could see writing a loop that looks at each pixel and compares its value to those of its 8 nearest neighbors. Or, it seems nicer to use some kind of convolution to produce a smoother image and then subtract this from the image containing the hot pixels, making them easier to identify.
I have tried this "blurring method" in the code below, and it works okay, but I doubt that it is the fastest. Also, it gets confused at the edge of the image (probably since the gaussian_filter function is taking a convolution and the convolution gets weird near the edge). So, is there a better way to go about this?
Example code:
import numpy as np
import matplotlib.pyplot as plt
import scipy.ndimage

plt.figure(figsize=(8, 4))
ax1 = plt.subplot(121)
ax2 = plt.subplot(122)

# Make a sample image
x = np.linspace(-5, 5, 200)
X, Y = np.meshgrid(x, x)
Z = 255 * np.cos(np.sqrt(X**2 + Y**2))**2
for i in range(0, 11):
    # Add some hot pixels
    Z[np.random.randint(low=0, high=199), np.random.randint(low=0, high=199)] = np.random.randint(low=200, high=255)
    # and dead pixels
    Z[np.random.randint(low=0, high=199), np.random.randint(low=0, high=199)] = np.random.randint(low=0, high=10)

# Then plot it
ax1.set_title('Raw data with hot pixels')
ax1.imshow(Z, interpolation='nearest', origin='lower')

# Now we try to find the hot pixels
blurred_Z = scipy.ndimage.gaussian_filter(Z, sigma=2)
difference = Z - blurred_Z
ax2.set_title('Difference with hot pixels identified')
ax2.imshow(difference, interpolation='nearest', origin='lower')

threshold = 15
hot_pixels = np.nonzero((difference > threshold) | (difference < -threshold))

# Don't include the hot pixels that we found near the edge:
count = 0
for y, x in zip(hot_pixels[0], hot_pixels[1]):
    if (x != 0) and (x != 199) and (y != 0) and (y != 199):
        ax2.plot(x, y, 'ro')
        count += 1
print('Detected %i hot/dead pixels out of 20.' % count)
ax2.set_xlim(0, 200)
ax2.set_ylim(0, 200)
plt.show()
And the output:
Basically, I think that the fastest way to deal with hot pixels is just to use a size=2 median filter. Then, poof, your hot pixels are gone and you also kill all sorts of other high-frequency sensor noise from your camera.
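For example, in one line (a sketch, reusing the sample data Z from the question):

import scipy.ndimage
cleaned = scipy.ndimage.median_filter(Z, size=2)  # hot/dead pixels and other spikes are gone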
If you really want to remove ONLY the hot pixels, then you can subtract the median-filtered image from the original, as I did with the Gaussian blur in the question, and replace only the outlier values with the values from the median-filtered image. This doesn't work well at the edges, so if you can ignore the pixels along the edge, this will make things a lot easier.
If you want to deal with the edges, you can use the code below. However, it is not the fastest:
import numpy as np
import matplotlib.pyplot as plt
import scipy.ndimage

plt.figure(figsize=(10, 5))
ax1 = plt.subplot(121)
ax2 = plt.subplot(122)

# Make some sample data
x = np.linspace(-5, 5, 200)
X, Y = np.meshgrid(x, x)
Z = 100 * np.cos(np.sqrt(X**2 + Y**2))**2 + 50

np.random.seed(1)
for i in range(0, 11):
    # Add some hot pixels
    Z[np.random.randint(low=0, high=199), np.random.randint(low=0, high=199)] = np.random.randint(low=200, high=255)
    # and dead pixels
    Z[np.random.randint(low=0, high=199), np.random.randint(low=0, high=199)] = np.random.randint(low=0, high=10)

# And some hot pixels in the corners and edges
Z[0, 0] = 255
Z[-1, -1] = 255
Z[-1, 0] = 255
Z[0, -1] = 255
Z[0, 100] = 255
Z[-1, 100] = 255
Z[100, 0] = 255
Z[100, -1] = 255

# Then plot it
ax1.set_title('Raw data with hot pixels')
ax1.imshow(Z, interpolation='nearest', origin='lower')

def find_outlier_pixels(data, tolerance=3, worry_about_edges=True):
    # This function finds the hot or dead pixels in a 2D dataset.
    # tolerance is the number of standard deviations used to cut off the hot pixels.
    # If you want to ignore the edges and greatly speed up the code, set
    # worry_about_edges to False.
    #
    # The function returns a list of hot pixels and an image with the hot pixels removed.
    from scipy.ndimage import median_filter
    blurred = median_filter(data, size=2)
    difference = data - blurred
    threshold = tolerance * np.std(difference)

    # Find the hot pixels, but ignore the edges
    hot_pixels = np.nonzero(np.abs(difference[1:-1, 1:-1]) > threshold)
    hot_pixels = np.array(hot_pixels) + 1  # +1 because we ignored the 1-pixel border

    fixed_image = np.copy(data)  # This is the image with the hot pixels removed
    for y, x in zip(hot_pixels[0], hot_pixels[1]):
        fixed_image[y, x] = blurred[y, x]

    if worry_about_edges:
        height, width = np.shape(data)

        ### Now get the pixels on the edges (but not the corners) ###
        # Left and right sides
        for index in range(1, height - 1):
            # Left side:
            med = np.median(data[index-1:index+2, 0:2])
            diff = np.abs(data[index, 0] - med)
            if diff > threshold:
                hot_pixels = np.hstack((hot_pixels, [[index], [0]]))
                fixed_image[index, 0] = med
            # Right side:
            med = np.median(data[index-1:index+2, -2:])
            diff = np.abs(data[index, -1] - med)
            if diff > threshold:
                hot_pixels = np.hstack((hot_pixels, [[index], [width-1]]))
                fixed_image[index, -1] = med

        # Then the top and bottom
        for index in range(1, width - 1):
            # Bottom:
            med = np.median(data[0:2, index-1:index+2])
            diff = np.abs(data[0, index] - med)
            if diff > threshold:
                hot_pixels = np.hstack((hot_pixels, [[0], [index]]))
                fixed_image[0, index] = med
            # Top:
            med = np.median(data[-2:, index-1:index+2])
            diff = np.abs(data[-1, index] - med)
            if diff > threshold:
                hot_pixels = np.hstack((hot_pixels, [[height-1], [index]]))
                fixed_image[-1, index] = med

        ### Then the corners ###
        # Bottom left
        med = np.median(data[0:2, 0:2])
        diff = np.abs(data[0, 0] - med)
        if diff > threshold:
            hot_pixels = np.hstack((hot_pixels, [[0], [0]]))
            fixed_image[0, 0] = med
        # Bottom right
        med = np.median(data[0:2, -2:])
        diff = np.abs(data[0, -1] - med)
        if diff > threshold:
            hot_pixels = np.hstack((hot_pixels, [[0], [width-1]]))
            fixed_image[0, -1] = med
        # Top left
        med = np.median(data[-2:, 0:2])
        diff = np.abs(data[-1, 0] - med)
        if diff > threshold:
            hot_pixels = np.hstack((hot_pixels, [[height-1], [0]]))
            fixed_image[-1, 0] = med
        # Top right
        med = np.median(data[-2:, -2:])
        diff = np.abs(data[-1, -1] - med)
        if diff > threshold:
            hot_pixels = np.hstack((hot_pixels, [[height-1], [width-1]]))
            fixed_image[-1, -1] = med

    return hot_pixels, fixed_image

hot_pixels, fixed_image = find_outlier_pixels(Z)

for y, x in zip(hot_pixels[0], hot_pixels[1]):
    ax1.plot(x, y, 'ro', mfc='none', mec='r', ms=10)
ax1.set_xlim(0, 200)
ax1.set_ylim(0, 200)

ax2.set_title('Image with hot pixels removed')
ax2.imshow(fixed_image, interpolation='nearest', origin='lower', clim=(0, 255))
plt.show()
Output:
