The Stack Overflow help section states that one can ask about a "software algorithm," so I believe this question is on topic. I'm looking at the following algorithm and I'm having a hard time understanding why it is being done this way. I've explained the mechanics below. The code was pulled from the following GitHub repo.
import numpy as np
import cv2
import sys
def calc_sloop_change(histo, mode, tolerance):
    sloop = 0
    for i in range(0, len(histo)):
        if histo[i] > max(1, tolerance):
            sloop = i
            return sloop
        else:
            sloop = i
def process(inpath, outpath, tolerance):
    original_image = cv2.imread(inpath)
    tolerance = int(tolerance) * 0.01

    #Get properties
    width, height, channels = original_image.shape

    color_image = original_image.copy()

    blue_hist = cv2.calcHist([color_image], [0], None, [256], [0, 256])
    green_hist = cv2.calcHist([color_image], [1], None, [256], [0, 256])
    red_hist = cv2.calcHist([color_image], [2], None, [256], [0, 256])

    blue_mode = blue_hist.max()
    blue_tolerance = np.where(blue_hist == blue_mode)[0][0] * tolerance
    green_mode = green_hist.max()
    green_tolerance = np.where(green_hist == green_mode)[0][0] * tolerance
    red_mode = red_hist.max()
    red_tolerance = np.where(red_hist == red_mode)[0][0] * tolerance

    sloop_blue = calc_sloop_change(blue_hist, blue_mode, blue_tolerance)
    sloop_green = calc_sloop_change(green_hist, green_mode, green_tolerance)
    sloop_red = calc_sloop_change(red_hist, red_mode, red_tolerance)

    gray_image = cv2.cvtColor(original_image, cv2.COLOR_BGR2GRAY)
    gray_hist = cv2.calcHist([original_image], [0], None, [256], [0, 256])
    largest_gray = gray_hist.max()
    threshold_gray = np.where(gray_hist == largest_gray)[0][0]

    #Red cells
    gray_image = cv2.adaptiveThreshold(gray_image, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 85, 4)
    _, contours, hierarchy = cv2.findContours(gray_image, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    c2 = [i for i in contours if cv2.boundingRect(i)[3] > 15]
    cv2.drawContours(color_image, c2, -1, (0, 0, 255), 1)
    cp = [cv2.approxPolyDP(i, 0.015 * cv2.arcLength(i, True), True) for i in c2]

    countRedCells = len(c2)

    for c in cp:
        xc, yc, wc, hc = cv2.boundingRect(c)
        cv2.rectangle(color_image, (xc, yc), (xc + wc, yc + hc), (0, 255, 0), 1)

    #Malaria cells
    gray_image = cv2.inRange(original_image, np.array([sloop_blue, sloop_green, sloop_red]), np.array([255, 255, 255]))
    _, contours, hierarchy = cv2.findContours(gray_image, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    c2 = [i for i in contours if cv2.boundingRect(i)[3] > 8]
    cv2.drawContours(color_image, c2, -1, (0, 0, 0), 1)
    cp = [cv2.approxPolyDP(i, 0.15 * cv2.arcLength(i, True), True) for i in c2]

    countMalaria = len(c2)

    for c in cp:
        xc, yc, wc, hc = cv2.boundingRect(c)
        cv2.rectangle(color_image, (xc, yc), (xc + wc, yc + hc), (0, 0, 0), 1)

    #Write image
    cv2.imwrite(outpath, color_image)

    #Write statistics
    with open(outpath + '.stats', mode='w') as f:
        f.write(str(countRedCells) + '\n')
        f.write(str(countMalaria) + '\n')
The above code looks at images of cells (irregular shapes) and identifies whether there are black spots/blobs inside them. Then it draws contours around the cells and blobs. For example:
I don't understand why the algorithm works the following way:
Let me illustrate with an example:
Let's say the tolerance passed into process() is 50, and blue_hist comes back as the array [1, 2, 3, 4, 100, 0, ..., 0], whose largest value is 100 at index 4. This indicates that there are 100 pixels with an intensity of 4 in the blue channel of the color image. In this situation, np.where(blue_hist == blue_mode)[0][0] returns 4. This value is multiplied by 0.01 * tolerance, giving us 2.
So, if the value 4 is a pixel intensity, then multiplying it by a scalar only gives another pixel-intensity-like value (in our case, 4 * (0.01 * 50) = 2). This new value is passed into calc_sloop_change() as tolerance. That function compares histo[i], which is the number of pixels at intensity i, with tolerance (the value we just calculated). So in our case, the first count greater than max(1, 2) = 2 occurs at i = 2 (histo[2] = 3), and that index is returned.
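To make this concrete, here is a minimal sketch that reproduces the toy numbers above (it assumes the calc_sloop_change function from the code in the question is in scope; the histogram is made up):
import numpy as np

# Toy histogram mirroring the example: counts 1, 2, 3, 4, 100 at intensities 0-4.
blue_hist = np.array([1, 2, 3, 4, 100] + [0] * 251, dtype=np.float32)

tolerance = int(50) * 0.01                                            # process() scales 50 -> 0.5
blue_mode = blue_hist.max()                                           # 100
blue_tolerance = np.where(blue_hist == blue_mode)[0][0] * tolerance   # 4 * 0.5 = 2.0

# First index whose count exceeds max(1, 2.0): histo[2] = 3, so this prints 2.
print(calc_sloop_change(blue_hist, blue_mode, blue_tolerance))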
This is where I'm confused. Why is this being done? It seems illogical to compare a number of pixels with a pixel intensity; they are not even the same kind of quantity. So why is this algorithm used? I must add that this code actually performs really well, so something must be right.
Lastly, the three values calculated by calc_sloop_change(), one for each color channel, act as lower cutoffs to produce a binary image: anything below those values (which are actually pixel intensities) becomes black, and everything above them becomes white.
Related
I'm learning OpenCV and I'm looking for Python code that takes input coordinates of a small image and maps them to coordinates of a large image, so that the small image is inserted into the large image and can be transformed, e.g. rotated. I want to use a translation matrix as input to do that. For example, if the matrix is:
([75, 120][210,320],
[30, 90][190,305],
[56, 102][250,474],
[110, 98][330,520])
it means that the pixel at (75, 120) in the small image should map to the pixel at (210, 320) in the large image, the pixel at (30, 90) in the small image should map to the pixel at (190, 305) in the large image, and so on.
I searched a lot but couldn't find a proper answer to my problem.
How can I solve this problem?
Insert small image into the large one:
import sys
import cv2
import numpy as np
dir = sys.path[0]
small = cv2.imread(dir+'/small.png')
big = cv2.imread(dir+'/big.png')
x, y = 20, 20
h, w = small.shape[:2]
big[y:y+h, x:x+w] = small
cv2.imwrite(dir+'/out.png', big)
Resize and then insert:
h, w = small.shape[:2]
small=cv2.resize(small,(w//2,h//2))
x, y = 20, 20
h, w = small.shape[:2]
big[y:y+h, x:x+w] = small
Insert part of image:
x, y = 20, 20
h, w = small.shape[:2]
hh, ww = h//2, w//2
big[y:y+hh, x:x+ww] = small[0:hh, 0:ww]
Rotating sample:
bH, bW = big.shape[:2]
sH, sW = small.shape[:2]
ch, cw = sH//2, sW//2
x, y = sW-cw//2, ch
empty = 0 * np.ones((bH, bW, 3), dtype=np.uint8)
empty[y:y+sH, x:x+sW] = small
M = cv2.getRotationMatrix2D(center=(x+cw, y+ch), angle=45, scale=1)
rotated = cv2.warpAffine(empty, M, (bW, bH))
big[np.where(rotated != 0)] = rotated[np.where(rotated != 0)]
Perspective transform sample:
bH, bW = big.shape[:2]
sH, sW = small.shape[:2]
x, y = 0, 0
empty = 0 * np.ones((bH, bW, 3), dtype=np.uint8)
empty[y:y+sH, x:x+sW] = small
_inp = np.float32([[0, 0], [sW, 0], [bW, sH], [0, sH]])
_out = np.float32([[bW//2-sW//2, 0], [bW//2+sW//2, 0], [bW, bH], [0, bH]])
M = cv2.getPerspectiveTransform(_inp, _out)
transformed = cv2.warpPerspective(empty, M, (bW, bH))
big[np.where(transformed != 0)] = transformed[np.where(transformed != 0)]
And finally, for mapping coordinates, I think you just need to fill _out:
bH, bW = big.shape[:2]
sH, sW = small.shape[:2]
empty = 0 * np.ones((bH, bW, 3), dtype=np.uint8)
empty[:sH, :sW] = small
# Coordinates: TopLeft, TopRight, BottomRight, BottomLeft
_inp = np.float32([[0, 0], [sW, 0], [sW, sH], [0, sH]])
_out = np.float32([[50, 40], [300, 40], [200, 200], [10, 240]])
M = cv2.getPerspectiveTransform(_inp, _out)
transformed = cv2.warpPerspective(empty, M, (bW, bH))
big[np.where(transformed != 0)] = transformed[np.where(transformed != 0)]
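If you also want to check where an individual small-image pixel ends up, cv2.perspectiveTransform can map point coordinates through the same matrix M (the points below are just the example pairs from the question, taken as (x, y); this is only a rough sketch):
# Map individual small-image points through the matrix M built above.
pts = np.float32([[75, 120], [30, 90], [56, 102], [110, 98]]).reshape(-1, 1, 2)
mapped = cv2.perspectiveTransform(pts, M)
print(mapped.reshape(-1, 2))   # where each point lands in the big image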
I don't know of a matrix operation that maps pixels to pixels anywhere, and because images are usually represented by 2D arrays, there isn't a general way to make these pixels point to the same data.
But given that these images are represented by NumPy arrays, you can use advanced indexing to copy any pixels from one array to another:
# smallimage is a NumPy array
# bigimage is a NumPy array
### Indices ###
# I formatted it so the matching indices
# between the 2 images line up in a column
bigD1 = [210, 190, 250, 330] # dimension 0
bigD2 = [320, 305, 474, 520] # dimension 1
smallD1 = [75, 30, 56, 110]
smallD2 = [120, 90, 102, 98]
### copy pixels from small image to big image ###
# on right side of =, advanced indexing copies
# the selected pixels to a new temporary array
# v
bigimage[bigD1, bigD2] = smallimage[smallD1, smallD2]
# ^
# on left side of =, advanced indexing specifies
# where we copy the temporary array's pixels to.
# smallimage is unchanged
# bigimage has edited pixels
I have created an algorithm that detects the edges of an extruded collagen casing and draws a centerline between these edges on an image. Casing with a centerline:
Here is my code:
import numpy as np
import cv2
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
img = cv2.imread("C:/Users/5.jpg", cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (1500, 1200))
#ROI
fromCenter = False
r = cv2.selectROI(img, fromCenter)
imCrop = img[int(r[1]):int(r[1]+r[3]), int(r[0]):int(r[0]+r[2])]
#Operations on an image
_,thresh = cv2.threshold(imCrop,100,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
kernel = np.ones((5,5),np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
blur = cv2.GaussianBlur(opening,(7,7),0)
edges = cv2.Canny(blur, 0,20)
#Edges localization, packing coords into a list
indices = np.where(edges != [0])
coordinates = list(zip(indices[1], indices[0]))
num = len(coordinates)
#Separating into top and bot edge
bot_cor = coordinates[:int(num/2)]
top_cor = coordinates[-int(num/2):]
#Converting to arrays, sorting
a, b = np.array(top_cor), np.array(bot_cor)
a, b = a[a[:,0].argsort()], b[b[:,0].argsort()]
#Edges approximation by a 5th degree polynomial
min_a_x, max_a_x = np.min(a[:,0]), np.max(a[:,0])
new_a_x = np.linspace(min_a_x, max_a_x, imCrop.shape[1])
a_coefs = np.polyfit(a[:,0],a[:,1], 5)
new_a_y = np.polyval(a_coefs, new_a_x)
min_b_x, max_b_x = np.min(b[:,0]), np.max(b[:,0])
new_b_x = np.linspace(min_b_x, max_b_x, imCrop.shape[1])
b_coefs = np.polyfit(b[:,0],b[:,1], 5)
new_b_y = np.polyval(b_coefs, new_b_x)
#Defining a centerline
midx = [np.average([new_a_x[i], new_b_x[i]], axis = 0) for i in range(imCrop.shape[1])]
midy = [np.average([new_a_y[i], new_b_y[i]], axis = 0) for i in range(imCrop.shape[1])]
plt.figure(figsize=(16,8))
plt.title('Cross section')
plt.xlabel('Length of the casing', fontsize=18)
plt.ylabel('Width of the casing', fontsize=18)
plt.plot(new_a_x, new_a_y,c='black')
plt.plot(new_b_x, new_b_y,c='black')
plt.plot(midx, midy, '-', c='blue')
plt.show()
#Converting coords type to a list (plotting purposes)
coords = list(zip(midx, midy))
points = list(np.int_(coords))
mask = np.zeros((imCrop.shape[:2]), np.uint8)
mask = edges
#Plotting
for point in points:
    cv2.circle(mask, tuple(point), 1, (255,255,255), -1)

for point in points:
    cv2.circle(imCrop, tuple(point), 1, (255,255,255), -1)
cv2.imshow('imCrop', imCrop)
cv2.imshow('mask', mask)
cv2.waitKey(0)
cv2.destroyAllWindows()
Now I would like to sum up the intensities of the pixels in the region between the top edge and the centerline (and do the same for the region between the centerline and the bottom edge).
Is there any way to limit the ROI to the region between the detected edges and split it into two regions based on the calculated centerline?
Or is there any way to access the pixels contained between an edge and the centerline based on their coordinates?
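To illustrate what I mean, here is a rough, untested sketch of the kind of thing I imagine, reusing new_a_x, new_a_y, midx, midy and imCrop from my code above: close a polygon between the fitted top edge and the centerline with cv2.fillPoly and sum the masked pixels (the bottom region would be analogous).
# Hypothetical sketch: polygon bounded by the fitted top edge and the centerline.
top_poly = np.array(list(zip(new_a_x, new_a_y)) + list(zip(midx, midy))[::-1],
                    dtype=np.int32)
top_mask = np.zeros(imCrop.shape[:2], dtype=np.uint8)
cv2.fillPoly(top_mask, [top_poly], 255)

# Sum of grayscale intensities inside the upper region.
top_sum = cv2.sumElems(cv2.bitwise_and(imCrop, imCrop, mask=top_mask))[0]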
(It's my very first post here, sorry in advance for all the mistakes)
I wrote somewhat naïve code to get masks for the upper and lower parts. My code assumes that the source image will always look like yours, with horizontal stripes.
After applying Canny I get this:
Then I run some loops through the image array to fill the unwanted areas of the image. This is done separately for the upper and lower parts, creating the masks. The results are:
Then you can use these masks to sum only the elements you're interested in, using cv.sumElems.
import cv2 as cv
#open as grayscale image
src = cv.imread("colagen.png",cv.IMREAD_GRAYSCALE)
# apply canny and find contours
threshold = 100
canny_output = cv.Canny(src, threshold, threshold * 2)
# find mask for upper part
mask1 = canny_output.copy()
x, y = canny_output.shape
area = 0
for j in range(y):
    area = 0
    for i in range(x):
        if area == 0:
            if mask1[i][j] > 0:
                area = 1
                continue
            else:
                mask1[i][j] = 255
        elif area == 1:
            if mask1[i][j] > 0:
                area = 2
            else:
                continue
        else:
            mask1[i][j] = 255
mask1 = cv.bitwise_not(mask1)
# find mask for lower part
mask2 = canny_output.copy()
x, y = canny_output.shape
area = 0
for j in range(y):
    area = 0
    for i in range(x):
        if area == 0:
            if mask2[-i][j] > 0:
                area = 1
                continue
            else:
                mask2[-i][j] = 255
        elif area == 1:
            if mask2[-i][j] > 0:
                area = 2
            else:
                continue
        else:
            mask2[-i][j] = 255
mask2 = cv.bitwise_not(mask2)
# apply masks and calculate sum of elements in upper and lower part
sums = [0,0]
(sums[0],_,_,_) = cv.sumElems(cv.bitwise_and(src,mask1))
(sums[1],_,_,_) = cv.sumElems(cv.bitwise_and(src,mask2))
cv.imshow('src',src)
cv.imshow('canny',canny_output)
cv.imshow('mask1',mask1)
cv.imshow('mask2',mask2)
cv.imshow('masked1',cv.bitwise_and(src,mask1))
cv.imshow('masked2',cv.bitwise_and(src,mask2))
cv.waitKey()
Alternatives...
Probably there exists some function that fills the areas of the Canny result. I tried cv.fillPoly and cv.floodFill, but didn't manage to make them work easily... but maybe someone else can help you with that.
Edit
Found another way to get the masks with cleaner code, using numpy's np.add.accumulate, then np.clip, and then a modulo operation:
# first divide canny_output by 255 to get 0's and 1's, then perform
# an accumulate addition for each column. Thus you'll get +1 for every
# line, "painting" areas with 1, 2, 3...
a = np.add.accumulate(canny_output/255,0)
# clip values: anything greater than 2 becomes 2
a = np.clip(a, 0, 2)
# perform a modulo to get areas alternating between 0 and 1; then multiply by 255
a = a%2 * 255
# convert to uint8
mask1 = cv.convertScaleAbs(a)
# to get mask2 (the lower mask) flip the array then do the same as above
a = np.add.accumulate(np.flip(canny_output,0)/255,0)
a = np.clip(a, 0, 2)
a = a%2 * 255
mask2 = cv.convertScaleAbs(np.flip(a,0))
This returns almost the same result. The border of the mask is a little bit different...
Edit: Quick summary so far: I use the watershed algorithm but probably have a problem with the threshold. It didn't detect the brighter circles.
New: fast radial symmetry transform approach, which didn't quite work either (Edit 6).
I want to detect circles of different sizes. The use case is to detect coins in an image and to extract each of them separately -> get the single coins as single image files.
For this I used the Hough Circle Transform from OpenCV:
(https://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html)
import sys
import cv2 as cv
import numpy as np
def main(argv):
    ## [load]
    default_file = "data/newcommon_1euro.jpg"
    filename = argv[0] if len(argv) > 0 else default_file

    # Loads an image
    src = cv.imread(filename, cv.IMREAD_COLOR)

    # Check if image is loaded fine
    if src is None:
        print ('Error opening image!')
        print ('Usage: hough_circle.py [image_name -- default ' + default_file + '] \n')
        return -1
    ## [load]

    ## [convert_to_gray]
    # Convert it to gray
    gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
    ## [convert_to_gray]

    ## [reduce_noise]
    # Reduce the noise to avoid false circle detection
    gray = cv.medianBlur(gray, 5)
    ## [reduce_noise]

    ## [houghcircles]
    rows = gray.shape[0]
    circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, rows / 8,
                              param1=100, param2=30,
                              minRadius=0, maxRadius=120)
    ## [houghcircles]

    ## [draw]
    if circles is not None:
        circles = np.uint16(np.around(circles))
        for i in circles[0, :]:
            center = (i[0], i[1])
            # circle center
            cv.circle(src, center, 1, (0, 100, 100), 3)
            # circle outline
            radius = i[2]
            cv.circle(src, center, radius, (255, 0, 255), 3)
    ## [draw]

    ## [display]
    cv.imshow("detected circles", src)
    cv.waitKey(0)
    ## [display]

    return 0


if __name__ == "__main__":
    main(sys.argv[1:])
I tried tuning all the parameters (rows, param1, param2, minRadius, and maxRadius) to optimize the results. This worked very well for one specific image, but other images with differently sized coins didn't work.
Examples:
Parameters
circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, rows / 16,
                          param1=100, param2=30,
                          minRadius=0, maxRadius=120)
With the same parameters:
Changed to rows/8
I also tried two other approaches from this thread: writing robust (color and size invariant) circle detection with opencv (based on Hough transform or other features).
The approach of fireant leads to this result:
The approach of fraxel didn't work either.
For the first approach: This happens with all different sizes and also the min and max radius.
How could I change the code so that the coin size doesn't matter, or so that it finds the parameters itself?
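To illustrate the direction I'm thinking of (a purely untested sketch; the 1/20 and 1/4 fractions are guesses, not tuned values), something like deriving the radius search range from the image size instead of hard-coding it:
# Untested sketch: let the radius limits scale with the image instead of fixing them.
rows = gray.shape[0]
min_r, max_r = rows // 20, rows // 4
circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, 2 * min_r,
                          param1=100, param2=30,
                          minRadius=min_r, maxRadius=max_r)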
Thank you in advance for any help!
Edit:
I tried the watershed algorithm from OpenCV, as suggested by Alexander Reynolds: https://docs.opencv.org/3.4/d3/db4/tutorial_py_watershed.html
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
img = cv.imread('data/P1190263.jpg')
gray = cv.cvtColor(img,cv.COLOR_BGR2GRAY)
ret, thresh = cv.threshold(gray,0,255,cv.THRESH_BINARY_INV+cv.THRESH_OTSU)
# noise removal
kernel = np.ones((3,3),np.uint8)
opening = cv.morphologyEx(thresh,cv.MORPH_OPEN,kernel, iterations = 2)
# sure background area
sure_bg = cv.dilate(opening,kernel,iterations=3)
# Finding sure foreground area
dist_transform = cv.distanceTransform(opening,cv.DIST_L2,5)
ret, sure_fg = cv.threshold(dist_transform,0.7*dist_transform.max(),255,0)
# Finding unknown region
sure_fg = np.uint8(sure_fg)
unknown = cv.subtract(sure_bg,sure_fg)
# Marker labelling
ret, markers = cv.connectedComponents(sure_fg)
# Add one to all labels so that sure background is not 0, but 1
markers = markers+1
# Now, mark the region of unknown with zero
markers[unknown==255] = 0
markers = cv.watershed(img,markers)
img[markers == -1] = [255,0,0]
#Display:
cv.imshow("detected circles", img)
cv.waitKey(0)
It works very well on the test image from the OpenCV website:
But it performs very badly on my own images:
I can't really think of a good reason why it's not working on my images.
Edit 2:
As suggested, I looked at the intermediate images. The thresh does not look good in my opinion. Next, there is no difference between opening and dist_transform. The corresponding sure_fg shows what was detected.
thresh:
opening:
dist_transform:
sure_bg:
sure_fg:
Edit 3:
I tried all the distanceTypes and maskSizes I could find, but the results were pretty much the same (https://www.tutorialspoint.com/opencv/opencv_distance_transformation.htm).
Edit 4:
Furthermore, I tried changing the (first) threshold function. I used different fixed threshold values instead of the Otsu function. The best one was 160, but it was far from good:
In the tutorial it looks like this:
It seems like the coins are somehow too bright to be detected by this algorithm, but I don't know how to improve it.
Edit 5:
Changing the overall contrast and brightness of the image (with cv.convertScaleAbs) didn't improve the results. Increasing the contrast should, however, increase the "difference" between foreground and background, at least in the normal image. But it even got worse. The corresponding threshold image didn't improve (it didn't get more white pixels).
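For reference, this is the kind of call I mean (alpha scales contrast, beta shifts brightness; the specific values below are just placeholders, not the ones I tested):
# Placeholder values; alpha > 1 raises contrast, beta > 0 raises brightness.
adjusted = cv.convertScaleAbs(img, alpha=1.5, beta=30)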
Edit 6: I tried another approach, the fast radial symmetry transform (from here https://github.com/ceilab/frst_python)
import cv2
import numpy as np
def gradx(img):
    img = img.astype('int')
    rows, cols = img.shape
    # Use hstack to add back in the columns that were dropped as zeros
    return np.hstack((np.zeros((rows, 1)), (img[:, 2:] - img[:, :-2]) / 2.0, np.zeros((rows, 1))))


def grady(img):
    img = img.astype('int')
    rows, cols = img.shape
    # Use vstack to add back the rows that were dropped as zeros
    return np.vstack((np.zeros((1, cols)), (img[2:, :] - img[:-2, :]) / 2.0, np.zeros((1, cols))))


# Performs fast radial symmetry transform
# img: input image, grayscale
# radii: integer value for radius size in pixels (n in the original paper); also used to size gaussian kernel
# alpha: Strictness of symmetry transform (higher=more strict; 2 is good place to start)
# beta: gradient threshold parameter, float in [0,1]
# stdFactor: Standard deviation factor for gaussian kernel
# mode: BRIGHT, DARK, or BOTH
def frst(img, radii, alpha, beta, stdFactor, mode='BOTH'):
    mode = mode.upper()
    assert mode in ['BRIGHT', 'DARK', 'BOTH']
    dark = (mode == 'DARK' or mode == 'BOTH')
    bright = (mode == 'BRIGHT' or mode == 'BOTH')

    workingDims = tuple((e + 2 * radii) for e in img.shape)

    # Set up output and M and O working matrices
    output = np.zeros(img.shape, np.uint8)
    O_n = np.zeros(workingDims, np.int16)
    M_n = np.zeros(workingDims, np.int16)

    # Calculate gradients
    gx = gradx(img)
    gy = grady(img)

    # Find gradient vector magnitude
    gnorms = np.sqrt(np.add(np.multiply(gx, gx), np.multiply(gy, gy)))

    # Use beta to set threshold - speeds up transform significantly
    gthresh = np.amax(gnorms) * beta

    # Find x/y distance to affected pixels
    gpx = np.multiply(np.divide(gx, gnorms, out=np.zeros(gx.shape), where=gnorms != 0),
                      radii).round().astype(int)
    gpy = np.multiply(np.divide(gy, gnorms, out=np.zeros(gy.shape), where=gnorms != 0),
                      radii).round().astype(int)

    # Iterate over all pixels (w/ gradient above threshold)
    for coords, gnorm in np.ndenumerate(gnorms):
        if gnorm > gthresh:
            i, j = coords
            # Positively affected pixel
            if bright:
                ppve = (i + gpx[i, j], j + gpy[i, j])
                O_n[ppve] += 1
                M_n[ppve] += gnorm
            # Negatively affected pixel
            if dark:
                pnve = (i - gpx[i, j], j - gpy[i, j])
                O_n[pnve] -= 1
                M_n[pnve] -= gnorm

    # Abs and normalize O matrix
    O_n = np.abs(O_n)
    O_n = O_n / float(np.amax(O_n))

    # Normalize M matrix
    M_max = float(np.amax(np.abs(M_n)))
    M_n = M_n / M_max

    # Elementwise multiplication
    F_n = np.multiply(np.power(O_n, alpha), M_n)

    # Gaussian blur
    kSize = int(np.ceil(radii / 2))
    kSize = kSize + 1 if kSize % 2 == 0 else kSize

    S = cv2.GaussianBlur(F_n, (kSize, kSize), int(radii * stdFactor))

    return S
img = cv2.imread('data/P1190263.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
result = frst(gray, 60, 2, 0, 1, mode='BOTH')
cv2.imshow("detected circles", result)
cv2.waitKey(0)
I only get this nearly black output (it has some very dark grey shadows). I don't know what to change and would be thankful for help!
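(One thing I still need to rule out, just a guess and not a confirmed cause: frst() returns a float array, and cv2.imshow maps float images from the [0, 1] range, so small or negative responses show up as an almost black window. A minimal sketch of normalizing the response before displaying it:)
# Rescale the float FRST response to 0-255 purely for visualization.
vis = cv2.normalize(result, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imshow("frst response (normalized)", vis)
cv2.waitKey(0)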
A project I have been working on for some time is unsupervised leaf segmentation. The leaves are captured on white or colored paper, and some of them have shadows.
I want to be able to threshold the leaf and also remove the shadow (while preserving the leaf's details); however, I cannot use fixed threshold values because diseases change the color of the leaf.
Then I began to research and found a proposal by Horprasert et al. (1999) in "A Statistical Approach for Real-time Robust Background Subtraction and Shadow Detection", which compares areas in the image with the colour of the now-known background using the chromaticity distortion measure. This measure takes into account the fact that for desaturated colours, hue is not a relevant measure.
Based on it, I was able to achieve the following results:
However, the leaves captured on white paper require changing the cv2.bitwise_not() applied to Mask V, giving me the result below:
I'm thinking that I'm forgetting some step to get a complete mask that will work for all or most of my leaves. Samples can be found here.
My Code:
import numpy as np
import cv2
import matplotlib.pyplot as plot
import scipy.ndimage as ndimage
def brightness_distortion(I, mu, sigma):
    return np.sum(I*mu/sigma**2, axis=-1) / np.sum((mu/sigma)**2, axis=-1)


def chromacity_distortion(I, mu, sigma):
    alpha = brightness_distortion(I, mu, sigma)[...,None]
    return np.sqrt(np.sum(((I - alpha * mu)/sigma)**2, axis=-1))


def bwareafilt ( image ):
    image = image.astype(np.uint8)
    nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(image, connectivity=4)
    sizes = stats[:, -1]
    max_label = 1
    max_size = sizes[1]
    for i in range(2, nb_components):
        if sizes[i] > max_size:
            max_label = i
            max_size = sizes[i]
    img2 = np.zeros(output.shape)
    img2[output == max_label] = 255
    return img2
img = cv2.imread("Amostra03.jpeg")
sat = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:,:,1]
val = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:,:,2]
sat = cv2.medianBlur(sat, 11)
val = cv2.medianBlur(val, 11)
thresh_S = cv2.adaptiveThreshold(sat , 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 401, 10);
thresh_V = cv2.adaptiveThreshold(val , 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 401, 10);
mean_S, stdev_S = cv2.meanStdDev(img, mask = 255 - thresh_S)
mean_S = mean_S.ravel().flatten()
stdev_S = stdev_S.ravel()
chrom_S = chromacity_distortion(img, mean_S, stdev_S)
chrom255_S = cv2.normalize(chrom_S, chrom_S, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX).astype(np.uint8)[:,:,None]
mean_V, stdev_V = cv2.meanStdDev(img, mask = 255 - thresh_V)
mean_V = mean_V.ravel().flatten()
stdev_V = stdev_V.ravel()
chrom_V = chromacity_distortion(img, mean_V, stdev_V)
chrom255_V = cv2.normalize(chrom_V, chrom_V, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX).astype(np.uint8)[:,:,None]
thresh2_S = cv2.adaptiveThreshold(chrom255_S , 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 401, 10);
thresh2_V = cv2.adaptiveThreshold(chrom255_V , 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 401, 10);
images = [img, thresh_S, thresh_V, cv2.bitwise_and(thresh2_S, cv2.bitwise_not(thresh2_V))]
titles = ['Original Image', 'Mask S', 'Mask V', 'S + V']
for i in range(4):
    plot.subplot(2,2,i+1),
    if i == 0 :
        plot.imshow(images[i])
    else :
        plot.imshow(images[i], cmap='gray')
    plot.title(titles[i])
    plot.xticks([]),plot.yticks([])
plot.show()
Any ideas on how to solve this issue?
Try this... I'm using grabCut from the OpenCV library. It's not perfect, but it might be a good start.
import cv2
import numpy as np
from matplotlib import pyplot as plt
import matplotlib
#%matplotlib inline #uncomment if in notebook
def mask_leaf(im_name, external_mask=None):
    im = cv2.imread(im_name)
    im = cv2.blur(im, (5,5))
    height, width = im.shape[:2]
    mask = np.ones(im.shape[:2], dtype=np.uint8) * 2 #start all possible background
    '''
    #from docs:
    0 GC_BGD defines an obvious background pixels.
    1 GC_FGD defines an obvious foreground (object) pixel.
    2 GC_PR_BGD defines a possible background pixel.
    3 GC_PR_FGD defines a possible foreground pixel.
    '''
    #2 circles are "drawn" on mask. a smaller centered one I assume all pixels are definite foreground. a bigger circle, probably foreground.
    r = 100
    cv2.circle(mask, (int(width/2.), int(height/2.)), 2*r, 3, -3) #possible fg
    #next 2 are greens...dark and bright to increase the number of fg pixels.
    mask[(im[:,:,0] < 45) & (im[:,:,1] > 55) & (im[:,:,2] < 55)] = 1 #dark green
    mask[(im[:,:,0] < 190) & (im[:,:,1] > 190) & (im[:,:,2] < 200)] = 1 #bright green
    mask[(im[:,:,0] > 200) & (im[:,:,1] > 200) & (im[:,:,2] > 200) & (mask != 1)] = 0 #pretty white
    cv2.circle(mask, (int(width/2.), int(height/2.)), r, 1, -3) #fg
    #if you pass in an external mask derived from some other operation it is factored in here.
    if external_mask is not None:
        mask[external_mask == 1] = 1

    bgdmodel = np.zeros((1,65), np.float64)
    fgdmodel = np.zeros((1,65), np.float64)

    cv2.grabCut(im, mask, None, bgdmodel, fgdmodel, 1, cv2.GC_INIT_WITH_MASK)

    #show mask
    plt.figure(figsize=(10,10))
    plt.imshow(mask)
    plt.show()

    #mask image
    mask2 = np.where((mask==1) + (mask==3), 255, 0).astype('uint8')
    output = cv2.bitwise_and(im, im, mask=mask2)
    plt.figure(figsize=(10,10))
    plt.imshow(output)
    plt.show()
mask_leaf('leaf1.jpg', external_mask=None)
mask_leaf('leaf2.jpg', external_mask=None)
Addressing the external mask: here's an example of HDBSCAN clustering... I'm not going to go into the details; you can look up the docs and change it or use it as-is.
import hdbscan
from collections import Counter
def hdbscan_mask(im_name):
    im = cv2.imread(im_name)
    im = cv2.blur(im, (5,5))

    indices = np.dstack(np.indices(im.shape[:2]))
    data = np.concatenate((indices, im), axis=-1)
    data = data[:, :, 2:]  # keep only the colour channels
    data = data.reshape(im.shape[0]*im.shape[1], 3)

    clusterer = hdbscan.HDBSCAN(min_cluster_size=1000, min_samples=20)
    clusterer.fit(data)

    plt.figure(figsize=(10,10))
    plt.imshow(clusterer.labels_.reshape(im.shape[0:2]))
    plt.show()

    height, width = im.shape[:2]
    mask = np.ones(im.shape[:2], dtype=np.uint8) * 2 #start all possible background
    cv2.circle(mask, (int(width/2.), int(height/2.)), 100, 1, -3) #possible fg

    #grab cluster number for circle
    vals_im = clusterer.labels_.reshape(im.shape[0:2])
    vals = vals_im[mask == 1]
    commonvals = []
    cnts = Counter(vals)
    for v, count in cnts.most_common(20):
        #print '%i: %7d' % (v, count)
        if v == -1:
            continue
        commonvals.append(v)
    tst = np.in1d(vals_im, np.array(commonvals))
    tst = tst.reshape(vals_im.shape)
    hmask = tst.astype(np.uint8)

    plt.figure(figsize=(10,10))
    plt.imshow(hmask)
    plt.show()

    return hmask
hmask = hdbscan_mask('leaf1.jpg')
Then, to use the initial function with the new mask (output suppressed):
mask_leaf('leaf1.jpg', external_mask=hmask)
This was all made in a notebook from scratch, so hopefully there are no errant variables that choke it up when running it somewhere else. (Note: I did NOT swap BGR to RGB for plt display, sorry.)
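For what it's worth, if the colours look swapped in the matplotlib figures, the standard one-line fix is to convert before plotting (output here stands in for any of the BGR images above):
# Convert OpenCV's BGR channel order to RGB before handing the image to matplotlib.
plt.imshow(cv2.cvtColor(output, cv2.COLOR_BGR2RGB))
plt.show()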
I want to convert a 3-channel RGB image to an index image with Python. It's used for handling the labels when training a deep net for semantic segmentation. By index image I mean an image with one channel, where each pixel is an index, which should start at zero. Of course, the two images should have the same size. The conversion is based on the following mapping in a Python dict:
color2index = {
    (255, 255, 255) : 0,
    (0, 0, 255) : 1,
    (0, 255, 255) : 2,
    (0, 255, 0) : 3,
    (255, 255, 0) : 4,
    (255, 0, 0) : 5
}
I've implemented a naive function:
def im2index(im):
    """
    turn a 3 channel RGB image to 1 channel index image
    """
    assert len(im.shape) == 3
    height, width, ch = im.shape
    assert ch == 3
    m_lable = np.zeros((height, width, 1), dtype=np.uint8)
    for w in range(width):
        for h in range(height):
            b, g, r = im[h, w, :]
            m_lable[h, w, :] = color2index[(r, g, b)]
    return m_lable
The input im is a numpy array created by cv2.imread(). However, this code is really slow.
Since im is a numpy array, I first tried numpy's ufunc with something like this:
RGB2index = np.frompyfunc(lambda x: color2index(tuple(x)))
indices = RGB2index(im)
But it turns out that the ufunc takes only one element at a time. I was unable to give the function three arguments (an RGB value) at once.
So is there any other ways to do the optimization?
The mapping doesn't have to be done that way if a more efficient data structure exists. I noticed that accessing a Python dict does not cost much time, but casting from a numpy array to a tuple (which is hashable) does.
PS:
One idea I got is to implement a kernel in CUDA. But it would be more complicated.
UPDATE 1:
Dan Mašek's answer works fine, but first we have to convert the RGB image to grayscale. It could be problematic when two colors have the same grayscale value.
I paste the working code here. Hope it could help others.
lut = np.ones(256, dtype=np.uint8) * 255
lut[[255,29,179,150,226,76]] = np.arange(6, dtype=np.uint8)
im_out = cv2.LUT(cv2.cvtColor(im, cv2.COLOR_BGR2GRAY), lut)
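To guard against that, a small sanity check (my own addition) that converts the palette colours to grayscale the same way cv2.cvtColor would and verifies that no two of them collide, using the same RGB keys as color2index:
import numpy as np
import cv2

# Hypothetical check: each palette colour must map to a distinct grayscale value.
palette = np.array(list(color2index.keys()), dtype=np.uint8).reshape(-1, 1, 3)
gray_vals = cv2.cvtColor(palette, cv2.COLOR_RGB2GRAY).ravel()
assert len(set(gray_vals.tolist())) == len(gray_vals), "two colours share a grayscale value"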
What about this?
color2index = {
    (255, 255, 255) : 0,
    (0, 0, 255) : 1,
    (0, 255, 255) : 2,
    (0, 255, 0) : 3,
    (255, 255, 0) : 4,
    (255, 0, 0) : 5
}
def rgb2mask(img):
    assert len(img.shape) == 3
    height, width, ch = img.shape
    assert ch == 3

    W = np.power(256, [[0],[1],[2]])

    img_id = img.dot(W).squeeze(-1)
    values = np.unique(img_id)

    mask = np.zeros(img_id.shape)

    for i, c in enumerate(values):
        try:
            mask[img_id==c] = color2index[tuple(img[img_id==c][0])]
        except:
            pass
    return mask
Then just call:
mask = rgb2mask(img)
Here's a small utility function to convert images (np.array) to per-pixel labels (indices), which can also be a one-hot encoding:
def rgb2label(img, color_codes = None, one_hot_encode=False):
    if color_codes is None:
        color_codes = {val:i for i,val in enumerate(set( tuple(v) for m2d in img for v in m2d ))}
    n_labels = len(color_codes)
    result = np.ndarray(shape=img.shape[:2], dtype=int)
    result[:,:] = -1
    for rgb, idx in color_codes.items():
        result[(img==rgb).all(2)] = idx

    if one_hot_encode:
        one_hot_labels = np.zeros((img.shape[0],img.shape[1],n_labels))
        # one-hot encoding
        for c in range(n_labels):
            one_hot_labels[: , : , c ] = (result == c ).astype(int)
        result = one_hot_labels

    return result, color_codes


img = cv2.imread("input_rgb_for_labels.png")
img_labels, color_codes = rgb2label(img)
print(color_codes) # e.g. to see what the codebook is

img1 = cv2.imread("another_rgb_for_labels.png")
img1_labels, _ = rgb2label(img1, color_codes) # use the same codebook
It calculates (and returns) the color codebook if None is supplied.
Actually, the for-loop takes a lot of time.
binary_mask = (im_array[:,:,0] == 255) & (im_array[:,:,1] == 255) & (im_array[:,:,2] == 0)
Maybe the above code can help you.
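If im_array is in RGB order, the mask above selects the colour (255, 255, 0), which maps to index 4 in the question's dict, so you could fill an index image one colour at a time (rough sketch):
import numpy as np

# Hypothetical continuation: write the matching index wherever the mask is True.
index_img = np.zeros(im_array.shape[:2], dtype=np.uint8)
index_img[binary_mask] = 4   # 4 == color2index[(255, 255, 0)]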
I've implemented a naive function: …
I firstly tried the ufunc of numpy with something like this: …
I suggest using an even more naive function which converts just one pixel:
def rgb2index(rgb):
    """
    turn a 3 channel RGB color to 1 channel index color
    """
    return color2index[tuple(rgb)]
Then using a numpy routine is a good idea, but we don't need a ufunc:
np.apply_along_axis(rgb2index, 2, im)
Here numpy.apply_along_axis() is used to apply our rgb2index() function to the RGB slices along the last of the three axes (0, 1, 2) for the whole image im.
We could even do without the function and just write:
np.apply_along_axis(lambda rgb: color2index[tuple(rgb)], 2, im)
This is similar to what Armali and Mendrika proposed, but I had to tweak it a little to get it to work (maybe totally my fault), so I just wanted to share a snippet that works.
COLORS = np.array([
[0, 0, 0],
[0, 0, 255],
[255, 0, 0]
])
W = np.power(255, [0, 1, 2])
HASHES = np.sum(W * COLORS, axis=-1)
HASH2COLOR = {h : c for h, c in zip(HASHES, COLORS)}
HASH2IDX = {h: i for i, h in enumerate(HASHES)}
def rgb2index(segmentation_rgb):
    """
    turn a 3 channel RGB color to 1 channel index color
    """
    s_shape = segmentation_rgb.shape
    s_hashes = np.sum(W * segmentation_rgb, axis=-1)
    func = lambda x: HASH2IDX[int(x)]
    segmentation_idx = np.apply_along_axis(func, 0, s_hashes.reshape((1, -1)))
    segmentation_idx = segmentation_idx.reshape(s_shape[:2])
    return segmentation_idx
segmentation = np.array([[0, 0, 0], [0, 0, 255], [255, 0, 0]] * 3).reshape((3, 3, 3))
rgb2index(segmentation)
Example plot
The code is also available here:
https://github.com/theRealSuperMario/supermariopy/blob/dev/scripts/rgb2labels.py
Did you check the Pillow library (https://python-pillow.org/)? As I remember, it has some classes and methods to deal with color conversion. See: https://pillow.readthedocs.io/en/4.0.x/reference/Image.html#PIL.Image.Image.convert
If you are happy using MATLAB - maybe saving the result as *.mat and loading with scipy.io.loadmat - there is the rgb2ind function in MATLAB, which does exactly what you are asking for. If not, it could be used as inspiration for a similar implementation in Python.