Detecting circles in an image using Laplacian - Python

I'm trying to detect the disk and cup at the back of an eye (fundus) to calculate certain things later on. So here is an image of the eye:
I'm just trying to detect the disk, or the larger yellow-ish circle on the right side of the image, and the cup, or the smaller yellow circle inside that first circle, using OpenCV and python so I can eventually perform certain calculations.
So far, I've tried to use Laplacian filtering to make the circle more prominent.
Here is my code:
import cv2
import numpy as np
from matplotlib import pyplot as plt
# loading image
img0 = cv2.imread('01_g.jpg')
# converting to grayscale
gray = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
# remove noise
img = cv2.GaussianBlur(gray, (3, 3), 0)
# convolve with derivative kernels
laplacian = cv2.Laplacian(img, cv2.CV_64F)
sobelx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=5)  # x gradient
sobely = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=5)  # y gradient
magnitude = np.sqrt(sobelx**2 + sobely**2)  # gradient magnitude
plt.subplot(2,2,1),plt.imshow(img,cmap = 'gray')
plt.title('Original'), plt.xticks([]), plt.yticks([])
plt.subplot(2,2,2),plt.imshow(laplacian,cmap = 'gray')
plt.title('Laplacian'), plt.xticks([]), plt.yticks([])
plt.subplot(2,2,3),plt.imshow(sobelx,cmap = 'gray')
plt.title('Sobel X'), plt.xticks([]), plt.yticks([])
plt.subplot(2,2,4),plt.imshow(sobely,cmap = 'gray')
plt.title('Sobel Y'), plt.xticks([]), plt.yticks([])
plt.show()
This is the result I got:
As you can see, the Laplacian filter didn't help at all; in fact, the cup and disk aren't even visible. The Sobel X and Sobel Y outputs at least gave some outline of the outer circle (disk).
I have also tried the absolute value of the Laplacian:
final = np.absolute(laplacian)
plt.imshow(final, cmap = 'gray')
plt.show()
and I got this result:
I have also tried applying the Difference of Gaussians method using this code:
#difference of gaussians
blur1 = cv2.GaussianBlur(img,(3,3),1)
blur2 = cv2.GaussianBlur(img,(5,5),1.1)
difference = cv2.subtract(blur2, blur1)  # a plain blur2 - blur1 would wrap around on uint8
plt.imshow(difference, cmap = 'gray')
plt.show()
But this also doesn't get me anywhere. I would really appreciate some help on how I might go about detecting the cup and disk in this image.

EDIT:
The MSER approach shown further below fails to find a circular blob, though it highlights the region. So I tried Difference of Gaussians (DoG) for blob detection, and it gives good results. You can experiment with the Gaussian kernel sizes and their sigmas. Note that I downsampled the image and removed the vessel structures by dilation before applying the DoG. Thresholding the DoG image gives you the blobs.
I also noticed that the region you are interested in is the global maximum of the image (this may not hold for a different image). Maybe you can combine this knowledge into your algorithm as well.
DoG
Thresholded DoG
Global Max
Code (C++) for the DoG approach
Mat im = imread("8Lzuq.jpg", 0);
// downsample three times
Mat dw;
pyrDown(im, dw);
pyrDown(dw, dw);
pyrDown(dw, dw);
// dilate to remove the vessel structures
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(7, 7));
morphologyEx(dw, dw, CV_MOP_DILATE, kernel);
// difference of Gaussians
Mat g1, g2, dog, bw;
GaussianBlur(dw, g1, Size(31, 31), 21, 21);
GaussianBlur(dw, g2, Size(65, 65), 31, 31);
dog = g1 - g2;
normalize(dog, dog, 0, 255, NORM_MINMAX);
// threshold the DoG to get the blobs
threshold(dog, bw, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
// mark the global maximum of the dilated image
Point mx;
minMaxLoc(dw, NULL, NULL, NULL, &mx);
circle(dw, mx, 20, Scalar(255, 255, 255), 2);
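Since the question uses Python, here is a rough Python equivalent of the DoG pipeline above (a sketch, untested against the original image; kernel sizes and sigmas mirror the C++ values):
import cv2
import numpy as np

im = cv2.imread('8Lzuq.jpg', cv2.IMREAD_GRAYSCALE)
# downsample three times and dilate away the vessel structures
for _ in range(3):
    im = cv2.pyrDown(im)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
im = cv2.morphologyEx(im, cv2.MORPH_DILATE, kernel)
# difference of Gaussians, computed in float to avoid uint8 wrap-around
g1 = cv2.GaussianBlur(im, (31, 31), 21).astype(np.float32)
g2 = cv2.GaussianBlur(im, (65, 65), 31).astype(np.float32)
dog = cv2.normalize(g1 - g2, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
# Otsu threshold gives the blobs
_, bw = cv2.threshold(dog, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
# the disk is also roughly the global maximum of the dilated image
_, _, _, mx = cv2.minMaxLoc(im)
cv2.circle(im, mx, 20, 255, 2)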
MSER approach
I tried downsampling the color image, dilating it, then detecting MSERs in individual channels. The result looks good though it doesn't outline the disk as a perfect circle.
Blue channel:
Green channel:
Red channel:
Detecting MSERs in the color image didn't work well.
Code in C++
Mat im = imread("8Lzuq.jpg");
// downsample three times
Mat dw;
pyrDown(im, dw);
pyrDown(dw, dw);
pyrDown(dw, dw);
// dilate to suppress the vessel structures
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(7, 7));
morphologyEx(dw, dw, CV_MOP_DILATE, kernel);
// detect MSERs in a single channel (here the red channel)
Mat ch[3];
split(dw, ch);
MSER mser;
vector<vector<Point>> regions;
mser(ch[2], regions);
// count, per pixel, how many regions contain it
Mat regionsMat = Mat::zeros(dw.rows, dw.cols, CV_8U);
for (size_t i = 0; i < regions.size(); i++)
{
    for (Point pt: regions[i])
    {
        uchar& val = regionsMat.at<uchar>(pt);
        if (val > 0)
        {
            val += 1;
        }
        else
        {
            val = 1;
        }
    }
}
imwrite("reg.jpg", regionsMat*50);

Related

Image segmentation using python on google colab

I'm trying to segment the food parts of this plate into three different images using just opencv/ python libraries instead of deep learning techniques.
This is the main food image
I tried applying a mask to the BGR form of the image that we see when using OpenCV.
import cv2
import numpy as np
from matplotlib import pyplot as plt

# orig is the BGR image loaded with cv2.imread
lower_blue = np.array([10, 50, 100])
upper_blue = np.array([110, 225, 225])
mask = cv2.inRange(orig, lower_blue, upper_blue)
result = cv2.bitwise_and(orig, orig, mask=mask)
plt.imshow(result)
I got this image, where most of the plate is black and only the food is highlighted.
I then converted it back into RGB.
I thought of taking the contours from the previous image.
lowerthresh = np.array([60,60,60])
higherthresh = np.array([250,250,250])
mask = cv2.inRange(convertedimage,lowerthresh,higherthresh)
mask.shape
Contours of the image: here
My main goal is to get the three food items into three different images.
Like 1 for example,
and 2
Basically to get cropped images of all the food.
But now it seems as though I didn't choose the right approach to this problem.
Is there any other way to solve this?
Update: I used the hue channel in the HSV colorspace to mask the background of the image.
from skimage.color import rgb2hsv
# rgb is the RGB version of the original image
sample_h = rgb2hsv(rgb)
fig, ax = plt.subplots(1, 3, figsize=(15,5))
ax[0].imshow(sample_h[:,:,0], cmap='hsv')
ax[0].set_title('Hue',fontsize=15)
ax[1].imshow(sample_h[:,:,1], cmap='hsv')
ax[1].set_title('Saturation',fontsize=15)
ax[2].imshow(sample_h[:,:,2], cmap='hsv')
ax[2].set_title('Value',fontsize=15);
plt.show()
Image
fig, ax = plt.subplots(1,3,figsize=(15,5))
im = ax[0].imshow(sample_h[:,:,0],cmap='hsv')
fig.colorbar(im,ax=ax[0])
ax[0].set_title('Hue Graph',fontsize=15)
lower_mask = sample_h[:,:,0] > 0
upper_mask = sample_h[:,:,0] < 0.15
mask = upper_mask*lower_mask
red = rgb[:,:,0]*mask
green = rgb[:,:,1]*mask
blue = rgb[:,:,2]*mask
mask2 = np.dstack((red,green,blue))
ax[1].imshow(mask)
ax[2].imshow(mask2)
ax[1].set_title('Mask',fontsize=15)
ax[2].set_title('Final Image',fontsize=15)
plt.tight_layout()
plt.show()
Image
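To get from that hue mask to the three separate crops you're after, one option is to find contours on the mask and save the bounding box of each large one. A sketch (assuming OpenCV 4.x-style findContours, the rgb and mask arrays from the update above, and an area cutoff you would tune):
import cv2
import numpy as np

mask_u8 = (mask * 255).astype(np.uint8)  # boolean hue mask -> 8-bit binary image
contours, _ = cv2.findContours(mask_u8, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
food_items = [c for c in contours if cv2.contourArea(c) > 5000]  # drop small specks
for i, cnt in enumerate(food_items):
    x, y, w, h = cv2.boundingRect(cnt)
    crop = rgb[y:y + h, x:x + w]
    cv2.imwrite('food_{}.png'.format(i), cv2.cvtColor(crop, cv2.COLOR_RGB2BGR))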

How to group and highlight group of pixels in an image using OpenCV? [closed]

During the process of error level analysis on an image, I want to highlight the pixel changes using OpenCV (with just a single image, not the difference). I know the pixel-level values for the output image, but I'm not sure about the methods to group them together and assign a shape to them (see the example below, where the pixel change is marked with a shape). I want to know if I can detect the circle of lighter pixels, group them, and add a grouped shape around them.
Input Image:
Result Image:
If I understand correctly, you want to highlight the differences between the input and output images in a new image. To do this, you can take a quantitative approach to determine the exact discrepancies between images using the Structural Similarity Index (SSIM), introduced in Image Quality Assessment: From Error Visibility to Structural Similarity. This method is already implemented in the scikit-image library for image processing. You can install scikit-image with pip install scikit-image.
The skimage.measure.compare_ssim() function returns a score and a diff image. The score represents the structural similarity index between the two input images and falls in the range [-1, 1], with values closer to one representing higher similarity. But since you're only interested in where the two images differ, the diff image is what we'll focus on. Specifically, the diff image contains the actual image differences, with darker regions having more disparity. Larger areas of disparity are highlighted in black, while smaller differences are in gray. Here's the diff image:
If you look closely, there are gray noisy areas, probably due to .jpg lossy compression. To obtain a cleaner result, we perform morphological operations to smooth the image. We would obtain an even cleaner result if the images used a lossless compression format such as .png. After cleaning up the image, we highlight the differences in green:
from skimage.measure import compare_ssim
import numpy as np
import cv2
# Load images and convert to grayscale
image1 = cv2.imread('1.jpg')
image2 = cv2.imread('2.jpg')
image1_gray = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
image2_gray = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
# Compute SSIM between two images
(score, diff) = compare_ssim(image1_gray, image2_gray, full=True)
# The diff image contains the actual image differences between the two images
# and is represented as a floating point data type in the range [0,1]
# so we must convert the array to 8-bit unsigned integers in the range
# [0,255] before we can use it with OpenCV
diff = 255 - (diff * 255).astype("uint8")
cv2.imwrite('original_diff.png',diff)
# Perform morphological operations
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
opening = cv2.morphologyEx(diff, cv2.MORPH_OPEN, kernel, iterations=1)
close = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel, iterations=1)
diff = cv2.merge([close,close,close])
# Color difference pixels
diff[np.where((diff > [10,10,50]).all(axis=2))] = [36,255,12]
cv2.imwrite('diff.png',diff)
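If, as asked, you want a grouped shape around each changed region rather than per-pixel coloring, a possible follow-up (a sketch, not part of the original answer; the threshold and area cutoff are assumptions to tune) is to find contours on the cleaned-up diff and draw enclosing circles:
# continuing from the code above: outline each difference region with a circle
_, regions_bw = cv2.threshold(close, 40, 255, cv2.THRESH_BINARY)
cnts = cv2.findContours(regions_bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]  # OpenCV 3/4 compatibility
for c in cnts:
    if cv2.contourArea(c) > 100:  # skip tiny noise specks
        (cx, cy), radius = cv2.minEnclosingCircle(c)
        cv2.circle(image1, (int(cx), int(cy)), int(radius), (36, 255, 12), 2)
cv2.imwrite('highlighted.png', image1)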
I think the best way is to simply threshold your image and apply morphological transformations.
I have got the following results.
Threshold + Morphological:
Select the largest component:
using this code:
cv::Mat result;
cv::Mat img = cv::imread("fOTmh.jpg");
//-- gray & smooth image
cv::cvtColor(img, result, cv::COLOR_BGR2GRAY);
cv::blur(result, result, cv::Size(5,5));
//-- threshold with max value of the image and smooth again!
double min, max;
cv::minMaxLoc(result, &min, &max);
cv::threshold(result, result, 0.3*max, 255, cv::THRESH_BINARY);
cv::medianBlur(result, result, 7);
//-- apply Morphological Transformations
cv::Mat se = getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(11, 11));
cv::morphologyEx(result, result, cv::MORPH_DILATE, se);
cv::morphologyEx(result, result, cv::MORPH_CLOSE, se);
//-- find the largest component
vector<vector<cv::Point> > contours;
vector<cv::Vec4i> hierarchy;
cv::findContours(result, contours, hierarchy, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);
vector<cv::Point> *l = nullptr;
for(auto &&c: contours){
if (l==nullptr || l->size()< c.size())
l = &c;
}
//-- expand and plot Rect around the largest component
cv::Rect r = boundingRect(*l);
r.x -=10;
r.y -=10;
r.width +=20;
r.height +=20;
cv::rectangle(img, r, cv::Scalar::all(255), 3);
//-- result
cv::resize(img, img, cv::Size(), 0.25, 0.25);
cv::imshow("result", img);
Python code:
import cv2 as cv

img = cv.imread("ELA_Final.jpg")
# gray & smooth image
result = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
result = cv.blur(result, (5, 5))
# threshold with max value of the image and smooth again
minVal, maxVal, minLoc, maxLoc = cv.minMaxLoc(result)
ret, result = cv.threshold(result, 0.3 * maxVal, 255, cv.THRESH_BINARY)
result = cv.medianBlur(result, 7)  # keep the smoothed image, as in the C++ version
# apply morphological transformations
se = cv.getStructuringElement(cv.MORPH_ELLIPSE, (11, 11))
result = cv.morphologyEx(result, cv.MORPH_DILATE, se)
result = cv.morphologyEx(result, cv.MORPH_CLOSE, se)
# find the largest component by number of contour points, as in the C++ version
_, contours, hierarchy = cv.findContours(result, cv.RETR_LIST, cv.CHAIN_APPROX_NONE)
sizes = [len(c) for c in contours]
largest = contours[sizes.index(max(sizes))]
# expand and plot a rectangle around the largest component
x, y, w, h = cv.boundingRect(largest)
x -= 10
y -= 10
w += 20
h += 20
cv.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 3)
img = cv.resize(img, (1500, 700), interpolation=cv.INTER_AREA)
cv.imshow("result", img)
cv.waitKey(0)

Bulk removing unwanted parts of images

I have downloaded a number of images (1000) from a website, but they each have a black and white ruler running along 1 or 2 edges, and some have these catalogue number tickets. I need these elements removed, the ruler at the very least.
Example images of coins:
The images all have the ruler in slightly different places, so I can't just perform the same crop on them.
So I tried to remove the black and replace it with white using this code:
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
im = Image.open('image-0.jpg')
im = im.convert('RGBA')
data = np.array(im) # "data" is a height x width x 4 numpy array
red, green, blue, alpha = data.T # Temporarily unpack the bands for readability
# Replace black with white
black_areas = (red < 150) & (blue < 150) & (green < 150)
data[..., :-1][black_areas.T] = (255, 255, 255) # Transpose back needed
im2 = Image.fromarray(data)
im2.show()
but it pretty much just removed half the coin as well:
I was having a read of some posts on OpenCV, but thought I'd first see if there was a simpler way I'd missed.
So I have taken a look at your problem and found a solution for the two images you provided. I hope it works for your other images as well, but that is always hard to tell, as things can differ on an individual basis. This solution uses OpenCV for preprocessing and contour detection to get the 2nd and 3rd largest elements in your picture (the largest is the bounding box around the edges), which should be your coins. Then I create a box around those two items, add some padding, and crop to size.
So we start off with preprocessing:
import numpy as np
import cv2
img = cv2.imread(r'<PATH TO YOUR IMAGE>')
img = cv2.resize(img, None, fx=3, fy=3)
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(imgray, (5, 5), 0)
ret, thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
Still rather basic: we make the image bigger so it is easier to detect contours, then we turn it into grayscale, blur it, and apply thresholding so that all grey values become either white or black. This gives us the following image:
We now do contour detection, compute the area of each contour, and sort the contours by area, largest first. Then we drop the largest one, as it is the box around the whole image, and take the 2nd and 3rd largest. Finally we get the x, y, w, h values we are interested in.
contours, hierarchy = cv2.findContours(
    thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
areas = []
for cnt in contours:
    area = cv2.contourArea(cnt)
    areas.append((area, cnt))
areas.sort(key=lambda x: x[0], reverse=True)
areas.pop(0)
x, y, w, h = cv2.boundingRect(areas[0][1])
x2, y2, w2, h2 = cv2.boundingRect(areas[1][1])
If we draw a rectangle around those contours:
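For reference, that visualization is just two cv2.rectangle calls on the resized image:
# draw both detected bounding boxes for inspection (green, 2 px)
cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.rectangle(img, (x2, y2), (x2 + w2, y2 + h2), (0, 255, 0), 2)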
Now we take those coordinates and create a box around both of them. This might need some minor adjusting, as I just took the bigger width of the two rather than the corresponding one for the right coin, but since I added extra padding it should be fine in most cases. And finally crop to size:
pad = 15
img = img[(min(y, y2) - pad) : (max(y, y2) + max(h, h2) + pad),
(min(x, x2) - pad) : (max(x, x2) + max(w, w2) + pad)]
I hope this helps you understand how you could achieve what you want. I tried it on both your images and it worked well for them. It might need some adjustments, and depending on how your other images look, the simple approach of taking the two biggest objects (apart from the image bounding box) might have to be turned into something more sophisticated that detects the circular shapes, or something along those lines. Alternatively, you could try to detect the rulers and crop from their position inwards. You will have to decide after trying this on more example images from your dataset.
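Since you have around 1000 images, you would then wrap these steps in a loop over the files. A minimal sketch (the folder names are placeholders and the error handling is deliberately simple):
import glob
import os
import cv2

os.makedirs('cropped', exist_ok=True)
for path in glob.glob('coins/*.jpg'):  # hypothetical input folder
    img = cv2.imread(path)
    img = cv2.resize(img, None, fx=3, fy=3)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    ret, thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    # sort by area and keep the 2nd and 3rd largest (the 1st is the image border box)
    by_area = sorted(contours, key=cv2.contourArea, reverse=True)[1:3]
    if len(by_area) < 2:
        continue  # leave odd images for manual inspection
    x, y, w, h = cv2.boundingRect(by_area[0])
    x2, y2, w2, h2 = cv2.boundingRect(by_area[1])
    pad = 15
    y0 = max(min(y, y2) - pad, 0)
    x0 = max(min(x, x2) - pad, 0)
    crop = img[y0:max(y, y2) + max(h, h2) + pad,
               x0:max(x, x2) + max(w, w2) + pad]
    cv2.imwrite(os.path.join('cropped', os.path.basename(path)), crop)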
If you're looking for a robust solution, you should try something like Max Kaha's response, since it'll provide you with greater fine tuning.
Since the rulers tend to be left with just a little bit of text after your "black to white" filter, a quick solution is to use erosion followed by a dilation to create a mask for your images, and then apply the mask to the original image.
Pillow offers that with the ImageFilter class. Here's your code with a few modifications that'll achieve that:
from PIL import Image, ImageFilter
import numpy as np
import matplotlib.pyplot as plt
WHITE = 255, 255, 255
input_image = Image.open('image.png')
input_image = input_image.convert('RGBA')
input_data = np.array(input_image) # "data" is a height x width x 4 numpy array
red, green, blue, alpha = input_data.T # Temporarily unpack the bands for readability
# Replace black with white
thresh = 30
black_areas = (red < thresh) & (blue < thresh) & (green < thresh)
input_data[..., :-1][black_areas.T] = WHITE # Transpose back needed
erosion_factor = 5
# dilation is bigger to avoid cropping the objects of interest
dilation_factor = 11
erosion_filter = ImageFilter.MaxFilter(erosion_factor)
dilation_filter = ImageFilter.MinFilter(dilation_factor)
eroded = Image.fromarray(input_data).filter(erosion_filter)
dilated = eroded.filter(dilation_filter)
mask_threshold = 220
# the mask is black on regions to be hidden
mask = dilated.convert('L').point(lambda x: 255 if x < mask_threshold else 0)
# create base image
output_image = Image.new('RGBA', input_image.size, WHITE)
# paste only the desired regions
output_image.paste(input_image, mask=mask)
output_image.show()
You should also play around with the black to white threshold and the erosion/dilation factors to try and find the best fit for most of your images.

How can I find the center of the pattern and the distribution of a color around it on python with opencv/skimage?

I have an image that I want to process. I'm using OpenCV and skimage. My goal is to find the distribution of the red dots around the barycenter of all the dots. I proceed as follows: first I select the color, and then I binarize the image that I obtain. Eventually, I would just count the red pixels that lie on rings of a certain width around that barycenter, in order to get an average distribution with respect to the radius, assuming cylindrical symmetry.
My issue is that I have no idea how to find the position of the barycenter.
I would also like to know if there is a short way to count the red pixels in the rings.
Here is my code :
import cv2
import matplotlib.pyplot as plt
from skimage import io, filters, measure, color, external
I'm uploading the image:
sph = cv2.imread('image_sper.jpg')
sph = cv2.cvtColor(sph, cv2.COLOR_BGR2RGB)
plt.imshow(sph)
plt.show()
I want to select the red color. Following https://realpython.com/python-opencv-color-spaces/, I'm converting the image to HSV and using a mask.
hsv_sph = cv2.cvtColor(sph, cv2.COLOR_RGB2HSV)
light_red = (1, 100, 100)
dark_red = (18, 255, 255)
mask = cv2.inRange(hsv_sph, light_red, dark_red)
result = cv2.bitwise_and(sph, sph, mask=mask)
And here is the result :
plt.imshow(result)
plt.show()
Now I'm binarizing the image, since it'll be easier to process it afterwards.
red_image = result[:,:,1]
red_th = filters.threshold_otsu(red_image)
red_mask = red_image > red_th
red_mask.dtype
io.imshow(red_mask)
And here we are :
What I would like now is some help to find the barycenter of the white pixels.
Thx
Edit: The binarization gives the image boolean values (False/True) for the pixels. I don't know how to transform them into 0/1 values. If False were 0 and True 1, code to find the barycenter would be:
np.shape(red_mask)
# (321L, 316L)
bari = 0
barj = 0
N = 0
for i in range(321):
    for j in range(316):
        bari = bari + red_mask[i,j]*i
        barj = barj + red_mask[i,j]*j
        N = N + red_mask[i,j]
bari = bari/N
barj = barj/N
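As an aside, NumPy booleans already behave as 0 and 1 in arithmetic, so the loop above works as written. It can also be collapsed into one vectorized expression (a sketch using the red_mask from above):
import numpy as np

# each row of np.argwhere(red_mask) is an (i, j) coordinate of a True pixel,
# so the column-wise mean is exactly the barycenter the loop computes
bari, barj = np.argwhere(red_mask).mean(axis=0)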
Another question that should have been asked here: http://answers.opencv.org/questions/
But, let's go!
The process that I have implemented uses mostly structural analysis (https://docs.opencv.org/3.3.1/d3/dc0/group__imgproc__shape.html#ga17ed9f5d79ae97bd4c7cf18403e1689a)
First I got your image:
import cv2
import matplotlib.pyplot as plt
import numpy as np
from skimage import io, filters, measure, color, external
sph = cv2.imread('points.png')
ret,thresh = cv2.threshold(sph,200,255,cv2.THRESH_BINARY)
Then I applied a morphological opening for noise reduction and converted the result:
kernel = np.ones((2,2),np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
opening = cv2.cvtColor(opening, cv2.COLOR_BGR2GRAY);
opening = cv2.convertScaleAbs(opening)
Then used "cv::findContours (InputOutputArray image, OutputArrayOfArrays contours, OutputArray hierarchy, int mode, int method, Point offset=Point())" to find all blobs.
After that, just calculate the center of each region and do a weighted average based on the contour area. This way, I got the points centroid (X:143.4202820443726 , Y:154.56471750651224).
im2, contours, hierarchy = cv2.findContours(opening, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
areas = []
centersX = []
centersY = []
for cnt in contours:
    areas.append(cv2.contourArea(cnt))
    M = cv2.moments(cnt)
    centersX.append(int(M["m10"] / M["m00"]))
    centersY.append(int(M["m01"] / M["m00"]))
full_areas = np.sum(areas)
acc_X = 0
acc_Y = 0
for i in range(len(areas)):
    acc_X += centersX[i] * (areas[i]/full_areas)
    acc_Y += centersY[i] * (areas[i]/full_areas)
print(acc_X, acc_Y)
cv2.circle(sph, (int(acc_X), int(acc_Y)), 5, (255, 0, 0), -1)
plt.imshow(sph)
plt.show()
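For the second part of the question, counting red pixels in rings around the barycenter, a short vectorized sketch (an addition on top of the answer above, using red_mask from the question and the centroid just computed; the ring width is a parameter to tune):
import numpy as np

ring_width = 10  # pixels per ring, tune as needed
ys, xs = np.nonzero(red_mask)  # coordinates of the red pixels
radii = np.sqrt((xs - acc_X) ** 2 + (ys - acc_Y) ** 2)
bins = np.arange(0, radii.max() + ring_width, ring_width)
counts, edges = np.histogram(radii, bins=bins)
# counts[k] is the number of red pixels whose distance from the barycenter
# falls in the ring [edges[k], edges[k+1])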

Detect mishapen blobs in python using OpenCV

I would like to detect two blobs in the following image:
Original:
I want to have the inside detected like this:
I also want the outside circle detected:
But I'm applying OpenCV's simple blob detection right now and it is not giving me the desired results. This is my code:
import cv2
import numpy as np
from PIL import Image

# image is the grayscale input loaded beforehand, e.g. with cv2.imread(..., 0)
# Set up the detector with default parameters.
detector = cv2.SimpleBlobDetector()
# Detect blobs.
keypoints = detector.detect(image)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of the blob
im_with_keypoints = cv2.drawKeypoints(image, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show keypoints
final = Image.fromarray(im_with_keypoints)
final.show()
But this is what the blob detector detects:
Hough circle detection in OpenCV also doesn't correctly identify the two shapes.
Update:
I've also tried ellipse fitting, but instead of detecting either of the blobs, it detects some random line in the image. Here is the code I used for ellipse fitting.
ret, thresh = cv2.threshold(image, 127, 255, 0)
contours, hierarchy = cv2.findContours(thresh, 1, 2)
cnt = contours[0]
M = cv2.moments(cnt)
print(M)
cx = int(M['m10']/M['m00'])
cy = int(M['m01']/M['m00'])
ellipse = cv2.fitEllipse(cnt)
cv2.ellipse(image, ellipse, (0,255,0), 2)  # draw on the source image (im2 was undefined)
final = Image.fromarray(image)
final.show()
Any help in detecting these blobs is appreciated.
For detecting the inner blob, you can also try clustering and MSERs, because the region looks flat. I downsampled a cropped version of your image and applied these techniques.
Downsampled image
Here I use kmeans with 10 clusters. The drawback is that you have to specify the number of clusters.
Here I use MSER. It is more robust.
The code is in C++. Note that you have to scale the outputs to see the details.
Mat im = imread("2L6hP.png", 0);
Mat dw;
pyrDown(im, dw);
// kmeans with 10 clusters
int k = 10;
Mat rgb32fc, lbl;
dw.convertTo(rgb32fc, CV_32F);
int imsize[] = {rgb32fc.rows, rgb32fc.cols};
Mat color = rgb32fc.reshape(1, rgb32fc.rows*rgb32fc.cols);
kmeans(color, k, lbl, TermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 10, 1.0), 2, KMEANS_PP_CENTERS);
Mat lbl2d = lbl.reshape(1, 2, imsize);
Mat lbldisp; // clustered result
lbl2d.convertTo(lbldisp, CV_8U, 1);
// MSER
MSER mser;
vector<vector<Point>> regions;
mser(dw, regions);
Mat regionsMat = Mat::zeros(dw.rows, dw.cols, CV_8U); // MSER result
for (size_t i = 0; i < regions.size(); i++)
{
    for (Point pt: regions[i])
    {
        uchar& val = regionsMat.at<uchar>(pt);
        if (val > 0)
        {
            val += 1;
        }
        else
        {
            val = 1;
        }
    }
}
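For completeness, a rough Python translation of the same two steps might look like this (a sketch, assuming OpenCV's Python bindings with cv2.kmeans and cv2.MSER_create; parameters mirror the C++ version):
import cv2
import numpy as np

im = cv2.imread('2L6hP.png', cv2.IMREAD_GRAYSCALE)
dw = cv2.pyrDown(im)
# kmeans with 10 clusters on the raw intensities
samples = dw.reshape(-1, 1).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
compactness, labels, centers = cv2.kmeans(samples, 10, None, criteria, 2, cv2.KMEANS_PP_CENTERS)
lbldisp = labels.reshape(dw.shape).astype(np.uint8)  # clustered result; scale up to view
# MSER
mser = cv2.MSER_create()
regions, _ = mser.detectRegions(dw)
regions_mat = np.zeros(dw.shape, dtype=np.uint8)  # MSER result
for region in regions:
    regions_mat[region[:, 1], region[:, 0]] += 1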
