I'm using the OpenCV library for Python to detect the circles in an image. As a test case, I'm using the following image:
bottom of can:
I've written the following code, which should display the image before detection, then display the image with the detected circles added:
import cv2
import numpy as np
image = cv2.imread('can.png')
image_rgb = image.copy()
image_copy = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
grayscaled_image = cv2.cvtColor(image_copy, cv2.COLOR_GRAY2BGR)
cv2.imshow("confirm", grayscaled_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
circles = cv2.HoughCircles(image_copy, cv2.HOUGH_GRADIENT, 1.3, 20, param1=60, param2=33, minRadius=10,maxRadius=28)
if circles is not None:
    print("FOUND CIRCLES")
    circles = np.round(circles[0, :]).astype("int")
    print(circles)
    for (x, y, r) in circles:
        cv2.circle(image, (x, y), r, (255, 0, 0), 4)
        cv2.rectangle(image, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
cv2.imshow("Test", image + image_rgb)
cv2.waitKey(0)
cv2.destroyAllWindows()
I get this resultant image:
I feel that my problem lies in the usage of the HoughCircles() function. Its usage is:
cv2.HoughCircles(image, method, dp, minDist[, circles[, param1[, param2[, minRadius[, maxRadius]]]]])
where minDist is a value greater than 0 that requires detected circles to be a certain distance from one another. With this requirement, it would be impossible for me to properly detect all of the circles on the bottom of the can, as the circles are concentric and share the same center. Would contours be a solution? How can I convert contours to circles so that I may use the coordinates of their center points? What should I do to best detect the circle objects for each ring in the bottom of the can?
Not all, but a majority of the circles can be detected by adaptively thresholding the image, finding the contours, and then fitting a minimum enclosing circle to contours whose area is greater than a threshold.
import cv2
import numpy as np
block_size,constant_c ,min_cnt_area = 9,1,400
img = cv2.imread('viMmP.png')
img_gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(img_gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,block_size,constant_c)
thresh_copy = thresh.copy()
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    if cv2.contourArea(cnt) > min_cnt_area:
        (x, y), radius = cv2.minEnclosingCircle(cnt)
        center = (int(x), int(y))
        radius = int(radius)
        cv2.circle(img, center, radius, (255, 0, 0), 1)
cv2.imshow("Thresholded Image",thresh_copy)
cv2.imshow("Image with circles",img)
cv2.waitKey(0)
Now this script yields the result:
But there are certain trade-offs: for example, if block_size and constant_c are changed to 11 and 2 respectively, then the script yields:
You should try applying erosion with an appropriately shaped kernel to separate the overlapping circles in the thresholded image.
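For example, here is a minimal sketch of that erosion step (the kernel size and iteration count are assumptions that you would tune for your own image):
import cv2
# Recreate the thresholded image from the code above (sketch only).
img_gray = cv2.cvtColor(cv2.imread('viMmP.png'), cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(img_gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 9, 1)
# An elliptical kernel tends to keep round blobs intact while breaking the thin
# bridges between touching circles; the size and iteration count are starting points only.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
separated = cv2.erode(thresh, kernel, iterations=1)
cv2.imshow("Eroded", separated)
cv2.waitKey(0)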
You may look at the following links to understand more about adaptive thresholding and contours:
Thresholding examples: http://docs.opencv.org/3.1.0/d7/d4d/tutorial_py_thresholding.html
Thresholding reference: http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html
Contour Examples:
http://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html
Related
I'm using OpenCV HoughCircles to identify all the circles (both hollow and filled). Following is my code:
import numpy as np
import cv2
img = cv2.imread('images/32x32.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
bilateral = cv2.bilateralFilter(gray,10,50,50)
minDist = 30
param1 = 30
param2 = 50
minRadius = 5
maxRadius = 100
circles = cv2.HoughCircles(bilateral, cv2.HOUGH_GRADIENT, 1, minDist, param1=param1, param2=param2, minRadius=minRadius, maxRadius=maxRadius)
if circles is not None:
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        cv2.circle(img, (i[0], i[1]), i[2], (0, 0, 255), 2)
# Show result for testing:
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Test input image 1:
Test output image1:
As you can see, I'm able to identify most of the circles except for a few. What am I missing here? I've tried varying the parameters, but this is the best I could get.
Also, if I use even more compact circles, the script does not identify any circles whatsoever.
An alternative idea is to use the findContours method and check whether the contour is a circle using approxPolyDP, as below.
import cv2
img = cv2.imread('32x32.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
inputImageCopy = img.copy()
# Find contours (note: findContours treats any non-zero pixel as foreground,
# so thresholding the grayscale image first usually gives a cleaner result):
contours, hierarchy = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Use a list to store the center and radius of the target circles:
detectedCircles = []
# Look for the outer contours:
for i, c in enumerate(contours):
    # Approximate the contour to a circle:
    (x, y), radius = cv2.minEnclosingCircle(c)
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    if len(approx) > 5:  # a polygon approximation with many vertices is treated as a circle
        # Compute the center and radius:
        center = (int(x), int(y))
        radius = int(radius)
        # Draw the circles:
        cv2.circle(inputImageCopy, center, radius, (0, 0, 255), 2)
        # Store the center and radius:
        detectedCircles.append([center, radius])
cv2.imshow("Circles", inputImageCopy)
cv2.waitKey(0)
cv2.destroyAllWindows()
I solved your problem using the same code as yours; nothing else needs to be modified. I changed the param2 value from 50 to 30. That's all.
#!/usr/bin/python39
#OpenCV 4.5.5 Raspberry Pi 3/B/4B-w/4/8GB RAM, Bullseye,v11.
#Date: 19th April, 2022
import numpy as np
import cv2
img = cv2.imread('fill_circles.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
bilateral = cv2.bilateralFilter(gray,10,50,50)
minDist = 30
param1 = 30
param2 = 30
minRadius = 5
maxRadius = 100
circles = cv2.HoughCircles(bilateral, cv2.HOUGH_GRADIENT, 1, minDist, param1=param1, param2=param2, minRadius=minRadius, maxRadius=maxRadius)
if circles is not None:
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        cv2.circle(img, (i[0], i[1]), i[2], (0, 0, 255), 2)
cv2.imwrite('lego.png', img)
# Show result for testing:
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
If you can always get (or create from the real image) such a "clean" image, the bounding box of the grid (a 2-dimensional array of circles) region can be easily obtained.
Therefore, you can know the rectangular area of interest (and the angle of rotation of the grid, if rotation is possible).
If you examine the pixels along the axial directions of the rectangle, you can easily find out how many circles are lined up and the diameter of the circles.
This is because lines in which all pixels are black are the gaps between adjacent rows (or columns).
(Sum up the pixel values along the direction of the axis; whether the sum is 0 or not tells you whether the line passes over a grid cell or not.)
If necessary, you may check that the shape of the contour in each grid cell is really circular.
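A minimal sketch of that row/column-sum idea, assuming a clean, axis-aligned binary image with white circles on a black background (the file name is made up):
import cv2
import numpy as np
# Assumption: a clean binary image with white circle pixels on a black background.
binary = cv2.imread('clean_grid.png', cv2.IMREAD_GRAYSCALE)
# Sum the pixel values along each axis; a zero sum means that row/column lies in a gap.
col_sums = binary.sum(axis=0)
row_sums = binary.sum(axis=1)
def count_runs(sums):
    # Count maximal runs of non-zero sums; each run corresponds to one row/column
    # of circles, and its length approximates the circle diameter in pixels.
    runs, start = [], None
    for i, v in enumerate(sums > 0):
        if v and start is None:
            start = i
        elif not v and start is not None:
            runs.append(i - start)
            start = None
    if start is not None:
        runs.append(len(sums) - start)
    return runs
col_runs = count_runs(col_sums)
row_runs = count_runs(row_sums)
print("{} circles per row, {} circles per column".format(len(col_runs), len(row_runs)))
print("approximate diameters:", col_runs, row_runs)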
Is it possible to get the coordinates of the incomplete circle? I am using OpenCV and Python, and I can find most of the circles.
But I have no clue how I can detect the incomplete circle in the picture.
I am looking for a simple way to solve it.
import sys
import cv2 as cv
import numpy as np
## [load]
default_file = 'captcha2.png'
# Loads an image
src = cv.imread(cv.samples.findFile(default_file), cv.IMREAD_COLOR)
## [convert_to_gray]
# Convert it to gray
gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
## [convert_to_gray]
## [reduce_noise]
# Reduce the noise to avoid false circle detection
gray = cv.medianBlur(gray, 3)
## [reduce_noise]
## [houghcircles]
#rows = gray.shape[0]
circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, 5,
                          param1=1, param2=35,
                          minRadius=1, maxRadius=30)
## [houghcircles]
## [draw]
if circles is not None:
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        center = (i[0], i[1])
        # circle center
        cv.circle(src, center, 1, (0, 100, 100), 3)
        # circle outline
        radius = i[2]
        cv.circle(src, center, radius, (255, 0, 255), 3)
## [draw]
## [display]
cv.imshow("detected circles", src)
cv.waitKey(0)
## [display]
Hi, here is another picture. I want the x and y coordinates of the incomplete circle, the light blue one in the lower left.
Here is the original picture:
You need to remove the colorful background of your image and keep only the circles.
One approach is:
1. Get the binary mask of the input image
2. Apply HoughCircles to detect the circles
Binary mask:
Using the binary mask, we will detect the circles:
Code:
# Load the libraries
import cv2
import numpy as np
# Load the image
img = cv2.imread("r5lcN.png")
# Copy the input image
out = img.copy()
# Convert to the HSV color space
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Get binary mask
msk = cv2.inRange(hsv, np.array([0, 0, 130]), np.array([179, 255, 255]))
# Detect circles in the image
crc = cv2.HoughCircles(msk, cv2.HOUGH_GRADIENT, 1, 10, param1=50, param2=25, minRadius=0, maxRadius=0)
# Ensure circles were found
if crc is not None:
    # Convert the coordinates and radius of the circles to integers
    crc = np.round(crc[0, :]).astype("int")
    # For each (x, y) coordinate and radius of the circles
    for (x, y, r) in crc:
        # Draw the circle
        cv2.circle(out, (x, y), r, (0, 255, 0), 4)
        # Print the coordinates
        print("x:{}, y:{}".format(x, y))
# Display
cv2.imshow("out", np.hstack([img, out]))
cv2.waitKey(0)
Output:
x:178, y:60
x:128, y:22
x:248, y:20
x:378, y:52
x:280, y:60
x:294, y:46
x:250, y:44
x:150, y:62
Explanation
We have three options for thresholding:
Simple threshold:
Adaptive threshold:
Binary mask:
As we can see, the third option gave us a suitable result. Of course, you could get the desired result with the other options, but it might take a long time to find suitable parameters. Then we applied Hough circles, played with the parameter values, and got the desired result.
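For reference, here is a minimal sketch of the three options side by side; the simple and adaptive threshold parameter values are illustrative guesses, while the inRange bounds are the ones used in the code above:
import cv2
import numpy as np
img = cv2.imread("r5lcN.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Option 1: simple (global) threshold; the value 127 is just a guess
_, simple = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
# Option 2: adaptive threshold; block size and constant are also guesses
adaptive = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 11, 2)
# Option 3: binary mask in HSV space (the bounds used in the main code)
mask = cv2.inRange(hsv, np.array([0, 0, 130]), np.array([179, 255, 255]))
cv2.imshow("simple | adaptive | mask", np.hstack([simple, adaptive, mask]))
cv2.waitKey(0)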
Update
For the second uploaded image, you can detect the semi-circle by reducing the first and second parameters of the Hough circle.
crc = cv2.HoughCircles(msk, cv2.HOUGH_GRADIENT, 1, 10, param1=10, param2=15, minRadius=0, maxRadius=0)
Replacing the above line in the main code will result in:
Console result
x:238, y:38
x:56, y:30
x:44, y:62
x:208, y:26
I am trying to count the water pipes in this picture. For this, I am trying to use OpenCV and Python-based detection. The results I am getting are a little confusing to me, because the spread of circles is way too large and inaccurate.
The code
import numpy as np
import argparse
import cv2
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required = True, help = "Path to the image")
args = vars(ap.parse_args())
# load the image, clone it for output, and then convert it to grayscale
image = cv2.imread(args["image"])
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
#detect circles in the image
#circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, param1=40,minRadius=10,maxRadius=35)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 8.5,70,minRadius=0,maxRadius=70)
#print(len(circles[0][0]))
# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    # count = count + 1
    # print(count)
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
# show the output image
# cv2.imshow("output", np.hstack([output]))
cv2.imwrite('output.jpg',np.hstack([output]),[cv2.IMWRITE_JPEG_QUALITY, 70])
cv2.waitKey(0)
After I run this, I do see a lot of circles detected; however, the results are completely haywire.
My question is, how do I improve this detection? Which parameters specifically need to be optimized in the HoughCircles method to achieve greater accuracy? Or should I take the approach of annotating hundreds of similar images with bounding boxes and then training a full-blown CNN like YOLO to perform the detection?
Taking the approach mentioned in answer number 2 from here: Measuring the diameter pictures of holes in metal parts, photographed with telecentric, monochrome camera with opencv, I got this output. This looks close to producing a count, but it misses a lot of actual pipes during the brightness transformation of the image.
The most important parameters for your HoughCircles call are:
param1: because you are using cv2.HOUGH_GRADIENT, param1 is the higher threshold for the edge detection algorithm and param1 / 2 is the lower threshold.
param2: it represents the accumulator threshold, so the lower the value, the more circles will be returned.
minRadius and maxRadius: the blue circles in the example have a diameter of roughly 20 pixels, so using 70 pixels for maxRadius is the reason why so many circles are being returned by the algorithm.
minDist: the minimum distance between the centers of two circles.
The parameterization defined below:
circles = cv2.HoughCircles(gray,
                           cv2.HOUGH_GRADIENT,
                           minDist=6,
                           dp=1.1,
                           param1=150,
                           param2=15,
                           minRadius=6,
                           maxRadius=10)
returns:
You could do an adaptive threshold as preprocessing. This basically looks for areas that are relatively brighter than the neighboring pixels; your global threshold loses some of the pipes, while this keeps them a little better.
import cv2
import matplotlib.pyplot as plt
import numpy as np
img = cv2.imread('a2MTm.jpg')
# Local average brightness computed with one-dimensional box filters (11-pixel strips)
blur_hor = cv2.filter2D(img[:, :, 0], cv2.CV_32F, kernel=np.ones((11,1,1), np.float32)/11.0, borderType=cv2.BORDER_CONSTANT)
blur_vert = cv2.filter2D(img[:, :, 0], cv2.CV_32F, kernel=np.ones((1,11,1), np.float32)/11.0, borderType=cv2.BORDER_CONSTANT)
# Keep pixels that are at least 20% brighter than either local average
mask = ((img[:,:,0]>blur_hor*1.2) | (img[:,:,0]>blur_vert*1.2)).astype(np.uint8)*255
plt.imshow(mask)
You can then carry on with the same post processing steps.
Here are some example processing steps:
circles = cv2.HoughCircles(mask,
                           cv2.HOUGH_GRADIENT,
                           minDist=8,
                           dp=1,
                           param1=150,
                           param2=12,
                           minRadius=4,
                           maxRadius=10)
output = img.copy()
# HoughCircles returns float coordinates, so cast to int before drawing
for (x, y, r) in circles[0, :, :]:
    cv2.circle(output, (int(x), int(y)), int(r), (0, 255, 0), 4)
You can adjust the parameters to get what you would like, read about the parameters here.
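If it helps to see how sensitive the detection is, here is a small sketch (assuming the mask computed above) that sweeps the accumulator threshold param2 and reports how many circles each value yields:
import cv2
# Assumes `mask` from the preprocessing step above is already available.
for param2 in (8, 10, 12, 15, 20):
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1, minDist=8,
                               param1=150, param2=param2,
                               minRadius=4, maxRadius=10)
    count = 0 if circles is None else circles.shape[1]
    print("param2={}: {} circles".format(param2, count))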
Instead of using cv2.HoughCircles, another approach would be to use contour filtering. We can threshold the image, then filter using the aspect ratio, contour area, and radius of each blob. Here's the result:
Count: 344
Code
import cv2
image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,27,3)
cnts = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
count = 0
for c in cnts:
    area = cv2.contourArea(c)
    x, y, w, h = cv2.boundingRect(c)
    ratio = w / h
    ((x, y), r) = cv2.minEnclosingCircle(c)
    if ratio > .85 and ratio < 1.20 and area > 50 and area < 120 and r < 7:
        cv2.circle(image, (int(x), int(y)), int(r), (36, 255, 12), -1)
        count += 1
print('Count: {}'.format(count))
cv2.imshow('thresh', thresh)
cv2.imshow('image', image)
cv2.waitKey()
I have looked at several pages regarding optimizing circle detection using OpenCV in Python. All seem to be specific to the individual circumstances of a given picture. What are some starting points for each of the parameters of cv2.HoughCircles? Since I am not sure what the recommended values are, I have attempted looping over ranges, but this is not producing any promising results. Why can't I detect any of the circles in this image?
import cv2
import numpy as np
image = cv2.imread('IMG_stack.png')
output = image.copy()
height, width = image.shape[:2]
maxWidth = int(width/10)
minWidth = int(width/20)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 20,param1=50,param2=50,minRadius=minWidth,maxRadius=maxWidth)
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circlesRound = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circlesRound:
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
    cv2.imwrite(filename='test.circleDraw.png', img=output)
    cv2.imwrite(filename='test.circleDrawGray.png', img=gray)
else:
    print('No circles found')
This should be straightforward circle detection, but none of the circles detected are even close.
The main parameters that you should pay attention to are minDist, minRadius and maxRadius.
Analyzing the radius first: you have an image that is 12 circles wide and 8 circles tall, which gives you a diameter of roughly width/12 for each circle, or a radius of (width/12)/2. The constraints you used allowed the algorithm to detect circles way bigger or smaller than necessary, so you should use a parameterization that is a better fit for your image. In this case, I have used the interval [0.9 * radius, 1.1 * radius].
As there is no overlapping, you could say that the distance between two circles is at least the diameter, so minDist could be set to something like 2*minRadius.
This implementation is basically the same as yours, just updating those 3 parameters:
%matplotlib inline
import cv2
import numpy as np
import matplotlib.pyplot as plt
image = cv2.imread('data/balls.jpg')
output = image.copy()
height, width = image.shape[:2]
maxRadius = int(1.1*(width/12)/2)
minRadius = int(0.9*(width/12)/2)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
circles = cv2.HoughCircles(image=gray,
                           method=cv2.HOUGH_GRADIENT,
                           dp=1.2,
                           minDist=2*minRadius,
                           param1=50,
                           param2=50,
                           minRadius=minRadius,
                           maxRadius=maxRadius
                           )
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circlesRound = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circlesRound:
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
    plt.imshow(output)
else:
    print('No circles found')
The result is:
Normally, circle detection can be done using traditional image processing methods such as thresholding + contour detection, Hough circles, or contour fitting, but since your circles are overlapping/touching, watershed segmentation may be better. Here's a good resource.
import cv2
import numpy as np
from skimage.feature import peak_local_max
from skimage.morphology import watershed
from scipy import ndimage
# Load in image, convert to gray scale, and Otsu's threshold
image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Remove small noise by filtering using contour area
cnts = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    if cv2.contourArea(c) < 1000:
        cv2.drawContours(thresh, [c], 0, (0,0,0), -1)
cv2.imshow('thresh', thresh)
# Compute Euclidean distance from every binary pixel
# to the nearest zero pixel then find peaks
distance_map = ndimage.distance_transform_edt(thresh)
local_max = peak_local_max(distance_map, indices=False, min_distance=20, labels=thresh)
# Perform connected component analysis then apply Watershed
markers = ndimage.label(local_max, structure=np.ones((3, 3)))[0]
labels = watershed(-distance_map, markers, mask=thresh)
# Iterate through unique labels
for label in np.unique(labels):
    if label == 0:
        continue
    # Create a mask
    mask = np.zeros(gray.shape, dtype="uint8")
    mask[labels == label] = 255
    # Find contours and determine contour area
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = cnts[0] if len(cnts) == 2 else cnts[1]
    c = max(cnts, key=cv2.contourArea)
    cv2.drawContours(image, [c], -1, (36,255,12), -1)
cv2.imshow('image', image)
cv2.waitKey()
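If you also need the center and radius of each segmented circle (for example, to count them), a small follow-up sketch that reuses the labels and gray variables from the code above could fit a minimum enclosing circle per label:
# Sketch only: assumes `labels` and `gray` from the watershed code above.
detected = []
for label in np.unique(labels):
    if label == 0:
        continue  # label 0 is the background
    mask = np.zeros(gray.shape, dtype="uint8")
    mask[labels == label] = 255
    cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = cnts[0] if len(cnts) == 2 else cnts[1]
    (x, y), r = cv2.minEnclosingCircle(max(cnts, key=cv2.contourArea))
    detected.append(((int(x), int(y)), int(r)))
print('Detected circles:', len(detected))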
I am working on a project which asks me to detect text areas in an image. This is the result I have achieved so far using the code below.
Original Image
Result
The code is the following:
import cv2
import numpy as np
# read and scale down image
img = cv2.pyrDown(cv2.imread('C:\\Users\\Work\\Desktop\\test.png', cv2.IMREAD_UNCHANGED))
# threshold image
ret, threshed_img = cv2.threshold(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY),
                                  127, 255, cv2.THRESH_BINARY)
# find contours and get the external one
# (OpenCV 3.x returns three values here; in OpenCV 4.x, findContours returns only
# contours and hierarchy, so drop the first variable)
image, contours, hier = cv2.findContours(threshed_img, cv2.RETR_TREE,
                                         cv2.CHAIN_APPROX_SIMPLE)
# with each contour, draw boundingRect in green
# a minAreaRect in red and
# a minEnclosingCircle in blue
for c in contours:
    # get the bounding rect
    x, y, w, h = cv2.boundingRect(c)
    # draw a green rectangle to visualize the bounding rect
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), thickness=1, lineType=8, shift=0)
    # get the min area rect
    #rect = cv2.minAreaRect(c)
    #box = cv2.boxPoints(rect)
    # convert all coordinates floating point values to int
    #box = np.int0(box)
    # draw a red 'nghien' rectangle
    #cv2.drawContours(img, [box], 0, (0, 0, 255))
    # finally, get the min enclosing circle
    #(x, y), radius = cv2.minEnclosingCircle(c)
    # convert all values to int
    #center = (int(x), int(y))
    #radius = int(radius)
    # and draw the circle in blue
    #img = cv2.circle(img, center, radius, (255, 0, 0), 2)
print(len(contours))
cv2.drawContours(img, contours, -1, (255, 255, 0), 1)
cv2.namedWindow('contours', 0)
cv2.imshow('contours', img)
while cv2.waitKey() != ord('q'):
    continue
cv2.destroyAllWindows()
As you can see, this can do more than I need. Look at the commented parts if you need more.
By the way, what I need is to bound every text area in a single rectangle, not (nearly) every character that the script is finding. I want to filter out single numbers or letters and group everything into a single box.
For example, the first sequence in one box, the second in another one, and so on.
I searched a bit and found something about "filter rectangle area". I don't know if it is useful for my purpose.
I also took a look at some of the first results on Google, but most of them don't work very well. I guess the code needs to be tweaked a bit, but I am a newbie in the OpenCV world.
Solved using the following code.
import cv2
# Load the image
img = cv2.imread('image.png')
# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# smooth the image to avoid noises
gray = cv2.medianBlur(gray,5)
# Apply adaptive threshold
thresh = cv2.adaptiveThreshold(gray,255,1,1,11,2)
thresh_color = cv2.cvtColor(thresh,cv2.COLOR_GRAY2BGR)
# apply some dilation and erosion to join the gaps - change iteration to detect more or less area's
thresh = cv2.dilate(thresh,None,iterations = 15)
thresh = cv2.erode(thresh,None,iterations = 15)
# Find the contours
contours,hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
# For each contour, find the bounding rectangle and draw it
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
    cv2.rectangle(thresh_color, (x, y), (x+w, y+h), (0, 255, 0), 2)
# Finally show the image
cv2.imshow('img',img)
cv2.imshow('res',thresh_color)
cv2.waitKey(0)
cv2.destroyAllWindows()
The parameters that need to be modified to obtain the result below are the numbers of iterations in the erode and dilate functions.
Lower values will create more bounding rectangles around (nearly) every digit/character.
Result
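A small sketch for experimenting with that iteration count (the values tried here are arbitrary; everything else follows the code above):
import cv2
img = cv2.imread('image.png')
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)
thresh = cv2.adaptiveThreshold(gray, 255, 1, 1, 11, 2)
# Fewer iterations give boxes around individual characters; more iterations merge whole text areas.
for iterations in (5, 10, 15, 20):
    joined = cv2.erode(cv2.dilate(thresh, None, iterations=iterations), None, iterations=iterations)
    contours, hierarchy = cv2.findContours(joined, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    vis = img.copy()
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('iterations = {}'.format(iterations), vis)
cv2.waitKey(0)
cv2.destroyAllWindows()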