remove demarcation from text image - image processing - python

Hi, I need to write a program that removes demarcation from a grayscale image (an image with text in it).
I read about thresholding and blurring, but I still don't see how I can do it.
My image is an image of Hebrew text, like this:
and I need to remove the demarcation (assuming that the demarcation marks are the smallest elements in the image). The output needs to be something like this:
I want to write the code in Python using OpenCV. What topics do I need to learn to be able to do that, and how?
Thank you.
Edit:
I can only use cv2 functions.

The symbols you want to remove are significantly smaller than all other shapes, so you can use that to determine which ones to remove.
First use threshold to convert the image to binary. Next, use findContours to detect the shapes, and then contourArea to determine whether a shape is larger than a threshold.
Finally, you can create a mask to remove the unwanted shapes: either draw the larger symbols on a new image, or draw the smaller symbols in white over the original symbols in the original image, making them disappear. I used that last technique in the code below.
Result:
Code:
import cv2

# load image as grayscale
img = cv2.imread('1MioS.png', 0)
# convert to binary. Inverted, so you get white symbols on a black background
_, thres = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY_INV)
# find contours in the thresholded image (this gives all symbols)
contours, hierarchy = cv2.findContours(thres, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# loop through the contours; if the size of the contour is below a threshold,
# draw a white shape over it in the input image
for cnt in contours:
    if cv2.contourArea(cnt) < 250:
        cv2.drawContours(img, [cnt], 0, (255), -1)
# display result
cv2.imshow('res', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Update
To find the largest contour, you can loop through them and keep track of the largest value:
maxArea = 0
for cnt in contours:
    currArea = cv2.contourArea(cnt)
    if currArea > maxArea:
        maxArea = currArea
print(maxArea)
I also whipped up a slightly more complex version, which creates a sorted list of the indexes and sizes of the contours. It then looks for the largest relative difference in size across all contours, so you know which contours are 'small' and which are 'large'. I do not know if this works for all letters/fonts.
# create a list of the indexes of the contours and their sizes
contour_sizes = []
for index, cnt in enumerate(contours):
    contour_sizes.append([index, cv2.contourArea(cnt)])

# sort the list based on the contour size.
# this changes the order of the elements in the list
contour_sizes.sort(key=lambda x: x[1])

# loop through the list and determine the largest relative difference
indexOfMaxDifference = 0
currentMaxDifference = 0
for i in range(1, len(contour_sizes)):
    if contour_sizes[i-1][1] == 0:  # skip degenerate zero-area contours
        continue
    sizeDifference = contour_sizes[i][1] / contour_sizes[i-1][1]
    if sizeDifference > currentMaxDifference:
        currentMaxDifference = sizeDifference
        indexOfMaxDifference = i

# loop through the list again, ending (or starting) at indexOfMaxDifference, to draw the contours
for i in range(0, indexOfMaxDifference):
    cv2.drawContours(img, contours, contour_sizes[i][0], (255), -1)
To get the background color you can use minMaxLoc. This returns the lowest color value and its position in an image (also the max value, but you don't need that here). If you apply it to the thresholded image, where the background is black, it will return the location of a background pixel (odds are it will be (0,0)). You can then look up this pixel in the original color image.
# get the location of a pixel with the background color
min_val, _, min_loc, _ = cv2.minMaxLoc(thres)
# load color image
img_color = cv2.imread('1MioS.png')
# get bgr values of the background (min_loc is (x, y); numpy indexes as [y, x])
b, g, r = img_color[min_loc[1], min_loc[0]]
# convert from numpy types
background_color = (int(b), int(g), int(r))
and then draw the contours:
cv2.drawContours(img_color, contours, contour_sizes[i][0], background_color, -1)
and of course
cv2.imshow('res', img_color)

This looks like a problem for template matching, since you have what looks like a known font and can easily identify what the characters and/or demarcation marks are. Check out https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_template_matching/py_template_matching.html
Admittedly, the tutorial only covers finding the match; the modification is up to you. In that case, you know the exact shape of the template itself, so using that information along with the location of the match, just overwrite the image data with the appropriate background color (based on the examples above, 255).
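A minimal sketch of that idea, assuming a hypothetical pre-cropped template image of one demarcation mark and an arbitrary 0.8 similarity threshold (neither is from the original post):
import cv2
import numpy as np

img = cv2.imread('1MioS.png', 0)           # page image, grayscale
tmpl = cv2.imread('mark_template.png', 0)  # hypothetical pre-cropped template of one mark

res = cv2.matchTemplate(img, tmpl, cv2.TM_CCOEFF_NORMED)
th, tw = tmpl.shape
# every location scoring above the assumed threshold counts as a match
ys, xs = np.where(res >= 0.8)
for x, y in zip(xs, ys):
    # overwrite the matched region with the background color
    img[y:y+th, x:x+tw] = 255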

You can solve it by removing all the small clusters.
I found a Python solution (using OpenCV) here.
To support smaller fonts, I added the following heuristic:
"The largest size of a demarcation cluster is 1/500 of the largest letter cluster."
The heuristic can be refined by statistical analysis (or improved with other heuristics, such as the demarcation locations relative to the letters).
import numpy as np
import cv2

I = cv2.imread('Goodluck.png', cv2.IMREAD_GRAYSCALE)
J = 255 - I  # Invert I
img = cv2.threshold(J, 127, 255, cv2.THRESH_BINARY)[1]  # Convert to binary

# https://answers.opencv.org/question/194566/removing-noise-using-connected-components/
nlabel, labels, stats, centroids = cv2.connectedComponentsWithStats(img, connectivity=8)

labels_small = []
areas_small = []

# Find the largest cluster (skip label 0, which is the background):
max_size = np.max(stats[1:, cv2.CC_STAT_AREA])
thresh_size = max_size / 500  # Set the threshold to the maximum cluster size divided by 500.

for i in range(1, nlabel):
    if stats[i, cv2.CC_STAT_AREA] < thresh_size:
        labels_small.append(i)
        areas_small.append(stats[i, cv2.CC_STAT_AREA])

# Paint the small clusters white in the original image
for i in labels_small:
    I[labels == i] = 255

cv2.imshow('I', I)
cv2.waitKey(0)
Here is a MATLAB code sample (threshold kept at 200):
clear
I = imbinarize(rgb2gray(imread('בהצלחה.png')));
figure; imshow(I);
J = ~I;
% Clustering
CC = bwconncomp(J);
% Cover all small clusters with zeros.
for i = 1:CC.NumObjects
    C = CC.PixelIdxList{i}; % Cluster coordinates.
    % Fill small clusters with zeros.
    if numel(C) < 200
        J(C) = 0;
    end
end
J = ~J;
figure; imshow(J);
Result:

Related

Extract most central area in a Binary Image

I am processing binary images, and was previously using this code to find the largest area in the binary image:
# Use the hue value to convert to binary
thresh = 20
thresh, thresh_img = cv2.threshold(h, thresh, 255, cv2.THRESH_BINARY)
cv2.imshow('thresh', thresh_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Finding Contours
# Use a copy of the image since findContours alters the image
contours, _ = cv2.findContours(thresh_img.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
#Extract the largest area
c = max(contours, key=cv2.contourArea)
This code isn't really doing what I need it to do; I now think it would be better to extract the most central area in the binary image.
Binary Image
Largest area
This is currently what the code extracts, but I am hoping to get the central circle in the first binary image instead.
OpenCV comes with a point-polygon test function (for contours). It even gives a signed distance, if you ask for that.
I'll find the contour that is closest to the center of the picture; that may be a contour which actually overlaps the center.
Timings, on my quadcore from 2012, give or take a millisecond:
findContours: ~1 millisecond
all pointPolygonTests and argmax: ~1 millisecond
import cv2 as cv
import numpy as np

mask = cv.imread("fkljm.png", cv.IMREAD_GRAYSCALE)
(height, width) = mask.shape
# required because the sample picture isn't exactly clean
ret, mask = cv.threshold(mask, 128, 255, cv.THRESH_BINARY)

# get contours
contours, hierarchy = cv.findContours(mask, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE)

center = (np.array([width, height]) - 1) / 2

# find the contour closest to the center of the picture
distances = [
    cv.pointPolygonTest(contour, tuple(center), True)  # most positive = inside; negative = outside
    for contour in contours
]
iclosest = np.argmax(distances)
print("closest contour is", iclosest, "with distance", distances[iclosest])

# draw the closest contour
canvas = cv.cvtColor(mask, cv.COLOR_GRAY2BGR)
cv.drawContours(image=canvas, contours=[contours[iclosest]], contourIdx=-1, color=(0, 255, 0), thickness=5)
closest contour is 45 with distance 65.19202405202648
A cv.floodFill() on the center point can also quickly yield a labeling of that blob... assuming the mask is positive there. Otherwise, there needs to be a search.
(cx, cy) = center.astype(int)
assert mask[cy, cx], "floodFill not applicable"

# trying cv.floodFill on the image center
mask2 = mask >> 1  # turns everything else gray
cv.floodFill(image=mask2, mask=None, seedPoint=(int(cx), int(cy)), newVal=255)

# use (mask2 == 255) to identify that blob
This also takes less than a millisecond.
Some practically faster approaches might involve a pyramid scheme (low-res versions of the mask) to quickly identify areas of the picture that are candidates for an exact test (distance/intersection); a rough sketch follows the list below.
Test target pixel. Hit (positive)? Done.
Calculate low-res mask. Per block, if any pixel is positive, block is positive.
Find positive blocks, sort by distance, examine closer all those that are within sqrt(2) * blocksize of the best distance.
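A rough numpy sketch of that idea, reusing mask and center from the code above (blocksize is an arbitrary assumption):
import numpy as np

blocksize = 32  # assumed block size
h, w = mask.shape
H, W = (h // blocksize) * blocksize, (w // blocksize) * blocksize  # crop to multiples
# per block: positive if any pixel inside the block is positive
blocks = (mask[:H, :W]
          .reshape(H // blocksize, blocksize, W // blocksize, blocksize)
          .max(axis=(1, 3)) > 0)
by, bx = np.nonzero(blocks)
# block centers in pixel coordinates, and their distances to the picture center
centers = np.stack([bx, by], axis=1) * blocksize + blocksize / 2
dists = np.linalg.norm(centers - center, axis=1)
order = np.argsort(dists)
# candidate blocks: all within sqrt(2) * blocksize of the best distance
cands = order[dists[order] <= dists[order[0]] + np.sqrt(2) * blocksize]
# each candidate block would then get the exact pointPolygonTest treatment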
There are several ways you could define "most central." I chose to define it as the region with the closest distance to the point you're searching for. If the point is inside the region, that distance will be zero.
I also chose to do this with a pixel-based approach rather than a polygon-based approach, like you're doing with findContours().
Here's a step-by-step breakdown of what this code is doing.
Load the image, put it into grayscale, and threshold it. You're already doing these things.
Identify connected components of the image. Connected components are places where there are white pixels which are directly connected to other white pixels. This breaks up the image into regions.
Using np.argwhere(), convert a true/false mask into an array of coordinates.
For each coordinate, compute the Euclidean distance between that point and search_point.
Find the minimum within each region.
Across all regions, find the smallest distance.
import cv2
import numpy as np

img = cv2.imread('test197_img.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh_img = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

n_groups, comp_grouped = cv2.connectedComponents(thresh_img)

components = []
search_point = [600, 150]
for i in range(1, n_groups):
    mask = (comp_grouped == i)
    # argwhere gives (row, col); reverse to (x, y) to match search_point
    component_coords = np.argwhere(mask)[:, ::-1]
    min_distance = np.sqrt(((component_coords - search_point) ** 2).sum(axis=1)).min()
    components.append({
        'mask': mask,
        'min_distance': min_distance,
    })
closest = min(components, key=lambda x: x['min_distance'])['mask']
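To inspect the result, the boolean mask can be rendered as an image, e.g.:
# visualize the selected region (reusing the variables from above)
out = closest.astype(np.uint8) * 255
cv2.imshow('closest component', out)
cv2.waitKey(0)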
Output:

OpenCV image numbers identification

Sorry for my bad English. I want to write a condition such that an image with 12 in the left corner equals an image with 12 in the right corner, but does not equal an image with 21.
I need a fast way to determine this, because there are many pictures and they refresh.
I tried counting the pixels of a specific image:
result = np.count_nonzero(np.all(original > (0,0,0), axis=2))
(Why do I use > (0,0,0) instead of == (255,255,255)? There are grey shadows near the white symbols that the eye can't see.)
This way doesn't see a difference between 12 and 21.
I tried a second way, comparing new images with templates, but it sees a huge difference between 12 in the left corner and 12 in the right corner!
original = cv2.imread('auto/5or.png')
template = cv2.imread('auto/5t.png')
res = cv2.matchTemplate(original, template, cv2.TM_CCOEFF_NORMED)
I haven't tried any complicated method of recognizing the digits yet, because I think it would be too slow, even on my small pictures (I may be mistaken).
I only have digits from 0 to 30, and I have all the templates; the examples differ only in their location inside the black square.
Any thoughts? Thanks in advance.
If you don't want the position of the digits in the image to make a difference, you can threshold the image to black and white, find the bounding box of the white pixels, and crop to it so your digits are always in the same place. Then just difference the images, or use what you were using before:
#!/usr/local/bin/python3
import numpy as np
from PIL import Image
# Open image, greyscale and threshold
im=np.array(Image.open('21.png').convert('L'))
# Mask of white pixels
mask = im.copy()
mask[mask<128] = 0 # Threshold pixels < 128 down to black
# Coordinates of white pixels
coords = np.argwhere(mask)
# Bounding box of white pixels
x0, y0 = coords.min(axis=0)
x1, y1 = coords.max(axis=0) + 1
# Crop to bbox
cropped = im[x0:x1, y0:y1]
# Save
Image.fromarray(cropped).save('result.png')
That gives you this:
Obviously crop your template images as well.
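Once both images are cropped, a comparison might look like this (a sketch; the filenames, the resize fallback, and both numeric thresholds are assumptions to tune):
import numpy as np
from PIL import Image

crop_a = np.array(Image.open('12_left_cropped.png').convert('L'))   # hypothetical files
crop_b = np.array(Image.open('12_right_cropped.png').convert('L'))
# if the crops differ slightly in size, bring them to a common size first
if crop_a.shape != crop_b.shape:
    crop_b = np.array(Image.fromarray(crop_b).resize(crop_a.shape[::-1]))
# count pixels that differ noticeably, ignoring faint shadow differences
diff = np.count_nonzero(np.abs(crop_a.astype(int) - crop_b.astype(int)) > 50)
print('same digit' if diff < 20 else 'different digit')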
I am less familiar with OpenCV in Python, but it would look something like this:
import cv2
import numpy as np

# Load image as greyscale
img = cv2.imread('21.png', 0)
# Threshold at 127
ret, thresh = cv2.threshold(img, 127, 255, 0)
# Get contours (OpenCV 3 returns three values; OpenCV 2.4 and 4+ return two)
im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Get bounding box of all the white pixels (stack the contours so every digit is covered)
cnt = np.vstack(contours)
x, y, w, h = cv2.boundingRect(cnt)
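and then, presumably, the crop itself works like the PIL version above:
# crop to the bounding box and save (continuing the sketch above)
cropped = img[y:y+h, x:x+w]
cv2.imwrite('result.png', cropped)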

finding largest connected component using opencv 2.4 in python 2.7

I am writing code in Python 2.7.12 using OpenCV '2.4.9.1'.
I have a 2D numpy array containing values in the range [0,255].
My aim is to find the largest region containing values in the range [x,y].
I found
How to use python OpenCV to find largest connected component in a single channel image that matches a specific value? which is pretty well explained.
The only catch is that it is meant for OpenCV 3.
I could try to write a function of this type:
[pseudo code]
def get_component(x, y, list):
    append x,y to list
    visited[x][y] = 1
    if x+1 < m and visited[x+1][y] == 0:
        get_component(x+1, y, list)
    if y+1 < n and visited[x][y+1] == 0:
        get_component(x, y+1, list)
    if x+1 < m and y+1 < n and visited[x+1][y+1] == 0:
        get_component(x+1, y+1, list)
    return

MAIN
biggest_component = NULL
biggest_component_size = 0
low = lowest_value_in_user_input_range
high = highest_value_in_user_input_range
matrix a = gray image of size m x n
matrix visited = all zeros, of size m x n
for x in range(m):
    for y in range(n):
        list = NULL
        if a[x][y] >= low and a[x][y] <= high and visited[x][y] == 0:
            get_component(x, y, list)
            if list.size > biggest_component_size:
                biggest_component = list

Get maximum x, maximum y, minimum x and minimum y from the above list containing the coordinates of every point of the largest component, to make a rectangle R.
Mission accomplished!
[/pseudo code]
Such an approach will not be efficient, I think.
Can you suggest functions for doing the same with my setup?
Thanks.
Happy to see my answer linked! Indeed, connectedComponentsWithStats() and even connectedComponents() are OpenCV 3+ functions, so you can't use them. Instead, the easy thing to do is just use findContours().
You can calculate moments() for each contour, and included in the moments is the area of the contour.
Important note: the OpenCV function findContours() uses 8-way connectivity, not 4-way (i.e. it also checks diagonal connectivity, not just up, down, left, right). If you need 4-way, you'd need to use a different approach. Let me know if that's the case and I can update.
In the spirit of the other post, here's the general approach:
Binarize your image with the thresholds you're interested in.
Run cv2.findContours() to get the contour of each distinct component in the image.
For each contour, calculate the cv2.moments() of the contour and keep the maximum area contour (m00 in the dict returned from moments() is the area of the contour).
Either keep the contour as a list of points if that's what you need, otherwise draw them on a new blank image if you want it as a mask.
I lack creativity today, so you get the cameraman as our example image, since you didn't provide one.
import cv2
import numpy as np
img = cv2.imread('cameraman.png', cv2.IMREAD_GRAYSCALE)
Now, let's binarize to get some separated blobs:
bin_img = cv2.inRange(img, 50, 80)
Now let's find the contours.
contours = cv2.findContours(bin_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]
# For OpenCV 3+ use:
# contours = cv2.findContours(bin_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[1]
Now for the main bit: looping through the contours and finding the largest one:
max_area = 0
max_contour_index = 0
for i, contour in enumerate(contours):
    contour_area = cv2.moments(contour)['m00']
    if contour_area > max_area:
        max_area = contour_area
        max_contour_index = i
So now we have the index max_contour_index of the largest contour by area, and you can access the largest contour directly with contours[max_contour_index]. You could of course just sort the contours list by contour area and grab the first (or last, depending on sort order); a one-liner for that is sketched just below. If you want to make a mask of the one component, you can use
cv2.drawContours(new_blank_image, contours, max_contour_index, color=255, thickness=-1)
Note the -1 will fill the contour as opposed to outlining it. Here's an example drawing the contour over the original image:
Looks about right.
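For reference, the sorting alternative mentioned above can be as short as this one-liner sketch (same moments-based area as before):
# largest contour by area, selected directly
largest = max(contours, key=lambda c: cv2.moments(c)['m00'])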
All in one function:
def largest_component_mask(bin_img):
    """Finds the largest component in a binary image and returns the component as a mask."""
    contours = cv2.findContours(bin_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]
    # should be [1] if OpenCV 3+

    max_area = 0
    max_contour_index = 0
    for i, contour in enumerate(contours):
        contour_area = cv2.moments(contour)['m00']
        if contour_area > max_area:
            max_area = contour_area
            max_contour_index = i

    labeled_img = np.zeros(bin_img.shape, dtype=np.uint8)
    cv2.drawContours(labeled_img, contours, max_contour_index, color=255, thickness=-1)

    return labeled_img
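Usage would then look something like this (reusing bin_img from the inRange call above):
mask = largest_component_mask(bin_img)
cv2.imshow('largest component', mask)
cv2.waitKey(0)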

Merge MSER detected objects (OpenCV, Python)

I am working on this image as source:
Applying the following code...
import cv2
import numpy as np
mser = cv2.MSER_create()
img = cv2.imread('C:\\Users\\Link\\Desktop\\test2.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
vis = img.copy()
regions, _ = mser.detectRegions(gray)
hulls = [cv2.convexHull(p.reshape(-1, 1, 2)) for p in regions]
cv2.polylines(vis, hulls, 1, (0, 255, 0))
mask = np.zeros((img.shape[0], img.shape[1], 1), dtype=np.uint8)
for contour in hulls:
cv2.drawContours(mask, [contour], -1, (255, 255, 255), -1)
text_only = cv2.bitwise_and(img, img, mask=mask)
cv2.imshow('img', vis)
cv2.waitKey(0)
cv2.imshow('img', mask)
cv2.waitKey(0)
cv2.imshow('img', text_only)
cv2.waitKey(0)
cv2.imwrite('C:\\Users\\Link\\Desktop\\test_o\\1.png', text_only)
...I am obtaining this as the result (mask):
The question is this:
how can I merge the digit 5 in the number series (157661546) into a single object, given that it is split up in the mask image?
Thanks
Have a look here, it seems like the exact answer.
My version of the above code, fine-tuned for text extraction (with masking too), is here instead.
Below is the original code from the previous article, "ported" to Python 3 and OpenCV 3, with MSER and bounding boxes added. The main difference from my version is how the grouping distance is defined: mine is text-oriented, while the one below uses a free geometric distance.
import sys
import cv2
import numpy as np

def find_if_close(cnt1, cnt2):
    row1, row2 = cnt1.shape[0], cnt2.shape[0]
    for i in range(row1):
        for j in range(row2):
            dist = np.linalg.norm(cnt1[i] - cnt2[j])
            if abs(dist) < 25:  # <-- threshold
                return True
            elif i == row1-1 and j == row2-1:
                return False

img = cv2.imread(sys.argv[1])
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow('input', img)

ret, thresh = cv2.threshold(gray, 127, 255, 0)

mser = False
if mser:
    mser = cv2.MSER_create()
    regions = mser.detectRegions(thresh)
    hulls = [cv2.convexHull(p.reshape(-1, 1, 2)) for p in regions[0]]
    contours = hulls
else:
    thresh = cv2.bitwise_not(thresh)  # wants black bg
    im2, contours, hier = cv2.findContours(thresh, cv2.RETR_EXTERNAL, 2)

cv2.drawContours(img, contours, -1, (0, 0, 255), 1)
cv2.imshow('base contours', img)

LENGTH = len(contours)
status = np.zeros((LENGTH, 1))

print("Elements:", len(contours))
for i, cnt1 in enumerate(contours):
    x = i
    if i != LENGTH-1:
        for j, cnt2 in enumerate(contours[i+1:]):
            x = x+1
            dist = find_if_close(cnt1, cnt2)
            if dist == True:
                val = min(status[i], status[x])
                status[x] = status[i] = val
            else:
                if status[x] == status[i]:
                    status[x] = i+1

unified = []
maximum = int(status.max())+1
for i in range(maximum):
    pos = np.where(status == i)[0]
    if pos.size != 0:
        cont = np.vstack([contours[i] for i in pos])
        hull = cv2.convexHull(cont)
        unified.append(hull)

cv2.drawContours(img, contours, -1, (0, 0, 255), 1)
cv2.drawContours(img, unified, -1, (0, 255, 0), 2)
#cv2.drawContours(thresh, unified, -1, 255, -1)

for c in unified:
    (x, y, w, h) = cv2.boundingRect(c)
    cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)

cv2.imshow('result', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Sample output (the yellow blob is below the binary threshold conversion, so it's ignored). Red: original contours, green: unified ones, blue: bounding boxes.
There is probably no need to use MSER, as a simple findContours may work fine.
------------------------
Starting from here is my old answer, from before I found the above code. I'm leaving it anyway, as it describes a couple of different approaches that may be easier or more appropriate for some scenarios.
A quick and dirty trick is to add a small Gaussian blur and a high threshold before the MSER (or some dilate/erode if you prefer fancy things). In practice you just make the text bolder, so that it fills small gaps. Obviously you can later discard this version and crop from the original one.
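That trick could look something like this (a sketch, reusing the gray image from the code above; the kernel size and threshold value are assumptions to tune):
# thicken the text so small gaps close before running MSER
blurred = cv2.GaussianBlur(gray, (5, 5), 0)                    # assumed kernel size
_, bold = cv2.threshold(blurred, 160, 255, cv2.THRESH_BINARY)  # assumed threshold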
Otherwise, if your text is in lines, you may try to detect the average line center (make a histogram of Y coordinates and find the peaks, for example). Then, for each line, look for fragments with a close average X. Quite fragile if the text is noisy/complex.
If you do not need to split each letter, getting the bounding box for the whole word may be easier: just split into groups based on a maximum horizontal distance between fragments (using the leftmost/rightmost points of each contour), then use the leftmost and rightmost boxes within each group to find the whole bounding box (see the sketch below). For multiline text, first group by the centroids' Y coordinates.
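A sketch of that word-grouping idea, assuming contours from findContours, a single line of text, and an arbitrary 15px gap threshold:
# group per-fragment bounding boxes into words by horizontal gap
boxes = sorted(cv2.boundingRect(c) for c in contours)  # sorted by x
words = [[boxes[0]]]
for box in boxes[1:]:
    px, _, pw, _ = words[-1][-1]
    if box[0] - (px + pw) <= 15:   # gap small enough: same word
        words[-1].append(box)
    else:
        words.append([box])
# whole-word bounding box from the extremes of each group
for group in words:
    x0 = min(b[0] for b in group)
    y0 = min(b[1] for b in group)
    x1 = max(b[0] + b[2] for b in group)
    y1 = max(b[1] + b[3] for b in group)
    cv2.rectangle(img, (x0, y0), (x1, y1), (255, 0, 0), 2)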
Implementation notes:
OpenCV allows you to create histograms, but you can probably get away with something like this (it worked for me on a similar task):
def histogram(vals, th=4, bins=400):
    hist = np.zeros(bins)
    for y_center in vals:
        bucket = int(round(y_center / 2.))  # <-- change this "2."
        hist[bucket-1] += 1
    print("hist: ", hist)
    hist = np.where(hist > th, hist, 0)
    return hist
Here my histogram is just an array with 400 buckets (my image was 800px high, so each bucket catches two pixels; that is where the "2." comes from). vals are the Y coordinates of the centroids of each fragment (you may want to ignore very small elements when you build this list). The th threshold is there just to remove some noise. You should get something like this:
0,0,0,5,22,0,0,0,0,43,7,0,0,0
This list describes, moving top to bottom, how many fragments there are at each location.
Now I ran another pass to merge the peaks into a single value (just scan the array, sum while it is non-zero, and reset the count on the first zero; a sketch of this pass follows below), getting something like this {y:count}:
{9:27, 20:50}
Now I know I have two text rows, at y=9 and y=20. Now, or before, you assign each fragment to a line (again with an 8px threshold in my case). Now you can process each line on its own, finding "words". BTW, I have your identical problem with broken letters, which is why I came here looking for MSER :). Notice that if you find the whole bounding box for the word, this problem happens only on the first/last letters: the other broken letters just fall inside the word box anyway.
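That merging pass could look like this (a sketch; here each run of non-zero buckets is keyed by its first bucket, which may differ from however the original picked its representative):
def merge_peaks(hist):
    """Accumulate consecutive non-zero buckets into {row: count}."""
    rows = {}
    start, total = None, 0
    for i, v in enumerate(hist):
        if v > 0:
            if start is None:
                start = i
            total += v
        elif start is not None:
            rows[start] = total
            start, total = None, 0
    if start is not None:
        rows[start] = total
    return rows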
Here is a reference for the erode/dilate thing, but the Gaussian blur/threshold approach worked for me.
UPDATE: I've noticed that there is something wrong in this line:
regions = mser.detectRegions(thresh)
I pass in the already-thresholded image(!?). This is not relevant for the aggregation part, but keep in mind that the MSER part is not being used as expected.

Python OpenCV detect a white object from a binary image and crop it

My goal is to detect a piece of white paper in this binary image, then crop the white paper and make a new binary sub-image of just the white paper.
Now my Python code with OpenCV can find this white paper. As the first step, I created a mask for finding it:
As you can see, the small white noise and small pieces have been removed. The problem then becomes: how can I crop this white paper object out of the binary image to make a new binary sub-image?
My current code is:
import cv2
import numpy as np

QR = cv2.imread('IMG_0352.TIF', 0)
mask = np.zeros(QR.shape, np.uint8)
contours, hierarchy = cv2.findContours(QR, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    if cv2.contourArea(cnt) > 1000000:
        cv2.drawContours(mask, [cnt], 0, 255, -1)
Looking at the cnt variable, there are four elements, but they are nonsense to me.
I used this code to fit a box:
x, y, w, h = cv2.boundingRect(cnt)
cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
The box information doesn't seem right.
Thanks for any suggestions.
Follow up:
I have figured out this problem, which is very easy. The code is attached:
import cv2
import numpy as np

QR_orig = cv2.imread('CamR_IMG_0352.TIF', 0)
QR = cv2.imread('IMG_0352.TIF', 0)  # read the QR code binary image as grayscale to make sure there is only one layer
mask = np.zeros(QR.shape, np.uint8)  # mask for the final image without small pieces

# use findContours to find the non-zero pieces
contours, hierarchy = cv2.findContours(QR, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# draw the white paper and eliminate the small pieces (less than 1000000 px).
# This px count is the same as in the QR code detection
for cnt in contours:
    if cv2.contourArea(cnt) > 1000000:
        # the [] around cnt and the 3rd argument 0 mean only this particular contour is drawn
        cv2.drawContours(mask, [cnt], 0, 255, -1)

        # build a ROI to crop the QR
        x, y, w, h = cv2.boundingRect(cnt)
        roi = mask[y:y+h, x:x+w]
        # crop the original QR based on the ROI
        QR_crop = QR_orig[y:y+h, x:x+w]
        # use the cropped mask image (roi) to get rid of all small pieces
        QR_final = QR_crop * (roi // 255)

cv2.imwrite('QR_final.TIF', QR_final)
The contour object is an arbitrary vector (list) of points that enclose the detected object.
An easy brute-force way of accomplishing this is to walk through all the pixels after your thresholding and simply copy the white ones.
I believe findContours() alters the image (side effect), so check QR.
However, you usually need to get the biggest contour.
Example:
# Choose the largest contour
best = 0
maxsize = 0
count = 0
for cnt in contours:
    if cv2.contourArea(cnt) > maxsize:
        maxsize = cv2.contourArea(cnt)
        best = count
    count = count + 1

x, y, w, h = cv2.boundingRect(contours[best])  # note: contours[best], not cnt[best]
cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
