Bounding box detection for characters / digits - python

I have images, which look like the following:
I want to find the bounding boxes for the 8 digits. My first try was to use cv2 with the following code:
import cv2
import matplotlib.pyplot as plt
import cvlib as cv
from cvlib.object_detection import draw_bbox
im = cv2.imread('31197402.png')
bbox, label, conf = cv.detect_common_objects(im)
output_image = draw_bbox(im, bbox, label, conf)
plt.imshow(output_image)
plt.show()
Unfortunately that doesn't work. Does anyone have an idea?

The problem with your solution is most likely the input image, which is very poor in quality: there's hardly any contrast between the characters and the background. The detector from cvlib is probably failing to distinguish the character blobs from the background under these conditions (and it is a general object detector, so it isn't really meant to find individual digits). Let's try to solve this using OpenCV only.
I propose the following steps:
Apply adaptive threshold to get a reasonably good binary mask.
Clean the binary mask from blob noise using an area filter.
Improve the quality of the binary image using morphology.
Get the outer contours of each character and fit a bounding rectangle to each character blob.
Crop each character using the previously calculated bounding rectangle.
Let’s see the code:
# importing cv2 & numpy:
import numpy as np
import cv2
# Set image path
path = "C:/opencvImages/"
fileName = "mrrm9.png"
# Read input image:
inputImage = cv2.imread(path+fileName)
inputCopy = inputImage.copy()
# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
From here there’s not much to discuss, just reading the BGR image and converting it to grayscale. Now, let’s apply an adaptive threshold using the Gaussian method. This is the tricky part, as the parameters are adjusted manually depending on the quality of the input. The method works by dividing the image into a grid of cells of windowSize, then applying a local threshold in each cell to find the optimal separation between foreground and background. An additional constant, indicated by windowConstant, can be added to the threshold to fine-tune the output:
# Set the adaptive thresholding (Gaussian) parameters:
windowSize = 31
windowConstant = -1
# Apply the threshold:
binaryImage = cv2.adaptiveThreshold(grayscaleImage, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, windowSize, windowConstant)
You get this nice binary image:
Now, as you can see, the image has some blob noise. Let’s apply an area filter to get rid of it. The noise blobs are smaller than the target blobs of interest, so we can easily filter them out based on area, like this:
# Perform an area filter on the binary blobs:
componentsNumber, labeledImage, componentStats, componentCentroids = \
cv2.connectedComponentsWithStats(binaryImage, connectivity=4)
# Set the minimum pixels for the area filter:
minArea = 20
# Get the indices/labels of the remaining components based on the area stat
# (skip the background component at index 0)
remainingComponentLabels = [i for i in range(1, componentsNumber) if componentStats[i][4] >= minArea]
# Filter the labeled pixels based on the remaining labels,
# assign pixel intensity to 255 (uint8) for the remaining pixels
filteredImage = np.where(np.isin(labeledImage, remainingComponentLabels) == True, 255, 0).astype('uint8')
This is the filtered image:
We can improve the quality of this image with some morphology. Some of the characters seem to be broken (check out the first 3 - it is split into two separate blobs). We can join them by applying a closing operation:
# Set kernel (structuring element) size:
kernelSize = 3
# Set operation iterations:
opIterations = 1
# Get the structuring element:
maxKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
# Perform closing:
closingImage = cv2.morphologyEx(filteredImage, cv2.MORPH_CLOSE, maxKernel, None, None, opIterations, cv2.BORDER_REFLECT101)
This is the "closed" image:
Now, you want to get the bounding boxes for each character. Let’s detect the outer contour of each blob and fit a nice rectangle around it:
# Get each bounding box
# Find the big contours/blobs on the filtered image:
contours, hierarchy = cv2.findContours(closingImage, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
contours_poly = [None] * len(contours)
# The Bounding Rectangles will be stored here:
boundRect = []
# Alright, just look for the outer bounding boxes:
for i, c in enumerate(contours):
    if hierarchy[0][i][3] == -1:
        contours_poly[i] = cv2.approxPolyDP(c, 3, True)
        boundRect.append(cv2.boundingRect(contours_poly[i]))
# Draw the bounding boxes on the (copied) input image:
for i in range(len(boundRect)):
    color = (0, 255, 0)
    cv2.rectangle(inputCopy, (int(boundRect[i][0]), int(boundRect[i][1])), \
                  (int(boundRect[i][0] + boundRect[i][2]), int(boundRect[i][1] + boundRect[i][3])), color, 2)
The last for loop is pretty much optional. It fetches each bounding rectangle from the list and draws it on the input image, so you can see each individual rectangle, like this:
Let's visualize that on the binary image:
Additionally, if you want to crop each character using the bounding boxes we just got, you do it like this:
# Crop the characters:
for i in range(len(boundRect)):
    # Get the roi for each bounding rectangle:
    x, y, w, h = boundRect[i]
    # Crop the roi:
    croppedImg = closingImage[y:y + h, x:x + w]
    cv2.imshow("Cropped Character: "+str(i), croppedImg)
    cv2.waitKey(0)
This is how you can get the individual bounding boxes. Now, maybe you are trying to pass these images to an OCR. I tried passing the filtered binary image (after the closing operation) to pyocr (That’s the OCR I’m using) and I get this as output string: 31197402
The code I used to get the OCR of the closed image is this:
# Set the OCR libraries:
from PIL import Image
import pyocr
import pyocr.builders
# Set pyocr tools:
tools = pyocr.get_available_tools()
# The tools are returned in the recommended order of usage
tool = tools[0]
# Set OCR language:
langs = tool.get_available_languages()
lang = langs[0]
# Get string from image:
txt = tool.image_to_string(
    Image.open(path + "closingImage.png"),
    lang=lang,
    builder=pyocr.builders.TextBuilder()
)
print("Text is:"+txt)
Be aware that the OCR receives black characters on white background, so you must invert the image first.
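A minimal sketch of that inversion step, assuming closingImage holds white characters on a black background as produced above (the output file name is only an assumption chosen to match the OCR snippet):
# Invert so the OCR sees black characters on a white background:
invertedImage = 255 - closingImage
# Write the file that the OCR snippet reads (name assumed):
cv2.imwrite(path + "closingImage.png", invertedImage)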

Related

Removing the underline of a URL in the image of a text message screenshot with python opencv and matplotlib

I have a screenshot received from an iPhone, both dark and light mode.
I need to use OCR to extract the URL but am unable to do so with the underlining that appears.
What would be the best way to remove the horizontal lines from the message? Apart from the phone number, it doesn't matter if other parts of the screenshot are distorted.
I've tried approaches as described in
Removing Horizontal Lines in image (OpenCV, Python, Matplotlib)
https://docs.opencv.org/3.2.0/d1/dee/tutorial_moprh_lines_detection.html
https://legacy.imagemagick.org/discourse-server/viewtopic.php?t=22338
And none seem to work well, at all.
Here's a possible solution for your problem. I'm using mock screenshots, since, like I suggested, it is better to use lossless images to get a better result. The main idea here is to extract the color of the text box and to fill the rest of the image with that color, then threshold the image. By doing this, we will reduce the intensity variation and obtain a better thresholded image - since the image histogram will contain fewer intensity values. These are the steps:
Crop the image to a ROI (Region Of Interest)
Get the colors in that ROI via K-Means
Get the color of the text box
Flood-fill the ROI with the color of the text box
Apply Otsu's thresholding to get a binary image
Get OCR of the image
Suppose these are our test images; one uses a "light" theme while the other uses a "dark" theme:
I'll be using pyocr as OCR engine. Let's use image one, the code would be this:
# imports:
from PIL import Image
import numpy as np
import cv2
import pyocr
import pyocr.builders
tools = pyocr.get_available_tools()
# The tools are returned in the recommended order of usage
tool = tools[0]
langs = tool.get_available_languages()
lang = langs[0]
# image path
path = "D://opencvImages//"
fileName = "mockText.png"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Set the ROI location:
roiX = 0
roiY = 235
roiWidth = 750
roiHeight = 1080
# Crop the ROI:
smsROI = grayscaleImage[roiY:roiHeight, roiX:roiWidth]
The first bit crops the ROI - everything that is of interest, leaving out the "header" and the "footer" of the image, where there's info that we really don't need. This is the current ROI:
Wouldn't be nice to (approximately) get all the colors used in the image? Fortunately that's what Color Quantization gives us - a reduced pallet of the average colors present in an image, provided the number of the colors we are looking for. Let's apply K-Means and use 3 clusters to group this colors.
In our test images, most of the pixels are background - so, the largest cluster of pixels will belong to the background. The text represents the smallest cluster of pixels. That leaves the remaining cluster our target - the color of the text box. Let's apply K-Means, then. We need to format the data before, though, because K-Means needs float re-arranged arrays:
# Reshape the data to width x height, number of channels:
kmeansData = smsROI.reshape((-1,1))
# convert the data to np.float32
kmeansData = np.float32(kmeansData)
# define criteria, number of clusters(K) and apply kmeans():
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 5, 1.0)
# Define number of clusters (3 colors):
K = 3
# Run K-means:
_, _, center = cv2.kmeans(kmeansData, K, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# Convert the centers to uint8:
center = np.uint8(center)
# Sort centers from small to largest:
center = sorted(center, reverse=False)
# Get text color and min color:
textBoxColor = int(center[1][0])
minColor = min(center)[0]
print("Minimum Color is: "+str(minColor))
print("Text Box Color is: "+str(textBoxColor))
The info of interest is in center. That's where our colors are. After sorting this list and getting the minimum color value (that I'll use later to distinguish between a light and a dark theme) we can print the values. For the first test image, these values are:
Minimum Color is: 23
Text Box Color is: 225
Alright, so far so good. We have the color of the text box. Let's use that and flood-fill the entire ROI at position (x=0, y=0):
# Apply flood-fill at seed point (0,0):
cv2.floodFill(smsROI, mask=None, seedPoint=(0, 0), newVal=textBoxColor)
The result is this:
Very nice. Let's apply Otsu's thresholding on this bad boy:
# Threshold via Otsu:
_, binaryImage = cv2.threshold(smsROI, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
Now, here comes the minColor part. If you process a dark theme screenshot and threshold it, you get white text on a black background. If you process a light theme screenshot, you get black text on a white background. We want to always produce the same output no matter the input: white text on a black background. Let's check the min color: if it equals 0 (black), you received a dark theme screenshot and you don't need to invert the image. Otherwise, invert the image:
# Process "Dark Theme / Light Theme":
if minColor != 0:
    # Invert image if it is not already inverted:
    binaryImage = 255 - binaryImage
cv2.imshow("binaryImage", binaryImage)
cv2.waitKey(0)
For our first test image, the result is:
Notice the little bits of small noise. Let's apply an area filter (function defined at the end of the post) to get rid of pixels below a certain area threshold:
# Run a minimum area filter:
minArea = 10
binaryImage = areaFilter(minArea, binaryImage)
This is the filtered image:
Very nice. Lastly, I write this image and use pyocr to get the text as a string:
cv2.imwrite(path + "ocrText.png", binaryImage)
txt = tool.image_to_string(
    Image.open(path + "ocrText.png"),
    lang=lang,
    builder=pyocr.builders.TextBuilder()
)
print("Image text is: "+txt)
Which results in:
Image text is: 301248 is your Amazon
verification code
If you test the second image you get the same exact result. This is the definition and implementation of the areaFilter function:
def areaFilter(minArea, inputImage):
    # Perform an area filter on the binary blobs:
    componentsNumber, labeledImage, componentStats, componentCentroids = \
        cv2.connectedComponentsWithStats(inputImage, connectivity=4)
    # Get the indices/labels of the remaining components based on the area stat
    # (skip the background component at index 0)
    remainingComponentLabels = [i for i in range(1, componentsNumber) if componentStats[i][4] >= minArea]
    # Filter the labeled pixels based on the remaining labels,
    # assign pixel intensity to 255 (uint8) for the remaining pixels
    filteredImage = np.where(np.isin(labeledImage, remainingComponentLabels) == True, 255, 0).astype('uint8')
    return filteredImage

Pytesseract doesn't recognize decimal points

I'm trying to read the text in this image, which also contains decimal points and decimal numbers,
in this way:
img = cv2.imread(path_to_image)
print(pytesseract.image_to_string(img))
and what I get is:
73-82
Primo: 50 —
I've tried to specify also the italian language but the result is pretty similar:
73-82 _
Primo: 50
Searching through other questions on Stack Overflow, I found that the reading of decimal numbers can be improved by using a whitelist, in this case tessedit_char_whitelist='0123456789.', but I also want to read the words in the image. Any idea on how to improve the reading of decimal numbers?
I would suggest passing tesseract every row of text as a separate image.
For some reason it seems to solve the decimal point issue...
Convert image from grayscale to black and white using cv2.threshold.
Use cv2.dilate morphological operation with very long horizontal kernel (merge blocks across horizontal direction).
Use find contours - each merged row is going to be in a separate contour.
Find bounding boxes of the contours.
Sort the bounding boxes according to the y coordinate.
Iterate bounding boxes, and pass slices to pytesseract.
Here is the code:
import numpy as np
import cv2
import pytesseract
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe' # I am using Windows
path_to_image = 'image.png'
img = cv2.imread(path_to_image, cv2.IMREAD_GRAYSCALE) # Read input image as Grayscale
# Convert to binary using automatic threshold (use cv2.THRESH_OTSU)
ret, thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# Dilate thresh for uniting text areas into blocks of rows.
dilated_thresh = cv2.dilate(thresh, np.ones((3,100)))
# Find contours on dilated_thresh
cnts = cv2.findContours(dilated_thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2] # Use index [-2] to be compatible to OpenCV 3 and 4
# Build a list of bounding boxes
bounding_boxes = [cv2.boundingRect(c) for c in cnts]
# Sort bounding boxes from "top to bottom"
bounding_boxes = sorted(bounding_boxes, key=lambda b: b[1])
# Iterate bounding boxes
for b in bounding_boxes:
    x, y, w, h = b
    if (h > 10) and (w > 10):
        # Crop a slice, and inverse black and white (tesseract prefers black text).
        slice = 255 - thresh[max(y-10, 0):min(y+h+10, thresh.shape[0]), max(x-10, 0):min(x+w+10, thresh.shape[1])]
        text = pytesseract.image_to_string(slice, config="-c tessedit"
                                           "_char_whitelist=abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890-:."
                                           " --psm 3"
                                           " ")
        print(text)
I know it's not the most general solution, but it manages to solve the sample you have posted.
Please treat the answer as a conceptual solution - finding a robust solution might be very challenging.
Results:
Thresholded image after dilation:
First slice:
Second slice:
Third slice:
Output text:
7.3-8.2
Primo:50
You can easily recognize the text by down-sampling the image.
If you down-sample by a factor of 0.5, the result will be:
Now if you read:
7.3 - 8.2
Primo: 50
I got the result using pytesseract version 0.3.7 (current at the time of writing)
Code:
# Load the libraries
import cv2
import pytesseract
# Load the image
img = cv2.imread("s9edQ.png")
# Convert to the gray-scale
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Down-sample
gry = cv2.resize(gry, (0, 0), fx=0.5, fy=0.5)
# OCR
txt = pytesseract.image_to_string(gry)
print(txt)
Explanation:
The input image contains a small artifact; you can see it on the right part of the image. On the other hand, the down-sampled image is well suited for OCR. You need to use preprocessing methods when the data in the image is not visible or is corrupted. Please read the following (a short page-segmentation-mode snippet is sketched after these links):
Image processing
Page-segmentation-mode
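As a small illustration of the page-segmentation-mode setting (a sketch only; mode 6, "assume a single uniform block of text", is just one possible choice, and gry is the down-sampled grayscale image from the code above):
# Pass a page segmentation mode to Tesseract via the config argument:
txt = pytesseract.image_to_string(gry, config="--psm 6")
print(txt)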

Local Contrast Enhancement for Digit Recognition with cv2 / pytesseract

I want to use pytesseract to read digits from images. The images look as follows:
The digits are dotted, and in order to be able to use pytesseract, I need black connected digits on a white background. To do so, I thought about using erode and dilate as preprocessing techniques. As you can see, the images are similar, yet quite different in certain aspects. For example, the dots in the first image are darker than the background, while the dots in the second are whiter. That means that in the first image I can use erode to get black connected lines, and in the second image I can use dilate to get white connected lines and then invert the colors. This leads to the following results:
Using an appropriate threshold, the first image can easily be read with pytesseract. The second image, however, is more tricky. The problem is that, for example, parts of the "4" are darker than the background around the "3". So a simple threshold is not going to work. I need something like a local threshold or local contrast enhancement. Does anybody have an idea here?
Edit:
Otsu, mean threshold and Gaussian threshold lead to the following results:
Your images are pretty low-res, but you can try a method called gain division. The idea is to build a model of the background and then weight each input pixel by that model. The output gain should be relatively constant across most of the image.
After gain division is performed, you can try to improve the image by applying an area filter and morphology. I only tried your first image, because it is the "least worst".
These are the steps to get the gain-divided image:
Apply a soft median blur filter to get rid of high frequency noise.
Get the model of the background via local maximum. Apply a very strong close operation, with a big structuring element (I’m using a rectangular kernel of size 15).
Perform the gain division: divide 255 by each local maximum pixel and weight (multiply) each input pixel by this value.
You should get a nice image where the background illumination is pretty much normalized; threshold this image to get a binary mask of the characters.
Now, you can improve the quality of the image with the following, additional steps:
Threshold via Otsu, but add a little bit of bias. (This, unfortunately, is a manual step depending on the input).
Apply an area filter to filter out the smaller blobs of noise.
Let's see the code:
import numpy as np
import cv2
# image path
path = "C:/opencvImages/"
fileName = "iA904.png"
# Reading an image in default mode:
inputImage = cv2.imread(path+fileName)
# Remove small noise via median:
filterSize = 5
imageMedian = cv2.medianBlur(inputImage, filterSize)
# Get local maximum:
kernelSize = 15
maxKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
localMax = cv2.morphologyEx(imageMedian, cv2.MORPH_CLOSE, maxKernel, None, None, 1, cv2.BORDER_REFLECT101)
# Perform gain division
gainDivision = np.where(localMax == 0, 0, (inputImage/localMax))
# Clip the values to [0,255]
gainDivision = np.clip((255 * gainDivision), 0, 255)
# Convert the mat type from float to uint8:
gainDivision = gainDivision.astype("uint8")
# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(gainDivision, cv2.COLOR_BGR2GRAY)
This is what gain division gets you:
Note that the lighting is more balanced. Now, let's apply a little bit of contrast enhancement:
# Contrast Enhancement:
grayscaleImage = np.uint8(cv2.normalize(grayscaleImage, grayscaleImage, 0, 255, cv2.NORM_MINMAX))
You get this, which creates a little bit more contrast between the foreground and the background:
Now, let's try to threshold this image to get a nice, binary mask. As I suggested, try Otsu's thresholding but add (or subtract) a little bit of bias to the result. This step, as mentioned, is dependent on the quality of your input:
# Threshold via Otsu + bias adjustment:
threshValue, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
threshValue = 0.9 * threshValue
_, binaryImage = cv2.threshold(grayscaleImage, threshValue, 255, cv2.THRESH_BINARY)
You end up with this binary mask:
Invert this and filter out the small blobs. I set an area threshold value of 10 pixels:
# Invert image:
binaryImage = 255 - binaryImage
# Perform an area filter on the binary blobs:
componentsNumber, labeledImage, componentStats, componentCentroids = \
cv2.connectedComponentsWithStats(binaryImage, connectivity=4)
# Set the minimum pixels for the area filter:
minArea = 10
# Get the indices/labels of the remaining components based on the area stat
# (skip the background component at index 0)
remainingComponentLabels = [i for i in range(1, componentsNumber) if componentStats[i][4] >= minArea]
# Filter the labeled pixels based on the remaining labels,
# assign pixel intensity to 255 (uint8) for the remaining pixels
filteredImage = np.where(np.isin(labeledImage, remainingComponentLabels) == True, 255, 0).astype("uint8")
And this is the final binary mask:
If you plan on sending this image to an OCR, you might want to apply some morphology first. Maybe a closing to try and join the dots that make up the characters. Also be sure to train your OCR classifier with a font that is close to what you are actually trying to recognize. This is the (inverted) mask after a size 3 rectangular closing operation with 3 iterations:
Edit:
To get the last image, process the filtered output as follows:
# Set kernel (structuring element) size:
kernelSize = 3
# Set operation iterations:
opIterations = 3
# Get the structuring element:
maxKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
# Perform closing:
closingImage = cv2.morphologyEx(filteredImage, cv2.MORPH_CLOSE, maxKernel, None, None, opIterations, cv2.BORDER_REFLECT101)
# Invert image to obtain black numbers on white background:
closingImage = 255 - closingImage

Bulk removing unwanted parts of images

I have downloaded a number of images (1000) from a website but they each have a black and white ruler running along 1 or 2 edges and some have these catalogue number tickets. I need these elements removed, the ruler at the very least.
Example images of coins:
The images all have the ruler in slightly different places, so I can't just perform the same crop on them.
So I tried to remove the black and replace it with white using this code
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
im = Image.open('image-0.jpg')
im = im.convert('RGBA')
data = np.array(im) # "data" is a height x width x 4 numpy array
red, green, blue, alpha = data.T # Temporarily unpack the bands for readability
# Replace black with white
black_areas = (red < 150) & (blue < 150) & (green < 150)
data[..., :-1][black_areas.T] = (255, 255, 255) # Transpose back needed
im2 = Image.fromarray(data)
im2.show()
but it pretty much just removed half the coin as well:
I was having a read of some posts on OpenCV, but thought I'd see if there was a simpler way I'd missed first.
So I have taken a look at your problem and found a solution for the two images you provided. I hope it works for your other images as well, but it is always hard to tell, as results can differ on an individual basis. This solution uses OpenCV for preprocessing and contour detection to get the 2nd and 3rd largest elements in your picture (the largest is the bounding box around the edges), which should be your coins. Then I create a box around those two items and add some padding before I crop to size.
So we start off with preprocessing:
import numpy as np
import cv2
img = cv2.imread(r'<PATH TO YOUR IMAGE>')
img = cv2.resize(img, None, fx=3, fy=3)
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(imgray, (5, 5), 0)
ret, thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
Still rather basic: we make the image bigger so it is easier to detect contours, then we convert it to grayscale, blur it, and apply thresholding so that all grey values become either white or black. This gives us the following image:
We now do contour detection, compute the area of each contour, and sort them from biggest to smallest. Then we drop the biggest one, as it is the box around the whole image, and take the 2nd and 3rd biggest. From those we get the x, y, w, h values we are interested in.
contours, hierarchy = cv2.findContours(
    thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
areas = []
for cnt in contours:
    area = cv2.contourArea(cnt)
    areas.append((area, cnt))
areas.sort(key=lambda x: x[0], reverse=True)
areas.pop(0)
x, y, w, h = cv2.boundingRect(areas[0][1])
x2, y2, w2, h2 = cv2.boundingRect(areas[1][1])
If we draw a rectangle around those contours:
Now we take those coordinates and create a box around both of them. This might need some minor adjusting, as I just quickly took the bigger width of the two rather than the corresponding one for the right coin, but since I added extra padding it should be fine in most cases. Finally, crop to size:
pad = 15
img = img[(min(y, y2) - pad) : (max(y, y2) + max(h, h2) + pad),
          (min(x, x2) - pad) : (max(x, x2) + max(w, w2) + pad)]
I hope this helps you understand how you could achieve what you want; I tried it on both your images and it worked well for them. It might need some adjustments, and depending on how your other images look, the simple approach of taking the two biggest objects (apart from the image bounding box) might have to be turned into something more sophisticated that detects the circular shapes, as sketched below. Alternatively, you could try to detect the rulers and crop from their position inwards. You will have to decide after trying this on more example images in your dataset.
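As a rough sketch of that circle-detection idea, cv2.HoughCircles could be run on the blurred grayscale image from the preprocessing step above; the parameter values here are assumptions and would need tuning for your images:
# Detect circular coin candidates on the blurred grayscale image:
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                           param1=100, param2=60, minRadius=100, maxRadius=800)
if circles is not None:
    for cx, cy, r in np.round(circles[0]).astype(int):
        # Draw each detected circle for inspection:
        cv2.circle(img, (int(cx), int(cy)), int(r), (0, 255, 0), 3)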
If you're looking for a robust solution, you should try something like Max Kaha's response, since it'll provide you with greater fine tuning.
Since the rulers tend to be left with just a little bit of text after your "black to white" filter, a quick solution is to use erosion followed by a dilation to create a mask for your images, and then apply the mask to the original image.
Pillow offers that with the ImageFilter class. Here's your code with a few modifications that'll achieve that:
from PIL import Image, ImageFilter
import numpy as np
import matplotlib.pyplot as plt
WHITE = 255, 255, 255
input_image = Image.open('image.png')
input_image = input_image.convert('RGBA')
input_data = np.array(input_image) # "data" is a height x width x 4 numpy array
red, green, blue, alpha = input_data.T # Temporarily unpack the bands for readability
# Replace black with white
thresh = 30
black_areas = (red < thresh) & (blue < thresh) & (green < thresh)
input_data[..., :-1][black_areas.T] = WHITE # Transpose back needed
erosion_factor = 5
# dilation is bigger to avoid cropping the objects of interest
dilation_factor = 11
erosion_filter = ImageFilter.MaxFilter(erosion_factor)
dilation_filter = ImageFilter.MinFilter(dilation_factor)
eroded = Image.fromarray(input_data).filter(erosion_filter)
dilated = eroded.filter(dilation_filter)
mask_threshold = 220
# the mask is black on regions to be hidden
mask = dilated.convert('L').point(lambda x: 255 if x < mask_threshold else 0)
# create base image
output_image = Image.new('RGBA', input_image.size, WHITE)
# paste only the desired regions
output_image.paste(input_image, mask=mask)
output_image.show()
You should also play around with the black to white threshold and the erosion/dilation factors to try and find the best fit for most of your images.

Python OpenCV detect a white object from a binary image and crop it

My goal is to detect a piece of white paper in this binary image, then crop that white paper and make a new binary sub-image containing just the paper.
Now my Python code with OpenCV can find this white paper. For the first step, I created a mask for finding this white paper:
As you can see, the small white noise and the small pieces have been removed. The problem then becomes: how can I crop this white paper object from the binary image to make a new binary sub-image?
My current code is:
import cv2
import numpy as np
QR = cv2.imread('IMG_0352.TIF', 0)
mask = np.zeros(QR.shape,np.uint8)
contours, hierarchy = cv2.findContours(QR,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    if cv2.contourArea(cnt)>1000000:
        cv2.drawContours(mask,[cnt],0,255,-1)
Looking at the cnt variable, there are four elements, but they don't make sense to me.
I used code to fit a box:
x,y,w,h = cv2.boundingRect(cnt)
cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),2)
The box information doesn't seem right.
Thanks for any suggestions.
Follow up:
I have figured out this problem, which is very easy. The code is attached:
import cv2
import numpy as np
QR_orig = cv2.imread('CamR_IMG_0352.TIF', 0)
QR = cv2.imread('IMG_0352.TIF', 0) # read the QR code binary image as grayscale image to make sure only one layer
mask = np.zeros(QR.shape,np.uint8) # mask image the final image without small pieces
# using findContours func to find the non-zero pieces
contours, hierarchy = cv2.findContours(QR,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
# draw the white paper and eliminate the small pieces (less than 1000000 px). This px count is the same as the QR code detection
for cnt in contours:
    if cv2.contourArea(cnt)>1000000:
        cv2.drawContours(mask,[cnt],0,255,-1) # the [] around cnt and 3rd argument 0 mean only the particular contour is drawn
# Build a ROI to crop the QR
x,y,w,h = cv2.boundingRect(cnt)
roi=mask[y:y+h,x:x+w]
# crop the original QR based on the ROI
QR_crop = QR_orig[y:y+h,x:x+w]
# use cropped mask image (roi) to get rid of all small pieces
QR_final = QR_crop * (roi/255)
cv2.imwrite('QR_final.TIF', QR_final)
The contour object is an arbitrary vector (list) of points that enclose the detected object.
An easy brain dead way of accomplishing this is to walk through all the pixels after your thresholding and simply copy the white ones.
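A minimal sketch of that idea, assuming QR is the thresholded binary image (this is just the vectorized equivalent of walking over the pixels and copying the white ones):
# Copy only the white pixels into a fresh image; everything else stays black:
paper_only = np.where(QR == 255, 255, 0).astype(np.uint8)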
I believe findContours() alters the input image as a side effect (in older OpenCV versions), so check QR afterwards or pass it a copy.
However, you need to (usually) get the biggest contour.
Example:
# Choose largest contour
best = 0
maxsize = 0
count = 0
for cnt in contours:
    if cv2.contourArea(cnt) > maxsize:
        maxsize = cv2.contourArea(cnt)
        best = count
    count = count + 1
x, y, w, h = cv2.boundingRect(contours[best])
cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
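As a side note, an equivalent and more compact way to pick the largest contour is a sketch like this:
# Select the contour with the largest area directly:
largest = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(largest)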
