handwritten circular annotation removal from scanned image - python

I have images of printed text that contain handwritten circular annotations, and I want to remove these annotations from the input image. I have tried some of the thresholding methods discussed in many threads on Stack Overflow, but my results are not as I expected.
The method that I am using works really well when the annotation is made with a blue pen, but when the annotation is made with a black pen, the thresholding-and-erosion method does not produce the expected output.
Here is a sample image of my results on blue annotations with the thresholding and erosion method
Image (input on the left and output on the right)
Code
import cv2
import numpy as np
from google.colab.patches import cv2_imshow

img = cv2.imread("/content/Scan_0101.jpg")
cv2_imshow(img)

# Work on the blue channel: blue ink appears bright there, so it thresholds away easily
wimg = img[:, :, 0]
ret, thresh = cv2.threshold(wimg, 120, 255, cv2.THRESH_BINARY)
cv2_imshow(thresh)

# Erode the binary image and OR it with the threshold result to build a mask
kernel = np.ones((3, 3), np.uint8)
erosion = cv2.erode(thresh, kernel, iterations=1)
mask = cv2.bitwise_or(erosion, thresh)
# cv2_imshow(erosion)

# Broadcast the single-channel mask to three channels, then OR it with
# the input image so the masked (annotation) pixels become white
white = np.ones(img.shape, np.uint8) * 255
white[:, :, 0] = mask
white[:, :, 1] = mask
white[:, :, 2] = mask
result = cv2.bitwise_or(img, white)

# Final cleanup erosion on the result
erosion = cv2.erode(result, kernel, iterations=1)
Here is a sample image of my results on black annotations with the thresholding and erosion method
Image (input on the left and output on the right)
Is there any suggested approach for this problem, or any way this code can be modified to produce the required results?

You must understand that because the gray values of the printed text and those of the handwriting are in the same range, no thresholding method in the world can work.
In fact, no algorithm at all can succeed without "hints" on what characters look like or don't look like. Even the stroke thickness is not distinctive enough.
The only possible indication is that the circles are made of a smooth and long stroke. And removing them where they cross the characters is just impossible.
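As a quick check of that claim, here is a minimal sketch (the filename is hypothetical): take the intensity histogram of the dark pixels, and you will typically find a single mixed population of printed and handwritten ink, with no valley in which to place a threshold.

import cv2
import numpy as np

# Hypothetical filename for the grayscale scan of the annotated page
gray = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)

# Histogram of the dark (ink) pixels only; printed text and black-pen
# strokes pile up in the same bins
ink = gray[gray < 128]
hist, _ = np.histogram(ink, bins=32, range=(0, 128))
print(hist)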

Some parts of the handwritten circles (in the line-spacing regions) may be extractable, under the assumption that "many letters align on the same line". In your image, the upper and lower parts of the circle would be extracted, I think.
Then, if you track the black stroke starting from the extracted part (assuming smooth curvature), it may be possible to detect the connected handwritten circle.
However, in practice, I think such a process will encounter many difficulties, especially the fact that characters will be cut off where the curve is removed.
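A minimal sketch of the line-spacing extraction idea (the filename and the 5% gap threshold are assumptions to tune per scan): binarize the page, use a horizontal projection profile to find the rows between text lines, and keep only the ink that falls in those rows.

import cv2
import numpy as np

gray = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
_, ink = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Horizontal projection: rows with very little ink are line-spacing regions
row_ink = ink.sum(axis=1)
gap_rows = row_ink < 0.05 * row_ink.max()  # 5% is a guess; tune per scan

# Keep only the ink inside the gaps: candidate pieces of the handwritten circle
candidates = ink.copy()
candidates[~gap_rows, :] = 0
cv2.imwrite("circle_parts.png", candidates)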

Related

Removing transparent watermark from an image - python

I am trying to remove a transparent watermark from an image.
Here is my sample image:
I would like to remove the text "Watermark" from the image. As you can see, the text is transparent, so I would like to replace it with the original background.
Something like this would be my desired output:
I tried some examples (I am currently using cv2; if other libraries can solve the problem, please recommend them too), but none of them came close to succeeding. I know the way to go would be to have a mask (like in this post), but those examples all start from already-masked images, and I don't have one.
Here is what I tried in order to create a mask: I turned the saturation down to black and white, created an image "imagemask.jpg", and then tried going through the pixels with a for loop:
mask = cv2.imread('imagemask.jpg')
new = []
rows, cols, _ = mask.shape
for i in range(rows):
    new.append([])
    # print(i)
    for j in range(cols):
        k = img[i, j]
        # print(k)
        if all(x in range(110, 130) for x in k):
            new[-1].append((255, 255, 255))
        else:
            new[-1].append((0, 0, 0))
cv2.imwrite('finalmask.jpg', np.array(new))
After that I wanted to use the mask, but I realized "finalmask.jpg" is a complete mess... so I didn't try using it.
Is this actually possible? I have been trying for around 3 hours but have had no luck...
This is not trivial, my friend. To add insult to injury, your image is very low-res, compressed and has a nasty glare - that won't help processing at all. Please, look at your input and set your expectations accordingly. With that said, let's try to get the best result with what we have. These are the steps I propose:
Try to segment the watermark text from the image
Filter the segmentation mask and try to get a binary mask as clean as possible
Use the text mask to in-paint the offending area using the input image as reference
Now, the tricky part, as you already saw, is segmenting the text. After trying out some techniques and color spaces, I found that the CMYK color space - particularly the K channel - offers promising results. The text is reasonably clear and we can try an Adaptive Thresholding on this, let's take a look:
# Imports
import cv2
import numpy as np

# Read image
imagePath = "D://opencvImages//"
img = cv2.imread(imagePath + "0f5zZm.jpg")

# Store a deep copy for the inpaint operation:
originalImg = img.copy()

# Convert to float and divide by 255:
imgFloat = img.astype(np.float64) / 255.

# Calculate channel K:
kChannel = 1 - np.max(imgFloat, axis=2)
OpenCV does not offer BGR to CMYK conversion directly, so I had to compute the K channel manually using the conversion formula. It is very straightforward. The K (or Key) channel represents the pixels of lowest intensity (black) with the color white. Meaning that the text, which is almost white, will be rendered in black... This is the K channel of the input:
You see how the darker pixels on the input are almost white here? That's nice; it seems to get a clear separation between the text and everything else. It's a shame that we have some big nasty glare on the right side. Anyway, the conversion involves float operations, so we've gotta be careful with data types. Maybe we can improve this image with a little brightness/contrast adjustment. Just a little bit, I'm just trying to separate the text a bit more from that nasty glare:
# Apply a contrast/brightness adjustment on channel K:
alpha = 0
beta = 1.2
adjustedK = cv2.normalize(kChannel, None, alpha, beta, cv2.NORM_MINMAX, cv2.CV_32F)

# Convert back to uint8, clipping so values above 1.0 don't wrap around:
adjustedK = np.clip(255 * adjustedK, 0, 255).astype(np.uint8)
This is the adjusted image:
There's a little bit more separation between the text and the glare, it seems. Alright, let's apply an Adaptive Thresholding on this bad boy to get an initial segmentation mask:
# Adaptive Thresholding on adjusted Channel K:
windowSize = 21
windowConstant = 11
binaryImg = cv2.adaptiveThreshold(adjustedK, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, windowSize, windowConstant)
You see I'm using a not-so-big windowSize here for the thresholding? Feel free to tune these parameters if you like. This is the binary image I get:
Yeah, there's a lot of noise. Here's what I propose to get a cleaner mask: there are some obvious blobs that are bigger than the text. Likewise, there are other blobs that are smaller than the text. Let's locate the big blobs and the small blobs and subtract them. The resulting image should contain the text, if we set our parameters correctly. Let's see:
# Get the biggest blobs on the image:
minArea = 180
bigBlobs = areaFilter(minArea, binaryImg)
# Filter the smallest blobs on the image:
minArea = 20
smallBlobs = areaFilter(minArea, binaryImg)
# Let's try to isolate the text:
textMask = smallBlobs - bigBlobs
cv2.imshow("Text Mask", textMask)
cv2.waitKey(0)
Here I'm using a helper function called areaFilter. This function returns all the blobs of an image that are above a minimum area threshold. I'll post the function at the end of the answer. In the meantime, check out these cool images:
Big blobs:
Filtered small blobs:
The difference between them:
Sadly, it seems that some portions of the characters didn't survive the filtering operations. That's because the intersection of the glare and the text is too much for the algorithm to get a clear separation. Something that could benefit the result of the in-painting is a subtle blur on this mask, to get rid of that compression aliasing. Let's apply some Gaussian blur to smooth the mask a little bit:
# Blur the mask a little bit to get a
# smoother inpainting result:
kernelSize = (3, 3)
textMask = cv2.GaussianBlur(textMask, kernelSize, 0)
The kernel is not that big, I just want a subtle effect. This is the result:
Finally, let's apply the in-painting:
# Apply the inpaint method:
inpaintRadius = 10
inpaintMethod = cv2.INPAINT_TELEA
result = cv2.inpaint(originalImg, textMask, inpaintRadius, inpaintMethod)
cv2.imshow("Inpaint Result", result)
cv2.waitKey(0)
This is the final result:
Well, it's not that bad, considering the input image. You can try to further improve the result by adjusting some values, but the reality of this life, my dude, is that the input image is not that great to begin with. Here's the areaFilter function:
def areaFilter(minArea, inputImage):
    # Perform an area filter on the binary blobs:
    componentsNumber, labeledImage, componentStats, componentCentroids = \
        cv2.connectedComponentsWithStats(inputImage, connectivity=4)
    # Get the indices/labels of the remaining components based on the area stat
    # (skip the background component at index 0)
    remainingComponentLabels = [i for i in range(1, componentsNumber) if componentStats[i][4] >= minArea]
    # Filter the labeled pixels based on the remaining labels,
    # assign pixel intensity to 255 (uint8) for the remaining pixels
    filteredImage = np.where(np.isin(labeledImage, remainingComponentLabels), 255, 0).astype('uint8')
    return filteredImage

Detecting a horizontal line in an image

Problem:
I'm working with a dataset that contains many images that look something like this:
Now I need all these images to be oriented horizontally or vertically, such that the color palette is either at the bottom or the right side of the image. This can be done by simply rotating the image, but the tricky part is figuring out which images should be rotated and which shouldn't.
What I have tried:
I thought that the best way to do this is by detecting the white line that separates the color palette from the image. I decided to rotate all images that have the palette at the bottom so that they have it at the right side.
# yes I am mixing between PIL and opencv (I like the PIL resizing more)
# resize image to be 128 by 128 pixels
img = img.resize((128, 128), PIL.Image.BILINEAR)
img = np.array(img)

# perform edge detection, not sure if these are the best parameters for Canny
edges = cv2.Canny(img, 30, 50, apertureSize=3)

has_line = False
# take a numpy slice of the area where the white line usually is
# (not always exactly in the same spot, which probably has to do with the way I resize my image)
for line in edges[75:80]:
    # check if the most frequent value in the row is white (255)
    counts = np.bincount(line)
    if np.argmax(counts) == 255:
        has_line = True

# rotate if we found such a line
if has_line:
    img = np.rot90(img)
An example of it working correctly:
An example of it working incorrectly:
This works on maybe 98% of images, but there are some cases where it rotates images that shouldn't be rotated or doesn't rotate images that should be. Maybe there is an easier way to do this, or maybe a more elaborate way that is more consistent? I could do it manually, but I'm dealing with a lot of images. Thanks for any help and/or comments.
Here are some images where my code fails for testing purposes:
You can start by thresholding your image with a very high threshold, like 250, to take advantage of the property that your lines are white. This will make the entire background black. Now create a special horizontal kernel with a shape like (1, 15) and erode your image with it. This will remove the vertical lines from the image, and only the horizontal lines will be left.
import cv2
import numpy as np
img = cv2.imread('horizontal2.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 250, 255, cv2.THRESH_BINARY)
kernel_hor = np.ones((1, 15), dtype=np.uint8)
erode = cv2.erode(thresh, kernel_hor)
As stated in the question, the color palettes can only be on the right or the bottom. So we can test how many contours the right region has. For this, just divide the image in half and take the right part. Before finding contours, dilate the result with a normal (3, 3) kernel to fill in any gaps. Using cv2.RETR_EXTERNAL, find the contours and count how many we have found; if the count is greater than a certain number, the image is right side up and there is no need to rotate.
right = erode[:, erode.shape[1]//2:]
kernel = np.ones((3, 3), dtype=np.uint8)
right = cv2.dilate(right, kernel)
cnts, _ = cv2.findContours(right, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if len(cnts) > 3:
    print('No need to rotate')
else:
    print('rotate')
    # ADD YOUR ROTATE CODE HERE
P.S. I tested all four images you provided and it worked well. If it does not work for any image, let me know.
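For the rotate placeholder in the else branch, a minimal option could be the following (np.rot90, as used in the question, is equivalent; a 90-degree counter-clockwise turn moves the palette from the bottom edge to the right edge):

# Rotate 90 degrees counter-clockwise so the palette moves
# from the bottom of the image to its right side
img = cv2.rotate(img, cv2.ROTATE_90_COUNTERCLOCKWISE)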

removing black dots from image using OpenCV and Python

I am trying to compare two images and need to pre-process/clean one of them, a scanned copy, before comparing it with a digital copy.
Scanned copy /
Digital copy
I ran this code on the scanned image and got an output with numerous black dots. I am not sure how to clean these up so that I can compare it with the digital copy.
# brighten the image slightly
img = cv2.multiply(img, 1.2)

# a 1x1 kernel makes this erosion effectively a no-op
kernel = np.ones((1, 1), np.uint8)
img = cv2.erode(img, kernel, iterations=1)

# build an unsharp-mask style sharpening kernel:
# 2x the centre pixel minus a 9x9 box average
kernel1 = np.zeros((9, 9), np.float32)
kernel1[4, 4] = 2.0
boxFilter = np.ones((9, 9), np.float32) / 81.0
kernel1 = kernel1 - boxFilter
img = cv2.filter2D(img, -1, kernel1)
below is the output I got
Try applying a filter in the frequency domain. After an FFT, your image will show regular bright dots, because the noise in your image is periodic. If you remove these dots and apply the inverse FFT, the dots will be removed from your image. Please check these examples: example1, example2 and example3.
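A minimal sketch of that pipeline with NumPy's FFT (the filename, the notch positions and the radius are placeholders; in practice you read the bright-spot coordinates off the magnitude spectrum of your own image):

import cv2
import numpy as np

gray = cv2.imread("scanned.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical filename

# Forward FFT with the zero frequency shifted to the centre
f = np.fft.fftshift(np.fft.fft2(gray))

# Zero out ("notch") the bright noise spots; these coordinates and the
# radius of 5 are placeholders, not values measured for a real image
rows, cols = gray.shape
yy, xx = np.ogrid[:rows, :cols]
for (r, c) in [(rows // 2 - 40, cols // 2), (rows // 2 + 40, cols // 2)]:
    f[(yy - r) ** 2 + (xx - c) ** 2 <= 5 ** 2] = 0

# Inverse FFT back to the spatial domain
restored = np.abs(np.fft.ifft2(np.fft.ifftshift(f)))
cv2.imwrite("restored.jpg", np.clip(restored, 0, 255).astype(np.uint8))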
Yes, @Andrey's method is the right way of solving the problem.
I have tried removing the high-frequency dots in the frequency domain, and here is an example of how it looks when done correctly.
Original Image in grayscale.
After running FFT on the image
Removing all the high-frequency noise. Of course, this is done manually by drawing a black circle over each noise source. You can design your program to detect local bright spots and remove them cleanly.
Here is the final result after the inverse FFT of the above frequency image. It is somewhat degraded due to the crude way I removed the noise, but it should give you a rough idea of how it can be done.
Only the area around the dots will be affected by this process, leaving all other pattern in their original form.
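A sketch of the automated variant suggested above (the z-score threshold and the protected radius around the DC component are assumptions): find unusually bright off-centre spots in the log-magnitude spectrum and zero them out before the inverse transform.

import numpy as np

def suppress_spectrum_peaks(f_shifted, keep_radius=30, z_thresh=4.0):
    # Bright spots = pixels far above the typical log-magnitude;
    # keep_radius and z_thresh are guesses to tune per image
    mag = np.log1p(np.abs(f_shifted))
    peaks = mag > mag.mean() + z_thresh * mag.std()
    # Never touch the low-frequency centre, which holds the image content
    rows, cols = mag.shape
    yy, xx = np.ogrid[:rows, :cols]
    centre = (yy - rows // 2) ** 2 + (xx - cols // 2) ** 2 <= keep_radius ** 2
    f_shifted[peaks & ~centre] = 0
    return f_shifted

Dropped in between the forward FFT and the inverse transform, this replaces the manual circle-drawing step.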

OpenCV (Python): Construct Rectangle from thresholded image

The image below shows an aerial photo of a house block (re-oriented with the longest side vertical), and the same image subjected to Adaptive Thresholding and Difference of Gaussians.
Images: Base; Adaptive Thresholding; Difference of Gaussians
The roof-print of the house is obvious (to the human eye) on the AdThresh image: it's a matter of connecting some obvious dots. In the sample image, finding the blue-bounded box below -
Image with desired rectangle marked in blue
I've had a crack at implementing HoughLinesP() and findContours(), but I get nothing sensible (probably because there's some nuance that I'm missing). The Python script-chunk that fails to find anything remotely like the blue box is as follows:
import cv2
import numpy as np
from matplotlib import pyplot as plt

# read in full (RGBA) image - to get alpha layer to use as mask
img = cv2.imread('rotated_12.png', cv2.IMREAD_UNCHANGED)
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu's thresholding after Gaussian filtering
blur_base = cv2.GaussianBlur(grey, (9, 9), 0)
blur_diff = cv2.GaussianBlur(grey, (15, 15), 0)
_, thresh1 = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
thresh = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)
DoG_01 = blur_base - blur_diff
edges_blur = cv2.Canny(blur_base, 70, 210)

# Find Contours
cnts, h = cv2.findContours(grey, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:4]
for c in cnts:
    approx = cv2.approxPolyDP(c, 0.1 * cv2.arcLength(c, True), True)
    cv2.drawContours(grey, [approx], -1, (0, 255, 0), 1)

# Hough Lines
minLineLength = 30
maxLineGap = 5
lines = cv2.HoughLinesP(edges_blur, 1, np.pi / 180, 20, minLineLength=minLineLength, maxLineGap=maxLineGap)
print("lines found:", len(lines))
for line in lines:
    cv2.line(grey, (line[0][0], line[0][1]), (line[0][2], line[0][3]), (255, 0, 0), 2)

# plot all the images
images = [img, thresh, DoG_01]
titles = ['Base', 'AdThresh', 'DoG01']
for i in range(len(images)):
    plt.subplot(1, len(images), i + 1), plt.imshow(images[i], 'gray')
    plt.title(titles[i]), plt.xticks([]), plt.yticks([])
plt.savefig('a_edgedetect_12.png')
cv2.destroyAllWindows()
I am trying to set things up without excessive parameterisation. I'm wary of 'tailoring' an algorithm for just this one image, since this process will be run on hundreds of thousands of images (with roofs of different colours that may be less distinguishable from the background). That said, I would love to see a solution that 'hit' the blue-box target - that way I could at the very least work out what I've done wrong.
If anyone has a quick-and-dirty way to do this sort of thing, it would be awesome to get a Python code snippet to work with.
The 'base' image ->
Base Image
You should apply the following:
1. Contrast Limited Adaptive Histogram Equalization (CLAHE) and convert to grayscale.
2. Gaussian blur & morphological transforms (dilation, erosion, etc.) as mentioned by @bad_keypoints. This will help you get rid of the background noise. This is the trickiest step, as the results will depend on the order in which you apply them (first Gaussian blur and then morphological transforms, or vice versa) and the window sizes you choose.
3. Apply adaptive thresholding
4. Apply Canny's edge detection
5. Find the contour having four corner points
As said earlier, you need to tweak the input parameters of these functions and also validate them against other images; it might work for this case but not for others. You will need to fix the parameter values based on trial and error. A sketch of the whole pipeline follows.
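A minimal sketch of that pipeline (every window size, clip limit and threshold below is a placeholder to be tuned by the trial and error just described):

import cv2
import numpy as np

img = cv2.imread('rotated_12.png')  # filename from the question

# 1. CLAHE, applied here to the grayscale image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
gray = clahe.apply(gray)

# 2. Gaussian blur plus a morphological close to suppress background noise
gray = cv2.GaussianBlur(gray, (5, 5), 0)
kernel = np.ones((5, 5), np.uint8)
gray = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)

# 3. Adaptive thresholding
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 11, 2)

# 4. Canny edge detection
edges = cv2.Canny(thresh, 50, 150)

# 5. Largest contour whose polygon approximation has four corner points
cnts, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in sorted(cnts, key=cv2.contourArea, reverse=True):
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4:
        cv2.drawContours(img, [approx], -1, (255, 0, 0), 2)
        break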

processing different quality images with opencv

I am analyzing an image to find brown objects in it. I am thresholding the image and taking the darkest parts as brown cells. However, depending on the quality of the image, the objects sometimes cannot be identified. Is there any solution for this in OpenCV with Python, such as pre-processing the grayscale image and defining what brown means for that particular image?
The code that I am using to find brown dots is as follows:
import cv2
import pymorph
from scipy import ndimage

def countBrownDots(imageFile):
    im = cv2.imread(imageFile)
    # changing color space
    gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    gray = increaseBrighntness(gray)
    l1, thresh = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY_INV)
    thresh = ndimage.gaussian_filter(thresh, 16)
    l2, thresh = cv2.threshold(thresh, 70, 255, cv2.THRESH_BINARY)
    thresh = ndimage.gaussian_filter(thresh, 16)
    cv2.imshow("thresh22", thresh)
    rmax = pymorph.regmax(thresh)
    nim = pymorph.overlay(thresh, rmax)
    seeds, nr_nuclei = ndimage.label(rmax)
    cv2.imshow("original", im)
    cv2.imshow("browns", nim)
Here is an input image example:
Have a look at the image in HSV color space; here are the 3 planes stacked side by side:
Although people have suggested segmenting on the basis of hue, there is actually more discriminative information in the saturation and value planes. For this particular image you would probably get a better result with the gray scale (i.e. value plane) than with the hue. However that is no reason to discard the color information.
As proof of concept (using Gimp) for color segmentation, I just randomly picked a brown spot and changed all colors with a color distance of less than 60 from that spot to green to get this:
If you play with the parameters a bit, you will probably get what you want; then write the code. A sketch of the same test in Python follows.
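A minimal sketch of that color-distance experiment (the filename and seed coordinates are placeholders; the distance of 60 mirrors the Gimp test above):

import cv2
import numpy as np

im = cv2.imread("cells.jpg")  # hypothetical filename

# Sample a brown pixel; in practice pick the seed by hand or heuristically
seed = im[200, 300].astype(np.float32)  # placeholder coordinates

# Euclidean color distance of every pixel from the sampled brown
dist = np.linalg.norm(im.astype(np.float32) - seed, axis=2)
brown_mask = dist < 60

# Visualize: paint the matched pixels green, as in the proof of concept
out = im.copy()
out[brown_mask] = (0, 255, 0)
cv2.imwrite("brown_marked.jpg", out)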
I tried pre-processing with mean shift filtering to posterize the image, but that didn't really help.
