How to erase a dotted watermark from a set of similar images? - python

I want to automate the task of entering a set of images into a number-generating system, and before that I would like to remove a dotted watermark that is common across these images.
I tried Google, Tesseract, and ABBYY Reader, but I found that the part of the image that does not contain the watermark is recognized well, while the watermarked part is almost impossible to recognize.
I would like to remove the watermark using image processing. I have already tried a few sample codes in OpenCV, Python, MATLAB, etc., but none matched my requirements...
Here is sample Python code I tried, which adjusts contrast (alpha) and brightness (beta):
import cv2
import numpy as np

img = cv2.imread("d:\\Docs\\WFH_Work\\test.png")

# Linear transform: alpha scales contrast, beta shifts brightness
alpha = 2.5
beta = -250
new = alpha * img + beta
new = np.clip(new, 0, 255).astype(np.uint8)

cv2.imshow("my window", new)
cv2.waitKey(0)
Unfortunately, I don't know how many pixels the watermark in this image consists of. Is there a way to get rid of this watermark, or to make the digits darker and the watermark lighter via code?
Here is the watermarked image:

I am using dilation to remove the digits, then edge detection to find the watermark, which can then be removed by filling it with the dominant gray value inside the watermark region:
import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread('test.png', 0)

# Dilate then erode (a morphological closing) to remove the thin dark digits
kernel = np.ones((10,10), np.uint8)
dilation = cv2.dilate(img, kernel, iterations=1)
erosion = cv2.erode(dilation, kernel, iterations=1)
plt.imshow(erosion, cmap='gray')
plt.show()

# Edge detection to locate the watermark outline
gray = cv2.bilateralFilter(erosion, 11, 17, 17)
edged = cv2.Canny(gray, 30, 200)
plt.imshow(edged, cmap='gray')
plt.show()
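If the watermark dots come out consistently lighter than the printed digits, a simpler alternative is a plain global threshold that keeps only the darkest pixels. A minimal sketch, assuming grayscale input; the cutoff of 100 is a guess you would tune from a histogram of your images:
import cv2

img = cv2.imread('test.png', 0)  # load as grayscale

# Pixels darker than the cutoff (the digits) stay black; the paper and
# the lighter watermark dots are pushed to white. 100 is an assumption.
_, clean = cv2.threshold(img, 100, 255, cv2.THRESH_BINARY)

cv2.imwrite('clean.png', clean)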


How do I get OpenCV to detect this chess board I made?

I've tried using the findChessboardCorners function in OpenCV Python, but it's not working.
These are the images I'm trying to get it to detect:
board.jpg:
board2.jpg:
I want it to be able to detect where the squares are and whether a piece is on each one.
So far I've tried:
import cv2 as cv
import numpy as np

def rescaleFrame(frame, scale=0.75):
    # Rescale the image so it fits on screen
    width = int(frame.shape[1] * scale)
    height = int(frame.shape[0] * scale)
    dimensions = (width, height)
    return cv.resize(frame, dimensions, interpolation=cv.INTER_AREA)

img = cv.imread("board2.jpg")
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

ret, corners = cv.findChessboardCorners(gray, (8,8), None)
if ret:
    # Draw and display the corners
    img = cv.drawChessboardCorners(img, (8,8), corners, ret)
    img = rescaleFrame(img)
    cv.imshow("board", img)
    cv.waitKey(0)
I was expecting it to work like this tutorial shows.
The function findChessboardCorners is used to calibrate cameras using a black-and-white chessboard pattern. As far as I know, it is not designed to detect the corners of a chess board with chess pieces on it.
This site shows an example of calibration "chess boards," and this site shows how these calibration chess boards are used; that example uses the ROS library.
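For contrast, here is a minimal sketch of the intended use of findChessboardCorners on a flat printed calibration pattern with no pieces. Note that the pattern size counts inner corners, not squares (a standard 8x8-square board has only 7x7 inner corners); the file name "calib.jpg" and the (9, 6) pattern size below are assumptions:
import cv2 as cv

img = cv.imread("calib.jpg")
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# (9, 6) = inner corners per row and column of the printed pattern
ret, corners = cv.findChessboardCorners(gray, (9, 6), None)
if ret:
    cv.drawChessboardCorners(img, (9, 6), corners, ret)
    cv.imshow("calibration pattern", img)
    cv.waitKey(0)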
You can still use OpenCV, but you will need to try other functions. Assuming you took the photos yourself, you've also made the problem harder by using a background that has a lot of lines and corners, meaning you'll have to differentiate between those corners and the corners on the board. You can also see that the top corners of the board behind the rooks are occluded. If you can retake the photos, I would take a top-down photo on a blank surface that contrasts with the chessboard.
One example of corner detection in OpenCV is Harris corner detection. I wrote up a short example for you. You'll need to play around with this and other corner detection methods to see what works best. I found that adding a Sobel filter to strengthen the lines in your image gave much better results. But it's still going to detect corners in the background and on the pieces; you'll need to figure out how to filter those out.
import cv2 as cv
from matplotlib import pyplot as plt
import numpy as np

def sobel(src_image, kernel_size):
    grad_x = cv.Sobel(src_image, cv.CV_16S, 1, 0, ksize=kernel_size, scale=1,
                      delta=0, borderType=cv.BORDER_DEFAULT)
    grad_y = cv.Sobel(src_image, cv.CV_16S, 0, 1, ksize=kernel_size, scale=1,
                      delta=0, borderType=cv.BORDER_DEFAULT)
    abs_grad_x = cv.convertScaleAbs(grad_x)
    abs_grad_y = cv.convertScaleAbs(grad_y)
    grad = cv.addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0)
    return grad

def process_image(src_image_path):
    # load the image
    src_image = cv.imread(src_image_path)
    # convert to RGB (otherwise when you display this image the colors will look incorrect)
    src_image = cv.cvtColor(src_image, cv.COLOR_BGR2RGB)
    # convert to grayscale before attempting corner detection
    src_gray = cv.cvtColor(src_image, cv.COLOR_BGR2GRAY)
    # standard technique to eliminate noise
    blur_image = cv.blur(src_gray, (3,3))
    # strengthen the appearance of lines in the image
    sobel_image = sobel(blur_image, 3)
    # detect corners
    corners = cv.cornerHarris(sobel_image, 2, 3, 0.04)
    # dilate the corner response to make corners easier to see
    corners = cv.dilate(corners, None)
    # overlay the strongest corners on a copy of the source image
    dest_image = np.copy(src_image)
    dest_image[corners > 0.01 * corners.max()] = [0, 0, 255]
    return dest_image

src_image_path = "board1.jpg"
dest_image = process_image(src_image_path)
plt.imshow(dest_image)
plt.show()
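As a hedged starting point for that filtering, one option is to zero out the Harris response outside a hand-picked region of interest around the board before thresholding it. The rectangle coordinates below are placeholders, not measured values:
import numpy as np

def keep_corners_in_roi(corners, x0, y0, x1, y1):
    # Zero the Harris response outside the rectangle (x0, y0)-(x1, y1),
    # so background corners never pass the 0.01 * max() threshold
    roi_only = np.zeros_like(corners)
    roi_only[y0:y1, x0:x1] = corners[y0:y1, x0:x1]
    return roi_only

# Usage with placeholder coordinates:
# corners = keep_corners_in_roi(corners, 200, 150, 900, 850)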

How to automatically detect a specific feature from one image and map it to another mask image? Then how to smooth only the corners of the image?

Using the dlib library I was able to mask the mouth feature from one image (masked).
masked
Similarly, I have another cropped image of the mouth that does not have the mask (colorlip).
colorlip
I scaled and replaced the images (replaced) using np.where, as shown in the code below.
replaced
#Get the values of the lip and the target mask
lip = pred_toblackscreen[bbox_lip[0]:bbox_lip[1], bbox_lip[2]:bbox_lip[3],:]
target = roi[bbox_mask[0]:bbox_mask[1], bbox_mask[2]:bbox_mask[3],:]
cv2.namedWindow('masked', cv2.WINDOW_NORMAL)
cv2.imshow('masked', target)
#Resize the lip to be the same scale/shape as the mask
lip_h, lip_w, _ = lip.shape
target_h, target_w, _ = target.shape
fy = target_h / lip_h
fx = target_w / lip_w
scaled_lip = cv2.resize(lip,(0,0),fx=fx,fy=fy)
cv2.namedWindow('colorlip', cv2.WINDOW_NORMAL)
cv2.imshow('colorlip', scaled_lip)
update = np.where(target==[0,0,0],scaled_lip,target)
cv2.namedWindow('replaced', cv2.WINDOW_NORMAL)
cv2.imshow('replaced', update)
But the feature shape (lip) in 'colorlip' does not match the 'masked' image, so there is a misalignment, and the edges of the mask look sharp, as if the image has been overlaid. How do I solve this problem? And how do I make the final replaced image look more subtle and natural?
Update #2: OpenCV image inpainting to smooth jagged borders.
OpenCV Python inpainting should help with rough borders. Using the mouth landmark model, a mouth segmentation mask from a DL model, or whatever else was used, the border location can be found. From that, draw a border of a small chosen width around the mouth contour in a new image and use it as the mask for inpainting (a sketch of this step follows the example images below). The mask I provided needs to be inverted to work.
Among the input masks, one is wider, one has a shadow, and the last one is narrow. The six output images were generated with radius values of 5 and 20 for all three masks.
Code
import numpy as np
import cv2

img = cv2.imread('images/lip_img.png')
# Pick one of the provided border masks
#mask = cv2.imread('images/lip_img_border_mask.png',0)
mask = cv2.imread('images/lip_img_border_mask2.png',0)
#mask = cv2.imread('images/lip_img_border_mask3.png',0)
mask = np.invert(mask)

# Choose an appropriate method and radius.
radius = 20
dst = cv2.inpaint(img, mask, radius, cv2.INPAINT_TELEA)
# dst = cv2.inpaint(img, mask, radius, cv2.INPAINT_NS)

cv2.imwrite('images/inpainted_lip.jpg', dst)
cv2.imshow('dst', dst)
cv2.waitKey(0)
cv2.destroyAllWindows()
Input Image and Masks
Output Images
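As promised above, here is a minimal sketch of the border-mask step, assuming a mouth contour is already available as an Nx1x2 integer array (e.g., from dlib landmarks reshaped for OpenCV, or from cv2.findContours); the width of 5 px is a guess to widen if the seam is still visible:
import cv2
import numpy as np

def make_border_mask(image_shape, mouth_contour, width=5):
    # Single-channel mask the same size as the image, initially all black
    mask = np.zeros(image_shape[:2], np.uint8)
    # Draw only the contour outline at the chosen thickness; the white
    # band marks the pixels cv2.inpaint will reconstruct
    cv2.drawContours(mask, [mouth_contour], -1, 255, thickness=width)
    return mask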
Update #1: Added deep image harmonization based blending methods.
Try OpenCV seamless cloning for subtle replacement and to get rid of sharp edges. Deep-learning-based image inpainting on the sharp corners, or combining it with seamless cloning, may also provide better results.
Deep-learning-based image harmonization is another approach to blend two images together so that the cropped part matches the style of the background image. Even in this case the pixel intensities will change to match the background, but the blending will be smoother. Links are added at the bottom of the post.
Example
This code example is based on the learnopencv seamless cloning example:
import cv2
import numpy as np

src = cv2.imread("images/src_img.jpg")
dst = cv2.imread("images/dest_img.jpg")

src_mask = cv2.imread("images/src_img_rough_mask.jpg")
src_mask = np.invert(src_mask)
cv2.namedWindow('src_mask', cv2.WINDOW_NORMAL)
cv2.imshow('src_mask', src_mask)
cv2.waitKey(0)

# Where to place the image.
center = (500, 500)

# Clone seamlessly.
output = cv2.seamlessClone(src, dst, src_mask, center, cv2.NORMAL_CLONE)

# Write the result.
cv2.imwrite("images/opencv-seamless-cloning-example.jpg", output)
cv2.namedWindow('output', cv2.WINDOW_NORMAL)
cv2.imshow('output', output)
cv2.waitKey(0)
Source Image
Rough Mask Image
Destination Image
Final Image
Reference
https://docs.opencv.org/4.5.4/df/da0/group__photo__clone.html
https://learnopencv.com/seamless-cloning-using-opencv-python-cpp/
https://learnopencv.com/face-swap-using-opencv-c-python/
https://github.com/JiahuiYu/generative_inpainting
https://docs.opencv.org/4.x/df/d3d/tutorial_py_inpainting.html
Deep Image Harmonization
https://github.com/bcmi/Image-Harmonization-Dataset-iHarmony4
https://github.com/wasidennis/DeepHarmonization
https://github.com/saic-vul/image_harmonization
https://github.com/wuhuikai/GP-GAN
https://github.com/junleen/RainNet
https://github.com/bcmi/BargainNet-Image-Harmonization
https://github.com/vinthony/s2am

Create image mask in Python for DNG and processing

I have a RAW image saved as .dng from a phone's camera. I want to segment the colors with the OpenCV library in Python. The picture is primarily black and green, and I want to get the values of the green parts of the image. I've not worked with images in this way before and am completely clueless. The tutorial I am following says to convert the image to the HSV color space and to use a mask, but I'm running into problems with the mask, if not with other steps. I'm using Google Colab.
import cv2
import matplotlib.pyplot as plt
import numpy as np
import os
from google.colab import drive
import imageio
import scipy.misc
import skimage.filters
import skimage.metrics
from PIL import Image
# Colabs...
!pip install rawpy
import rawpy
# Colabs...
!pip install ExifRead
import exifread
#image
plate = rawpy.imread('/content/drive/MyDrive/Colab Notebooks/Copy of 0724201706a.dng')
#EXIF
plate_x = open('/content/drive/MyDrive/Colab Notebooks/Copy of 0724201706a.dng', 'rb')
#There are several lines returned. I've left this out for now...
plate_tags = exifread.process_file(plate_x)
plate_tags
plt.imshow(plate.raw_image)
plate_rgb = plate.postprocess( use_camera_wb=True)
plt.imshow(plate_rgb)
plate_rgb.shape
(5312, 2988, 3)
These are a slightly edited RGB image, the green channel, and the blue channel of the RGB image.
Histograms of the values for each channel in the RGB image. The other channels are 0, but green has various values.
I supplied all this info to try to describe the RAW image and the RGB.
The tutorial says to convert to the HSV color space. I saw somewhere that the image comes in as BGR, so I tried two approaches:
plateRGB_hsv = cv2.cvtColor(plate_rgb, cv2.COLOR_RGB2HSV)
plateBGR_hsv = cv2.cvtColor(plate_rgb, cv2.COLOR_BGR2HSV)
# A lower and upper threshold for mask
hsv_green_lo = (59, 100, 135) #h = 50, s = 100, v = 135)
hsv_green_hi = (75, 250, 255) #h = 75, s = 250, v = 255)
plateRGB_hsv.shape
(5312, 2988, 3)
# Create mask
green_thr = cv2.inRange(plateRGB_hsv, hsv_green_lo, hsv_green_hi)
# Apply mask
img_msk = cv2.bitwise_and(plateRGB_hsv, plateRGB_hsv, green_thr)
plt.subplot(1,2,1)
plt.imshow(green_thr)
plt.subplot(1,2,2)
plt.imshow(img_msk)
plt.show()
Output of the inRange (mask layer creation) and bitwise_and (mask application).
rgb_out = cv2.bitwise_and(plate_rgb, plate_rgb, green_thr)
plt.imshow(rgb_out)
plt.plot()
Apply mask and this is output.
So it seems I didn't create the mask properly? And with the bad mask, there was no change when bitwise_and ran, it looks like? I don't know why the mask failed. Is the fact that the RGB or HSV image has three channels complicating the mask and its application?
The image is here.
EDIT after comments and submitted answer:
I was not clear about what I want my output to look like. I said "green", but really I want it to look like this:
I made a new array with just the green channel as advised.
green_c = plate_rgb[...,1]
But now I'm confused about how to create a mask. Since the array is just one level, I think of it as a "layer", like in GIS or GIMP. How do I change the unwanted values to black? Sorry if this is obvious; I'm still pretty new to Python.
I am not really sure what you think the problem is. Basically, the red and blue channels of your image are empty (look at their mean values in the output below), so you may as well discard them and just use the green channel as your mask.
#!/usr/bin/env python3
import rawpy
import matplotlib.pyplot as plt
import numpy as np
# Load and process raw DNG
plate = rawpy.imread('raw.dng')
rgb = plate.postprocess()
# Show what we've got
print(f'Dimensions: {rgb.shape}, dtype: {rgb.dtype}')
R = rgb[...,0]
print(f'R channel: min={R.min()}, mean={R.mean()}, max={R.max()}')
G = rgb[...,1]
print(f'G channel: min={G.min()}, mean={G.mean()}, max={G.max()}')
B = rgb[...,2]
print(f'B channel: min={B.min()}, mean={B.mean()}, max={B.max()}')
# Display green channel
plt.imshow(G, cmap='gray')
plt.show()
Output for your image
Dimensions: (5312, 2988, 3), dtype: uint8
R channel: min=0, mean=0.013673103558813567, max=255
G channel: min=0, mean=69.00267554908389, max=255
B channel: min=0, mean=0.017269189710649828, max=255
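If you then want to black out the dimmer pixels in that single channel (the "layer" from the edit above), here is a minimal sketch; the cutoff of 50 is an assumption to pick from the histogram of G:
import rawpy
import numpy as np
import matplotlib.pyplot as plt

rgb = rawpy.imread('raw.dng').postprocess()
G = rgb[..., 1]

# Keep bright green pixels, set everything below the cutoff to black
G_masked = np.where(G >= 50, G, 0)

plt.imshow(G_masked, cmap='gray')
plt.show()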
Keywords: Python, image processing, rawpy, DNG, Adobe DNG format.

Separate two pictures with a black beam in imshow

I want to show two images and separate them visually with a black beam in Python.
My problem is that I do not get the original colors with the cv2.imshow() function in the plot window.
Here is my code:
import cv2
import numpy as np

imgloc = 'path\\Dosen_py.png'
img = cv2.imread(imgloc)

height = np.shape(img)[0]
beam = np.zeros((height,10,3))

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray_3_channel = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)

horizontal = np.hstack((img, beam, gray_3_channel))
small = cv2.resize(horizontal, (0,0), fx=0.5, fy=0.5)

cv2.imwrite("combi.png", small)
cv2.imshow("Combi", small)
cv2.waitKey()
After running the code I get the following picture in the plot window:
The saved "combi.png" file shows the right colors:
If I plot the two pictures without the black beam, I get the original colors, too.
Does anyone know what's wrong with this black beam?
System: Windows 10
IDE: Spyder (Python 2.7)
The default dtype of np.zeros is np.float64, while for an image it should be np.uint8.
This line:
beam = np.zeros((height,10,3))
makes beam float64, and then horizontal and small are np.float64 too, since np.hstack promotes to the widest dtype. So you display float64 data, but when writing, it is truncated to np.uint8.
It should be changed to:
beam = np.zeros((height,10,3), np.uint8)
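A quick way to see the promotion, as a small self-contained check (the array sizes here are arbitrary):
import numpy as np

a = np.zeros((4, 4, 3), np.uint8)
beam_f64 = np.zeros((4, 2, 3))           # defaults to float64
beam_u8 = np.zeros((4, 2, 3), np.uint8)  # explicit uint8

# hstack promotes to the widest dtype, so one float64 array taints the result
print(np.hstack((a, beam_f64, a)).dtype)  # float64 -> imshow misreads it
print(np.hstack((a, beam_u8, a)).dtype)   # uint8   -> displays correctly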

Python2: Resize image plotted using Matplotlib

I'm coding in a Jupyter Notebook with Python 2 + OpenCV 3, and I need to show my results using images. The images are very small and it's hard to observe the results.
from matplotlib import pyplot as plt
import cv2
thresh = 127
maxValue = 255
file_path = "dados/page1.jpg"
%matplotlib notebook
image = cv2.imread(file_path)
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
plt.title("First image")
plt.imshow(gray_image)
Output image
Image is too small. How can I zoom it?
As usual, you can set the figure size using
plt.figure(figsize=(8,12))
The maximal figure size is (50,50); however, you need to choose sensible values yourself depending on your screen size.
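Applied to the code above, a minimal sketch (the figure must be created before imshow; I also pass cmap='gray' so the single-channel image is shown in grayscale, which the original snippet omitted):
from matplotlib import pyplot as plt
import cv2

image = cv2.imread("dados/page1.jpg")
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

plt.figure(figsize=(8,12))  # inches; tune to your screen
plt.title("First image")
plt.imshow(gray_image, cmap='gray')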
