I want to get the image difference for a print which is captured using a camera.
I tried many solutions using Python libraries: opencv, image-magick, etc.
The approach I found that gives better accuracy for image comparison is:
move the image left to right and look for the minimum difference.
move the image right to left and look for the minimum difference.
move the image top to bottom and look for the minimum difference.
move the image bottom to top and look for the minimum difference (a rough sketch of this idea follows below).
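For reference, a minimal sketch of this shift-and-compare idea (an addition for illustration; the filenames are placeholders, and it only handles translation, not the slight rotation described below):
import cv2
import numpy as np

# Sketch: brute-force search over small x/y shifts for the offset with the
# lowest absolute difference. 'reference.png' and 'captured.png' are
# placeholder filenames; both images are assumed to be the same size.
# np.roll wraps pixels around at the borders, which is acceptable for small shifts.
ref = cv2.imread('reference.png', cv2.IMREAD_GRAYSCALE)
test = cv2.imread('captured.png', cv2.IMREAD_GRAYSCALE)

best_score, best_shift = None, (0, 0)
search = 10  # search +/- 10 pixels in each direction
for dy in range(-search, search + 1):
    for dx in range(-search, search + 1):
        shifted = np.roll(np.roll(test, dy, axis=0), dx, axis=1)
        score = int(np.sum(cv2.absdiff(ref, shifted)))
        if best_score is None or score < best_score:
            best_score, best_shift = score, (dx, dy)

# Difference at the best alignment; large remaining values mark defects.
aligned = np.roll(np.roll(test, best_shift[1], axis=0), best_shift[0], axis=1)
cv2.imwrite('diff.png', cv2.absdiff(ref, aligned))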
Conditions for capturing the image:
1. The camera never moves (it is mounted on a fixed stand).
2. The object is placed manually on a white sheet, so the object is never perfectly aligned (there is a slight variation in angle every time, since placement is manual).
Image samples captured with the camera for the code below:
Image sample 1: white dots
Image sample 2: the original image
Image sample 3: black dots
The accepted output for the print with white dots is not available, but it should only mark the difference (defect):
Currently I am using the following ImageMagick command for the image difference:
compare -highlight-color black -fuzz 5% -metric AE Image_1.png Image_2.png -compose src diff.png
Code :
import subprocess
# -fuzz 5% # ignore minor difference between two images
cmd = 'compare -highlight-color black -fuzz 5% -metric AE Input.png output.png -compose src diff.png '
subprocess.call(cmd, shell=True)
The output of this difference is incorrect because the comparison works pixel by pixel; it is not smart enough to mark only the real difference.
The shifting approach I described above would produce the required difference as output, but I could not find a library or ImageMagick command for that kind of image comparison.
Is there any Python code or ImageMagick command for doing this?
It seems you are working on a defect-detection task. The first solution that comes to mind is image registration.
First, try to take the images under the same conditions (lighting, camera angle, etc.); note that one of your provided images is 2 pixels bigger.
Then register the two images and match one to the other, like this:
Then warp one onto the other with the help of the homography matrix and generate an aligned image; in this case, the result looks like this:
Then take the difference between the aligned image and the query image and threshold it; the result:
As I said, if you capture your frames more precisely, the registration will be better and the overall result more accurate.
The code for each part (mostly taken from here):
import cv2
import numpy as np

MAX_FEATURES = 1000
GOOD_MATCH_PERCENT = 0.5

def alignImages(im1, im2):
    # Convert images to grayscale
    im1Gray = cv2.cvtColor(im1, cv2.COLOR_BGR2GRAY)
    im2Gray = cv2.cvtColor(im2, cv2.COLOR_BGR2GRAY)

    # Detect ORB features and compute descriptors.
    orb = cv2.ORB_create(MAX_FEATURES)
    keypoints1, descriptors1 = orb.detectAndCompute(im1Gray, None)
    keypoints2, descriptors2 = orb.detectAndCompute(im2Gray, None)

    # Match features.
    matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
    matches = matcher.match(descriptors1, descriptors2, None)

    # Sort matches by score (newer OpenCV versions return a tuple, so use sorted())
    matches = sorted(matches, key=lambda x: x.distance)

    # Remove not-so-good matches
    numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT)
    matches = matches[:numGoodMatches]

    # Draw top matches
    imMatches = cv2.drawMatches(im1, keypoints1, im2, keypoints2, matches, None)
    cv2.imwrite("matches.jpg", imMatches)

    # Extract location of good matches
    points1 = np.zeros((len(matches), 2), dtype=np.float32)
    points2 = np.zeros((len(matches), 2), dtype=np.float32)
    for i, match in enumerate(matches):
        points1[i, :] = keypoints1[match.queryIdx].pt
        points2[i, :] = keypoints2[match.trainIdx].pt

    # Find homography
    h, mask = cv2.findHomography(points1, points2, cv2.RANSAC)

    # Use the homography to warp im1 onto im2
    height, width, channels = im2.shape
    im1Reg = cv2.warpPerspective(im1, h, (width, height))

    return im1Reg
if __name__ == '__main__':
    # Read the reference image and the image to be aligned
    refFilename = "vv9gFl.jpg"
    imFilename = "uP3CYl.jpg"
    imReference = cv2.imread(refFilename, cv2.IMREAD_COLOR)
    im = cv2.imread(imFilename, cv2.IMREAD_COLOR)

    # The registered image will be stored in imReg.
    imReg = alignImages(im, imReference)

    # Write the aligned image to disk.
    outFilename = "aligned.jpg"
    cv2.imwrite(outFilename, imReg)
For the image difference and thresholding:
alined = cv2.imread("aligned.jpg", 0)
alined = alined[:, :280]

b = cv2.imread("vv9gFl.jpg", 0)
b = b[:, :280]

print(alined.shape)
print(b.shape)

# Absolute difference between the aligned image and the reference
diff = cv2.absdiff(alined, b)
cv2.imwrite("diff.png", diff)

# Threshold the difference and save the binary defect mask
threshold = 25
alined[np.where(diff > threshold)] = 255
alined[np.where(diff <= threshold)] = 0
cv2.imwrite("threshold.png", alined)
If you have lots of images and want to do a defect-detection task, I suggest training a denoising autoencoder (a deep artificial neural network). Read more here.
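As a very rough sketch of that idea (an addition, not part of the original answer; it assumes you have pairs of defective and clean grayscale crops of a fixed size, here 128x128, and that x_noisy and x_clean are placeholder arrays you would build yourself):
# Minimal convolutional denoising-autoencoder sketch with Keras.
# x_noisy / x_clean are assumed to be float32 arrays of shape (N, 128, 128, 1)
# scaled to [0, 1]; they are placeholders, not data from the question.
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(128, 128, 1))
x = layers.Conv2D(32, 3, activation='relu', padding='same')(inputs)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(16, 3, activation='relu', padding='same')(x)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(16, 3, activation='relu', padding='same')(x)
x = layers.UpSampling2D(2)(x)
x = layers.Conv2D(32, 3, activation='relu', padding='same')(x)
x = layers.UpSampling2D(2)(x)
outputs = layers.Conv2D(1, 3, activation='sigmoid', padding='same')(x)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer='adam', loss='mse')
# autoencoder.fit(x_noisy, x_clean, epochs=20, batch_size=16)
# At inference time, a large reconstruction error marks defective regions.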
Although you do not want point-by-point processing, here is a subimage-search comparison using ImageMagick. It pads one image after cropping off the black and then shifts the smaller image over the larger one to find the best match locations.
crop image1:
convert image1.jpg -gravity north -chop 0x25 image1c.png
crop and pad image2:
convert image2.jpg -gravity north -chop 0x25 -gravity center -bordercolor "rgb(114,151,157)" -border 20x20 image2c.png
do subimage search
compare -metric rmse -subimage-search image2c.png image1c.png null:
1243.41 (0.0189732) # 22,20
now shift and get difference between the two images
convert image2c.png image1c.png -geometry +22+20 -compose difference -composite -shave 22x20 -colorspace gray -auto-level +level-colors white,red diff.png
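If you want to drive this from Python, here is a small helper sketch (an addition, not part of the original answer) that runs the subimage search with subprocess and parses the reported offset; the exact output format of compare can vary slightly between ImageMagick versions:
import re
import subprocess

# Run the subimage search; compare reports the metric and the best-match
# offset (e.g. "1243.41 (0.0189732) @ 22,20") on stderr.
cmd = ['compare', '-metric', 'rmse', '-subimage-search', 'image2c.png', 'image1c.png', 'null:']
proc = subprocess.run(cmd, capture_output=True, text=True)

# Pull the trailing "x,y" offset out of the output (format assumed as above).
match = re.search(r'(\d+),(\d+)\s*$', proc.stderr.strip())
if match:
    x_off, y_off = int(match.group(1)), int(match.group(2))
    print('Best match offset:', x_off, y_off)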
ADDITION:
If you want to just use compare, then you need to add -fuzz 15% to the compare command:
compare -metric rmse -fuzz 15% -subimage-search image2c.png image1c.png diff.png
Two images are produced. The difference image is the first, so look at diff-0.png
Related
I have a screenshot received from an iPhone, both dark and light mode.
I need to use OCR to extract the URL, but I am unable to do so because of the underlining that appears.
What would be the best way to remove the horizontal lines from the message? Except for the phone number, it doesn't matter if other parts of the screenshot are distorted.
I've tried the approaches described in:
Removing Horizontal Lines in image (OpenCV, Python, Matplotlib)
https://docs.opencv.org/3.2.0/d1/dee/tutorial_moprh_lines_detection.html
https://legacy.imagemagick.org/discourse-server/viewtopic.php?t=22338
None of them seem to work well at all.
Here's a possible solution for your problem. I'm using mock screenshots, since, like I suggested, it is better to use lossless images to get a better result. The main idea here is to extract the color of the text box and to fill the rest of the image with that color, then threshold the image. By doing this, we will reduce the intensity variation and obtain a better thresholded image - since the image histogram will contain fewer intensity values. These are the steps:
Crop the image to a ROI (Region Of Interest)
Get the colors in that ROI via K-Means
Get the color of the text box
Flood-fill the ROI with the color of the text box
Apply Otsu's thresholding to get a binary image
Get OCR of the image
Suppose these are our test images; one uses a "light" theme while the other uses a "dark" theme:
I'll be using pyocr as the OCR engine. Let's use image one; the code would be this:
# imports:
from PIL import Image
import numpy as np
import cv2
import pyocr
import pyocr.builders
tools = pyocr.get_available_tools()
# The tools are returned in the recommended order of usage
tool = tools[0]
langs = tool.get_available_languages()
lang = langs[0]
# image path
path = "D://opencvImages//"
fileName = "mockText.png"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Set the ROI location:
roiX = 0
roiY = 235
roiWidth = 750
roiHeight = 1080
# Crop the ROI:
smsROI = grayscaleImage[roiY:roiHeight, roiX:roiWidth]
The first bit crops the ROI, everything that is of interest, leaving out the "header" and the "footer" of the image, where there's info that we really don't need. This is the current ROI:
Wouldn't it be nice to (approximately) get all the colors used in the image? Fortunately, that's what Color Quantization gives us: a reduced palette of the average colors present in an image, given the number of colors we are looking for. Let's apply K-Means and use 3 clusters to group the colors.
In our test images, most of the pixels are background, so the largest cluster of pixels will belong to the background. The text represents the smallest cluster of pixels. That leaves the remaining cluster as our target: the color of the text box. Let's apply K-Means, then. We need to format the data first, though, because K-Means needs float, re-arranged arrays:
# Reshape the data into a column of (single-channel) pixel samples:
kmeansData = smsROI.reshape((-1,1))
# convert the data to np.float32
kmeansData = np.float32(kmeansData)
# define criteria, number of clusters(K) and apply kmeans():
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 5, 1.0)
# Define number of clusters (3 colors):
K = 3
# Run K-means:
_, _, center = cv2.kmeans(kmeansData, K, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# Convert the centers to uint8:
center = np.uint8(center)
# Sort centers from small to largest:
center = sorted(center, reverse=False)
# Get text color and min color:
textBoxColor = int(center[1][0])
minColor = min(center)[0]
print("Minimum Color is: "+str(minColor))
print("Text Box Color is: "+str(textBoxColor))
The info of interest is in center. That's where our colors are. After sorting this list and getting the minimum color value (that I'll use later to distinguish between a light and a dark theme) we can print the values. For the first test image, these values are:
Minimum Color is: 23
Text Box Color is: 225
Alright, so far so good. We have the color of the text box. Let's use that and flood-fill the entire ROI at position (x=0, y=0):
# Apply flood-fill at seed point (0,0):
cv2.floodFill(smsROI, mask=None, seedPoint=(0, 0), newVal=textBoxColor)
The result is this:
Very nice. Let's apply Otsu's thresholding on this bad boy:
# Threshold via Otsu:
_, binaryImage = cv2.threshold(smsROI, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
Now, here comes the minColor part. If you process a dark-theme screenshot and threshold it, you will get white text on a black background. If you process a light-theme screenshot, you will get black text on a white background. We want to always produce the same output no matter the input: white text on a black background. Let's check the min color: if it equals 0 (black), you received a dark-theme screenshot and you don't need to invert the image. Otherwise, invert the image:
# Process "Dark Theme / Light Theme":
if minColor != 0:
    # Invert the image if it is not already inverted:
    binaryImage = 255 - binaryImage

cv2.imshow("binaryImage", binaryImage)
cv2.waitKey(0)
For our first test image, the result is:
Notice the little bits of small noise. Let's apply an area filter (function defined at the end of the post) to get rid of pixels below a certain area threshold:
# Run a minimum area filter:
minArea = 10
binaryImage = areaFilter(minArea, binaryImage)
This is the filtered image:
Very nice. Lastly, I write this image and use pyocr to get the text as a string:
cv2.imwrite(path + "ocrText.png", binaryImage)
txt = tool.image_to_string(
    Image.open(path + "ocrText.png"),
    lang=lang,
    builder=pyocr.builders.TextBuilder()
)
print("Image text is: "+txt)
Which results in:
Image text is: 301248 is your Amazon
verification code
If you test the second image you get the same exact result. This is the definition and implementation of the areaFilter function:
def areaFilter(minArea, inputImage):
    # Perform an area filter on the binary blobs:
    componentsNumber, labeledImage, componentStats, componentCentroids = \
        cv2.connectedComponentsWithStats(inputImage, connectivity=4)

    # Get the indices/labels of the remaining components based on the area stat
    # (skip the background component at index 0)
    remainingComponentLabels = [i for i in range(1, componentsNumber) if componentStats[i][4] >= minArea]

    # Filter the labeled pixels based on the remaining labels,
    # assign pixel intensity to 255 (uint8) for the remaining pixels
    filteredImage = np.where(np.isin(labeledImage, remainingComponentLabels), 255, 0).astype('uint8')

    return filteredImage
I have images, which look like the following:
I want to find the bounding boxes for the 8 digits. My first try was to use cv2 with the following code:
import cv2
import matplotlib.pyplot as plt
import cvlib as cv
from cvlib.object_detection import draw_bbox
im = cv2.imread('31197402.png')
bbox, label, conf = cv.detect_common_objects(im)
output_image = draw_bbox(im, bbox, label, conf)
plt.imshow(output_image)
plt.show()
Unfortunately that doesn't work. Does anyone have an idea?
The problem in your solution is likely the input image, which is very poor in quality. There’s hardly any contrast between the characters and the background. The blob detection algorithm from cvlib is probably failing to distinguish between character blobs and background, producing a useless binary mask. Let’s try to solve this using purely OpenCV.
I propose the following steps:
Apply adaptive threshold to get a reasonably good binary mask.
Clean the binary mask from blob noise using an area filter.
Improve the quality of the binary image using morphology.
Get the outer contours of each character and fit a bounding rectangle to each character blob.
Crop each character using the previously calculated bounding rectangle.
Let’s see the code:
# importing cv2 & numpy:
import numpy as np
import cv2
# Set image path
path = "C:/opencvImages/"
fileName = "mrrm9.png"
# Read input image:
inputImage = cv2.imread(path+fileName)
inputCopy = inputImage.copy()
# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
From here there's not much to discuss, just reading the BGR image and converting it to grayscale. Now, let's apply an adaptive threshold using the gaussian method. This is the tricky part, as the parameters are adjusted manually depending on the quality of the input. The way the method works is by dividing the image into a grid of cells of windowSize; it then applies a local threshold to find the optimal separation between foreground and background. An additional constant, indicated by windowConstant, can be added to the threshold to fine-tune the output:
# Set the adaptive thresholding (gaussian) parameters:
windowSize = 31
windowConstant = -1
# Apply the threshold:
binaryImage = cv2.adaptiveThreshold(grayscaleImage, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, windowSize, windowConstant)
You get this nice binary image:
Now, as you can see, the image has some blob noise. Let's apply an area filter to get rid of it. The noise blobs are smaller than the target blobs of interest, so we can easily filter them based on area, like this:
# Perform an area filter on the binary blobs:
componentsNumber, labeledImage, componentStats, componentCentroids = \
cv2.connectedComponentsWithStats(binaryImage, connectivity=4)
# Set the minimum pixels for the area filter:
minArea = 20
# Get the indices/labels of the remaining components based on the area stat
# (skip the background component at index 0)
remainingComponentLabels = [i for i in range(1, componentsNumber) if componentStats[i][4] >= minArea]
# Filter the labeled pixels based on the remaining labels,
# assign pixel intensity to 255 (uint8) for the remaining pixels
filteredImage = np.where(np.isin(labeledImage, remainingComponentLabels) == True, 255, 0).astype('uint8')
This is the filtered image:
We can improve the quality of this image with some morphology. Some of the characters seem to be broken (check out the first 3: it is broken into two separate blobs). We can join them by applying a closing operation:
# Set kernel (structuring element) size:
kernelSize = 3
# Set operation iterations:
opIterations = 1
# Get the structuring element:
maxKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
# Perform closing:
closingImage = cv2.morphologyEx(filteredImage, cv2.MORPH_CLOSE, maxKernel, None, None, opIterations, cv2.BORDER_REFLECT101)
This is the "closed" image:
Now, you want to get the bounding boxes for each character. Let’s detect the outer contour of each blob and fit a nice rectangle around it:
# Get each bounding box
# Find the big contours/blobs on the filtered image:
contours, hierarchy = cv2.findContours(closingImage, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
contours_poly = [None] * len(contours)
# The Bounding Rectangles will be stored here:
boundRect = []
# Alright, just look for the outer bounding boxes:
for i, c in enumerate(contours):
    if hierarchy[0][i][3] == -1:
        contours_poly[i] = cv2.approxPolyDP(c, 3, True)
        boundRect.append(cv2.boundingRect(contours_poly[i]))

# Draw the bounding boxes on the (copied) input image:
for i in range(len(boundRect)):
    color = (0, 255, 0)
    cv2.rectangle(inputCopy, (int(boundRect[i][0]), int(boundRect[i][1])), \
                  (int(boundRect[i][0] + boundRect[i][2]), int(boundRect[i][1] + boundRect[i][3])), color, 2)
The last for loop is pretty much optional. It fetches each bounding rectangle from the list and draws it on the input image, so you can see each individual rectangle, like this:
Let's visualize that on the binary image:
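As a small addition (not in the original post), the same rectangles can be drawn on the closed binary image by first converting it to BGR so the colored boxes are visible:
# Sketch: visualize the bounding boxes on the binary (closed) image.
binaryCopy = cv2.cvtColor(closingImage, cv2.COLOR_GRAY2BGR)
for i in range(len(boundRect)):
    color = (0, 255, 0)
    cv2.rectangle(binaryCopy, (int(boundRect[i][0]), int(boundRect[i][1])),
                  (int(boundRect[i][0] + boundRect[i][2]), int(boundRect[i][1] + boundRect[i][3])), color, 2)
cv2.imshow("Binary boxes", binaryCopy)
cv2.waitKey(0)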
Additionally, if you want to crop each character using the bounding boxes we just got, you do it like this:
# Crop the characters:
for i in range(len(boundRect)):
    # Get the roi for each bounding rectangle:
    x, y, w, h = boundRect[i]
    # Crop the roi:
    croppedImg = closingImage[y:y + h, x:x + w]
    cv2.imshow("Cropped Character: " + str(i), croppedImg)
    cv2.waitKey(0)
This is how you can get the individual bounding boxes. Now, maybe you are trying to pass these images to an OCR. I tried passing the filtered binary image (after the closing operation) to pyocr (That’s the OCR I’m using) and I get this as output string: 31197402
The code I used to get the OCR of the closed image is this:
# Set the OCR libraries:
from PIL import Image
import pyocr
import pyocr.builders
# Set pyocr tools:
tools = pyocr.get_available_tools()
# The tools are returned in the recommended order of usage
tool = tools[0]
# Set OCR language:
langs = tool.get_available_languages()
lang = langs[0]
# Get string from image:
txt = tool.image_to_string(
    Image.open(path + "closingImage.png"),
    lang=lang,
    builder=pyocr.builders.TextBuilder()
)
print("Text is:"+txt)
Be aware that the OCR receives black characters on white background, so you must invert the image first.
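For example, a minimal sketch of that inversion step (an addition for illustration), reusing closingImage and path from the snippets above:
# Invert the closed image (black characters on white background) before saving it for the OCR.
ocrInput = 255 - closingImage
cv2.imwrite(path + "closingImage.png", ocrInput)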
I need to detect the vein junctions of a bee wing (the image is just one example). I use opencv-python.
PS: maybe the image lost a little bit of quality, but the image is all connected, one pixel wide.
This is an interesting question. The result I got is not perfect, but it might be a good start. I filtered the image with a kernel that only looks at the edges of the kernel. The idea is that a junction has at least 3 lines that cross the kernel edge, where regular lines only have 2. This means that when the kernel is over a junction, the resulting value will be higher, so a threshold will reveal them.
Due to the nature of the lines there are some false positives and some false negatives. A single junction will most likely be found several times, so you'll have to account for that. You can make them unique by drawing small dots and detecting those dots (a rough sketch of that is shown at the end of this answer).
Result:
Code:
import cv2
import numpy as np

# load the image as grayscale
img = cv2.imread('xqXid.png', 0)
# make a copy to display the result
im_or = img.copy()
# create a kernel that only looks at the border of a 7x7 window
kernel = np.ones((7,7))
kernel[2:5,2:5] = 0
print(kernel)
# apply the kernel; ddepth=3 (cv2.CV_16S) requests a larger output datatype
# so the 7x7 sums do not overflow the 8-bit input range
res = cv2.filter2D(img, 3, kernel)
# filter the results
loc = np.where(res > 2800)
print(len(loc[0]))
# draw circles on the found locations
for x in range(len(loc[0])):
    cv2.circle(im_or, (loc[1][x], loc[0][x]), 10, (127), 5)
# display the result
cv2.imshow('Result', im_or)
cv2.waitKey(0)
cv2.destroyAllWindows()
Note: you can try to tweak the kernel and the threshold. For example, with the code above I got 126 matches. But when I use
kernel = np.ones((5,5))
kernel[1:4,1:4] = 0
with threshold
loc = np.where(res > 1550)
I got 33 matches in these locations:
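As a rough sketch of the "draw small dots and detect those dots" idea mentioned above (an addition, not part of the original code; it reuses img and loc from the snippet above):
# Sketch: merge nearby detections into unique junction points by painting
# filled dots on a blank mask and taking the connected-component centroids.
mask = np.zeros(img.shape[:2], dtype=np.uint8)
for x in range(len(loc[0])):
    cv2.circle(mask, (loc[1][x], loc[0][x]), 5, 255, -1)
num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
junctions = centroids[1:]  # skip label 0 (the background)
print(len(junctions), 'unique junctions')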
You can use the Harris corner detector algorithm to detect vein junctions in the above image. Compared to previous techniques, the Harris corner detector takes the differential of the corner score into account with reference to direction directly, instead of using shifting patches for every 45-degree angle, and has been proved to be more accurate in distinguishing between edges and corners (source: Wikipedia).
code:
import cv2
import numpy as np

img = cv2.imread('wings-bee.png')
# convert the image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)
'''
cv2.cornerHarris args:
img - Input image; it should be grayscale and float32 type.
blockSize - Size of the neighbourhood considered for corner detection.
ksize - Aperture parameter of the Sobel derivative used.
k - Harris detector free parameter in the equation.
'''
dst = cv2.cornerHarris(gray, 9, 5, 0.04)
# the result is dilated for marking the corners
dst = cv2.dilate(dst, None)
# Threshold for an optimal value; it may vary depending on the image.
img_thresh = cv2.threshold(dst, 0.32*dst.max(), 255, 0)[1]
img_thresh = np.uint8(img_thresh)
# get the matrix with the x and y locations of each centroid
centroids = cv2.connectedComponentsWithStats(img_thresh)[3]
stop_criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# refine the corner coordinates to subpixel accuracy
corners = cv2.cornerSubPix(gray, np.float32(centroids), (5,5), (-1,-1), stop_criteria)
# draw a circle at each refined junction location
for i in range(1, len(corners)):
    cv2.circle(img, (int(corners[i,0]), int(corners[i,1])), 5, (0,255,0), 2)
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
output:
You can check the theory behind Harris Corner detector algorithm from here.
I have some hundreds of images (scanned documents), most of them are skewed. I wanted to de-skew them using Python.
Here is the code I used:
import numpy as np
import cv2
from skimage.transform import radon

filename = 'path_to_filename'
# Load the file, converting to grayscale
img = cv2.imread(filename)
I = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
h, w = I.shape
# If the resolution is high, resize the image to reduce processing time.
if (w > 640):
    I = cv2.resize(I, (640, int((h / w) * 640)))
I = I - np.mean(I)  # Demean; make the brightness extend above and below zero
# Do the radon transform
sinogram = radon(I)
# Find the RMS value of each row and find the "busiest" rotation,
# where the transform is lined up perfectly with the alternating dark
# text and white lines
r = np.array([np.sqrt(np.mean(np.abs(line) ** 2)) for line in sinogram.transpose()])
rotation = np.argmax(r)
print('Rotation: {:.2f} degrees'.format(90 - rotation))
# Rotate and save with the original resolution
M = cv2.getRotationMatrix2D((w/2, h/2), 90 - rotation, 1)
dst = cv2.warpAffine(img, M, (w, h))
cv2.imwrite('rotated.jpg', dst)
This code works well with most of the documents, except with some angles: (180 and 0) and (90 and 270) are often detected as the same angle (i.e. it does not differentiate between 180 and 0, or between 90 and 270). So I get a lot of upside-down documents.
Here is an example:
The resulting image that I get is the same as the input image.
Is there any suggestion to detect if an image is upside down using Opencv and Python?
PS: I tried to check the orientation using EXIF data, but it didn't lead to any solution.
EDIT:
It is possible to detect the orientation using Tesseract (pytesseract for Python), but it only works when the image contains a lot of characters.
For anyone who may need this:
import cv2
import pytesseract
print(pytesseract.image_to_osd(cv2.imread(file_name)))
If the document contains enough characters, Tesseract can detect the orientation. However, when the image has few lines, the orientation angle suggested by Tesseract is usually wrong. So this cannot be a 100% solution.
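As a small addition (not part of the original question), the rotation Tesseract suggests can be pulled out of the OSD string; the exact output format may vary between Tesseract versions, so the regex below is an assumption:
import re
import cv2
import pytesseract

# Sketch: parse the "Rotate:" line from Tesseract's OSD output.
osd = pytesseract.image_to_osd(cv2.imread(file_name))
match = re.search(r'Rotate:\s*(\d+)', osd)
if match:
    rotation_needed = int(match.group(1))  # typically 0, 90, 180 or 270
    print('Rotate the image by', rotation_needed, 'degrees')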
Python3/OpenCV4 script to align scanned documents.
Rotate the document and sum the rows. When the document is at 0 or 180 degrees of rotation, there will be a lot of black pixels in the image:
Use a score-keeping method. Score each image for its likeness to a zebra pattern. The image with the best score has the correct rotation. The image you linked to was off by 0.5 degrees. I omitted some functions for readability; the full code can be found here, and a rough sketch of two of the omitted helpers is shown after the loop below.
# Rotate the image around in a circle
angle = 0
scores = []
while angle <= 360:
    # Rotate the source image
    img = rotate(src, angle)
    # Crop the center 1/3rd of the image (roi is filled with text)
    h, w = img.shape
    buffer = min(h, w) - int(min(h, w)/1.15)
    roi = img[int(h/2-buffer):int(h/2+buffer), int(w/2-buffer):int(w/2+buffer)]
    # Create a background to draw the transform on
    bg = np.zeros((buffer*2, buffer*2), np.uint8)
    # Compute the sums of the rows
    row_sums = sum_rows(roi)
    # High score --> Zebra stripes
    score = np.count_nonzero(row_sums)
    scores.append(score)
    # Image has the best rotation so far
    if score <= min(scores):
        # Save the rotated image
        print('found optimal rotation')
        best_rotation = img.copy()
    k = display_data(roi, row_sums, buffer)
    if k == 27: break
    # Increment the angle and try again
    angle += .75
cv2.destroyAllWindows()
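The rotate and sum_rows helpers are among the functions omitted above (the real ones are in the linked code). As a hypothetical sketch of what they might look like:
# Hypothetical helpers (assumptions, not the author's originals):
def rotate(image, angle):
    # Rotate an image around its center, keeping the original size.
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1)
    return cv2.warpAffine(image, M, (w, h))

def sum_rows(image):
    # Sum each row; text rows give large sums, the gaps between lines give zeros.
    return np.sum(image, axis=1)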
How do you tell if the document is upside down? Fill in the area from the top of the document to the first non-black pixel in the image. Measure the area in yellow. The image that has the smallest area will be the one that is right side up:
# Find the area from the top of page to top of image
_, bg = area_to_top_of_text(best_rotation.copy())
right_side_up = sum(sum(bg))
# Flip image and try again
best_rotation_flipped = rotate(best_rotation, 180)
_, bg = area_to_top_of_text(best_rotation_flipped.copy())
upside_down = sum(sum(bg))
# Check which area is larger
if right_side_up < upside_down: aligned_image = best_rotation
else: aligned_image = best_rotation_flipped
# Save aligned image
cv2.imwrite('/home/stephen/Desktop/best_rotation.png', 255-aligned_image)
cv2.destroyAllWindows()
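The area_to_top_of_text function is also one of the omitted helpers. As a hypothetical sketch of the idea (an assumption about how it works, not the author's code), it could fill everything from the top of the page down to the first text pixel in each column so the filled region can be summed:
def area_to_top_of_text(img):
    # Hypothetical sketch: for each column, fill from the top of the image
    # down to the first non-black (text) pixel.
    bg = np.zeros_like(img)
    h, w = img.shape[:2]
    for col in range(w):
        nonzero = np.nonzero(img[:, col])[0]
        top = nonzero[0] if len(nonzero) > 0 else h
        bg[:top, col] = 255
    return img, bg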
Assuming you already ran the angle correction on the image, you can try the following to find out if it is flipped:
Project the corrected image to the y-axis, so that you get a 'peak' for each line. Important: There are actually almost always two sub-peaks!
Smooth this projection by convolving with a gaussian in order to get rid of fine structure, noise, etc.
For each peak, check if the stronger sub-peak is on top or at the bottom.
Calculate the fraction of peaks that have sub-peaks on the bottom side. This is your scalar value that gives you the confidence that the image is oriented correctly.
The peak finding in step 3 is done by finding sections with above average values. The sub-peaks are then found via argmax.
Here's a figure to illustrate the approach on a few lines of your example image:
Blue: Original projection
Orange: smoothed projection
Horizontal line: average of the smoothed projection for the whole image.
Here's some code that does this:
import cv2
import numpy as np
# load image, convert to grayscale, threshold it at 127 and invert.
page = cv2.imread('Page.jpg')
page = cv2.cvtColor(page, cv2.COLOR_BGR2GRAY)
page = cv2.threshold(page, 127, 255, cv2.THRESH_BINARY_INV)[1]
# project the page to the side and smooth it with a gaussian
projection = np.sum(page, 1)
gaussian_filter = np.exp(-(np.arange(-3, 3, 0.1)**2))
gaussian_filter /= np.sum(gaussian_filter)
smooth = np.convolve(projection, gaussian_filter)
# find the pixel values where we expect lines to start and end
mask = smooth > np.average(smooth)
edges = np.convolve(mask, [1, -1])
line_starts = np.where(edges == 1)[0]
line_endings = np.where(edges == -1)[0]
# count lines with peaks on the lower side
lower_peaks = 0
for start, end in zip(line_starts, line_endings):
    line = smooth[start:end]
    if np.argmax(line) < len(line)/2:
        lower_peaks += 1
print(lower_peaks / len(line_starts))
This prints 0.125 for the given image, so it is not oriented correctly and must be flipped.
Note that this approach might break badly if there are pictures or anything else not organized in lines of text (maybe math or figures). Another problem would be too few lines, resulting in bad statistics.
Also, different fonts might result in different distributions. You can try this on a few images and see if the approach works. I don't have enough data.
You can use the Alyn module. To install it:
pip install alyn
Then, to use it to deskew images (taken from the homepage):
from alyn import Deskew
d = Deskew(
    input_file='path_to_file',
    display_image='preview the image on screen',
    output_file='path_for_deskewed image',
    r_angle='offset_angle_in_degrees_to_control_orientation')
d.run()
Note that Alyn is only for deskewing text.
I am analyzing multiple images and need to be able to tell if they are shifted compared to a reference image. The purpose is to tell if the camera moved at all in between capturing images. I would ideally like to be able to correct the shift in order to still do the analysis, but at a minimum I need to be able to determine if an image is shifted and discard it if it's beyond a certain threshold.
Here are some examples of the shifts in an image I would like to detect:
I will use the first image as a reference and then compare all of the following images to it to figure out if they are shifted. The images are gray-scale (they are just displayed in color using a heat-map) and are stored in a 2-D numpy array. Any ideas how I can do this? I would prefer to use the packages I already have installed (scipy, numpy, PIL, matplotlib).
As Lukas Graf hints, you are looking for cross-correlation. It works well, if:
The scale of your images does not change considerably.
There is no rotation change in the images.
There is no significant illumination change in the images.
For plain translations cross-correlation is very good.
The simplest cross-correlation tool is scipy.signal.correlate. However, it uses the trivial method for cross-correlation, which is O(n^4) for a two-dimensional image with side length n. In practice, with your images it'll take very long.
The better tool is scipy.signal.fftconvolve, since convolution and correlation are closely related.
Something like this:
import numpy as np
import scipy.signal

def cross_image(im1, im2):
    # get rid of the color channels by performing a grayscale transform
    # the type cast into 'float' is to avoid overflows
    im1_gray = np.sum(im1.astype('float'), axis=2)
    im2_gray = np.sum(im2.astype('float'), axis=2)

    # get rid of the averages, otherwise the results are not good
    im1_gray -= np.mean(im1_gray)
    im2_gray -= np.mean(im2_gray)

    # calculate the correlation image; note the flipping of one of the images
    return scipy.signal.fftconvolve(im1_gray, im2_gray[::-1,::-1], mode='same')
The funny-looking indexing of im2_gray[::-1,::-1] rotates it by 180° (mirrors it both horizontally and vertically). This is the difference between convolution and correlation: correlation is a convolution with the second signal mirrored.
Now if we just correlate the first (topmost) image with itself, we get:
This gives a measure of self-similarity of the image. The brightest spot is at (201, 200), which is in the center for the (402, 400) image.
The brightest spot coordinates can be found:
np.unravel_index(np.argmax(corr_img), corr_img.shape)
The linear position of the brightest pixel is returned by argmax, but it has to be converted back into the 2D coordinates with unravel_index.
Next, we try the same by correlating the first image with the second image:
The correlation image looks similar, but the best correlation has moved to (149,200), i.e. 52 pixels upwards in the image. This is the offset between the two images.
This seems to work with these simple images. However, there may be false correlation peaks, as well, and any of the problems outlined in the beginning of this answer may ruin the results.
In any case you should consider using a windowing function. The choice of function is not that important, as long as something is used. Also, if you have problems with small rotation or scale changes, try correlating several small areas against the surrounding image. That will give you different displacements at different positions of the image.
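As a small addition to the answer above (not part of the original), here is a sketch of turning the correlation peak into an actual shift, plus a 2-D Hann window that could be multiplied with the demeaned grayscale images inside cross_image; im1 and im2 are assumed to be the reference and the test image loaded as arrays:
# Sketch: estimate the (row, column) shift of im2 relative to im1 from the peak.
corr_img = cross_image(im1, im2)
peak = np.unravel_index(np.argmax(corr_img), corr_img.shape)
center = np.array(corr_img.shape) // 2
shift = np.array(peak) - center  # e.g. (-52, 0) means im2 is shifted 52 pixels up
print('Estimated shift (rows, cols):', shift)

# Optional: a 2-D Hann window to reduce edge effects before correlating.
window = np.outer(np.hanning(im1.shape[0]), np.hanning(im1.shape[1]))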
Another way to solve it is to compute SIFT points in both images, use RANSAC to get rid of outliers, and then solve for the translation using a least-squares estimator.
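A hypothetical sketch of that suggestion (an addition, not code from the answer; the filenames are placeholders, and cv2.SIFT_create requires a recent OpenCV build):
import cv2
import numpy as np

# Sketch: match SIFT keypoints, reject outliers with RANSAC, then estimate the
# translation as the mean displacement of the inlier matches.
img1 = cv2.imread('reference.png', 0)
img2 = cv2.imread('shifted.png', 0)

sift = cv2.SIFT_create()  # cv2.xfeatures2d.SIFT_create() on older builds
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
src = np.float32([kp1[m.queryIdx].pt for m in matches])
dst = np.float32([kp2[m.trainIdx].pt for m in matches])

# RANSAC (via a partial affine fit) is used here only to obtain an inlier mask.
_, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
inliers = inliers.ravel().astype(bool)

# Least-squares translation = mean displacement of the inliers.
shift = np.mean(dst[inliers] - src[inliers], axis=0)
print('Estimated (dx, dy):', shift)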
As Bharat said, another option is using SIFT features and RANSAC:
import numpy as np
import cv2
from matplotlib import pyplot as plt

def crop_region(path, c_p):
    """
    This function crops the matched region in the input image
    c_p: corner points
    """
    # 3 or 4 channels, as in the original
    img = cv2.imread(path, -1)
    # mask
    mask = np.zeros(img.shape, dtype=np.uint8)
    # fill the match region
    channel_count = img.shape[2]
    ignore_mask_color = (255,) * channel_count
    cv2.fillPoly(mask, c_p, ignore_mask_color)
    # apply the mask
    matched_region = cv2.bitwise_and(img, mask)
    return matched_region

def features_matching(path_temp, path_train):
    """
    Function for Feature Matching + Perspective Transformation
    """
    img1 = cv2.imread(path_temp, 0)   # template
    img2 = cv2.imread(path_train, 0)  # input image
    min_match = 10

    # SIFT detector
    sift = cv2.xfeatures2d.SIFT_create()

    # extract the keypoints and descriptors with SIFT
    kps1, des1 = sift.detectAndCompute(img1, None)
    kps2, des2 = sift.detectAndCompute(img2, None)

    FLANN_INDEX_KDTREE = 0
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)

    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1, des2, k=2)

    # store all the good matches (g_match) as per Lowe's ratio test
    g_match = []
    for m, n in matches:
        if m.distance < 0.7 * n.distance:
            g_match.append(m)

    if len(g_match) > min_match:
        src_pts = np.float32([kps1[m.queryIdx].pt for m in g_match]).reshape(-1, 1, 2)
        dst_pts = np.float32([kps2[m.trainIdx].pt for m in g_match]).reshape(-1, 1, 2)

        M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
        matchesMask = mask.ravel().tolist()

        h, w = img1.shape
        pts = np.float32([[0, 0], [0, h-1], [w-1, h-1], [w-1, 0]]).reshape(-1, 1, 2)
        dst = cv2.perspectiveTransform(pts, M)

        img2 = cv2.polylines(img2, [np.int32(dst)], True, (0, 255, 255), 3, cv2.LINE_AA)
    else:
        print("Not enough matches have been found! - %d/%d" % (len(g_match), min_match))
        matchesMask = None

    draw_params = dict(matchColor=(0, 255, 255),
                       singlePointColor=(0, 255, 0),
                       matchesMask=matchesMask,  # draw only inliers
                       flags=2)

    # region corners
    cpoints = np.int32(dst)
    a, b, c = cpoints.shape

    # reshape to the standard format
    c_p = cpoints.reshape((b, a, c))

    # crop the matching region
    matching_region = crop_region(path_train, c_p)

    img3 = cv2.drawMatches(img1, kps1, img2, kps2, g_match, None, **draw_params)
    return (img3, matching_region)