I'm trying to find all the arrows in the original image using the template image below and draw a rectangle around each match. I do not want to use SIFT/SURF/homography/etc., only template matching. I also only want to use one template, not pre-generate 360 individual one-degree rotations of it as references.
Template:
Original Image:
This is my code so far:
import cv2
import numpy as np
import imutils
template = cv2.imread("C:\\Users\\Desktop\\All\\images\\template.png")
template = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
_ ,template = cv2.threshold(template,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
image = cv2.imread("C:\\Users\\Desktop\\All\\images\\original image.png")
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_ ,image = cv2.threshold(image,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
MATCH_THRESH = 4000000
for degrees in range(0, 360, 1):
    rotate = imutils.rotate_bound(template, degrees)
    w, h = rotate.shape[::-1]
    res = cv2.matchTemplate(image, rotate, cv2.TM_SQDIFF)
    loc = np.where(res < MATCH_THRESH)
    for pt in zip(*loc[::-1]):
        rect = cv2.rectangle(image, pt, (pt[0] + w, pt[1] + h), (0, 255, 0), 4)
        cv2.imshow("matches", image)
        cv2.imshow("rectangle", rect)
        cv2.waitKey(500)
        cv2.destroyAllWindows()
        print('Match for deg {}, pt ({}, {}), sqdiff {}'.format(degrees, pt[0], pt[1], res[pt[1], pt[0]]))
I take both .pngs, convert them to grayscale, and then to black and white with Otsu thresholding, which should help with the template matching.
Next I rotate my template in one-degree steps through 360 degrees. "degrees" holds the current angle and drives the rotation of my template via rotate = imutils.rotate_bound(template, degrees).
Then I run cv2.matchTemplate for each angle, save the locations of points whose squared difference falls below a certain threshold (for TM_SQDIFF, lower is a better match), and draw a rectangle around each found match based on the rotated template's size.
This is where I'm running into an issue: I can't seem to get it to display the rectangles. I know it's finding the points, because it prints them. I've tried every combination of cv2.imshow. Do you see something I don't?
Thank you.
You are not able to see the coloured rectangles because the image you are drawing them on has been converted to grayscale (and thresholded), so the green (0, 255, 0) colour is lost. I suggest you store a copy of the image before converting it to grayscale and draw the rectangles on that copy.
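A minimal sketch of that fix, reusing the paths from the question (only the extra copy and the drawing target change):

import cv2

image = cv2.imread("C:\\Users\\Desktop\\All\\images\\original image.png")
display = image.copy()  # keep a colour copy to draw on

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# ... run cv2.matchTemplate against bw as before, then draw on the colour copy:
# cv2.rectangle(display, pt, (pt[0] + w, pt[1] + h), (0, 255, 0), 4)
# cv2.imshow("matches", display)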
I work at a studio that does school photos, and we are trying to make a script to eliminate the job of cropping each photo to a template. The photos we work with are fairly uniform, but they vary a bit in resolution and head position. I took up the mantle of trying to write the script with my fairly limited Python knowledge, and through a lot of trial and error and online resources I think I have got most of the way there.
At the moment I am trying to figure out the best way to crop the image from the NumPy array with the head where I want it, and I just can't find a good flexible solution. The head needs to be positioned slightly differently for pose 1 and pose 2, so it needs to be easy to change on the fly (I will probably implement some sort of simple GUI for input like that, but for now I can just change the code).
I also need to be able to change the output resolution of the photos so they are all uniform (2000x2500). Does anyone have any ideas?
At the moment this is my current code; it just saves the detected face square:

import cv2
import os.path
import glob

# Cascade path
cascPath = 'haarcascade_frontalface_default.xml'

# Create the Haar cascade
faceCascade = cv2.CascadeClassifier(cascPath)

# Check for the output folder and create it if it's not there
if not os.path.exists('output'):
    os.makedirs('output')

# Read images
images = glob.glob('*.jpg')

for c, i in enumerate(images):
    image = cv2.imread(i, 1)

    # Convert to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Find face(s) using the cascade
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,    # how much the image is shrunk at each scale step
        minNeighbors=5,     # how many neighbouring detections are needed for a face to be valid
        minSize=(500, 500)  # minimum face size in pixels
    )

    # Output the number of faces found in the image
    print('Found {0} faces!'.format(len(faces)))

    # Crop to the face rectangle
    for (x, y, w, h) in faces:
        imgCrop = image[y:y+h, x:x+w]

    if len(faces) > 0:
        # Save the crop to the output folder with the original name
        cv2.imwrite('output/' + i, imgCrop)
I can crop using it like this:
# Crop padding
left = 300
right = 300
top = 400
bottom = 1000

for (x, y, w, h) in faces:
    imgCrop = image[y-top:y+h+bottom, x-left:x+w+right]
but that outputs fairly random resolutions, and the result changes with the input image's resolution.
TL;DR
To set a new resolution, you can use cv2.resize. Resizing loses pixel information, so choose a suitable interpolation method.
Also note that face_recognition loads images in RGB channel order while OpenCV works in BGR, so you may need to swap channels before saving with cv2.imwrite.
crop = cv2.resize(src=crop, dsize=(2000, 2500), interpolation=cv2.INTER_LANCZOS4)
crop = cv2.cvtColor(crop, cv2.COLOR_RGB2BGR)  # cv2.imwrite expects BGR channel order
cv2.imwrite("image-1.png", crop)
Suggestion:
One approach is to use Python's face_recognition library. The idea is to train on two sample images, then predict the faces in the next image based on those training images.
For instance, the following are the training images:
We want to predict the faces in the image below:
When we get the facial encodings of the training images and apply them to the next image:
import face_recognition
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image, ImageDraw

# Load a sample picture and learn how to recognize it.
first_image = face_recognition.load_image_file("images/ex.jpg")
first_face_encoding = face_recognition.face_encodings(first_image)[0]

# Load a second sample picture and learn how to recognize it.
second_image = face_recognition.load_image_file("images/index.jpg")
sec_face_encoding = face_recognition.face_encodings(second_image)[0]

# Create arrays of known face encodings and their names
known_face_encodings = [
    first_face_encoding,
    sec_face_encoding
]

print('Learned encoding for', len(known_face_encodings), 'images.')

# Load an image with an unknown face
unknown_image = face_recognition.load_image_file("images/babes.jpg")

# Find all the faces and face encodings in the unknown image
face_locations = face_recognition.face_locations(unknown_image)
face_encodings = face_recognition.face_encodings(unknown_image, face_locations)

# Convert the image to a PIL-format image so that we can draw on top of it with the Pillow library
# See http://pillow.readthedocs.io/ for more about PIL/Pillow
pil_image = Image.fromarray(unknown_image)

# Create a Pillow ImageDraw Draw instance to draw with
draw = ImageDraw.Draw(pil_image)

# Loop through each face found in the unknown image
for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):
    matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
    face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
    best_match_index = np.argmin(face_distances)
    draw.rectangle(((left, top), (right, bottom)), outline=(0, 0, 255), width=5)

# Remove the drawing library from memory as per the Pillow docs
del draw

# Display the resulting image
plt.imshow(pil_image)
plt.show()
The output will be:
The above is my suggestion. When you resize the current image to a new resolution, there will be some pixel loss, so you need to use an interpolation method.
For instance, after finding the face locations, select the coordinates in the original image:

# Add after the draw.rectangle call.
crop = unknown_image[top:bottom, left:right]

Set the new resolution to 2000 x 2500 and interpolate with cv2.INTER_LANCZOS4.
Possible question: why cv2.INTER_LANCZOS4?
Of course, you can select whatever you like, but in this post cv2.INTER_LANCZOS4 was suggested.
crop = cv2.resize(src=crop, dsize=(2000, 2500), interpolation=cv2.INTER_LANCZOS4)

Save the image:

crop = cv2.cvtColor(crop, cv2.COLOR_RGB2BGR)  # cv2.imwrite expects BGR channel order
cv2.imwrite("image-1.png", crop)
The outputs are around 4.3 MB, so I can't display them here. From the final result we can clearly see and identify the faces; the library finds them precisely.
Here is what you can do: either use training images from your own set or use the example above, then apply the face-recognition step to each image using the detected face locations and save the results to a directory.
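A minimal sketch of that batch step, assuming .jpg inputs in the working directory and reusing the crop/resize idea from above (the output folder name and the 2000x2500 size are illustrative):

import glob
import os

import cv2
import face_recognition

os.makedirs('output', exist_ok=True)

for path in glob.glob('*.jpg'):
    image = face_recognition.load_image_file(path)  # returns an RGB array
    for (top, right, bottom, left) in face_recognition.face_locations(image):
        crop = image[top:bottom, left:right]
        crop = cv2.resize(crop, dsize=(2000, 2500), interpolation=cv2.INTER_LANCZOS4)
        crop = cv2.cvtColor(crop, cv2.COLOR_RGB2BGR)  # cv2.imwrite expects BGR
        cv2.imwrite(os.path.join('output', os.path.basename(path)), crop)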
Here is how I got it to crop the way I wanted; this is added right below the "output number of faces" print:
# Get the face position into variables (might not be needed, but I did it)
for (x, y, w, h) in faces:
    xdis = x
    ydis = y

# Get the scale values by dividing the wanted head size by the detected head size
ws = 600 / w
hs = 600 / h

# Scale the image to get the head to the right size (bilinear interpolation by default);
# fx scales horizontally and fy vertically
scale = cv2.resize(image, (0, 0), fx=ws, fy=hs)

# Calculate the head position for the given values
sxdis = int(xdis * ws)  # apply the scale to the x distance and make it an integer
sydis = int(ydis * hs)  # apply the scale to the y distance and make it an integer
sycent = sydis + 300    # add half the head height to get the center
ystart = sycent - 700   # subtract where you want the head center to be, in pixels (vertical)
yend = ystart + 2500    # add whatever you want the vertical resolution to be
xcent = sxdis + 300     # add half the head width to get the center
xstart = xcent - 1000   # subtract where you want the head center to be, in pixels (horizontal)
xend = xstart + 2000    # add whatever you want the horizontal resolution to be

# Crop the image (note: if the face is near an edge, ystart/xstart can go negative
# and the slice comes back empty; clamping with max(0, ...) avoids that)
cropped = scale[ystart:yend, xstart:xend]
It's a mess, but it works exactly how I wanted it to work.
I ended up staying with OpenCV instead of switching to face_recognition because of speed, but I might switch over if I can get multithreading to work with it.
The code I've produced to detect and correct skew is giving me inconsistent results. I'm currently working on a project which uses OCR text extraction on images (via Python and OpenCV), so removing skew is key if accurate results are desired. My code uses cv2.minAreaRect to detect skew.
The images I'm using are all identical (and will be in the future), so I'm unsure what is causing these inconsistencies. I've included two before-and-after pairs (with the skew value from cv2.minAreaRect) where I applied my code: one showing successful removal of skew, and one where the skew was not removed (it looks like it even added more skew).
Image 1 Before (-87.88721466064453)
Image 1 After (successful deskew)
Image 2 Before (-5.766754150390625)
Image 2 After (unsuccessful deskew)
My code is below. Note: I've worked with many more images than those included here. The detected skew so far has always been in the range [-10, 0) or (-90, -80], so I attempted to account for this in my code.
import cv2
import numpy as np

# img is the input BGR image, loaded earlier with cv2.imread
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_gray = cv2.bitwise_not(img_gray)
thresh = cv2.threshold(img_gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]

coords = np.column_stack(np.where(thresh > 0))
angle = cv2.minAreaRect(coords)[-1]

if angle < 0 and angle >= -10:
    angle = -angle  # this was intended to undo skew for values in [-10, 0) by rotating with the opposite sign
else:
    angle = (90 + angle) / 2

(h, w) = img.shape[:2]
center = (w // 2, h // 2)
M = cv2.getRotationMatrix2D(center, angle, 1.0)
deskewed = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)
I've looked through various posts and articles for an adequate solution but have been unsuccessful. This post was the most helpful for understanding the skew values, but even then I couldn't get very far.
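For reference, a widely used normalization of the angle returned by cv2.minAreaRect (a sketch of the common recipe, not necessarily a fix for every case; note that OpenCV builds before 4.5 report this angle in [-90, 0)):

angle = cv2.minAreaRect(coords)[-1]
if angle < -45:
    # the box is closer to vertical, so bring the angle back into (-45, 45]
    angle = -(90 + angle)
else:
    angle = -angle
# then rotate by angle with cv2.getRotationMatrix2D / cv2.warpAffine as in the code above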
A very good text deskew tool can be found in Python Wand, which uses ImageMagick. It is based upon the Radon transform.
Form 1:
Form 2:
from wand.image import Image
from wand.display import display

with Image(filename='form1.png') as img:
    img.deskew(0.4 * img.quantum_range)
    img.save(filename='form1_deskew.png')
    display(img)

with Image(filename='form2.png') as img:
    img.deskew(0.4 * img.quantum_range)
    img.save(filename='form2_deskew.png')
    display(img)
Form 1 deskewed:
Form 2 deskewed:
I already answered this here: How to deskew a scanned text page with ImageMagick?
Here is a piece of code that can help you deskew the image:
import numpy as np
from skimage import io
from skimage.transform import rotate
from skimage.color import rgb2gray
from deskew import determine_skew
from matplotlib import pyplot as plt

def deskew(_img):
    image = io.imread(_img)
    grayscale = rgb2gray(image)
    angle = determine_skew(grayscale)
    rotated = rotate(image, angle, resize=True) * 255
    return rotated.astype(np.uint8)

def display_before_after(_original):
    plt.subplot(1, 2, 1)
    plt.imshow(io.imread(_original))
    plt.subplot(1, 2, 2)
    plt.imshow(deskew(_original))

display_before_after('img_35h.jpg')
Before:
After:
Reference and Source: http://aishelf.org/deskew/
I have an image of pots, all the same size. The user crops one area of the image (say the first pot, in the top-left corner), and from the pattern the user cropped I must automatically perform the other crops and save their coordinates. Is there a technique to do this without template matching, or do you think I can improve my code to do it with template matching only?
So far I have tried template matching and saved the coordinates of each match, but as you can see in the attached image the result is not quite satisfactory: I don't match all the pots, and for some areas I draw several rectangles for just one pot (more of them the lower I set the threshold). Any help is highly appreciated.
Here is my code:
# Import necessary dependencies
import cv2
import numpy as np

# Read the source image
img_rgb = cv2.imread('image.jpg')
# Convert the source image to gray
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
# Read the pattern image designed by the user
template = cv2.imread('mattern.png', 0)
# Get the shape of the pattern image
w, h = template.shape[::-1]
# Apply the cv2 template matching function
res = cv2.matchTemplate(img_gray, template, cv2.TM_CCOEFF_NORMED)

# List of coordinates of the matched templates
list_coordinates = []
labeled_coordinates = []
# Threshold applied to the template matching
threshold = 0.55
# Apply the threshold to the template matching result
loc = np.where(res >= threshold)
# Directory to save the matched templates (patterns)
s_dir = "s_img/"
# Counter to add to the name of the saved image
i = 1

for pt in zip(*loc[::-1]):
    # Draw a rectangle around each area that satisfies the condition
    cv2.rectangle(img_rgb, pt, (pt[0] + w, pt[1] + h), (0, 0, 255), 1)
    top_left = pt
    # Crop every matched template (pattern)
    crop_img = img_rgb[top_left[1]:top_left[1] + h, top_left[0]:top_left[0] + w]
    bottom_right = (top_left[0] + w, top_left[1] + h)
    # Save the cropped patterns for future use
    cv2.imwrite(s_dir + "crop_1_" + str(i) + ".png", crop_img)
    # Label the coordinates for future use
    labeled_coordinates = ["crop_1_" + str(i), top_left[0], top_left[1], bottom_right[0], bottom_right[1]]
    # Add the coordinates to a list
    list_coordinates.append(labeled_coordinates)
    i += 1

cv2.imshow('template', template)
cv2.imshow('matched', img_rgb)
cv2.waitKey(0)
cv2.destroyAllWindows()
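One common fix for the several-rectangles-per-pot problem is to suppress overlapping detections and keep only the best one per cluster. A minimal non-maximum-suppression sketch, assuming loc, res, w, h and img_rgb from the code above (the helper name and the 0.3 threshold are illustrative):

import cv2
import numpy as np

def suppress_overlaps(points, scores, w, h, overlap_thresh=0.3):
    # Keep only the highest-scoring box among heavily overlapping ones.
    boxes = np.array([(x, y, x + w, y + h) for (x, y) in points], dtype=np.float64)
    order = np.argsort(scores)[::-1]  # best TM_CCOEFF_NORMED score first
    keep = []
    while len(order) > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the best remaining box with all the others
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        overlap = inter / (w * h)  # all boxes share the template size
        order = order[1:][overlap <= overlap_thresh]
    return [points[i] for i in keep]

points = list(zip(*loc[::-1]))
scores = [res[y, x] for (x, y) in points]
for (x, y) in suppress_overlaps(points, scores, w, h):
    cv2.rectangle(img_rgb, (x, y), (x + w, y + h), (0, 0, 255), 1)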
I have some hundreds of images (scanned documents), most of them are skewed. I wanted to de-skew them using Python.
Here is the code I used:
import numpy as np
import cv2
from skimage.transform import radon

filename = 'path_to_filename'

# Load file, converting to grayscale
img = cv2.imread(filename)
I = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
h, w = I.shape

# If the resolution is high, resize the image to reduce processing time.
if (w > 640):
    I = cv2.resize(I, (640, int((h / w) * 640)))

I = I - np.mean(I)  # Demean; make the brightness extend above and below zero

# Do the radon transform
sinogram = radon(I)

# Find the RMS value of each row and find the "busiest" rotation,
# where the transform is lined up perfectly with the alternating dark
# text and white lines
r = np.array([np.sqrt(np.mean(np.abs(line) ** 2)) for line in sinogram.transpose()])
rotation = np.argmax(r)
print('Rotation: {:.2f} degrees'.format(90 - rotation))

# Rotate and save with the original resolution
M = cv2.getRotationMatrix2D((w/2, h/2), 90 - rotation, 1)
dst = cv2.warpAffine(img, M, (w, h))
cv2.imwrite('rotated.jpg', dst)
This code works well with most documents, except for some angles: 180 and 0 are often detected as the same angle, and likewise 90 and 270 (i.e. it does not distinguish between 0 and 180, or between 90 and 270). So I get a lot of upside-down documents.
Here is an example:
The resulting image I get is the same as the input image.
Is there any suggestion to detect whether an image is upside down using OpenCV and Python?
PS: I tried to check the orientation using EXIF data, but it didn't lead to any solution.
EDIT:
It is possible to detect the orientation using Tesseract (pytesseract for Python), but it is only possible when the image contains a lot of characters.
For anyone who may need this:
import cv2
import pytesseract
print(pytesseract.image_to_osd(cv2.imread(file_name)))
If the document contains enough characters, Tesseract can detect the orientation. However, when the image has few lines, the orientation angle suggested by Tesseract is usually wrong, so this cannot be a 100% solution.
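A small sketch of acting on that output: image_to_osd returns a text block containing a "Rotate:" line that can be parsed and applied. The regex and the use of imutils here are my own additions, and the rotation direction is worth verifying on one sample first:

import re
import cv2
import imutils
import pytesseract

img = cv2.imread(file_name)
osd = pytesseract.image_to_osd(img)
rotation = int(re.search(r'Rotate: (\d+)', osd).group(1))

# "Rotate:" is the rotation Tesseract suggests to make the text upright;
# imutils.rotate_bound rotates clockwise for positive angles.
if rotation != 0:
    img = imutils.rotate_bound(img, rotation)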
Here is a Python3/OpenCV4 script to align scanned documents.
Rotate the document and sum the rows. When the document has 0 or 180 degrees of rotation, there will be a lot of black pixels in the image:
Use a score-keeping method: score each image for its likeness to a zebra pattern. The image with the best score has the correct rotation. The image you linked to was off by 0.5 degrees. I omitted some functions for readability; the full code can be found here.
# Rotate the image around in a circle
angle = 0
scores = []
while angle <= 360:
    # Rotate the source image
    img = rotate(src, angle)
    # Crop the center 1/3rd of the image (roi is filled with text)
    h, w = img.shape
    buffer = min(h, w) - int(min(h, w) / 1.15)
    roi = img[int(h/2 - buffer):int(h/2 + buffer), int(w/2 - buffer):int(w/2 + buffer)]
    # Create a background to draw the transform on
    bg = np.zeros((buffer * 2, buffer * 2), np.uint8)
    # Compute the sums of the rows
    row_sums = sum_rows(roi)
    # High score --> zebra stripes
    score = np.count_nonzero(row_sums)
    scores.append(score)
    # Image has the best rotation so far
    if score <= min(scores):
        # Save the rotated image
        print('found optimal rotation')
        best_rotation = img.copy()
    k = display_data(roi, row_sums, buffer)
    if k == 27: break
    # Increment the angle and try again
    angle += .75
cv2.destroyAllWindows()
How to tell if the document is upside down? Fill in the area from the top of the document to the first non-black pixel in the image. Measure the area in yellow. The image that has the smallest area will be the one that is right-side-up:
# Find the area from the top of page to top of image
_, bg = area_to_top_of_text(best_rotation.copy())
right_side_up = sum(sum(bg))
# Flip image and try again
best_rotation_flipped = rotate(best_rotation, 180)
_, bg = area_to_top_of_text(best_rotation_flipped.copy())
upside_down = sum(sum(bg))
# Check which area is larger
if right_side_up < upside_down: aligned_image = best_rotation
else: aligned_image = best_rotation_flipped
# Save aligned image
cv2.imwrite('/home/stephen/Desktop/best_rotation.png', 255-aligned_image)
cv2.destroyAllWindows()
Assuming you did run the angle-correction already on the image, you can try the following to find out if it is flipped:
1. Project the corrected image onto the y-axis, so that you get a 'peak' for each line. Important: there are actually almost always two sub-peaks!
2. Smooth this projection by convolving with a Gaussian in order to get rid of fine structure, noise, etc.
3. For each peak, check whether the stronger sub-peak is on top or at the bottom.
4. Calculate the fraction of peaks that have their sub-peak on the bottom side. This is your scalar value that gives you the confidence that the image is oriented correctly.
The peak finding in step 3 is done by finding sections with above-average values. The sub-peaks are then found via argmax.
Here's a figure to illustrate the approach on a few lines of your example image:
Blue: original projection
Orange: smoothed projection
Horizontal line: average of the smoothed projection over the whole image.
Here's some code that does this:
import cv2
import numpy as np

# Load the image, convert to grayscale, threshold it at 127 and invert.
page = cv2.imread('Page.jpg')
page = cv2.cvtColor(page, cv2.COLOR_BGR2GRAY)
page = cv2.threshold(page, 127, 255, cv2.THRESH_BINARY_INV)[1]

# Project the page to the side and smooth it with a Gaussian
projection = np.sum(page, 1)
gaussian_filter = np.exp(-(np.arange(-3, 3, 0.1)**2))
gaussian_filter /= np.sum(gaussian_filter)
smooth = np.convolve(projection, gaussian_filter)

# Find the pixel values where we expect lines to start and end
mask = smooth > np.average(smooth)
edges = np.convolve(mask, [1, -1])
line_starts = np.where(edges == 1)[0]
line_endings = np.where(edges == -1)[0]

# Count lines with peaks on the lower side
lower_peaks = 0
for start, end in zip(line_starts, line_endings):
    line = smooth[start:end]
    if np.argmax(line) < len(line) / 2:
        lower_peaks += 1

print(lower_peaks / len(line_starts))
This prints 0.125 for the given image, so it is not oriented correctly and must be flipped.
Note that this approach might break badly if there are pictures or anything not organized in lines of text in the image (maybe math or figures). Another problem would be too few lines, resulting in bad statistics.
Different fonts might also result in different distributions. You can try this on a few images and see if the approach works; I don't have enough data to say.
You can use the Alyn module. To install it:
pip install alyn
Then to use it to deskew images (taken from the homepage):

from alyn import Deskew

d = Deskew(
    input_file='path_to_file',
    display_image='preview the image on screen',
    output_file='path_for_deskewed image',
    r_angle='offset_angle_in_degrees_to_control_orientation')
d.run()
Note that Alyn is only for deskewing text.
Here is some dummy code:

import numpy as np
import skimage.transform

def radon(img):
    theta = np.linspace(-90., 90., 180, endpoint=False)
    sinogram = skimage.transform.radon(img, theta=theta, circle=True)
    return sinogram
I need to get the sinogram this code outputs without using skimage, but I am unable to find any such implementation in Python. Can you provide an implementation using only OpenCV, NumPy or other lightweight libraries?
Edit: I need this to get the dominating angle of the image. I am trying to fix the tilt before character segmentation for an OCR system. Examples are given below:
On the left side are the inputs, and on the right side are the desired outputs.
Edit 2: If you can provide any other way to get this output, that will help too.
Edit 3: Some sample images:
https://drive.google.com/open?id=0B2MwGW-_t275Q2Nxb3k3TGg4N1U
Well, I had a similar problem. After spending some time googling the issue, I found a solution that worked for me. I hope it helps.
import numpy as np
import cv2
from skimage.transform import radon
filename = 'your_filename'
# Load file, converting to grayscale
img = cv2.imread(filename)
I = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
h, w = I.shape
# If the resolution is high, resize the image to reduce processing time.
if (w > 640):
    I = cv2.resize(I, (640, int((h / w) * 640)))
I = I - np.mean(I) # Demean; make the brightness extend above and below zero
# Do the radon transform
sinogram = radon(I)
# Find the RMS value of each row and find "busiest" rotation,
# where the transform is lined up perfectly with the alternating dark
# text and white lines
r = np.array([np.sqrt(np.mean(np.abs(line) ** 2)) for line in sinogram.transpose()])
rotation = np.argmax(r)
print('Rotation: {:.2f} degrees'.format(90 - rotation))
# Rotate and save with the original resolution
M = cv2.getRotationMatrix2D((w/2, h/2), 90 - rotation, 1)
dst = cv2.warpAffine(img, M, (w, h))
cv2.imwrite('rotated.jpg', dst)
Test:
Original image:
Rotated image: (rotation degree is -9°)
CREDITS:
Detecting rotation and line spacing of image of page of text using Radon transform
The problem is that after rotating the image, you will get some black borders. For your case, I think it will not affect the OCR processing.
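If skimage itself is to be avoided, as the earlier question asks, the Radon transform can be approximated with only OpenCV and NumPy by rotating the image and summing along rows, since each row sum is a line integral at that angle. A minimal sketch, assuming a 2-D grayscale input array; the function name and angle grid are illustrative:

import cv2
import numpy as np

def radon_opencv(img, n_angles=180):
    # Approximate the Radon transform: rotate the image to each angle
    # and take the row sums as the line integrals for that angle.
    h, w = img.shape
    center = (w / 2.0, h / 2.0)
    sinogram = np.zeros((h, n_angles), dtype=np.float64)
    for i, theta in enumerate(np.linspace(-90.0, 90.0, n_angles, endpoint=False)):
        M = cv2.getRotationMatrix2D(center, theta, 1.0)
        rotated = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_LINEAR)
        sinogram[:, i] = rotated.sum(axis=1)
    return sinogram

The dominating angle can then be read off the same way as above, e.g. rotation = np.argmax([np.sqrt(np.mean(col ** 2)) for col in sinogram.T]).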