My question is not too far off from the "Image Alignment (ECC) in OpenCV ( C++ / Python )" article.
I also found the following article about facial alignment to be very interesting, but WAY more complex than my problem.
Wow! I can really go down the rabbit-hole.
My question is WAY simpler.
I have a scanned document that I have treated as a "template". In this template I have manually mapped the pixel regions that I require info from as:
area = (x1,y1,x2,y2)
such that x1<x2, y1<y2.
Now, these regions are, as is likely obvious, a bit too specific to my "template".
All other files that I want to extract data from are mostly shifted by some unknown amount such that their true area for my desired data is:
area = (x1 + ε1, y1 + ε2, x2 + ε1, y2 + ε2)
Where ε1, ε2 are unknown in advance.
But the documents are otherwise HIGHLY similar outside of this shift.
I want to discover, ideally through OpenCV, what translation is required (ignoring rotation and full Euclidean transforms for the time being) to "align" these images, so that I can discover my ε values, shift my areas, and parse my data directly.
I have thought about using tesseract to mine the text from the document and then parse from there, but the documents also contain check boxes that are either filled or empty and that carry meaningful information for my problem.
The code I currently have for cropping the image is:
from PIL import Image
img = Image.open(img_path)
area = area_lookup['key']
cropped_img = img.crop(area)
cropped_img.show()
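To make the goal concrete, this is roughly what I imagine the alignment step looking like (an untested sketch on my part: the file names are placeholders, I assume both scans have the same pixel size, and my understanding is that cv2.phaseCorrelate can estimate a translation between two grayscale images):
import cv2
import numpy as np
from PIL import Image

# Placeholder file names; both scans are assumed to have the same pixel size
template = Image.open("template.png")
scan = Image.open("scan.png")

# phaseCorrelate wants single-channel float arrays of identical shape
t = np.asarray(template.convert("L"), dtype=np.float32)
s = np.asarray(scan.convert("L"), dtype=np.float32)
(eps1, eps2), _response = cv2.phaseCorrelate(t, s)

# Shift the template's crop box by the estimated offsets
# (the sign of the shift may need flipping depending on which image is "first")
x1, y1, x2, y2 = area_lookup['key']
shifted_area = tuple(int(round(v)) for v in (x1 + eps1, y1 + eps2, x2 + eps1, y2 + eps2))
scan.crop(shifted_area).show()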
My two sample files are attached.
My two images are:
We can assume my first image is my "template".
As you can see, the two images are very "similar" but one is moved slightly (human error). There may be cases where the rotation is more extreme, or the image is shifted more.
I would like to transform image 2 so that it is aligned with image 1 as closely as possible, and then parse data from it.
Any help would be sincerely appreciated.
Thank you very much
I've been given an image containing stars and ovals, and have been tasked with detecting which is which and counting how many of each are contained within the image. One such image with ovals only looks like this:
I first tried to solve the problem with OpenCV, following tutorials such as this one and this one.
However, I run into issues with both when bounding the ovals: one results in a count of 1 oval, whereas the other results in a count of 330.
I then tried using YOLOv4, thinking that it would be more useful when dealing with two different classes (stars and ovals). I used the following code to try bounding boxes on my sample image.
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
import matplotlib.pyplot as plt

box, label, count = cv.detect_common_objects(img)
output = draw_bbox(img, box, label, count)          # draw the detections on the image
output = cv2.cvtColor(output, cv2.COLOR_BGR2RGB)    # BGR -> RGB for matplotlib
plt.figure(figsize=(10, 10))
plt.axis("off")
plt.imshow(output)
plt.show()
However I received IndexError: Invalid index to scalar variable.
Can anyone point me in the right direction on how to proceed?
I first need to be able to do it for one class, and then multiple classes, before expanding into doing it for several images automatically.
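For reference, the contour-based direction from those tutorials boils down to something like the following (a rough sketch of what I am attempting, not my exact code; the file names, threshold, and area cut-off values are guesses):
import cv2

img = cv2.imread("ovals.png")                 # placeholder path to my sample image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Binarize so the shapes become white blobs on a black background
_, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)

# External contours only, so each shape is counted once (OpenCV 4.x return values)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Drop tiny contours that are probably noise; the cut-off is a guess
shapes = [c for c in contours if cv2.contourArea(c) > 100]
print("detected shapes:", len(shapes))

cv2.drawContours(img, shapes, -1, (0, 255, 0), 2)
cv2.imwrite("ovals_boxed.png", img)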
Thanks
I am trying to select an area of an image so I can run some analysis on that specific area.
However, when I searched online, I was only able to find guides on how to select a rectangular area. I need to select an area that is drawn using my mouse. An example of such an area is included below.
Would anyone be able to recommend to me some key words or libraries to search to help me with this or links to guides that do something similar?
Also, I am not sure if it is necessary information, but the analysis I am trying to do on the region of interest is to find the ratio of white to black pixels in that specific area.
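For what it's worth, once I have the drawn outline as a list of points, I imagine the ratio part would look something like this (an untested sketch; roi_points and the file name are placeholders for whatever the mouse drawing produces):
import cv2
import numpy as np

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

# roi_points would come from the mouse-drawn outline (placeholder values here)
roi_points = np.array([[10, 10], [200, 30], [180, 220], [20, 200]], dtype=np.int32)

# Build a filled mask of the free-form region
mask = np.zeros(img.shape, dtype=np.uint8)
cv2.fillPoly(mask, [roi_points], 255)

# Binarize the image, then count white vs. black pixels inside the mask only
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
inside = mask == 255
white = np.count_nonzero(binary[inside] == 255)
black = np.count_nonzero(binary[inside] == 0)
print("white:black ratio:", white / black if black else float("inf"))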
I produced a simple working example based on this answer. I also tried using scipy.ndimage.morphology.binary_fill_holes, but could not get it to work. Note that the provided function takes a little longer, since it assumes that the input image is grayscale and not binarized.
I specifically avoided using OpenCV, since I find the setup a bit tedious, but I think it should provide an equivalent as well (see here).
Additionally, my "binarization" is kind of hacky, but you can probably figure out how to parse your image into a valid format yourself (and it might be easier if you produce the result within a program). In any case, I would suggest making sure that you have a proper image format, since JPEG compression might break the connectivity of your shapes and cause issues in certain cases.
import scipy as sp
import numpy as np
import scipy.ndimage
import matplotlib.pyplot as plt

def flood_fill(test_array, h_max=255):
    input_array = np.copy(test_array)
    el = sp.ndimage.generate_binary_structure(2, 2).astype(int)
    inside_mask = sp.ndimage.binary_erosion(~np.isnan(input_array), structure=el)
    output_array = np.copy(input_array)
    output_array[inside_mask] = h_max
    output_old_array = np.copy(input_array)
    output_old_array.fill(0)
    el = sp.ndimage.generate_binary_structure(2, 1).astype(int)
    # morphological reconstruction: erode the filled image, clamped to the
    # input, until it stops changing
    while not np.array_equal(output_old_array, output_array):
        output_old_array = np.copy(output_array)
        output_array = np.maximum(input_array,
                                  sp.ndimage.grey_erosion(output_array, size=(3, 3), footprint=el))
    return output_array

x = plt.imread("test.jpg")
# "convert" to grayscale and invert
binary = 255 - x[:, :, 0]

filled = flood_fill(binary)

plt.imshow(filled)
plt.show()
This produces the following result:
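For reference, the scipy.ndimage binary_fill_holes route mentioned above would presumably look roughly like this on a properly binarized input (untested here, since I could not get it to behave on my data):
import scipy.ndimage
import matplotlib.pyplot as plt

x = plt.imread("test.jpg")
binary = x[:, :, 0] < 128          # True where the drawing is (dark pixels)

filled = scipy.ndimage.binary_fill_holes(binary)

plt.imshow(filled)
plt.show()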
I'm writing a script to process an image and extract a PDF417 2D barcode, which will then be decoded. I'm extracting the ROI without problems, but when I try to correct the perspective using cv2.warpPerspective, the result is not as expected.
The following is the extracted barcode, the red dots are the detected corners:
This is the resulting image:
This is the code I'm using for the transformation (the values are found by the script, but for the previous images they are as follows):
import cv2
import numpy as np

box_or = np.float32([[23, 30], [395, 23], [26, 2141], [389, 2142]])
box_fix = np.float32([[0, 0], [415, 0], [0, 2159], [415, 2159]])
M = cv2.getPerspectiveTransform(box_or, box_fix)
warped = cv2.warpPerspective(img, M, (cols, rows))
I've checked and I don't find anything wrong with the code, yet the transformation is definitely wrong. The amount of perspective distortion in the extracted ROI is minimal, but it may still affect the decoding process.
So, is there a way to get rid of the perspective distortion? Am I doing something wrong? Is this a known bug or something? Any help is very much welcome.
BTW, I'm using OpenCV 3.3.0
It looks like you're giving the image coordinates as (y, x). I know the interpretation of coordinates varies within OpenCV.
In the homography example code they provide the coordinates as (x,y) - at least based on their use of 'h' and 'w' in this snippet:
h,w = img1.shape
pts = np.float32([ [0,0],[0,h-1],[w-1,h-1],[w-1,0] ]).reshape(-1,1,2)
dst = cv2.perspectiveTransform(pts,M)
So try providing the coordinates as (x,y) to both getPerspectiveTransform and warpPerspective.
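One quick way to double-check which ordering you have is to draw the detected corners back onto the image yourself; cv2.circle takes the center as (x, y), so flipped coordinates will show up as dots that miss the corners (a small sketch, using the values from the question):
import cv2
import numpy as np

box_or = np.float32([[23, 30], [395, 23], [26, 2141], [389, 2142]])

# draw on a colour copy so the red dots are visible
check = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) if img.ndim == 2 else img.copy()
for x, y in box_or:
    cv2.circle(check, (int(x), int(y)), 10, (0, 0, 255), -1)
cv2.imwrite("corner_check.png", check)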
Imagine that some new image X arrives, and I want to know if X is new or has already been encountered before. I have code, below, that shrinks the image and then converts it to a hash code. I can then see via a single hash look-up if I've already encountered an image with the same hash code, so it's very fast.
My question is, is there an efficient way for me to see if a similar image, but one with a different hash code, has already been seen? I was going to title this question something like "Data structure for efficiently determining whether a similar, non-identical item is already contained", but decided that would be an instance of the XY problem.
When I say that this new image is "similar," I'm thinking of one that's perhaps gone through lossy compression and so looks like the original to the human eye but is not identical. Normally shrinking the image eliminates the difference, but not always, and if I shrink the image too much I start getting false positives.
Here's my current code:
import PIL.Image

seen_images = {}  # This would really be a shelf or something

# From http://www.guguncube.com/1656/python-image-similarity-comparison-using-several-techniques
def image_pixel_hash_code(image):
    pixels = list(image.getdata())
    avg = sum(pixels) / len(pixels)
    bits = "".join(map(lambda pixel: '1' if pixel < avg else '0', pixels))  # '00010100...'
    hexadecimal = int(bits, 2).__format__('016x').upper()
    return hexadecimal

def process_image(filepath):
    thumb = PIL.Image.open(filepath).resize((128, 128)).convert("L")
    code = image_pixel_hash_code(thumb)
    previous_image = seen_images.get(code, None)
    if code in seen_images:
        print("'{}' already seen as '{}'".format(filepath, previous_image))
    else:
        seen_images[code] = filepath
You can put a path to a bunch of image files into a variable called IMAGE_ROOT and then try my code out with:
import os

for root, dirs, files in os.walk(IMAGE_ROOT):
    for filename in files:
        filepath = os.path.join(root, filename)
        try:
            process_image(filepath)
        except IOError:
            pass
There are a lot of methods for comparing images, but for your given example I suspect that simplicity and speed are the key factors (hence the hash as a first pass). Here are some suggestions - in all cases I'd suggest shrinking and cropping the image to a regular size and shape.
Smooth the image (Gaussian blur) before shrinking to minimise the influence of artefacts. Then apply the hash or other comparison.
Subtract the images from one another (RGB) and check the remainder. Identical images will return zero; compression artefacts will result in minor variations. You can either threshold, sum, or average the value and compare to a cut-off.
Use standard distance algorithms (see scipy.spatial.distance) to calculate the 'distance' between the two images. For example, Euclidean distance will give effectively the same result as the sum of the subtraction, while cosine distance will ignore intensity but match the profile of changes over the image, i.e. a darker version of the same image will be considered equivalent. For these you will need to flatten your image to a 1D array (a short sketch follows below).
The last two entail comparing every image to every other image when uploading, and that is going to get very computationally expensive for large numbers of images.
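A minimal sketch of the second and third suggestions, assuming two images already shrunk and cropped to the same shape (grayscale here rather than RGB for brevity; the 128x128 size, file names, and cut-off value are placeholders):
import numpy as np
import PIL.Image
from scipy.spatial import distance

def load_flat(filepath, size=(128, 128)):
    # shrink, grayscale, and flatten to a 1D float vector
    img = PIL.Image.open(filepath).resize(size).convert("L")
    return np.asarray(img, dtype=float).ravel()

a = load_flat("original.png")
b = load_flat("recompressed.png")

# Suggestion 2: mean absolute difference of the pixels
mean_abs_diff = np.abs(a - b).mean()

# Suggestion 3: standard distances; cosine ignores overall intensity
euclid = distance.euclidean(a, b)
cosine = distance.cosine(a, b)

# Placeholder cut-off; would need tuning on real data
print("probably the same image" if mean_abs_diff < 5 else "probably different")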
I have a number of images from Chinese genealogies, and I would like to be able to programmatically categorize them. Generally speaking, one type of image has primarily line-by-line text, while the other type may be in a grid or chart format.
Example photos
'Desired' type: http://www.flickr.com/photos/63588871#N05/8138563082/
'Other' type: http://www.flickr.com/photos/63588871#N05/8138561342/in/photostream/
Question: Is there a (relatively) simple way to do this? I have experience with Python, but little knowledge of image processing. Direction to other resources is appreciated as well.
Thanks!
Assuming that at least some of the grid lines are exactly or almost exactly vertical, a fairly simple approach might work.
I used PIL to find all the columns in the image where more than half of the pixels were darker than some threshold value.
Code
from PIL import Image, ImageDraw

withlines = Image.open('withgrid.jpg').convert('L')   # ensure a single-band grayscale image
nolines = Image.open('nogrid.jpg').convert('L')

def findlines(image):
    w, h = image.size
    s = w * h
    im = image.point(lambda i: 255 * (i < 60))   # threshold: dark pixels become 255
    d = im.getdata()                             # faster than per-pixel operations
    linecolumns = []
    for col in range(w):
        # count the thresholded (dark) pixels in this column
        black = sum(d[x] for x in range(col, s, w)) // 255
        if black > 450:
            linecolumns += [col]
    # return an image showing the detected lines
    im2 = image.convert('RGB')
    draw = ImageDraw.Draw(im2)
    for col in linecolumns:
        draw.line((col, 0, col, h - 1), fill='#f00', width=1)
    return im2

findlines(withlines).show()
findlines(nolines).show()
Results
showing detected vertical lines in red for illustration
As you can see, four of the grid lines are detected, and, with some processing to ignore the left and right sides and the center of the book, there should be no false positives on the desired type.
This means that you could use the above code to detect black columns, discard those that are near the edges or the center of the book, and, if any black columns remain, classify the page as the "other" undesired class of pictures; a small sketch of that step follows.
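A minimal sketch of that classification step, assuming findlines is adapted to return the list of detected columns rather than the illustration image (the edge_margin and center_band values are made-up placeholders that would need tuning):
def classify_page(linecolumns, width, edge_margin=100, center_band=80):
    # edge_margin and center_band are hypothetical tuning values
    center = width // 2
    kept = [c for c in linecolumns
            if edge_margin < c < width - edge_margin       # ignore the page edges
            and abs(c - center) > center_band]             # ignore the center of the book
    return "other (grid)" if kept else "desired (text only)"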
AFAIK, there is no easy way to solve this. You will need a decent amount of image processing and some basic machine learning to classify these kinds of images (and even then it probably won't be 100% successful).
Another note:
While this can be solved by only using machine learning techniques, I would advise you to start by looking into some image processing techniques first and try to convert your image to a form that shows a decent difference between the two types. For this, you had best start by reading about the FFT. After that, have a look at some digital image processing techniques. When you feel comfortable that you have a decent understanding of these, you can read up on pattern recognition.
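As a rough, untested illustration of the FFT direction (not a finished classifier): regularly spaced text or grid lines tend to concentrate energy along one axis of the 2D spectrum, so comparing the energy along the two axes might already separate the two layouts. The file name and the cut-off below are placeholders.
import numpy as np
from PIL import Image

def axis_energy_ratio(path):
    # compare horizontal vs. vertical frequency energy in the 2D spectrum
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    horiz = spectrum[cy, :].sum()   # energy along the horizontal frequency axis
    vert = spectrum[:, cx].sum()    # energy along the vertical frequency axis
    return horiz / vert

# Hypothetical usage: tune the cut-off on a few labelled examples first
ratio = axis_energy_ratio("page.jpg")
print("grid/chart type" if ratio > 1.5 else "line-by-line text type")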
This is only one suggested approach though, there are more ways to achieve this.