Categorize different images - python

I have a number of images from Chinese genealogies, and I would like to be able to programmatically categorize them. Generally speaking, one type of image has primarily line-by-line text, while the other type may be in a grid or chart format.
Example photos
'Desired' type: http://www.flickr.com/photos/63588871#N05/8138563082/
'Other' type: http://www.flickr.com/photos/63588871#N05/8138561342/in/photostream/
Question: Is there a (relatively) simple way to do this? I have experience with Python, but little knowledge of image processing. Direction to other resources is appreciated as well.
Thanks!

Assuming that at least some of the grid lines are exactly or almost exactly vertical, a fairly simple approach might work.
I used PIL to find all the columns in the image where more than half of the pixels were darker than some threshold value.
Code
from PIL import Image, ImageDraw  # PIL/Pillow modules

withlines = Image.open('withgrid.jpg').convert('L')  # work on a greyscale copy
nolines = Image.open('nogrid.jpg').convert('L')

def findlines(image):
    w, h = image.size
    s = w * h
    im = image.point(lambda i: 255 * (i < 60))  # threshold: mark dark pixels
    d = im.getdata()  # faster than per-pixel operations
    linecolumns = []
    for col in range(w):
        # count the dark pixels in this column
        black = sum(d[x] for x in range(col, s, w)) // 255
        if black > 450:
            linecolumns += [col]
    # return an image showing the detected lines
    im2 = image.convert('RGB')
    draw = ImageDraw.Draw(im2)
    for col in linecolumns:
        draw.line((col, 0, col, h - 1), fill='#f00', width=1)
    return im2

findlines(withlines).show()
findlines(nolines).show()
Results
showing detected vertical lines in red for illustration
As you can see, four of the grid lines are detected, and, with some processing to ignore the left and right sides and the center of the book, there should be no false positives on the desired type.
This means you could use the above code to detect black columns and discard those that are near the edges or the center. If any black columns remain, classify the image as the "other", undesired class of pictures.
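Building on that idea, a minimal sketch of the classification step; the margin and the "dark column" thresholds are placeholders that would need tuning for real scans:
def dark_columns(image, threshold=60, min_black=450):
    # same detection as findlines(), but returns the column indices
    image = image.convert('L')
    w, h = image.size
    d = image.point(lambda i: 255 * (i < threshold)).getdata()
    return [col for col in range(w)
            if sum(d[x] for x in range(col, w * h, w)) // 255 > min_black]

def classify(image, margin=100):
    # ignore columns near the page edges and the centre of the book;
    # the margin value is a placeholder that would need tuning
    w, _ = image.size
    interior = [c for c in dark_columns(image)
                if margin < c < w - margin and abs(c - w // 2) > margin]
    return 'other' if interior else 'desired'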

AFAIK, there is no easy way to solve this. You will need a decent amount of image processing and some basic machine learning to classify these kinds of images (and even then it probably won't be 100% successful).
Another note:
While this can be solved using machine learning techniques alone, I would advise you to look into image processing techniques first and try to convert your images into a form that shows a clear difference between the two types. For this, a good starting point is reading about the FFT. After that, have a look at some digital image processing techniques. When you feel you have a decent understanding of these, you can read up on pattern recognition.
This is only one suggested approach though, there are more ways to achieve this.
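As a starting point, a minimal sketch of looking at the 2D FFT magnitude spectrum with NumPy (the file name is a placeholder); a strongly gridded page should show pronounced peaks along one axis of the spectrum:
import numpy as np
from PIL import Image

img = np.asarray(Image.open('page.jpg').convert('L'), dtype=float)
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
# strong vertical grid lines show up as energy along the horizontal axis
# of the centred spectrum; compare that energy between the two image types
print(spectrum.shape, spectrum.max())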

Related

How do I overlay a pattern on a PNG using Pillow in Python

I want to overlay a certain pattern on a shirt using Pillow in Python; this is my code -
from PIL import Image

design = Image.open("source/stripe.png").convert("RGBA")
shirtsketchtrans = Image.open("source/shirtsketchtrans.png").convert("RGBA")
design.paste(shirtsketchtrans, (0,0), shirtsketchtrans)
design.show()
The outcome is this -
I don't really mind the size, that can be fixed.
But what I want to do is have the striped pattern overlay only on my shirt PNG and not in the whole background, basically get the pattern only on my shirt and nowhere else.
Is there any solution to this?
Any help is appreciated! Thanks!
Edit: Input/Source Images -
PIL on its own isn't smart enough to know what's "inside" or "outside" the shirt. You need to make a Transparency Mask and then use PIL.Image.composite to combine them.
Example:
from PIL import Image

design = Image.open("source/stripe.png").convert("RGBA")
shirt_sketch_trans = Image.open("source/shirtsketchtrans.png").convert("RGBA")
shirt_sketch_mask = Image.open("source/shirtsketchmask.png").convert("RGBA")
# the mask selects where the design shows through; elsewhere the sketch is kept
full_design = Image.composite(design, shirt_sketch_trans, shirt_sketch_mask)
full_design.show()

Python image manipulation using PIL(LSB)

I have recently started studying steganography and I've come across a problem that I just don't seem to understand. Basically, the image is a png which contains a hidden flag in it.
When you extract the bit planes from the image, you can see that there's an image in the blue and green planes which also shows up in the red one. To reveal the flag in clear text, you have to remove those images from the red one by XORing the LSBs, or something like that; I am not totally sure.
This is what the image in the red plane looks like if you don't remove the others.
My question is how do I go about doing this kind of thing? This is the image in question.
Actually the hidden image is in the lowest 3 bit planes. Doing a full bit decomposition makes that clear.
Start by loading the image to a numpy array, which will have dimensions MxNx3.
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
img = Image.open('stego.png')
data = np.array(img)
All you have to do now is XOR the colour planes with one another and then keep the 3 least significant bits (LSB).
extracted = (data[...,0] ^ data[...,1] ^ data[...,2]) & 0x07
plt.imshow(extracted)
plt.show()
In case it wasn't obvious, the & 0x07 part is an AND operation with the binary number 00000111, just written in hexadecimal for conciseness.
If you don't keep all 3 lsb, then you'll either be missing some letters in the solution, or everything will be there but some edges won't be as smooth. The first of these is critically important.

Python OpenCV - perspective transformation issues

I'm writing a script to process an image and extract a PDF417 2D barcode, which will then be decoded. I'm extracting the ROI without problems, but when I try to correct the perspective using cv2.warpPerspective, the result is not as expected.
The following is the extracted barcode, the red dots are the detected corners:
This is the resulting image:
This is the code I'm using for the transformation (the values are found by the script, but for the previous images are as follow):
box_or = np.float32([[23, 30],[395, 23],[26, 2141],[389, 2142]])
box_fix = np.float32([[0,0],[415,0],[0,2159],[415,2159]])
M = cv2.getPerspectiveTransform(box_or,box_fix)
warped = cv2.warpPerspective(img,M,(cols,rows))
I've checked and I don't find anything wrong with the code, yet the transformation is definitely wrong. The amount of perspective distortion in the extracted ROI is minimum, but may affect the decoding process.
So, is there a way to get rid of the perspective distortion? Am I doing something wrong? Is this a known bug or something? Any help is very much welcome.
BTW, I'm using OpenCV 3.3.0
It looks like you're giving the image coordinates as (y, x). I know the interpretation of coordinates varies within OpenCV.
In the homography example code they provide the coordinates as (x,y) - at least based on their use of 'h' and 'w' in this snippet:
h,w = img1.shape
pts = np.float32([ [0,0],[0,h-1],[w-1,h-1],[w-1,0] ]).reshape(-1,1,2)
dst = cv2.perspectiveTransform(pts,M)
So try providing the coordinates as (x,y) to both getPerspectiveTransform and warpPerspective.
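For reference, a minimal sketch of the expected ordering, with placeholder corner values rather than the question's actual numbers; each point is written as (x, y), i.e. (column, row), and listed in the same order in both arrays:
import numpy as np
import cv2

img = cv2.imread('barcode_roi.png')   # placeholder input
width, height = 415, 2159             # size of the rectified output

# corners as (x, y): top-left, top-right, bottom-left, bottom-right
src = np.float32([[30, 20], [400, 20], [25, 2140], [390, 2145]])
dst = np.float32([[0, 0], [width - 1, 0], [0, height - 1], [width - 1, height - 1]])

M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, (width, height))  # dsize is (width, height)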

Discover transformation required to align images of standardized documents

My question is not too far off from the "Image Alignment (ECC) in OpenCV ( C++ / Python )" article.
I also found the following article about facial alignment to be very interesting, but WAY more complex than my problem.
Wow! I can really go down the rabbit-hole.
My question is WAY more simple.
I have a scanned document that I have treated as a "template". In this template I have manually mapped the pixel regions that I require info from as:
area = (x1,y1,x2,y2)
such that x1<x2, y1<y2.
Now, these regions are, as is likely obvious, a bit too specific to my "template".
All other files that I want to extract data from are mostly shifted by some unknown amount such that their true area for my desired data is:
area = (x1 + ε1, y1 + ε2, x2 + ε1, y2 + ε2)
Where ε1, ε2 are unknown in advance.
But the documents are otherwise HIGHLY similar outside of this shift.
I want to discover, ideally through OpenCV, what translation is required (for the time being ignoring full Euclidean transforms) to "align" these images, so as to discover my ε, shift my area, and parse my data directly.
I have thought about using Tesseract to mine the text from the document and then parse from there, but there are check boxes, either filled or empty, that contain meaningful information for my problem.
The code I currently have for cropping the image is:
from PIL import Image
img = Image.open(img_path)
area = area_lookup['key']
cropped_img = img.crop(area)
cropped_img.show()
My two sample files are attached.
My two images are:
We can assume my first image is my "template".
As you can see, the two images are very "similar" but one is moved slightly (human error). There may be cases where the rotation is more extreme, or the image is shifted more.
I would like to transform image 2 to be as aligned to image 1 as possible, and then parse data from it.
Any help would be sincerely appreciated.
Thank you very much
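Assuming the difference really is close to a pure translation, a minimal sketch of estimating it with OpenCV's phase correlation; the file names are placeholders, and area_lookup is the same mapping used in the cropping code above:
import cv2
import numpy as np

# both pages must be the same size, single-channel float arrays
template = cv2.imread('template.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
scanned = cv2.imread('scan2.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

# estimate the translational shift between the two scans
# (check the sign convention against a known example before relying on it)
(dx, dy), response = cv2.phaseCorrelate(template, scanned)

# apply the discovered shift to the manually mapped template region
x1, y1, x2, y2 = area_lookup['key']
area = (int(round(x1 + dx)), int(round(y1 + dy)),
        int(round(x2 + dx)), int(round(y2 + dy)))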

Quickly determining using Python whether an image is (fuzzily) in a collection

Imagine that some new image X arrives, and I want to know if X is new or has already been encountered before. I have code, below, that shrinks the image and then converts it to a hash code. I can then see via a single hash look-up whether I've already encountered an image with the same hash code, so it's very fast.
My question is, is there an efficient way for me to see if a similar image, but one with a different hash code, has already been seen? I was going to title this question something like "Data structure for determining efficiently whether a similar, non-identical item is already contained" but decided that would be an instance of the XY problem.
When I say that this new image is "similar," I'm thinking of one that's perhaps gone through lossy compression and so looks like the original to the human eye but is not identical. Normally shrinking the image eliminates the difference, but not always, and if I shrink the image too much I start getting false positives.
Here's my current code:
from PIL import Image

seen_images = {}  # This would really be a shelf or something

# From http://www.guguncube.com/1656/python-image-similarity-comparison-using-several-techniques
def image_pixel_hash_code(image):
    pixels = list(image.getdata())
    avg = sum(pixels) / len(pixels)
    bits = "".join(map(lambda pixel: '1' if pixel < avg else '0', pixels))  # '00010100...'
    hexadecimal = int(bits, 2).__format__('016x').upper()
    return hexadecimal

def process_image(filepath):
    thumb = Image.open(filepath).resize((128, 128)).convert("L")
    code = image_pixel_hash_code(thumb)
    previous_image = seen_images.get(code, None)
    if code in seen_images:
        print("'{}' already seen as '{}'".format(filepath, previous_image))
    else:
        seen_images[code] = filepath
You can put a path to a bunch of image files into a variable called IMAGE_ROOT and then try my code out with:
import os

for root, dirs, files in os.walk(IMAGE_ROOT):
    for filename in files:
        filepath = os.path.join(root, filename)
        try:
            process_image(filepath)
        except IOError:
            pass
There are a lot of methods for comparing images, but for your given example I suspect that simplicity and speed are the key factors (hence why you're trying to use a hash as a first-pass). Here are some suggestions - in all cases I'd suggest shrinking and cropping the image to a regular size and shape.
Smooth the image (Gaussian blur) before shrinking to minimise the influence of artefacts. Then apply the hash or other comparison.
Subtract the images from one another (RGB) and check the remainder. Identical images will return zero, while compression artefacts will result in small variations. You can either threshold, sum, or average the value and compare to a cut-off.
Use standard distance algorithms (see scipy.spatial.distance) to calculate the 'distance' between the two images. For example, Euclidean distance will give effectively the same result as summing the subtraction, while cosine distance will ignore intensity but match the profile of changes over the image, i.e. a darker version of the same image will be considered equivalent. For these you will need to flatten your image to a 1D array (a minimal sketch of this follows below).
The last two entail comparing every image to every other image when uploading, and that is going to get very computationally expensive for large numbers of images.
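A minimal sketch of the distance-based comparison, assuming two images that are shrunk to the same size first (the file names are placeholders):
import numpy as np
from PIL import Image
from scipy.spatial import distance

def flat(path, size=(128, 128)):
    # shrink, convert to greyscale, and flatten to a 1D float array
    return np.asarray(Image.open(path).resize(size).convert("L"), dtype=float).ravel()

a = flat("image_a.jpg")
b = flat("image_b.jpg")

print(distance.euclidean(a, b))  # 0.0 for identical images
print(distance.cosine(a, b))     # ignores overall intensity differences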
