Color Perceptual Image Hashing - Python

I've been trying to write a fast(ish) image matching program in Python which does not match rotated or scale-deformed images.
The goal is to be able to find small sections of an image that are similar to other images in color features, but dissimilar if rotated or warped.
I found out about perceptual image hashing, and I've had a look at the ImageHash module for Python and at SSIM. However, most of the approaches I've looked at do not treat color as a major factor: they average the channels and work on a single grayscale channel, and phash in particular doesn't care if images are rotated.
I would like to be able to have an algorithm which would match images which at a distance would appear the same (but which would not necessarily need to be the same image).
Can anyone suggest how I would structure and write such an algorithm in Python? Or suggest a function which would be able to compare images in this manner?

I found a couple of ways to do this.
I ended up using a Mean Squared Error function that I wrote myself:

import numpy as np

def mse(reference, query):
    # Mean of the squared per-pixel differences; cast to float first
    # so unsigned-integer subtraction cannot wrap around
    return ((reference.astype("double") - query.astype("double")) ** 2).mean()
Later, while tinkering, I found a function that seemed to do something similar (compare image similarity, element by element), but a good amount faster:

def linalg_norm(reference, query):
    # Euclidean (L2) norm of the per-pixel difference; cast to float
    # first so unsigned-integer subtraction cannot wrap around
    return np.linalg.norm(reference.astype("double") - query.astype("double"))
I have no deep theoretical knowledge of what the second function does, but in practice it hardly matters: np.linalg.norm computes the Euclidean (L2) norm of the difference, i.e. the square root of the sum of squared differences, so for images of a fixed size it ranks image pairs the same way MSE does.
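If color should weigh more heavily than grayscale-based hashes allow, one hedged option (a sketch, not a tested recipe) is to hash each RGB channel separately with the ImageHash module mentioned in the question, which otherwise converts to grayscale internally, and sum the per-channel Hamming distances:

from PIL import Image
import imagehash

def color_phash(path):
    img = Image.open(path).convert("RGB")
    # One perceptual hash per channel, so color is part of the signature
    return [imagehash.phash(channel) for channel in img.split()]

def color_distance(hashes_a, hashes_b):
    # ImageHash objects subtract to a Hamming distance; sum over channels
    return sum(a - b for a, b in zip(hashes_a, hashes_b))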

Related

Remove differences between two video frames

I'm trying to remove the differences between two frames and keep the non-changing graphics. I would probably repeat the same process with more frames to get more accurate results. My idea is to simplify the frames by removing whatever isn't needed, so as to simplify the rest of the processing I will do afterwards.
The frames come from the same video, so there is no need to deal with different sizes, orientations, etc. If the same graphic appears in another frame but with a different orientation or scale, I would like to remove it as well. For example:
Image 1
Image 2
Result (more or less; I suppose it will be uglier, but it should contain similar information)
One of the problems with this idea is that the source video, even though it consists of computer-generated graphics, is compressed, so it's not easy to tell whether a change in the tonality of a pixel is actually a change or not.
Ideally I'm not looking at the pixel level, and given the differences in saturation introduced by the compression, that is probably not possible anyway. I'm looking for unchanged "objects" in the image. I want to extract the information layer shown on top of what's happening behind it.
Over the last couple of days I have tried to achieve this in a Python script using OpenCV, with all kinds of combinations of absdiff, subtract, threshold, equalizeHist, and Canny, but so far I haven't found the right implementation and would appreciate any guidance. How would you achieve it?
This will be extremely hard. You would need to employ proper computer vision techniques, and if you're not an expert in that field you'll have a really hard time.
How about this: forgetting about tooling and libraries, you have two images, i.e. two equally sized sequences of RGB pixels, image A and image B, and an output image R. Allocate R with the same size as A and B.
Run a single loop over every pixel: read pixel a from A and pixel b from B. Each is a 3-element (RGB) vector. Compute the distance between the two vectors, e.g. the magnitude of the vector (b - a); if it is less than some tolerance, write either a or b at the same offset into the result image R. If not, write some default (background) color to R.
You can most likely do this in a hardware-accelerated way using OpenCV or some other library, but that's up to you to find a tool that does what you want. A vectorized sketch of the loop is below.
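A minimal NumPy sketch of the per-pixel loop described above; the tolerance and background values are assumptions you would tune:

import numpy as np

def keep_unchanged(img_a, img_b, tolerance=20.0, background=0):
    # img_a, img_b: H x W x 3 uint8 arrays holding the two frames
    a = img_a.astype("double")
    b = img_b.astype("double")
    # Euclidean distance between the RGB vectors at every pixel
    dist = np.sqrt(((b - a) ** 2).sum(axis=-1))
    result = np.full_like(img_a, background)
    mask = dist < tolerance          # pixels considered "unchanged"
    result[mask] = img_a[mask]
    return result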

Find the crop parameters from two images

Given two images, one a cropped (but not scaled) portion of the other, how can I find the crop parameters (i.e. the x and y offsets and the width/height)? The idea is to crop one image (a screenshot) by hand, and then crop a lot more at the same points.
Ideally via ImageMagick, but I am happy with any pseudo-code solution, or with Perl, Python, or JavaScript (in order of preference).
I have thought of a brute-force approach (find the first pixel of the same color, check the next, keep going until they differ, then move on to the next candidate). Before I go down this barbarous (and probably slow) route, I'd like to check for better ones.
Template matching can be used to locate a smaller image within a larger one. The following resource might be helpful:
https://docs.opencv.org/4.5.2/d4/dc6/tutorial_py_template_matching.html
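As a hedged sketch of what that tutorial boils down to (the file names here are placeholders): cv2.matchTemplate slides the crop over the full image and cv2.minMaxLoc picks the best-scoring position, which is exactly the crop offset:

import cv2

full = cv2.imread("screenshot.png")   # the original image
crop = cv2.imread("cropped.png")      # the hand-cropped portion

# Score every placement of the crop over the full image
result = cv2.matchTemplate(full, crop, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

x, y = max_loc           # crop offsets (top-left corner)
h, w = crop.shape[:2]    # crop height and width
print(f"{w}x{h}+{x}+{y} (confidence {max_val:.3f})")

Since the crop is pixel-identical rather than scaled, the maximum score should be very close to 1.0.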

Basic pattern recognition in binary (pixelated) image

Here is a cropped example (about 11x9 pixels) of the kind of images I will be trying to apply the algorithm on (which are ultimately all of size 28x28, but stored in memory flattened as 784-component arrays):
Basically, I want to be able to recognize when this shape appears (red lines are used to put emphasis on the separation of the pixels, while the surrounding black border is used to better outline the image against the white background of StackOverflow):
The orientation of it doesn't matter: it must be detected in any of its possible representations (rotations and symmetries along the horizontal and vertical axes). So, for example, a 45° rotation or a diagonal symmetry should not be considered; only 90°, 180°, and 270° rotations, plus horizontal and vertical flips.
There are two solutions to be found in the image I first presented, though only one needs to be found (ignore the gray blur surrounding the white region):
Take this other sample (which also demonstrates that the white figures inside the images aren't always fully surrounded by black pixels):
The function should return True because the shape is present:
Now, there is obviously a simple solution to this:
Use a variable such as pattern = [[1,0,0,0],[1,1,1,1]], produce its variations, and then slide all of the variations along the image until an exact match is found at which point the whole thing just stops and returns True.
This would, however, in the worst-case scenario take up to 8*(28-2)*(28-4)*(2*4), which is approximately 40,000 operations for a single image; that seems a bit excessive (if I did my quick calculations right).
I'm guessing one way of making this naive approach better would be to first of all scan the image until I find the very first white pixel, and then start looking for the pattern 4 rows and 4 columns earlier than that point, but even that doesn't seem good enough.
Any ideas? Maybe this kind of function has already been implemented in some library? I'm looking for an implementation or an algorithm that beats my naive approach.
As a side note, while kind of a hack, I'm guessing this is the kind of problem that can be offloaded to the GPU but I do not have much experience with that. While it wouldn't be what I'm looking for primarily, if you provide an answer, feel free to add a GPU-related note.
EDIT:
I ended up making an implementation of the accepted answer. You can see my code in this Gist.
If you have too many operations, think about how to do fewer of them.
For this problem I'd use image integrals.
If you convolve a summing kernel over the image (a very fast operation, whether in the FFT domain or directly with conv2/imfilter), you know that the only locations where the integral equals 5 (in your case) are possible pattern-matching places. Checking those (even for your 4 rotations) should be computationally very fast. There cannot be more than 50 locations in your example image that fit this pattern.
My Python is not too fluent, but here is the proof of concept for your first image in MATLAB; I am sure translating this code will not be a problem.
% get the same image you have (imgur upscaled it and made it RGB)
I=rgb2gray(imread('https://i.stack.imgur.com/l3u4A.png'));
I=imresize(I,[9 11]);
I=double(I>50);
% Integral filter definition (with your desired size)
h=ones(3,4);
% horizontal and vertical filter (because your filter is not square)
Ifiltv=imfilter(I,h);
Ifilth=imfilter(I,h');
% find the locations where integral is exactly the value you want
[xh,yh]=find(Ifilth==5);
[xv,yv]=find(Ifiltv==5);
% this is just plotting, for completeness
figure()
imshow(I,[]);
hold on
plot(yh,xh,'r.');
plot(yv,xv,'r.');
This results in 14 locations to check. My standard computer takes 230 ns on average to compute both image integrals, which I would call fast.
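For completeness, a rough Python translation of the proof of concept above, assuming img is already loaded as a 2-D grayscale NumPy array:

import numpy as np
from scipy.ndimage import convolve

binary = (img > 50).astype(float)   # binarize, as in the MATLAB snippet

h = np.ones((3, 4))                 # summing kernel, sized like the pattern

# Integral under the kernel at every location, for both orientations
filt_v = convolve(binary, h, mode="constant")
filt_h = convolve(binary, h.T, mode="constant")

# Candidate locations: the integral equals the number of white pixels
# in the pattern (5 here); only these need a full pattern check
candidates_v = np.argwhere(filt_v == 5)
candidates_h = np.argwhere(filt_h == 5)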
Also, GPU computing is not a hack :D. It's the way to go for a whole class of problems because of the enormous computing power GPUs have; convolutions on GPUs, for example, are incredibly fast.
The operation you are implementing is an operator in Mathematical Morphology called hit and miss.
It can be implemented very efficiently as a composition of two erosions. If the shape you're detecting can be decomposed into a few simple geometrical shapes (rectangles especially are quick to compute), the operator can be made even more efficient.
You'll find very efficient erosions in most image processing libraries; try OpenCV, for example. OpenCV also has a hit-and-miss operator; here is a tutorial on how to use it.
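A hedged OpenCV sketch, following the convention from that tutorial where a kernel entry of 1 must match foreground, -1 must match background, and 0 is don't-care (img is assumed to be an 8-bit binary image, and I'm assuming the 0s in the question's pattern mean "must be background"):

import cv2
import numpy as np

# The question's pattern [[1,0,0,0],[1,1,1,1]] as a hit-and-miss kernel
kernel = np.array([[1, -1, -1, -1],
                   [1,  1,  1,  1]], dtype="int")

hit = cv2.morphologyEx(img, cv2.MORPH_HITMISS, kernel)

# Non-zero anywhere means the pattern matched; repeat with np.rot90(kernel, k)
# and np.fliplr(kernel) for the other orientations
found = bool(hit.any())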
As an example for what output to expect, I generated a simple test image (left), applied a hit and miss operator with a template that matches at exactly one place in the image (middle), and again with a template that does not match anywhere (right):
I did this in MATLAB, not Python, because I have it open and it's easiest for me to use. This is the code:
se = [1,1,1,1     % Defines the template
      0,0,0,1];
img = [0,0,0,0,0,0     % Defines the test image
       0,1,1,1,1,0
       0,0,0,0,1,0
       0,0,0,0,0,0
       0,0,0,0,0,0
       0,0,0,0,0,0];
img = dip_image(img,'bin');
res1 = hitmiss(img,se);
res2 = hitmiss(img,rot90(se,2));
% Quick-and-dirty display
h = dipshow([img,res1,res2]);
diptruesize(h,'tight',3000)
hold on
plot([5.5,5.5],[-0.5,5.5],'r-')
plot([11.5,11.5],[-0.5,5.5],'r-')
The code above uses the hit-and-miss operator as I implemented it in DIPimage. The same implementation is available in DIPlib's Python bindings as dip.HitAndMiss() (install with pip install diplib):
import diplib as dip
# ...
res = dip.HitAndMiss(img, se)

How to get background from cv2.BackgroundSubtractorMOG2?

Is there any way to obtain the background from cv2.BackgroundSubtractorMOG2 in Python?
In other words, is there any technique to compute an image, based on the last n frames of a video, that can be used as the background?
Such a technique would be pretty complicated, but you might want to look into some keywords: image stitching, gradient-based methods, PatchMatch, image filling. MATLAB, for example, has a function that tries to interpolate missing values from nearby pixels. You could extend such a method to work in 3D (it shouldn't be too difficult in the linear case).
More generally, it is sort of an ill-posed problem since there is no way to know what goes in the missing region.
To address your question specifically, you might first take the difference between the original frame and the extracted image, which should reveal the background. Then use ROI filling or a similar method. There are likely examples of this on the web, such as this.
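As a simple baseline for the "image based on the last n frames" part (this is not what MOG2 does internally, just a common approximation; the file name and window size are placeholders):

import collections
import cv2
import numpy as np

history = collections.deque(maxlen=50)   # keep the last n frames

cap = cv2.VideoCapture("video.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    history.append(frame)

# Per-pixel median over the stored frames: a moving object rarely covers
# the same pixel in most frames, so the median approximates the background
background = np.median(np.stack(history), axis=0).astype(np.uint8)
cv2.imwrite("background.png", background)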

Vague Image Recognition in Python

So my goal here is to take an image input and get out a list of the shapes contained in it as an output. Of course, the shapes will not be anything like 'triangle' or 'square', but just lists of contiguous pixels of like values.
My first attempt used a recursive algorithm which 'roamed' the image through paths of like-colored pixels and added all the pixels it could reach to the shape list. This worked for small images, but quickly exceeded the maximum recursion depth for larger ones.
My current attempt is iterative, but it doesn't work:
http://pastebin.com/seLbnGE4
Are there any better ways to do it, or are there modules or methods which already exist which would fit my needs?
What you're looking for sounds like "image segmentation" (cf. http://en.wikipedia.org/wiki/Segmentation_(image_processing)). It is usually a difficult math problem, but it is implemented in OpenCV.
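For the specific case of "lists of contiguous pixels of like values", connected-component labeling (one off-the-shelf form of segmentation) may already be enough. A sketch with SciPy:

import numpy as np
from scipy import ndimage

# Toy 2-D image; each distinct value can form one or more regions
img = np.array([[1, 1, 0, 0],
                [1, 0, 0, 2],
                [0, 0, 2, 2]])

shapes = []
for value in np.unique(img):
    # Label contiguous regions of this value (4-connectivity by default)
    labels, count = ndimage.label(img == value)
    for i in range(1, count + 1):
        shapes.append(np.argwhere(labels == i))   # (row, col) pixel lists

print(len(shapes), "shapes found")

Being iterative under the hood, this avoids the recursion-depth problem entirely.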
You may consider using level sets and the Chan-Vese algorithm (cf. http://www.univ-pau.fr/~cgout/viscosite/old/20032004/veseChanIJCV2002.pdf and ftp://ftp-sop.inria.fr/odyssee/Team/Rachid.Deriche/Lectures/MPRI/IEEEIP2001.pdf).
A convex optimization framework for segmentation into more than 2 regions is an open math problem of great interest.
