Python: Recognize if image contains graphic/text or a picture

I want to write a script that converts unknown images (JPG, PNG, GIF, BMP, TIFF, etc.) to a specific resolution and format, as well as generating a thumbnail.
The problem is that a compression level that is totally fine for photographs produces poor results for, e.g., exports of presentations; so I want to vary the conversion settings based on the contents of the image.
Does anyone have experience doing that kind of thing in Python (or with shell scripts whose output is easily parseable)?
My ideas are:
increase the contrast and check the histogram for whether only isolated spikes remain (see the sketch below)
apply a high-pass filter to the image and check... what, exactly?
run face-recognition-style detection of known letters
The goal is for the recognition to be quite fast (approx. 10 images/second) and fairly easy to implement.
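For the first idea, a minimal sketch of a histogram-style heuristic with Pillow might look like the following; the 64x64 downscale and the 64-colour limit are arbitrary assumptions to tune, not a proven rule:

from PIL import Image

def looks_like_graphic(path, max_colors=64):
    # Downscale so counting colours stays cheap; 64x64 is an arbitrary choice.
    im = Image.open(path).convert("RGB").resize((64, 64))
    # getcolors() returns None when the image has more than max_colors distinct colours.
    colors = im.getcolors(maxcolors=max_colors)
    # Slide/presentation exports tend to use few distinct colours, while
    # photographs usually exceed the limit even after downscaling.
    return colors is not None

# Hypothetical file names, only for illustration.
for name in ("slide_export.png", "photo.jpg"):
    print(name, "graphic" if looks_like_graphic(name) else "picture")

In practice you would route files for which looks_like_graphic() is True to a lossless or high-quality encoder and send the rest through the normal photo settings.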

This is a pretty trivial machine learning problem. I would research the MNIST problems that teach you how to recognize handwritten characters; the process should be very similar. Check out this tutorial and see if you can modify it to recognize graphics vs. pictures. If your error rate ends up too high, you'll have to try more advanced machine learning techniques.
http://mxnet.io/tutorials/python/mnist.html

Related

OpenCV Reflective Surface Problem (Pre-Process Text from Digital Screen)

I'm working on a machine learning application for reading data from fuel pumps. So far I've gone ahead and created a pretty robust YOLOv5 object detection model that can detect the regions I want fairly accurately. But there is a problem: at certain times of the day there are reflections on the digital screen, and I'm unable to pre-process it with OpenCV so that I can extract the numbers from the display.
Check this Video to Understand (YOLOv5 Detection)
https://www.youtube.com/watch?v=3XjZ6Nw70j8
Minimum Reproducible Example
Cars come and go, and their reflections make it really difficult to differentiate between the regions of the digital-7 font that is used in these displays. You can check out the following repository to understand what I want as a result: https://github.com/arturaugusto/display_ocr
Other Solutions I'm Open to:
Since this application is going to run 24/7, how should I deal with different times of day?
Perhaps create a database of HSV ranges to extract at different times (a rough sketch follows below).
Would using a polarizing lens help in removing the reflections? (Asking any users who have previous experience deploying them.)
Edit: I added the correct video ...
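On the HSV-range idea, a minimal sketch with OpenCV could look like the following; the hour buckets and the HSV bounds are invented placeholders that you would have to calibrate against your own camera and lighting:

import cv2
import numpy as np
from datetime import datetime

# Placeholder HSV bounds per time-of-day bucket; calibrate these yourself.
HSV_RANGES = {
    "morning": (np.array([0, 0, 120]), np.array([180, 60, 255])),
    "midday":  (np.array([0, 0, 160]), np.array([180, 40, 255])),
    "night":   (np.array([0, 0, 80]),  np.array([180, 80, 255])),
}

def bucket_for(now):
    # Crude bucketing by hour; adjust to your site's lighting schedule.
    h = now.hour
    if 6 <= h < 11:
        return "morning"
    if 11 <= h < 18:
        return "midday"
    return "night"

def extract_digits_mask(bgr_roi):
    # bgr_roi: the display region cropped out by the YOLOv5 detector.
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    lo, hi = HSV_RANGES[bucket_for(datetime.now())]
    # Keep only pixels inside the calibrated range for this time of day.
    return cv2.inRange(hsv, lo, hi)

A polarizing filter attacks the same reflections optically, so it can be combined with, rather than replace, this kind of per-time-of-day thresholding.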

How can I make a Bayer image and then debayer it again? (demosaicing)

My goal is to blur the picture a bit using a bilinear debayer.
This is to reproduce the dirty image look of the VHS days.
As a graphic major, I tried to reproduce it with various graphic tools, but did not get the desired quality result.
I want that subtle feeling of faded haze when scanned with a scanner.
I decided to emulate a camera sensor.
The process I envisioned is this:
I convert a TIFF, Targa, PNG or JPG image I made into a Bayer-pattern image, and then restore the original image by debayering (demosaicing) it again with a bilinear algorithm.
The reason for choosing the bilinear method is that it degrades the image most gently while still being clearly visible.
The link below is the image change according to the algorithm.
https://www.dpreview.com/forums/post/63514167
I'm not a programmer at all, but I've tried something on my own to get what I want.
https://codegolf.stackexchange.com/questions/86410/reverse-bayer-filter-of-an-image
I succeeded in making a Bayer-pattern image using the code there.
I also tried debayering by running debayer source code downloaded from other places, but it failed because the file extension was not supported.
Apparently the demosaic (debayer) step can be done in various ways.
I got programs called darktable and RawTherapee and tried to convert the image with them, but these programs could only recognize raw files.
And the algorithms provided by both programs were so good that it was hard to get the impression that the image was degraded.
How do I make what I want?
What can I look for? I really want to make this.
Please let me know which way I should go.
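For what it's worth, the round trip you describe (RGB image -> Bayer mosaic -> bilinear demosaic) can be sketched in a few lines of Python with OpenCV. This is only a starting point and assumes an RGGB layout; OpenCV's default demosaicing in cvtColor is bilinear interpolation, which is the algorithm you mention:

import cv2
import numpy as np

def to_bayer_rggb(bgr):
    # Build a single-channel Bayer mosaic (RGGB) by keeping one colour per pixel.
    b, g, r = cv2.split(bgr)
    bayer = np.empty(bgr.shape[:2], dtype=bgr.dtype)
    bayer[0::2, 0::2] = r[0::2, 0::2]   # R on even rows, even columns
    bayer[0::2, 1::2] = g[0::2, 1::2]   # G
    bayer[1::2, 0::2] = g[1::2, 0::2]   # G
    bayer[1::2, 1::2] = b[1::2, 1::2]   # B on odd rows, odd columns
    return bayer

img = cv2.imread("input.png")          # works for PNG, JPG, TIFF, etc.
bayer = to_bayer_rggb(img)
# OpenCV names the Bayer conversion after the colours found at the second row,
# second and third columns, so an RGGB mosaic uses COLOR_BayerBG2BGR.
restored = cv2.cvtColor(bayer, cv2.COLOR_BayerBG2BGR)
cv2.imwrite("debayered.png", restored)

Running the mosaic/demosaic step more than once, or at a reduced resolution first, should further soften the image if a single pass is too subtle for the look you want.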

Is there a way to transform a full-resolution, regular image to look like I took a photo of it on my computer screen with my cellphone?

I'm training a computer vision algorithm, and I want to give it some more robust data. For the application of the software I'm building, oftentimes people will take pictures of their computer screen with their phones and use those images, rather than the actual original image file to run the computer vision on.
Do you know what kind of transformations I can make to my already labeled image dataset to emulate what it would look like if someone used a cellphone to take a picture of a screen?
Like some qualities demonstrated below in the sample image of my screen for this question:
I guess this is what I'm thinking so far conceptually, but I'm not sure what libraries to use in Python:
The image resolution will probably drop, so lowering it to be more commensurate with a cellphone camera's granularity
Adding random color aberrations to the images, because when you take pictures of screens mini rainbows (moiré-like patterns) seem to form
Warping the angle the image is viewed at, since when someone takes a photo they may not be holding the phone perfectly square/flat
Adding pixel-looking grids to the images to make them look more like images taken of screens
Is there anything I missed, and do you all have any library recommendations or starting code to help me? I really want to avoid relabelling all of my data...
Thanks in advance!
I found this: https://graphicdesign.stackexchange.com/a/14001
It seems to be exactly what I'm looking for, but how do I translate this into code? Any library recommendations?
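One possible starting point, sketched with OpenCV and NumPy; the warp jitter, blur, grid opacity, and noise level below are made-up values to tune, not measured properties of any phone camera:

import cv2
import numpy as np

def simulate_screen_photo(img, seed=0):
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]

    # 1) Slight random perspective warp, as if the phone was not square to the screen.
    jitter = 0.05 * np.array([w, h])
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(src + rng.uniform(-jitter, jitter, size=(4, 2)))
    M = cv2.getPerspectiveTransform(src, dst)
    out = cv2.warpPerspective(img, M, (w, h))

    # 2) Downscale then upscale to lose resolution.
    out = cv2.resize(out, (w // 2, h // 2))
    out = cv2.resize(out, (w, h), interpolation=cv2.INTER_LINEAR)

    # 3) Faint grid overlay to imitate the screen's pixel structure.
    grid = np.ones_like(out, dtype=np.float32)
    grid[::3, :] *= 0.9
    grid[:, ::3] *= 0.9
    out = (out.astype(np.float32) * grid).astype(np.uint8)

    # 4) Mild blur and sensor-style noise.
    out = cv2.GaussianBlur(out, (3, 3), 0)
    noise = rng.normal(0, 4, out.shape)
    return np.clip(out.astype(np.float32) + noise, 0, 255).astype(np.uint8)

Note that the perspective warp also moves any bounding-box labels, so you would apply the same matrix M to the label coordinates (cv2.perspectiveTransform) to keep the dataset consistent.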

Recognize images in Python

I'm kinda new both to OCR recognition and Python.
What I'm trying to achieve is to run Tesseract from a Python script to 'recognize' some particular figures in a .tif.
I thought I could do some training for Tesseract but I didn't find any similar topic on Google and here at SO.
Basically I have some .tif files that contain several images (like an 'arrow', a 'flower', and other icons), and I want the script to print the name of each icon it finds as output. If it finds an arrow then print 'arrow'.
Is it feasible?
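Tesseract is designed for text, not arbitrary icons, so for a small fixed set of symbols plain template matching may already be enough. A minimal sketch with OpenCV; the template file names are hypothetical, and the 0.8 threshold is an assumption to tune:

import cv2

# One reference template image per icon you want to recognize (hypothetical files).
templates = {
    "arrow":  cv2.imread("arrow_template.png", cv2.IMREAD_GRAYSCALE),
    "flower": cv2.imread("flower_template.png", cv2.IMREAD_GRAYSCALE),
}

page = cv2.imread("input.tif", cv2.IMREAD_GRAYSCALE)

for name, tmpl in templates.items():
    result = cv2.matchTemplate(page, tmpl, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val > 0.8:
        print(name, "found at", max_loc)

Template matching only works if the icons appear at roughly the same size and orientation as the templates; otherwise a trained classifier (as sketched below) is the better route.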
This is by no means a complete answer, but if there are multiple images in the tif and if you know the size in advance, you can standardize the image samples prior to classifying them. You would cut up the image into all the possible rectangles in the tif.
So when you create a classifier (I don't mention the methods here), the end result would take a synthesis of classifying all of the smaller rectangles.
So if, given a tif, the 'arrow' or 'flower' images are, say, 16px by 16px, you can use Python PIL to create the samples.
from PIL import Image

image_samples = []
im = Image.open("input.tif")
sample_dimensions = (16, 16)

# get_all_corner_combinations is a placeholder: it should yield every
# (left, upper, right, lower) box of the given size within the image.
for box in get_all_corner_combinations(im, sample_dimensions):
    image_samples.append(im.crop(box))

# YourClassifier and fuse_classifications are placeholders for the learning
# and voting steps, which are not covered here.
classifier = YourClassifier()
classifications = []
for sample in image_samples:
    classifications.append(classifier(sample))
label = fuse_classifications(classifications)
Again, I didn't talk about the learning step of actually writing YourClassifier. But hopefully this helps with laying out part of the problem.
There is a lot of research on the subject of learning to classify images as well as work in cleaning up noise in images before classifying them.
Consider browsing through this nice collection of existing Python machine learning libraries.
http://scipy-lectures.github.com/advanced/scikit-learn/index.html
There are many techniques that relate to images as well.

OCR of low-resolution text from screenshots

I'm writing an OCR application to read characters from a screenshot image. Currently, I'm focusing only on digits. I'm partially basing my approach on this blog post: http://blog.damiles.com/2008/11/basic-ocr-in-opencv/.
I can successfully extract each individual character using some clever thresholding. Where things get a bit tricky is matching the characters. Even with fixed font face and size, there are some variables such as background color and kerning that cause the same digit to appear in slightly different shapes. For example, the below image is segmented into 3 parts:
Top: a target digit that I successfully extracted from a screenshot
Middle: the template: a digit from my training set
Bottom: the error (absolute difference) between the top and middle images
The parts have all been scaled (the distance between the two green horizontal lines represents one pixel).
You can see that despite both the top and middle images clearly representing a 2, the error between them is quite high. This causes false positives when matching other digits -- for example, it's not hard to see how a well-placed 7 can match the target digit in the image above better than the middle image can.
Currently, I'm handling this by having a heap of training images for each digit, and matching the target digit against those images, one-by-one. I tried taking the average image of the training set, but that doesn't resolve the problem (false positives on other digits).
I'm a bit reluctant to perform matching using a shifted template (it'd be essentially the same as what I'm doing now). Is there a better way to compare the two images than simple absolute difference? I was thinking of maybe something like the EMD (earth movers distance, http://en.wikipedia.org/wiki/Earth_mover's_distance) in 2D: basically, I need a comparison method that isn't as sensitive to global shifting and small local changes (pixels next to a white pixel becoming white, or pixels next to a black pixel becoming black), but is sensitive to global changes (black pixels that are nowhere near white pixels become black, and vice versa).
Can anybody suggest a more effective matching method than absolute difference?
I'm doing all this in OpenCV using the C-style Python wrappers (import cv).
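Not an exhaustive answer, but one cheap proxy for the behaviour you describe is to blur both images before taking the absolute difference: a one-pixel global shift or a flipped border pixel then only changes the score slightly, while black pixels far from any white pixels still cost a lot. A more principled relative is chamfer-style matching on a distance transform (cv2.distanceTransform). Below is a sketch using the newer cv2 bindings rather than the old import cv API; the blur kernel size is an assumption to tune:

import cv2
import numpy as np

def soft_difference(target, template, ksize=5):
    # Blurring spreads each edge over its neighbours, so small local shifts
    # are tolerated while large structural differences still score badly.
    t = cv2.GaussianBlur(target.astype(np.float32), (ksize, ksize), 0)
    m = cv2.GaussianBlur(template.astype(np.float32), (ksize, ksize), 0)
    return float(np.sum(np.abs(t - m)))

# Lower score = better match; compare the extracted digit against each template.
# scores = {digit: soft_difference(target, tmpl) for digit, tmpl in templates.items()}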
I would look into using Haar cascades. I've used them for face detection/head tracking, and it seems like you could build up a pretty good set of cascades with enough '2's, '3's, '4's, and so on.
http://alereimondo.no-ip.org/OpenCV/34
http://en.wikipedia.org/wiki/Haar-like_features
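For reference, once a cascade has been trained, applying it from Python takes only a couple of calls; "digit_2_cascade.xml" below is a hypothetical file that your own training run would produce:

import cv2

# Hypothetical cascade trained to detect the digit '2'.
cascade = cv2.CascadeClassifier("digit_2_cascade.xml")
gray = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)

# Returns a list of (x, y, w, h) rectangles where the cascade fired.
hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
for (x, y, w, h) in hits:
    print("possible '2' at", (x, y, w, h))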
OCR on noisy images is not easy, so simple approaches do not work well.
So I would recommend you use HOG to extract features and an SVM to classify. HOG seems to be one of the most powerful ways to describe shapes.
The whole processing pipeline is implemented in OpenCV; however, I do not know the function names in the Python wrappers. You should be able to train with the latest haartraining.cpp - it actually supports more than Haar features - HOG and LBP also.
And I think the latest code (from trunk) is much improved over the official release (2.3.1).
HOG usually needs just a fraction of the training data used by other recognition methods; however, if you want to classify shapes that are partially occluded (or missing), you should make sure you include some such shapes in training.
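A rough sketch of the HOG-plus-SVM route using the cv2 HOGDescriptor together with scikit-learn; the window and cell sizes are arbitrary choices, and the toy training arrays stand in for your real labeled digit crops:

import cv2
import numpy as np
from sklearn.svm import SVC

# HOG over a 20x20 window: (winSize, blockSize, blockStride, cellSize, nbins).
hog = cv2.HOGDescriptor((20, 20), (10, 10), (5, 5), (5, 5), 9)

def features(crop):
    # crop: 8-bit grayscale digit image resized to the 20x20 HOG window.
    return hog.compute(crop).ravel()

# Toy stand-ins for real labeled crops; replace with your extracted digits.
train_crops = [np.zeros((20, 20), np.uint8), np.full((20, 20), 255, np.uint8)]
train_labels = [0, 1]

X = np.array([features(c) for c in train_crops])
y = np.array(train_labels)
clf = SVC(kernel="linear").fit(X, y)

# Classify a new 20x20 crop:
# print(clf.predict([features(new_crop)]))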
I can tell you from my experience, and from reading several papers on character classification, that a good way to start is by reading about Principal Component Analysis (PCA), Fisher's Linear Discriminant Analysis (LDA), and Support Vector Machines (SVMs). These are classification methods that are extremely useful for OCR, and it turns out that OpenCV already includes excellent implementations of PCA and SVMs. I haven't seen any OpenCV code examples for OCR, but you can use a modified version of face classification to perform character classification. An excellent resource for face recognition code for OpenCV is this website.
Another Python library that I recommend is "scikits.learn". It is very easy to send cvArrays to scikits.learn and run machine learning algorithms on your data. A basic example for OCR using SVM is here.
Another more complicated example using manifold learning for handwritten character recognition is here.
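Along the same lines, here is a minimal, illustrative digit-classification sketch using scikit-learn's bundled digits dataset (this is not the linked example, just a self-contained sketch; the package is nowadays distributed as scikit-learn / sklearn rather than scikits.learn):

from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

# 8x8 grayscale digit images shipped with scikit-learn.
digits = datasets.load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = svm.SVC(gamma=0.001)   # RBF kernel; a small gamma works well on this data
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))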
