Finding a template that does not match exactly in an image - Python

I want to find a character in my input image. When I crop the character from an image and use template matching, it works perfectly in almost all cases; but if I use that template on another image where the character has a slightly different shape, it fails.
This is my question: how can I find an object in my images that is similar to my template but with slight variation in slope, line thickness, etc.? Is there any way to do this with template matching, or do you suggest other methods based on your experience?
I would appreciate any relevant answer.

I think you are facing a limitation of template matching here. It seems like you are already using the right metric (normalised cross-correlation). One last thing you can try is to check 5 templates: a 'perfect' one (taken from a perfect image) and 4 rotated versions. Then, for each template, you find the best match, and finally compare the 5 best matches against each other to pick the overall best.
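A minimal sketch of that idea with OpenCV, assuming grayscale images and small rotation angles (the file names and angle list are placeholders you would adapt):

```python
import cv2

# Match the original template plus a few slightly rotated copies,
# then keep the best overall score across all of them.
img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)       # image to search
template = cv2.imread("char.png", cv2.IMREAD_GRAYSCALE)  # 'perfect' template

best = (-1.0, None, None)  # (score, top-left location, angle)
h, w = template.shape
for angle in (-10, -5, 0, 5, 10):
    # rotate the template around its center
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(template, M, (w, h))
    res = cv2.matchTemplate(img, rotated, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    if max_val > best[0]:
        best = (max_val, max_loc, angle)

print("best score %.3f at %s (rotated %d deg)" % best)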
Depending on how much time you want to invest and what level of robustness you wish to achieve, you can also use a neural network! It would be the most robust approach to this problem. There are neural networks already trained for character detection, for instance here, and tutorials available, like this one.
If you don't want to use a NN, you could consider another method that works with line detection. A '/' character has a quite characteristic shape: a closed path, rotated at a certain angle, with 'inertia' around a single axis. That is easy to describe with some mathematical properties of the detected closed shape. This approach is called a shape descriptor and is described (for instance) here. If you have some knowledge of the text size and reasonably good image quality, this 'low-level' approach has real potential; it often works very well.
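As a rough illustration of the shape-descriptor idea, OpenCV's matchShapes compares Hu moments, which are invariant to rotation and scale, so a slightly tilted or thicker '/' still scores close to the reference. The file names below are placeholders:

```python
import cv2

# Compare two character crops via Hu-moment shape matching.
ref = cv2.imread("slash_template.png", cv2.IMREAD_GRAYSCALE)
cand = cv2.imread("candidate_crop.png", cv2.IMREAD_GRAYSCALE)

# binarize both crops (Otsu picks the threshold automatically)
_, ref_bin = cv2.threshold(ref, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
_, cand_bin = cv2.threshold(cand, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# lower distance = more similar shapes
dist = cv2.matchShapes(ref_bin, cand_bin, cv2.CONTOURS_MATCH_I1, 0.0)
print("shape distance:", dist)
```

The acceptance threshold on the distance is something you would have to tune on a few known good and bad crops.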
I hope this helps you to solve your problem.

Related

Is there a better way to get accurate count?

I'm working on a project to get an accurate count of fish in images. I've used PixelLib but couldn't get accurate results. Is there any package that can simply count the objects in an image and give an accurate result?
This is a test image
https://i.stack.imgur.com/RUa79.jpg
This is the output of the test image.
Or would the watershed algorithm be a better fit, since recognizing the objects matters less than counting the fish in the image?
First of all, you don't need to spend any resources on segmentation if the actual goal is just to count your objects. Object detection might be enough.
The choice matters, because you will definitely need to train a custom model due to the specific shapes, partially visible objects, overlapping, etc.
I've just tried to segment your example:
Another part:
BTW, it is better to improve the overall input quality. In any case, segmented results can be used to "measure" shapes, so joined shapes can be properly "interpreted".
You can prepare a specific training set, so your model will recognize all corner cases in a proper way, but it might take some time.
Object detection will not require as much effort:
Long story short: improve your input (if possible) and follow any suitable tutorial.
Update
The bounding box (detection) vs accurate shape (segmentation) discussion is rather beside the point, because the goal is an accurate count, so let's address another issue:
Even a super-advanced model/approach will fail on the example provided. I'd suggest starting with any possible/reasonable input improvement.
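For reference, the watershed-style counting the asker mentioned can be sketched in a few lines: a distance transform separates touching blobs, and each surviving peak region is counted as one object. The file name and both thresholds are placeholders that would need tuning on real fish images:

```python
import cv2
import numpy as np

img = cv2.imread("fishes.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# assumes dark objects on a lighter background; adjust for your images
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# distance transform separates touching blobs: peaks = object centers
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
sure_fg = np.uint8(sure_fg)

# each connected peak region counts as one object
n_labels, _ = cv2.connectedComponents(sure_fg)
print("estimated count:", n_labels - 1)  # label 0 is the background
```

On heavily overlapping fish this will still miscount, which is exactly why improving the input and/or training a custom detector is the more reliable route.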

How to determine which method of OCR to use depending on images quality

I am asking because my two weeks of research have started to really confuse me.
I have a bunch of images from which I want to extract the numbers at runtime (this is needed for the reward function in reinforcement learning). The thing is, they look pretty clear to me (I know that is a completely different matter for OCR systems, which is why I am providing additional images to show what I am talking about).
Because they look rather clear, I first tried PyTesseract, and when that did not work out I started researching which other methods could be useful.
... and that's how my search ended here, because two weeks of trying to find the method best suited to my problem only raised more questions.
Currently I think the best solution is to train a digit-recognition model on the MNIST/SVHN datasets, but isn't that a bit of an overkill? The images are standardized, grayscale, and small, and the digit font stays the same, so I suppose there is an easier way of modifying those images or using a different OCR method.
That is why I am asking two questions:
Which method would be most useful for my case, if not a model trained on the MNIST/SVHN datasets?
Is there any documentation (books or other sources) that could make the actual choice of infrastructure easier? Say that in the future I again have to decide which OCR system to use: on what basis should I make the choice? Is it purely a trial-and-error thing?
If what you have to recognize are those 7-segment digits, forget about any OCR package.
Use the outline of the window to find the size and position of the digits. Then count the black pixels in seven predefined areas, one facing each segment.
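A sketch of that segment-probing idea, assuming each digit is already cropped and binarized (True = ink); the sample areas are rough fractions of the digit box and would need adjusting to your font:

```python
import numpy as np

# 7-bit pattern order: (top, top-left, top-right, middle,
#                       bottom-left, bottom-right, bottom)
SEGMENTS = {
    (1,1,1,0,1,1,1): 0, (0,0,1,0,0,1,0): 1, (1,0,1,1,1,0,1): 2,
    (1,0,1,1,0,1,1): 3, (0,1,1,1,0,1,0): 4, (1,1,0,1,0,1,1): 5,
    (1,1,0,1,1,1,1): 6, (1,0,1,0,0,1,0): 7, (1,1,1,1,1,1,1): 8,
    (1,1,1,1,0,1,1): 9,
}

def read_digit(d):
    h, w = d.shape
    # (row slice, col slice) for each of the seven segments
    areas = [
        (slice(0, h//5),        slice(w//4, 3*w//4)),  # top
        (slice(h//5, h//2),     slice(0, w//4)),       # top-left
        (slice(h//5, h//2),     slice(3*w//4, w)),     # top-right
        (slice(2*h//5, 3*h//5), slice(w//4, 3*w//4)),  # middle
        (slice(h//2, 4*h//5),   slice(0, w//4)),       # bottom-left
        (slice(h//2, 4*h//5),   slice(3*w//4, w)),     # bottom-right
        (slice(4*h//5, h),      slice(w//4, 3*w//4)),  # bottom
    ]
    # a segment is "on" if enough of its sample area is ink
    bits = tuple(int(d[r, c].mean() > 0.5) for r, c in areas)
    return SEGMENTS.get(bits, None)  # None = unrecognized pattern
```

This is deterministic, trivially fast, and needs no training data at all, which is hard to beat for fixed-font seven-segment displays.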

How can I find identical/duplicate images in a image bank [duplicate]

I am thinking about creating a database system for images where they are stored with compact signatures and then matched against a "query image" that could be a resized, cropped, brightened, rotated or a flipped version of the stored one. Note that I am not talking about image similarity algorithms but rather strictly about duplicate detection. This would make things a lot simpler. The system wouldn't care if two images have an elephant on them, it would only be important to detect if the two images are in fact the same image.
Histogram comparisons simply won't work for cropped query images. The only viable way to go I see is shape/edge detection. Images would first be somehow discretized, every pixel being converted to an 8-level grayscale for example. The discretized image will contain vast regions in the same colour which would help indicate shapes. These shapes then could be described with coefficients and their relative position could be remembered. Compact signatures would be produced out of that. This process will be carried out over each image being stored and over each query image when a comparison has to be performed. Does that sound like an efficient and realisable algorithm? To illustrate this idea:
removed dead ImageShack link
I know this is an immature research area, I have read Wikipedia on the subject and I would ask you to propose your ideas about such an algorithm.
SURF should do the job.
http://en.wikipedia.org/wiki/SURF
It is fast and robust, invariant to rotation and scaling, and also somewhat invariant to blur and contrast/lighting changes (though less strongly).
There is an example of automatic panorama stitching.
Check the article on SIFT first:
http://en.wikipedia.org/wiki/Scale-invariant_feature_transform
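For what it's worth, SIFT's patent has expired and it ships in recent opencv-python builds, so a duplicate check along these lines is easy to sketch (file names are placeholders, and the match-count threshold is something you would calibrate):

```python
import cv2

# Count good SIFT keypoint matches between a stored image and a query;
# resized/cropped/rotated/flipped duplicates still share many keypoints.
img1 = cv2.imread("stored.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive matches
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# a high count of good matches suggests the query is a duplicate
print("good matches:", len(good))
```

The descriptors themselves can serve as the compact stored signature the question asks about, so only the query image needs feature extraction at lookup time.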
If you want a feature-detection-driven model, you could take the singular value decomposition of the images (you'd probably have to do an SVD for each color channel) and use the first few columns of the U and V matrices, along with the corresponding singular values, to judge how similar the images are.
Very similar to the SVD method is one called principal component analysis, which I think is easier to use for comparing images. The PCA method is pretty close to taking the SVD and getting rid of the singular values by factoring them into the U and V matrices. If you follow the PCA path, you might also want to look into correspondence analysis. By the way, the PCA method was a common technique used in the Netflix Prize for extracting features.
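A minimal sketch of the SVD idea with NumPy: keep the top-k singular triplets of each grayscale image as a compact signature and compare the (normalized) singular-value spectra. The specific distance metric here is just one plausible choice, not a canonical one:

```python
import numpy as np

def svd_signature(gray, k=10):
    # compact signature: top-k singular triplets of the image matrix
    u, s, vt = np.linalg.svd(gray.astype(float), full_matrices=False)
    return u[:, :k], s[:k], vt[:k, :]

def spectrum_distance(sig_a, sig_b):
    sa, sb = sig_a[1], sig_b[1]
    # normalize the spectra so overall brightness/scale matters less
    return np.linalg.norm(sa / sa.sum() - sb / sb.sum())
```

Note that the singular-value spectrum alone is a fairly coarse signature; it tolerates brightness and scale changes well but is weaker against cropping than keypoint methods.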
How about converting this Python code back to C?
Check out tineye.com. They have a good system that's always improving. I'm sure you can find research papers from them on the subject.
The Wikipedia article you might be referring to is the one on feature detection.
If you are running on an Intel/AMD processor, you could use the Intel Integrated Performance Primitives to get access to a library of image-processing functions. Beyond that, there is the OpenCV project, another library of image-processing functions. The advantage of using a library is that you can try various already-implemented algorithms to see what works for your situation.

Best method to connect lines in image

I need help choosing a method I can apply to my problem. The problem is that I have 2 images like this:
First image and Second image.
You can see these are images with the same lines, although not all lines appear in both. Please suggest a method for approaching this problem. I need to get the best match possible, plus the coordinates for these two images so I can put them together again later, without matching them or running an algorithm again. Btw, I prefer Python as the programming language for this problem, and please do not suggest a patented method like SURF etc.
Thank you for all answers and help.
Have a nice day.
I need to get the best match possible, plus the coordinates for these two images so I can put them together again later, without matching them or running an algorithm again
I'm not sure what exactly you mean by the highlighted part, but what you're describing seems to be an image stitching problem, or at least part of one.
OpenCV has a class that implements a stitching pipeline.
If you are only interested in finding the correspondences and not the combined image, you could have a look here, where they explain a feature matcher and extractor.
Note, however, that the performance of these feature extractors depends a lot on the parameters you set, so you might have to tune them a bit before it works well.
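Since SURF is off the table, a patent-free sketch with ORB works well for finding correspondences and the transform that relates the two images; the homography can then be stored and reused to recombine the images later without re-running the matching. File names and nfeatures are placeholders:

```python
import cv2
import numpy as np

img1 = cv2.imread("lines1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("lines2.png", cv2.IMREAD_GRAYSCALE)

# ORB: patent-free feature detector + binary descriptor
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance for binary descriptors; crossCheck prunes weak matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

# H maps points of img2 into img1's coordinate frame; RANSAC rejects
# the outlier matches. Save H to reposition the images later.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(H)
```

With sparse line drawings, ORB may find few keypoints, so the parameter tuning mentioned above (more features, lower FAST threshold) matters here.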

How to better preprocess images for a better deep learning result?

We are experimenting with applying a convolutional neural network to classify good surfaces and surfaces with defects.
The good and bad images are mostly like the following:
Good ones:
Bad ones:
The images are relatively big (height: 800 pixels, width: 500 pixels).
The defect is very local and small relative to the image.
The background is very noisy.
The deep learning result (6 × conv+pooling → flatten → dense64 → dense32) is very bad (perhaps due to limited bad samples and the very small defect pattern).
There are other defect patterns like very subtle scratches, residuals and stains, etc., which is one of the main reasons that we want to use deep learning instead of specific feature engineering.
We can and are willing to accumulate more images of defects.
So the questions are:
Is deep learning even an appropriate tool for defect detection like this in practice?
If yes, how can we adapt or pre-process the images into formats that deep learning models can really work with? (Could we apply some known filters to make the background much less noisy?)
If no, what other practical techniques can be used instead of deep models?
Would things like template matching or anything else actually fit this type of problem?
Update:
Coming up with an explicit circular-stripes checker is a very good idea.
It could be used directly to check where the pattern is disturbed, or as a pre-processing step for deep learning.
Update:
A more subtle defect pattern: a 'scratch'.
There is a scratch starting from the bottom of the fan area, going up and slightly to the right.
Is deep learning even an appropriate tool for defect detection like this in practice?
Deep learning is certainly a possibility that promises to be universal. In general, though, it should be the last resort rather than the first approach. Downsides include:
It is difficult to include prior knowledge.
You therefore need an extreme amount of data to train the classifier for the general case.
If you succeed, the model is opaque. It may depend on subtle properties that cause it to fail if the manufacturing process is changed in the slightest way, and there is no easy way to fix that.
If yes, how can we adapt or pre-process the images into formats that deep learning models can really work with? (Could we apply some known filters to make the background much less noisy?)
Independent of the classifier you eventually decide to use, preprocessing should be optimal.
Illumination: the illumination is uneven. I'd suggest defining a region of interest in which the illumination is bright enough to see something, then calculating the average intensity over many images and using it to normalize the brightness. The result would be an image cropped to the region of interest, with homogeneous illumination.
Circular stripes: in the images you show, the stripes are circular, so their orientation depends on the position in the image. I would suggest a transformation that maps the region of interest (a fraction of a circle) onto a trapezoid, where each stripe becomes horizontal and the length of each stripe is retained. A rough sketch of both steps follows.
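This is not exactly the trapezoid transform described above, but cv2.warpPolar is a readily available approximation that straightens concentric stripes. The disc center, radius, and the precomputed average-intensity file are all placeholders:

```python
import cv2
import numpy as np

img = cv2.imread("surface.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# 1) illumination: divide by an average image accumulated over many
#    "good" samples, so uneven lighting cancels out
avg = np.load("average_intensity.npy")  # hypothetical precomputed mean image
normalized = img / np.maximum(avg, 1.0)

# 2) circular stripes: unwrap the annulus; concentric stripes become
#    straight columns (transpose to make them horizontal rows)
center, radius = (250, 400), 380        # placeholder disc geometry
unwrapped = cv2.warpPolar(normalized, (512, 512), center, radius,
                          cv2.WARP_POLAR_LINEAR)
unwrapped = unwrapped.T
```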
If no, what other practical techniques can be used instead of deep models? Would things like template matching or anything else actually fit this type of problem?
Rather than identifying defects, you could try identifying the intact structure, which has relatively constant properties. (This would be the circular-stripes checker that I suggested in the comment.) One obvious thing to test would be a 2D Fourier transform in a neighborhood of each pixel of an image preprocessed as described above. If the stripes are intact, you should see that the frequency of intensity change is much lower in the horizontal than in the vertical direction. I would plot these two quantities for many "good" and "bad" pixels and check whether that already allows some classification, as sketched below.
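A minimal sketch of that per-patch Fourier check, assuming the patch comes from the unwrapped image where intact stripes run horizontally; the score threshold is purely illustrative:

```python
import numpy as np

def stripe_score(patch):
    # magnitude spectrum, zero frequency shifted to the center
    f = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    cy, cx = np.array(f.shape) // 2
    horiz = f[cy, :].sum()   # energy of intensity variation along x
    vert = f[:, cx].sum()    # energy of intensity variation along y
    # intact horizontal stripes vary in y, not x -> ratio >> 1;
    # a defect disturbs the stripes and pulls the ratio toward 1
    return vert / (horiz + 1e-9)

# illustrative use: flag patches where the stripe structure breaks down
# is_defective = stripe_score(patch) < 3.0
```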
If you can preselect possible defects with that method, you could then crop out a small image and subject it to deep learning or whatever other method you want to use.
