We are experimenting with applying a convolutional neural network to classify good surfaces and surfaces with defects.
The good and bad images are mostly like the following:
Good ones:
Bad ones:
The image is relatively big (height: 800 pixels, width: 500 pixels)
The defect is very local and small relative to the image
The background is very noisy
The deep learning result (6 x conv+pooling -> flatten -> dense64 -> dense32) is very bad
(perhaps due to the limited number of Bad samples and the very small defect pattern)
There are other defect patterns such as very subtle scratches, residues and stains, etc., which is one of the main reasons we want to use deep learning instead of specific feature engineering.
We can and are willing to accumulate more images of defects.
So the questions are:
Is deep learning even an appropriate tool for defect detection like this in practice?
If yes, how can we adapt or pre-process the images into a format that deep learning models can really work with? (Could we apply some known filters to make the background much less noisy?)
If no, what other practical techniques can be used instead of deep models?
Will template matching or anything else actually be a fit for this type of problem?
Update:
Very good idea to come up with an explicit circular-stripes checker.
It might be used directly to check where the pattern is disturbed, or as a pre-processing step for deep learning.
Update:
A more subtle pattern: a 'scratch'.
There is a scratch starting from the bottom of the fan area going up and a little to the right.
Is deep learning even an appropriate tool for defect detection like this in practice?
Deep learning certainly is a possibility that promises to be universal. In general, though, it should be the last resort rather than the first approach. Downsides include:
It is difficult to include prior knowledge.
You therefore need an extreme amount of data to train the classifier for the general case.
If you succeed, the model is opaque. It might depend on subtle properties, which cause it to fail if the manufacturing process is changed in the slightest way and there is no easy way to fix it.
If yes, how can we adapt or pre-process the images into a format that deep learning models can really work with? (Could we apply some known filters to make the background much less noisy?)
Independent of the classifier you eventually decide to use, preprocessing should be optimal.
Illumination: The illumination is uneven. I'd suggest defining a region of interest in which the illumination is bright enough to see something, calculating the average intensity over many images, and using that to normalize the brightness. The result would be an image cropped to the region of interest, where the illumination is homogeneous.
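A minimal sketch of that normalization step, assuming you have a folder of "good" images and a fixed region of interest (the ROI coordinates and file pattern below are placeholders, not values from your setup):

```python
import glob
import cv2
import numpy as np

# Hypothetical ROI (rows, cols of the well-lit region) -- measure this once.
ROI = (slice(100, 700), slice(50, 450))

# Estimate the average illumination over many "good" images.
files = glob.glob("good/*.png")
acc = None
for f in files:
    img = cv2.imread(f, cv2.IMREAD_GRAYSCALE).astype(np.float32)[ROI]
    acc = img if acc is None else acc + img
mean_illum = acc / len(files)
# Keep only the slow illumination variation, not the stripe pattern itself.
mean_illum = cv2.GaussianBlur(mean_illum, (51, 51), 0)

def normalize(img):
    """Crop to the ROI and divide out the average illumination."""
    roi = img.astype(np.float32)[ROI]
    flat = roi / (mean_illum + 1e-6)
    # Rescale to 8-bit for further processing / visualization.
    return cv2.normalize(flat, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```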
Circular stripes: In the images you show, the stripes are circular, so their orientation depends on the position in the image. I would suggest using a transformation that maps the region of interest (a fraction of a circle) onto a trapezoid, where each stripe is horizontal and the length of each stripe is retained.
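A rough sketch of such an unwarping using OpenCV's warpPolar; the center and radius values are placeholders that would have to be measured from your fixture geometry:

```python
import cv2

def unwrap_fan(img, center, r_min, r_max, out_size=(600, 400)):
    """Map the fan-shaped region so that each circular stripe becomes a
    straight horizontal line. center, r_min, r_max are assumptions."""
    w, h = out_size
    # warpPolar puts radius along x and angle along y in the output.
    polar = cv2.warpPolar(img, (w, h), center, r_max, cv2.WARP_POLAR_LINEAR)
    # Cut away radii below r_min, then transpose so each stripe
    # (constant radius) runs horizontally.
    x_min = int(w * r_min / r_max)
    return cv2.transpose(polar[:, x_min:])

# Usage with made-up geometry:
# img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)
# flat = unwrap_fan(img, center=(250, 790), r_min=150, r_max=750)
```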
If no, what other practical techniques can be used instead of deep models? Will template matching or anything else actually be a fit for this type of problem?
Rather than identifying defects, you could try identifying the intact structure, which has relatively constant properties. (This would be the circular-stripes checker that I suggested in the comment.) Here, one obvious thing to test would be a 2D Fourier transform around each pixel of an image preprocessed as described above. If the stripes are intact, you should see that the frequency of intensity change is much lower in the horizontal than in the vertical direction. I would just plot these two quantities for many "good" and "bad" pixels and check whether that already allows some classification.
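A minimal sketch of that test, assuming the image has already been rectified so the stripes run horizontally (patch size and step are arbitrary choices):

```python
import numpy as np

def stripe_energies(patch):
    """2D FFT of a small patch from the rectified image.
    Returns (horizontal, vertical) spectral energy, ignoring the DC term."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(patch)))**2
    cy, cx = np.array(f.shape) // 2
    f[cy, cx] = 0.0                 # drop DC
    horiz = f[cy, :].sum()          # variation along x: low if stripes are intact
    vert = f[:, cx].sum()           # variation along y: carries the stripe frequency
    return horiz, vert

def scan(image, patch=32, step=16):
    """Slide over the rectified image and collect the two energies per patch;
    plot them for known-good and known-bad regions to see if they separate."""
    pts = []
    for y in range(0, image.shape[0] - patch, step):
        for x in range(0, image.shape[1] - patch, step):
            pts.append((x, y) + stripe_energies(image[y:y+patch, x:x+patch].astype(float)))
    return pts
```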
If you can preselect possible defects with that method, you could then crop out a small image and subject it to deep learning or whatever other method you want to use.
Related
I'm testing various Python image pre-processing pipelines for tesseract-ocr.
My input data are pdf invoices and receipts of all manner of quality, from scanned documents (best) to mobile-phone photos taken in poor lighting (worst), and everything in between. When performing manual scanning for OCR, I typically choose among several scanning presets (unsharp mask, edge fill, color enhance, gamma). I'm thinking about implementing a similar solution in a Python pipeline.
I understand the standard metric for OCR quality is Levenshtein (Edit distance), which is a measure of the quality of results compared to ground truth.
What I'm after are measurements of the effect of image processing on OCR result quality. For example, in the paper Prediction of OCR Accuracy the author describes at least two measurements, White Speckle Factor (WSF) and Broken Character Factor (BCF). Other descriptors I've read about include salt-and-pepper noise and aberrant pixels.
I've worked my way through 200 of the nearly 4k tesseract-tagged questions here. Very interesting. Most questions are of the type "I have this kind of image, how can I improve the OCR outcome?" Nothing so far about measuring the effect of image processing on OCR outcomes.
A curious question was this one, Dirty Image Quality Assesment Measure, but the question is not focused on OCR and the solutions seem overkill.
There is no universal image improvement technique for OCR-ability. Every image defect is (partly) corrected with ad-hoc techniques, and a technique that works in one case can be counter-productive in another.
For a homogeneous data set (in the sense that all documents have similar origin/quality and were captured in the same conditions), you can indeed optimize the preprocessing chain by trying different combinations and settings and computing the total edit distance. But this requires preliminary knowledge of the ground truth (at least for a sample of the documents).
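A sketch of that optimization loop with pytesseract and python-Levenshtein; the two example chains are purely illustrative, not recommendations:

```python
import cv2
import pytesseract
from Levenshtein import distance  # pip install python-Levenshtein

def chain_a(img):
    """Illustrative chain: grayscale + adaptive threshold."""
    g = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.adaptiveThreshold(g, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 31, 10)

def chain_b(img):
    """Illustrative chain: denoise + Otsu binarization."""
    g = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    g = cv2.fastNlMeansDenoising(g, None, 10)
    _, bw = cv2.threshold(g, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return bw

def total_edit_distance(chain, samples):
    """samples: list of (image_path, ground_truth_text) pairs."""
    total = 0
    for path, truth in samples:
        text = pytesseract.image_to_string(chain(cv2.imread(path)))
        total += distance(text, truth)
    return total

# best_chain = min([chain_a, chain_b], key=lambda c: total_edit_distance(c, samples))
```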
But for heterogeneous data sets, there is little that you can do. There remains the option of testing different preprocessing chains and relying on the recognition scores returned by the OCR engine, assuming that better readability corresponds to better correctness.
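For the score-based route, a small helper that uses tesseract's word-level confidences as a proxy (a rough heuristic, not a guarantee of correctness):

```python
import pytesseract
from pytesseract import Output

def mean_confidence(img):
    """Average word-level confidence reported by tesseract,
    ignoring the non-word entries (reported as -1)."""
    data = pytesseract.image_to_data(img, output_type=Output.DICT)
    confs = [float(c) for c in data["conf"] if float(c) >= 0]
    return sum(confs) / len(confs) if confs else 0.0
```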
You might also extract some global image characteristics such as contrast, signal-to-noise ratio, sharpness, character size and density... and optimize the readability as above. Then feed this info to a classifier that will learn how to handle the different image conditions. Honestly, I don't really believe in this approach.
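For completeness, a few crude descriptors of that kind are easy to compute (the "ink ratio" threshold below is an arbitrary choice):

```python
import cv2

def image_stats(gray):
    """A few rough global descriptors of a grayscale document image."""
    return {
        "contrast": float(gray.std()),                              # RMS contrast
        "sharpness": float(cv2.Laplacian(gray, cv2.CV_64F).var()),  # focus measure
        "ink_ratio": float((gray < 128).mean()),                    # crude proxy for character density
    }
```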
I'm working on a project to get an accurate count of the fish in images. I've used PixelLib but couldn't get accurate results. Is there any package that can just count objects in an image and give an accurate result?
This is a test image
https://i.stack.imgur.com/RUa79.jpg
This is the output of the test image.
Or would the watershed algorithm be better, since recognizing the objects matters less than counting the fish in the image?
First of all, you don't need to spend any resources on segmentation if the actual goal is just to count your objects. Object detection might be enough.
It is important to choose carefully, because you will definitely need to train a custom model due to the specific shape, partially visible objects, overlapping, etc.
I've just tried to segment your example:
another part:
BTW, it is better to improve the overall input quality. Anyway, segmentation results can be used to "measure" shapes, so that joined shapes can be properly "interpreted".
You can prepare a specific training set so that your model recognizes all the corner cases properly, but it might take some time.
Object detection will not require as much effort:
Long story short: improve your input (if possible) and try any suitable tutorial.
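Regarding the watershed question: if you just want a quick classical baseline before training anything, a distance-transform + watershed count is cheap to try. A rough sketch (it assumes the fish separate from the water after an Otsu threshold, which may not hold for your photos):

```python
import cv2
import numpy as np

def rough_count(path):
    """Very rough classical count: Otsu threshold -> distance transform ->
    peak markers -> watershed. Tuning (or failure) is expected on noisy input."""
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    bw = cv2.morphologyEx(bw, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8), iterations=2)

    # Peaks of the distance transform act as one marker per object.
    dist = cv2.distanceTransform(bw, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
    sure_fg = sure_fg.astype(np.uint8)

    n_markers, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1                       # background becomes label 1
    unknown = cv2.subtract(cv2.dilate(bw, None, iterations=3), sure_fg)
    markers[unknown == 255] = 0                 # region the watershed must decide

    cv2.watershed(img, markers)
    # Labels > 1 are individual objects; 1 is background, -1 marks boundaries.
    return len(np.unique(markers)) - 2

# print(rough_count("RUa79.jpg"))
```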
Update
The bounding box (detection) vs. accurate shape (segmentation) discussion is rather beside the point, because the goal is an accurate count, so let's address another issue:
Even a super-advanced model/approach will fail on the example provided. I'd suggest starting with any possible/reasonable input improvement.
I am learning OpenCV applications by reading research papers and attempting to duplicate their tests and results. I may have jumped a bit too far off the beaten path and am now curious about the proper way to go about this investigation.
Goal: 1) Register these two images. 2) Stack the exposures (there are actually 20+ in this series). 3) Learn.
Attached below is an example image, shot with a cell phone, in low light, in burst mode. If one were to level-stretch it, one would see there are very few hard edges (some sheets), but there are enough details to manually align portions of the images with each other. I ran this through the default OpenCV implementations of ORB and SIFT and, as expected, came back with poor matches.
I have not yet stumbled upon the right technique for strengthening edge detection here. As mentioned, no hard edges are present. However, I thought I had previously read that one could downsample the image using a max function and get better 'edge' detection. That edge should then be able to provide a registration homography for the higher-resolution image. But I can find neither the resource describing this nor any description of similar work. Help here would be appreciated.
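To make the idea concrete, this is roughly what I had imagined; the pooling factor and the way I scale the homography back up to full resolution are my own guesses:

```python
import cv2
import numpy as np

def max_downsample(gray, factor=8):
    """Block-wise max (a crude max-pool): keeps the brightest pixel of each
    block, so faint structure survives the downsampling."""
    h, w = gray.shape
    h, w = h - h % factor, w - w % factor
    blocks = gray[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.max(axis=(1, 3))

def register(gray1, gray2, factor=8):
    """Match ORB features on the max-downsampled images, then express the
    homography at full resolution by conjugating with the scale matrix."""
    small1, small2 = max_downsample(gray1, factor), max_downsample(gray2, factor)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(small1, None)
    k2, d2 = orb.detectAndCompute(small2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H_small, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    S = np.diag([factor, factor, 1.0])
    return S @ H_small @ np.linalg.inv(S)
```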
In addition, if there are any published papers discussing this technique that I could be pointed to, I'd appreciate it. I'm quite familiar with astrophotography and star stacking, and am looking forward to trying drizzle on a different type of image set.
Techniques I've tried on the downsampled image to bring out edges more clearly: difference of Gaussians, Laplacian, directional edge detection, and a few others.
I appreciate the time you've taken to help me learn how to expand my efforts for this.
Thank you.
Edit: Modifying the image's contrast, or brightness, or tonal response, has no effect on the correlation of the image content, at least in the limited set of tests I've been able to run. It makes the images 'prettier' but, honestly, the algorithms don't care whether they're in 'human visual space' or in 'linear digital counts'. I can post it as a pretty image but, without those sharp edges, most of the filters fail and matches don't succeed, which is the crux of my issue here.
I am thinking about creating a database system for images where they are stored with compact signatures and then matched against a "query image" that could be a resized, cropped, brightened, rotated, or flipped version of the stored one. Note that I am not talking about image similarity algorithms but rather strictly about duplicate detection. This would make things a lot simpler. The system wouldn't care if two images have an elephant on them; it would only be important to detect whether the two images are in fact the same image.
Histogram comparisons simply won't work for cropped query images. The only viable way I see is shape/edge detection. Images would first be discretized somehow, every pixel being converted to an 8-level grayscale, for example. The discretized image would contain vast regions of the same colour, which would help indicate shapes. These shapes could then be described with coefficients, and their relative positions could be remembered. Compact signatures would be produced from that. This process would be carried out for each image being stored and for each query image when a comparison has to be performed. Does that sound like an efficient and realisable algorithm? To illustrate this idea:
removed dead ImageShack link
I know this is an immature research area; I have read the Wikipedia article on the subject, and I would ask you to propose your ideas about such an algorithm.
SURF should do its job.
http://en.wikipedia.org/wiki/SURF
It is fast and robust; it is invariant to rotation and scaling, and also to blur and contrast/lighting changes (but not as strongly).
There is an example of automatic panorama stitching.
Check the article on SIFT first:
http://en.wikipedia.org/wiki/Scale-invariant_feature_transform
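Note that SURF is patent-encumbered and often missing from stock OpenCV builds, so here is a sketch of the same idea with SIFT (available as cv2.SIFT_create in recent OpenCV); the thresholds are arbitrary and would need tuning:

```python
import cv2

def is_same_image(path1, path2, min_good=25, ratio=0.75):
    """Rough duplicate check: count SIFT matches that pass Lowe's ratio test."""
    img1 = cv2.imread(path1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path2, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, d1 = sift.detectAndCompute(img1, None)
    _, d2 = sift.detectAndCompute(img2, None)
    if d1 is None or d2 is None:
        return False
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = 0
    for pair in matches:
        # Keep a match only if it is clearly better than the second-best one.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good >= min_good
```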
If you want to do a feature-detection-driven model, you could perhaps take the singular value decomposition of the images (you'd probably have to do an SVD for each color channel) and use the first few columns of the U and V matrices along with the corresponding singular values to judge how similar the images are.
Very similar to the SVD method is one called principal component analysis, which I think will be easier to use for comparing images. The PCA method is pretty close to just taking the SVD and getting rid of the singular values by factoring them into the U and V matrices. If you follow the PCA path, you might also want to look into correspondence analysis. By the way, PCA was a common method used in the Netflix Prize for extracting features.
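A rough sketch of the SVD variant; it assumes both images have already been converted to grayscale and resized to the same dimensions, and k and the scoring are arbitrary choices:

```python
import numpy as np

def svd_signature(gray, k=10):
    """Keep the first k singular values/vectors as a compact signature."""
    U, s, Vt = np.linalg.svd(gray.astype(float), full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

def svd_similarity(sig1, sig2):
    """Crude similarity: compare normalized singular-value spectra plus
    alignment of the leading singular vectors (sign-invariant).
    Both signatures must come from images of the same size."""
    U1, s1, V1 = sig1
    U2, s2, V2 = sig2
    spec = np.dot(s1 / np.linalg.norm(s1), s2 / np.linalg.norm(s2))
    vec = np.mean([abs(np.dot(U1[:, i], U2[:, i])) for i in range(len(s1))])
    return 0.5 * (spec + vec)
```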
How about converting this Python code back to C?
Check out tineye.com. They have a good system that's always improving. I'm sure you can find research papers from them on the subject.
The Wikipedia article you might be referring to is the one on feature detection.
If you are running on an Intel/AMD processor, you could use the Intel Integrated Performance Primitives to get access to a library of image processing functions. Beyond that, there is the OpenCV project, another library of image processing functions. The advantage of using a library is that you can try various already implemented algorithms to see what will work for your situation.
I am working on a project in which I want to compare a non-modified original picture against a dataset that contains images, some of which are small to medium alterations of the original image. These alterations range from simple color changes, gradients, lighting, and flipping/rotating of the image to modifications done by a professional in Photoshop and used for a movie poster.
My goal is to identify, with rather good accuracy, whether the original image has been used in one of the images in the dataset.
I have already tried many different approaches:
Perceptual Hashing
Feature Extraction
Both with and without Machine Learning techniques
Tensorflow
...
However, I always have the feeling that all of the above have some shortcomings in terms of accuracy and performance.
Therefore I was wondering if someone knows a good Python project (Github, website,...) that will allow me to achieve my goal.