I'm working with OpenCV (cv2) in Python 3.7 to create a license plate reader using the pytesseract module. Using cv2.Canny() to detect the contour edges of the license plate in the image works fairly well, but when I use cv2.approxPolyDP() to reduce the contour to a four-point polygon, the resulting polygon is slightly rotated (represented by the green lines in the image). This hurts the ability to interpret the text on the license plate, because the text remains somewhat rotated after performing a four-point transform to de-warp the polygon into a rectangle. Unfortunately, tesseract 4.0 does not seem to be able to detect text rotation for such a small number of characters -- it generates an exception in text_to_osd() for anything less than 133 characters.
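Simplified, the pipeline looks something like this (a rough sketch, not my exact code; the file name, Canny thresholds, approximation epsilon, and output size are placeholder values):

```python
import cv2
import numpy as np
import pytesseract

def order_points(pts):
    # Order corners as top-left, top-right, bottom-right, bottom-left.
    s = pts.sum(axis=1)
    d = np.diff(pts, axis=1).ravel()
    return np.float32([pts[np.argmin(s)], pts[np.argmin(d)],
                       pts[np.argmax(s)], pts[np.argmax(d)]])

img = cv2.imread("plate.jpg")                      # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)                   # illustrative thresholds

# Largest contour that approxPolyDP can reduce to four points.
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
plate = None
for c in sorted(contours, key=cv2.contourArea, reverse=True):
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4:
        plate = approx.reshape(4, 2).astype(np.float32)
        break

if plate is not None:
    # Four-point transform to de-warp the quadrilateral into a rectangle.
    w, h = 400, 130                                # arbitrary output size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(order_points(plate), dst)
    warped = cv2.warpPerspective(img, M, (w, h))
    print(pytesseract.image_to_string(warped))
```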
I believe the problem may be that license plates have rounded corners, and since cv2.approxPolyDP() is limited to points on the contour, there may be no way to avoid the rotation. In this particular case, though, it looks as if there may actually be a choice of four contour points that approximates the plate edges more closely. Unless there is some way to avoid this, I would need some clever way to detect and correct the text rotation, since tesseract appears to be unable to do this for a small number of characters.
So what I'm looking for is a) a method for finding the 4-point contour of a license plate that accurately matches the image or b) an alternate means of detecting text rotation in an image when there are as few as six characters in the image.
I'm currently working on my first assignment in image processing (using OpenCV in Python, but I'm open to other libraries and languages). My assignment is to calculate a precise score (to tenths of a point) for one to several shooting holes in an image uploaded by a user. The issue is that the uploaded image can be taken against different backgrounds (although the background will never match the mean colors of the rest of the target). Because of this, I have ruled out most of the solutions found on the internet and most of the solutions I could come up with.
Summary of my problem
Bullet hole identification:
bullet holes can be on different backgrounds
bullet holes can overlap
single bullet holes will always be of similar size (there is only one type of caliber used on all of the calculated shooting targets)
I'm able to calculate a very precise radius of the shooting hole
Shooting targets:
there are two types of shooting targets that my app is going to calculate (images provided below)
photos of the shooting targets can be taken in different lighting conditions
Shooting target 1 example:
Shooting target 2 example:
Shooting target examples to find bullet holes in:
shooting target example 1
shooting target example 2
shooting target example 3
shooting target example 4
shooting target example 5
What I tried so far:
Color segmentation
ruled out due to the varying backgrounds mentioned above
Difference matching
to be able to actually compare the target images (empty and fired on), I have written an algorithm that crops the target by its outer largest circle (its radius + bullet size in pixels)
after that, I have probably tried all of the image-comparison methods I could find on the internet
for example: brute force matching, histogram comparisons, feature matching and many more
I failed here mostly because the colors in the two compared images were slightly different, and also because one of the images was sometimes taken at a slight angle, so the circles didn't overlap and were counted as differences
Hough circles algorithm
since I know the radius (in pixels) of the shots on the target, I thought I could simply detect them using this algorithm (see the sketch after this list)
after several hours/days of playing with the parameters of the HoughCircles function, I figured it would never work on all of the uploaded images without changing the parameters based on the uploaded image
Edge detection and finding contours of the bullet holes
I have tried two edge detection methods (Canny and Sobel) while experimenting with image smoothing algorithms (blurring, bilateral filtering, morphological operations, etc.)
after that, I have tried to find all of the contours in the edge detected image and filter out the circles of the target with a similar center point
this seemed like the solution at first, but on several test images it wouldn't work properly :/
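For illustration, the HoughCircles attempt looked roughly like the sketch below (the file name, blur kernel, and Hough parameters are placeholders; those parameters are exactly the part I could not make robust across all uploaded images):

```python
import cv2
import numpy as np

img = cv2.imread("target.jpg")                 # placeholder file name
gray = cv2.GaussianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), (5, 5), 0)

hole_r = 20                                    # known bullet-hole radius in pixels
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=hole_r,
                           param1=100, param2=25,
                           minRadius=int(hole_r * 0.8),
                           maxRadius=int(hole_r * 1.2))
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        # Mark each detected candidate hole.
        cv2.circle(img, (int(x), int(y)), int(r), (0, 255, 0), 2)
```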
At this point, I have run out of ideas and have therefore come here for any kind of advice or an idea that would push me further. Is it possible that there simply isn't a solution to such complicated shooting-target recognition, or am I just too inexperienced to come up with one?
Thank you in advance for any help.
Edit: I know I could simply put a single-color paper behind the shooting target and find the bullets that way. That is not how I want the app to work, though, and therefore it's not a valid solution to my problem.
I am trying to obtain the relative depth of the pixels of an image. For example, the image in https://www.awn.com/news/nvidia-unveils-quadro-rtx-worlds-first-ray-tracing-gpu . I don't need the precise distance of each pixel, which I believe would be impossible, but I would like to get something like "the green ball is further away than the other balls". Is it possible using OpenCV in Python? The code I wrote can identify each ball, but not their relative distances or depths, so it is pretty much useless for my purposes.
That's an ill-posed problem (you cannot measure depth with a single RGB camera) and a topic of recent research. I found this survey paper. Most often, a depth image is learned from an RGB image using convolutional neural networks.
However, if you use a lot of prior information about your scene (all objects in the image are circular, and the partially visible circles correspond to the ones in the background), then you might be able to do something with heuristic methods like thresholding, edge detection, or Hough transforms, but it won't be easy.
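To make the heuristic route concrete, here is a rough sketch of one such idea under strong assumptions (circular objects only, a usable edge map, hand-tuned Hough parameters, placeholder file name): detect the circles, then rank them by how much of each outline is actually visible, on the assumption that an occluded ball lies behind the one hiding it. This is only an illustration, not a tested solution.

```python
import cv2
import numpy as np

img = cv2.imread("balls.png")                        # placeholder file name
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)

# Candidate circles; every parameter here would need tuning for a real image.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                           param1=120, param2=40, minRadius=20, maxRadius=200)

if circles is not None:
    circles = np.round(circles[0]).astype(int)
    edges = cv2.Canny(gray, 80, 160)
    visibility = []
    for x, y, r in circles:
        # Sample the circle's perimeter and measure how much of it lies on a
        # detected edge; a partially hidden (more distant) ball scores lower.
        t = np.linspace(0, 2 * np.pi, 180, endpoint=False)
        px = np.clip((x + r * np.cos(t)).astype(int), 0, edges.shape[1] - 1)
        py = np.clip((y + r * np.sin(t)).astype(int), 0, edges.shape[0] - 1)
        visibility.append(np.mean(edges[py, px] > 0))
    # Lower visibility ratio -> more of the outline is hidden -> likely further back.
    order = np.argsort(visibility)
    print("circles ordered from (likely) furthest to nearest:\n", circles[order])
```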
I am working on a project to figure out the difference between two objects and tag them with the proper model code.
I need a suggestion on how we can tackle such a problem with image processing using OpenCV; the images are below.
So far, I have tried calculating the black-pixel difference between the two images after binary thresholding, and I also counted the number of holes present on the gasket.
I also tried using feature points, but that didn't work well.
What else can be done to improve the detection?
Thank you
The holes are excellent features that can be robustly detected by blob analysis.
In the first place, locate the large circle and determine its center and radius. The radius might be a first discriminant feature.
Next, establish the configuration of the screw holes around the center. You can use the distance to the center, the number of holes and the angles they define around the center.
If this is still not enough, you can register the gaskets and compare them to the models by matching the screw holes, adjusting the rotation, then comparing pixel-wise with a similarity measure such as SAD or SSD.
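A minimal sketch of that idea, assuming a reasonably clean photo that Otsu thresholding can binarize (the file name and the hole-area bounds are placeholder values you would adapt to your images):

```python
import cv2
import numpy as np

img = cv2.imread("gasket.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Outer circle: largest external contour and its enclosing circle.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
outer = max(contours, key=cv2.contourArea)
(cx, cy), outer_r = cv2.minEnclosingCircle(outer)

# Screw holes: label the inverted image and keep only hole-sized blobs.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(cv2.bitwise_not(binary))
holes = []
for i in range(1, n):
    area = stats[i, cv2.CC_STAT_AREA]
    if 50 < area < 5000:                               # assumed hole-area bounds
        hx, hy = centroids[i]
        dist = np.hypot(hx - cx, hy - cy)              # distance to the center
        angle = np.arctan2(hy - cy, hx - cx)           # angle around the center
        holes.append((dist, angle, area))

# The signature (outer_r, number of holes, their distances and angles) can then
# be compared against the stored configuration of each gasket model.
print(outer_r, len(holes), sorted(holes))
```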
Just for educational purposes, I'm working on making a letter-and-symbol recognition program in Python, and I've run into some trouble with region separation. I made a working connected-component labeling function using the information here:
CCL - Wikipedia
But I need one with the accuracy of 8-connectivity, which the article mentions but doesn't provide details for. It has a diagram on the right side showing that, to check for it, the northwest and northeast pixels need to be included, but I have no idea how, and I can't find any information on it. I'm not asking for code, but can anybody familiar with this method describe how to incorporate those?
8-connectivity isn't more accurate, and in practice it's suitable only for certain applications. It's more common to use 4-connectivity, especially for "natural" images rather than images created in the lab for testing. An 8-connected region will include checkerboard patterns and zigzag noise. A 4-connected foreground yields an 8-connected background.
You can dig into the source for the OpenCV function cvFindContours(). There are OpenCV bindings to Python.
http://opencv.willowgarage.com/documentation/python/structural_analysis_and_shape_descriptors.html
http://opencv.willowgarage.com/wiki/PythonInterface
I would recommend first implementing a 4-connected algorithm. You can find pseudocode in books like the following:
Machine Vision: Theory, Algorithms, Practicalities by E. R. Davies
In the 3rd edition, see section 6.3, "Object Labeling and Counting"
Digital Image Processing by Gonzalez and Woods
See section 9.5.3 "Extraction of Connected Components"
The presentation is less clear, but this is a standard all-in-one textbook for image processing. The section on thresholding for binarization is good. An international edition costs about $35.
Older textbooks may have simple, straightforward descriptions. Used copies of
Computer Vision by Ballard and Brown are quite cheap. In that book, Algorithm 5.1 is called Blob Coloring.
My favorite quick description can be found in the section "Region Labeling Algorithm" of Handbook of Image and Video Processing edited by Al Bovik. Conveniently, pages 44 - 45 are available online in Google Books:
http://books.google.com/books?id=UM_GCfJe88sC&q=region+labeling+algorithm#v=snippet&q=region%20labeling%20algorithm&f=false
For OCR it's common to look for dark connected regions (blobs) on a light background. Our binarized image will be a black foreground (0) on a white background (1) in a 1-bit image.
For a 4-connected algorithm you'll use structure elements like the ones shown below (which you'll also see in the Bovik book). Once you've tinkered with 4-connectivity, the extension to 8-connectivity should be obvious.
We scan each row of pixels in the image from left to right, and all rows from top to bottom. For any pixel (x,y), its left neighbor (x - 1, y) and top neighbor (x, y - 1) have already been scanned, so we can check whether a region number has already been assigned to one or both of those neighbors. For example, if pixel (x, y-1) is labeled region 8, and if (x,y) is also a foreground pixel, then we assign region 8 to (x,y). If pixel (x,y) is a foreground pixel but the left and top neighbors are background pixels, we assign a new region number to (x,y).
I recommend the Bovik reference, but here's a quick overview of the algorithm.
Initialize a region number counter (e.g. "region = 0")
Initialize a "region equivalency" data structure for later processing.
Create a black and white image using a binarization threshold.
Scan each pixel in the image from top to bottom, left to right.
Assign region 0 to any white background (1) pixel.
For any black foreground pixel (x,y) test the following conditions:
If top and left pixels are foreground, use the region number for (x-1, y) as the region number for (x,y), and track the equivalency of the left and top region numbers.
If only left neighbor (x - 1,y) is a foreground pixel, use its region number for (x,y)
If only top neighbor (x, y - 1) is a foreground pixel, use its region number for (x,y)
If left and top neighbors are background pixels, increment the region number and assign this new region number to (x,y).
After completing this processing for the entire image, analyze the equivalency matrix and reduce each collection of equivalent regions to a single region.
The reduction of equivalencies is the tricky part. In the image below, regions have been correctly labeled according to the algorithm. The image shows a different color for each region number. The three touching regions must be reduced to one connected region.
Your code should scan the equivalency data structure to reassign 2 (red) and 3 (dark blue) to the lowest-numbered region, which is 1 (yellow). Once the region number reassignment is complete, region labeling is complete.
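Here is a minimal NumPy sketch of the two-pass procedure just described, with the equivalency bookkeeping done as a small union-find. It assumes the conventions above (foreground = 0, background = 1) and takes a connectivity argument so you can see exactly which extra, already-scanned neighbors (northwest and northeast) the 8-connected variant checks:

```python
import numpy as np

def label_regions(binary, connectivity=4):
    """Two-pass region labeling. `binary` is a 2-D array with foreground = 0
    (black) and background = 1 (white), as in the description above.
    connectivity=8 also checks the northwest and northeast neighbors."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    parent = {}                      # region equivalency structure (union-find)
    next_region = 0

    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]   # path halving
            r = parent[r]
        return r

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    # Neighbors already visited in a top-to-bottom, left-to-right scan.
    if connectivity == 4:
        offsets = [(-1, 0), (0, -1)]                      # left, top
    else:
        offsets = [(-1, 0), (-1, -1), (0, -1), (1, -1)]   # left, NW, top, NE

    # First pass: provisional region numbers plus equivalencies.
    for y in range(h):
        for x in range(w):
            if binary[y, x] != 0:                # background stays region 0
                continue
            neighbor_labels = [labels[y + dy, x + dx]
                               for dx, dy in offsets
                               if 0 <= x + dx < w and 0 <= y + dy < h
                               and labels[y + dy, x + dx] > 0]
            if neighbor_labels:
                labels[y, x] = min(neighbor_labels)
                for other in neighbor_labels:
                    union(labels[y, x], other)   # record the equivalency
            else:
                next_region += 1                 # new region number
                labels[y, x] = next_region
                parent[next_region] = next_region

    # Second pass: collapse each set of equivalent regions to its lowest number.
    for y in range(h):
        for x in range(w):
            if labels[y, x] > 0:
                labels[y, x] = find(labels[y, x])
    return labels
```

With OpenCV you would binarize first (for example with cv2.threshold so the foreground is 0 and the background 1) and pass the result to label_regions.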
There are one-pass algorithms that avoid the need for an equivalency check altogether, though such algorithms are a bit harder to implement. I would recommend first implementing the traditional 4-connected algorithm, solving its problems, and then introducing an option to use 8-connectivity instead. (This option is common in image processing libraries.) Once you have 4-connected and 8-connected region labeling working you'll have a good algorithm that will find many uses. In searching for academic papers on the subject, check for "region labeling," "blobs," "contours," and "connectivity."
For grayscale images that need to be binarized, your thresholding algorithm will likely become a weak point in your chain of algorithms. For help with thresholding, get a copy of the Gonzalez and Woods book. For OCR, check out the book Character Recognition Systems by Cheriet, Kharma, Liu, and Suen.
I propose this implementation of 8-connectivity connected-component labeling, posted on GitHub.