Python - Detecting desired corners of an image

I need help writing a Python algorithm capable of detecting the corners of an image. So far I have a thresholded image, and I have been using cornerHarris from OpenCV to detect all the corners. My problem is filtering all those points so that only the ones I want are output. Maybe I can do this with a loop?
In my case, I want the two lowest and the two highest corner points. My main interest is to obtain the pixel coordinates of these corners. You can see an example of an image I'm processing here:
In this image I have drawn the corner points I'm interested in.

There are several ways to solve this problem. In real-world applications it's rare (that is, actually never occurs) that you need to solve a problem once for a single image. If you have additional images it would be nice to see how much the object of interest varies.
One method to find corners is the convex hull. This method is more generally used to find a convex shape encompassing scattered points, but it's worth knowing about and implementing.
https://en.wikipedia.org/wiki/Convex_hull
What's handy about the convex hull is that the concept of a "corner" (a vertex on the convex hull polygon) is easy to grasp and doesn't rely on parameter settings. You don't have to consider whether a corner is sharp enough, strong enough, pointy enough, unique in its neighborhood, etc.; the convex hull will simply make sense to you.
You should be able to write a functional version of a convex hull "gift wrapping" algorithm in a reasonable period of time.
https://en.wikipedia.org/wiki/Gift_wrapping_algorithm
There are many ways to compute the convex hull, but don't get lost in all the different methods. Choose one that makes sense to you and implement it. The fastest known method may still be Seidel, but don't even think about running down that rabbit hole. Simple is good.
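If you want to try implementing gift wrapping yourself, here is a minimal pure-Python sketch. The input is assumed to be a list of (x, y) tuples (e.g. the blob's edge pixels), and collinear points are a known edge case left unhandled:
```python
# A minimal pure-Python gift-wrapping (Jarvis march) sketch. The input is
# assumed to be a list of (x, y) tuples, e.g. the blob's edge pixels.
# Collinear points are a known edge case that this sketch does not handle.

def cross(o, a, b):
    """z-component of (a - o) x (b - o); the sign tells which side b is on."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def gift_wrap(points):
    """Return the convex hull vertices, in order around the boundary."""
    pts = list(set(points))
    if len(pts) < 3:
        return pts
    start = min(pts)              # left-most point is always on the hull
    hull, current = [], start
    while True:
        hull.append(current)
        # Candidate hull edge endpoint: keep swapping it for any point that
        # lies on the "outside" of the current -> candidate segment.
        candidate = pts[0] if pts[0] != current else pts[1]
        for p in pts:
            if p != current and cross(current, candidate, p) < 0:
                candidate = p
        current = candidate
        if current == start:      # wrapped all the way around
            return hull
```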
Before you compute the convex hull, you'll need to reduce your white shape to edge points; otherwise the hull algorithm will check far too many points. Reducing the number of points to be considered can be done using edge-finding on the connected component (the white "blob"), edge-finding without first segmenting foreground from background, or any of various simple kernels (e.g. Sobel).
Although the algorithm is called the "convex" hull, your shape doesn't have to be convex, especially if you're only interested in the top and bottom vertices/corners as shown in your sample image.
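Putting the pieces together with OpenCV instead of a hand-rolled hull, a hedged sketch that keeps the two highest and two lowest hull vertices (the file name is an assumption, and the two-value findContours return signature of OpenCV 4.x is assumed):
```python
# Outline the blob, take its convex hull, then keep the two hull vertices
# with the smallest y (top corners) and the two with the largest y
# (bottom corners).
import cv2
import numpy as np

binary = cv2.imread("thresholded.png", cv2.IMREAD_GRAYSCALE)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
blob = max(contours, key=cv2.contourArea)       # largest white blob

hull = cv2.convexHull(blob).reshape(-1, 2)      # (N, 2) array of (x, y) vertices

order = np.argsort(hull[:, 1])                  # sort hull vertices by y
top_two = hull[order[:2]]                       # smallest y = highest in the image
bottom_two = hull[order[-2:]]                   # largest y = lowest in the image
print("top corners:", top_two.tolist(), "bottom corners:", bottom_two.tolist())
```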
Corner finders can be a bit disappointing, frankly, especially since the name implies, "Hey, it'll just find corners all the time." There are some good ones out there, but you could spend a lot of time investigating all the alternatives. Even then you'll likely have to set thresholds, consider whether your application will yield the occasional weird result given the shape and scale of corners, and so on.
Although you mention wanting to find only the top and bottom points, if you wanted to find those two odd triangular outcroppings on the left side the corner-finding gets a little more complicated; using the convex hull keeps this very simple.
Although you want to find a robust solution to corner detection, preferably using a known algorithm for which performance can be understood easily, you also want to avoid overgeneralizing. In any case, review some list of corner detectors and see what strikes your fancy. If you see a promising algorithm that looks easy-ish to implement, why not try implementing it?
https://en.wikipedia.org/wiki/Corner_detection

Related

Is there any way to detect edges in this situation?

I am trying to detect edges using Python.
There are hundreds of algorithms for edge detection; however, my image is very obscure and unclear. The most serious problem is that one edge is located at a local maximum of the signal, while the other edge is slightly shifted from the local maximum. On closer examination, I found that the other edge sits at one of the inflection points of the original values. I have depicted this as a simplified situation.
Is there a simple and elegant way to detect edges in situations like this?
No, there is no simple and beautiful method to detect edges. This is an ill-posed problem. In particular, there is no absolute criterion to tell signal from noise.
A not-so-bad method is to consider the peaks of the derivative, provided they correspond to a sufficiently high step in the signal.
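As a rough illustration of that idea, here is a minimal sketch assuming a 1-D signal in a NumPy array; the file name and the step threshold are assumptions you would tune against your own noise level:
```python
import numpy as np
from scipy.signal import find_peaks

signal = np.loadtxt("profile.txt")               # hypothetical 1-D intensity profile
deriv = np.abs(np.gradient(signal))              # magnitude of the first derivative

# Keep only derivative peaks corresponding to a sufficiently high step.
min_step = 0.1 * (signal.max() - signal.min())   # assumed noise threshold
edges, _ = find_peaks(deriv, height=min_step)
print("edge positions (indices):", edges)
```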

How to create steerable Edge Detection filters using Python or discard edges that don't conform to desired angle

I know how to do basic Canny edge detection using OpenCV. However, I need to discard all edges that do not fall within 15 degrees of a predetermined angle.
Any help would be greatly appreciated.
It's an old question, but here is the process you could use:
1. Start by filtering your source image (background subtraction, color filtering, etc.).
2. Apply a generic edge detector or a steerable filter (if you want really good results and are doing this for research purposes, look up the Phase Stretch Transform algorithm).
3. Save those edge points in a vector (or whatever container you prefer).
4. Create a circle-drawing algorithm (the main idea follows).
Your circle-drawing algorithm (CDA from here on) will take every point returned by your edge filter.
For each point it builds circles of variable diameter in [Dmin; Dmax], based on the minimum/maximum distance you can accept for two points to be considered on the same line.
If no next pixel is present in the circle octant corresponding to your angle, simply dismiss the point.
Once you have the lines that match your angle, you can sort them by length to dismiss lines that are probably due to noise.
You should also note that there are other methods; this one has some good aspects:
1. It's robust against noise and low-quality images/video.
2. It's CUDA-friendly (i.e. easy to push into parallel processing).
3. It's fast and roughly more accurate than most basic line detectors.
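If you just want to filter a Canny result by angle, a simpler alternative (not the circle-based method above) is to mask the Canny edges using the Sobel gradient orientation. A hedged sketch, with file name, target angle and tolerance as assumptions:
```python
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

# The gradient is perpendicular to the edge, so rotate by 90 degrees and
# work modulo 180 (an edge at 10 degrees equals an edge at 190 degrees).
edge_angle = (np.degrees(np.arctan2(gy, gx)) + 90.0) % 180.0

target, tol = 45.0, 15.0                      # assumed desired edge angle
diff = np.abs(edge_angle - target) % 180.0
diff = np.minimum(diff, 180.0 - diff)         # smallest angular difference

filtered = np.where((edges > 0) & (diff <= tol), 255, 0).astype(np.uint8)
cv2.imwrite("filtered_edges.png", filtered)
```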

Detecting the centre of a curved shape with opencv

I've been trying for a while to find the centre of a curved shape (for example a banana). I can do all the basics, such as creating a binary image and locating the contour. However, the centroid function (quite correctly) finds a point that lies outside of the contour. The point I require must be inside the contour. I've attached an image which should explain things better.
If anyone has any ideas, or has seen something similar I would really appreciate some help.
You could look at this answer, What is the fastest way to find the "visual" center of an irregularly shaped polygon?
Basically skeletonisation algorithms should help (in terms of efficiency and accuracy as compared to continuous erosion, which would fail in some cases), since they narrow down the set of possible valid points to a set of line segments, which you can then do some sort of conditional processing on.
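If you just need a point guaranteed to lie inside the contour, a simpler alternative to full skeletonisation is the distance transform. A minimal sketch, with the file name as an assumption:
```python
# The pixel with the largest distance-transform value is always inside the
# shape and sits at its thickest part, which is often a reasonable
# "visual centre" for elongated shapes like a banana.
import cv2
import numpy as np

mask = cv2.imread("banana_mask.png", cv2.IMREAD_GRAYSCALE)
mask = (mask > 0).astype(np.uint8)

dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
y, x = np.unravel_index(np.argmax(dist), dist.shape)
print("interior centre:", (x, y))
```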

Given a contour outlining the edges of an 'S' shape in OpenCV/Python, what methods can be used to trace a curve along the center of the shape?

Given a contour outlining the edge of the letter S (in comic sans for example), how can I get a series of points along the spine of this letter in order to later represent this shape using lines, cubic spline or other curve-representing technique? I want to process and represent the shape using 30-40 points in Python/OpenCV.
Morphological skeletonization could help with this but the operation always seems to produce erroneous branches. Is there a better way to collapse the contour into just the 'S' shape of the letter?
In the example below you can see the erroneous 'serpent's tongue' like branches that are produced by morphological skeletonization. I don't know if it's fair to call them erroneous if that's what the algorithm is supposed to do, but I would prefer they not be there.
Below is the comic sans alphabet:
Another problem with skeletonization is that it is computationally expensive, but if you know a way of making it robust to forming 'serpent's tongue' like branches then I will give it a try.
Actually, vectorizing fonts is not a trivial problem and is quite tricky. To properly vectorize fonts using Bezier curves you'll need tracing. There are many libraries you can use for tracing images, for example Potrace. I'm not knowledgeable about Python, but based on my experience with a similar project in C++, there are two approaches, described below:
A. Fit the contour using cubic Beziers
This method is quite simple, although a fair amount of work is needed. I believe it also works well if you want to fit skeletons obtained from thinning.
Find the contour/edge of the object; you can use the OpenCV function findContours().
The entire shape can't be represented with a single cubic Bezier, so divide the contour into several segments using Ramer-Douglas-Peucker (RDP). The important thing in this step: don't delete any points; use RDP only to split the points into segments. See the colored segments in the image below.
For each segment, where S is a set of n points S = (s0, s1, ..., sn), fit a cubic Bezier using least-squares fitting (a sketch follows below).
Illustration of least-squares fitting:
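As a rough illustration in code, here is a hedged sketch of the least-squares step for a single segment, with the first and last points fixed as the end control points. Chord-length parameterisation is an assumption; the answer above does not specify one.
```python
import numpy as np

def fit_cubic_bezier(S):
    """S: (n, 2) array of ordered points. Returns the 4 control points."""
    S = np.asarray(S, dtype=float)
    p0, p3 = S[0], S[-1]

    # Chord-length parameter t_i in [0, 1] for each sample point.
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(S, axis=0), axis=1))]
    t = d / d[-1]

    # Cubic Bernstein basis: B(t) = b0*P0 + b1*P1 + b2*P2 + b3*P3.
    b0 = (1 - t) ** 3
    b1 = 3 * (1 - t) ** 2 * t
    b2 = 3 * (1 - t) * t ** 2
    b3 = t ** 3

    # Move the known P0/P3 terms to the right-hand side, solve for P1 and P2.
    A = np.column_stack([b1, b2])                     # (n, 2)
    rhs = S - np.outer(b0, p0) - np.outer(b3, p3)     # (n, 2)
    (p1, p2), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.array([p0, p1, p2, p3])
```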
B. Resolution-Independent Curve Rendering
This method as described in this paper is quite complex but one of the best algorithms available to display vector fonts:
Find the contour (the same as in method A).
Use RDP; unlike in method A, here RDP is used to remove points so that the contour is simplified.
Do a Delaunay triangulation.
Draw Bezier curves on the outer edges using the method described in the paper.
The following simple idea might be useful.
Calculate the medial axis of the outer contour. This ensures connectivity of the curves.
Find the branch points. Depending on their length, you can delete branches in order to eliminate the "serpent's tongue" problem.
Hope it helps.
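A minimal sketch of the medial-axis idea with scikit-image, assuming a binary mask of the letter (file name is an assumption). Branch points are found here by a simple neighbour count; pruning the short spurs that hang off them would be the next step:
```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import medial_axis

letter = np.load("letter_s_mask.npy") > 0
skeleton, distance = medial_axis(letter, return_distance=True)

# A skeleton pixel with more than two skeleton neighbours is a branch point;
# `distance` holds the local half-width, useful when deciding what to prune.
neighbours = convolve(skeleton.astype(int), np.ones((3, 3)), mode="constant") - skeleton
branch_points = skeleton & (neighbours > 2)
print("branch points:", np.argwhere(branch_points))
```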

Robust detection of grid pattern in an image

I have written a program in Python which automatically reads score sheets like this one
At the moment I am using the following basic strategy:
Deskew the image using ImageMagick
Read into Python using PIL, converting the image to B&W
Calculate the sums of pixels in the rows and the columns
Find peaks in these sums
Check the intersections implied by these peaks for fill.
The result of running the program is shown in this image:
You can see the peak plots below and to the right of the image shown in the top left. The lines in the top left image are the positions of the columns and the red dots show the identified scores. The histogram bottom right shows the fill levels of each circle, and the classification line.
The problem with this method is that it requires careful tuning and is sensitive to differences in scanning settings. Is there a more robust way of recognising the grid, one which requires less a priori information (at the moment I am using knowledge about how many dots there are) and is more robust to people drawing other shapes on the sheets? I believe it may be possible using a 2D Fourier transform, but I'm not sure how.
I am using the EPD, so I have quite a few libraries at my disposal.
First of all, I find your initial method quite sound and I would have probably tried the same way (I especially appreciate the row/column projection followed by histogramming, which is an underrated method that is usually quite efficient in real applications).
However, since you want to go for a more robust processing pipeline, here is a proposal that can probably be fully automated (also removing at the same time the deskewing via ImageMagick):
Feature extraction: extract the circles via a generalized Hough transform (a sketch of this step is shown below, after the steps). As suggested in other answers, you can use OpenCV's Python wrapper for that. The detector may miss some circles, but this is not important.
Apply a robust alignment detector using the circle centers. You can use the Desolneux parameter-less detector described here. Don't be afraid of the math; the procedure is quite simple to implement (and you can find example implementations online).
Get rid of diagonal lines by a selection on the orientation.
Find the intersections of the lines to get the dots. You can use these coordinates for deskewing by assuming ideal fixed positions for these intersections.
This pipeline may be a bit CPU-intensive (especially step 2 that will proceed to some kind of greedy search), but it should be quite robust and automatic.
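For step 1 above, a hedged sketch of the circle extraction with OpenCV's Hough transform; every parameter value here is an assumption that depends on the scan resolution, and so is the file name:
```python
import cv2
import numpy as np

gray = cv2.imread("score_sheet.png", cv2.IMREAD_GRAYSCALE)
gray = cv2.medianBlur(gray, 5)

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=25)
centres = circles[0, :, :2] if circles is not None else np.empty((0, 2))
print("detected", len(centres), "circle centres")
```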
The correct way to do this is to use Connected Component analysis on the image, to segment it into "objects". Then you can use higher level algorithms (e.g. hough transform on the components centroids) to detect the grid and also determine for each cell whether it's on/off, by looking at the number of active pixels it contains.
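A minimal sketch of that connected-component idea, assuming the marks come out white on a black background after thresholding; the area bounds and file name are assumptions used only for illustration:
```python
import cv2

gray = cv2.imread("score_sheet_bw.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Default 8-connectivity; component 0 is the background.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

# Keep blobs whose pixel count looks like a filled circle, not a stray stroke.
marks = [tuple(centroids[i]) for i in range(1, n)
         if 50 < stats[i, cv2.CC_STAT_AREA] < 2000]
print(len(marks), "candidate marks; feed these centroids to the grid/Hough fit")
```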
