I'm trying to detect the key intersection points of lines on a sports court to allow me to use homography to map a player's position in the camera frame to the corresponding position on a 2D model of the court. The camera position and environment details such as lighting will vary.
My current approach has been to use Canny edge detection and the Hough transform to detect the lines, then k-means to group them into horizontal/vertical groups. Once I obtain the key points (court floor corners and intersection points of the service lines), I know I can use findHomography to obtain the perspective transform for the court. My problems are:
Even if I get the intersection points for all the lines, given that the camera position (and therefore frame rotation) is not fixed, how can I know which lines/intersection points are which? Is there a technique to take the intersections and try to fit them to a model court? I have the exact dimensions of the court floor, which is the area I care about.
There's a lot of "noise" created when using HoughLines and HoughLinesP - see the image below. What would be the best way to merge these lines? Or, if fitting to a model, is that not a problem?
Is this the best modern approach or would attempting something like training a CV model to do segmentation be a better approach?
Sample output from Input/HoughLines/HoughLinesP (L to R):
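For reference, a minimal sketch of the pipeline described above (Canny, then probabilistic Hough, then k-means grouping of line angles); the file name and all thresholds here are illustrative assumptions, not values from the original post:

```python
import cv2
import numpy as np

img = cv2.imread("court.png")  # hypothetical input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Probabilistic Hough transform; thresholds are illustrative.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)

# Group lines into two angle clusters. Doubling the angle before taking
# cos/sin makes directions 180 degrees apart map to the same feature,
# so near-horizontal lines cluster together regardless of sign.
angles = np.array([np.arctan2(y2 - y1, x2 - x1)
                   for x1, y1, x2, y2 in lines[:, 0]])
feats = np.stack([np.cos(2 * angles), np.sin(2 * angles)],
                 axis=1).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1e-3)
_, labels, _ = cv2.kmeans(feats, 2, None, criteria, 10,
                          cv2.KMEANS_RANDOM_CENTERS)
# labels[i] is 0 or 1: the "horizontal-ish" vs "vertical-ish" group.
```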
If we find contours for an image using cv2.findContours(), the coordinates in each contour are of datatype int32.
Example: Printing contours[0] gives,
array([[[  0,   1]],
       [[  1,   1]],
       ...
       [[400, 232]]])
Is there any way to find the coordinates in the contour with higher (sub-pixel) precision? For example:
array([[[  0.11,   1.78]],
       [[  1.56,   1.92]],
       ...
       [[400.79, 232.35]]])
In principle, in a properly sampled gray-scale image you do have information to guess at the location with sub-pixel precision. But it’s still a guess. I’m not aware of any existing algorithms that try to do this, I only know of algorithms that trace the contour in a binary image.
There have been a few papers published (that I’m aware of) that try to simplify the outline polygon in a way that you get a better representation of the underlying contour. These papers all make an assumption of smoothness that allows them to accomplish their task, without such an assumption there is no other information in the binary image than what is extracted by the simple pixel contour algorithm. The simplest method in this category is smoothing the polygon: move each vertex a little closer to the line formed by its two neighboring vertices.
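A minimal sketch of that smoothing idea in Python, assuming a closed contour as returned by cv2.findContours; the step size alpha and the iteration count are illustrative:

```python
import numpy as np

def smooth_contour(cnt, alpha=0.5, iterations=5):
    """Move each vertex a fraction `alpha` toward the midpoint of its two
    neighbours (which lies on the line through them). Works on a closed
    contour and returns float (sub-pixel) coordinates."""
    pts = cnt.reshape(-1, 2).astype(np.float64)
    for _ in range(iterations):
        mid = (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0)) / 2.0
        pts = (1.0 - alpha) * pts + alpha * mid
    return pts
```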
If the goal however is to get more precise measurements of the object, then there are several different approaches to use the gray-value data to significantly improve the precision of these measurements. I’m familiar with two of them:
L.J. van Vliet, "Gray-scale measurements in multi-dimensional digitized images", PhD thesis, Delft University of Technology, 1993.
N. Sladoje and J. Lindblad, "High Precision Boundary Length Estimation by Utilizing Gray-Level Information", IEEE Transactions on Pattern Analysis and Machine Intelligence 31(2):357-363, 2009.
The first one does area, perimeter and local curvature in 2D (higher dimensions lead to additional measurements), based on the assumption of a properly sampled band-limited image. The latter does length, but is the start point of the “pixel coverage” model: the same authors have papers also on area and Feret diameters. This model assumes area sampling of a sharp boundary.
I have been trying to create custom regions for states. I want to fill the state map using the area of influence of points.
The image below represents what I have been trying to do. The left image shows the points, and I just want to fill all the areas as in the right image. I have used Voronoi/Thiessen polygons, but that leaves some points outside their area, since it only takes the centroid into account when coloring a polygon.
Is there any algorithm or process to achieve this? I am currently using Python.
You've identified your basic problem: you used a cluster-unit Voronoi algorithm, which is too simplistic for your application. You need to apply that same algebra to the points themselves, not to the region as a single-statistic entity.
To this end, I strongly recommend a multi-class SVM (Support Vector Machine) algorithm, which will identify the largest gaps between identified regions (classes) of points. Use a Gaussian (RBF) kernel, kept fairly smooth, to handle non-linear boundaries. You will almost certainly get simple curves instead of straight lines.
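A hedged sketch of that idea with scikit-learn; the point coordinates, region labels, and kernel parameters below are all hypothetical stand-ins (note that for an RBF/Gaussian kernel the smoothness knob is gamma rather than a polynomial degree):

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical inputs: 2D coordinates of the points and their region ids.
rng = np.random.default_rng(0)
points = rng.random((200, 2))
region_ids = rng.integers(0, 5, 200)

# RBF kernel; a small gamma keeps the boundaries smooth, gently curved.
clf = SVC(kernel="rbf", gamma=2.0, C=10.0)
clf.fit(points, region_ids)

# Fill the map: predict a region id for every cell of a dense grid.
xx, yy = np.meshgrid(np.linspace(0, 1, 300), np.linspace(0, 1, 300))
filled = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
```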
I know how to do basic Canny edge detection using OpenCV. However, I need to discard all edges that do not fall within 15 degrees of a predetermined angle.
Any help would be greatly appreciated.
It's an old question, but here is the process you could use.
1] Start by filtering your source image (background subtraction/color/etc.).
2] Apply a generic edge detector or a steerable filter (or, if you want some really good results and are doing it for research purposes, look at the Phase Stretch Transform algorithm).
3] Save those edge points in a vector.
4] Create a circle drawing algorithm (here is the main idea):
Your circle drawing algorithm (CDA from here on) will take every point returned by your edge filter.
For each point it will build circles of variable diameter [Dmin; Dmax], based on the max/min distance you can accept for two points to be considered on the same line.
If no next pixel is present in the circle octant corresponding to your angle, simply dismiss the point.
Once you have the lines that match your angle, you can sort them by length to dismiss lines that are probably due to noise (a sketch of the dismissal step follows).
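A rough sketch of that dismissal step: for each edge point, look for another edge point inside the annulus [Dmin; Dmax] whose bearing lies within the accepted tolerance of the target angle, and drop points that have none. A brute-force neighbour search is used here for clarity (a KD-tree would be faster); all names and thresholds are illustrative.

```python
import cv2
import numpy as np

def filter_edges_by_direction(edges, target_angle_deg, tol_deg=15.0,
                              d_min=2.0, d_max=6.0):
    """Keep only edge pixels that have a neighbour in the annulus
    [d_min, d_max] along the target direction (lines are undirected,
    so the opposite direction counts too)."""
    ys, xs = np.nonzero(edges)
    pts = np.stack([xs, ys], axis=1).astype(np.float64)
    keep = np.zeros(len(pts), dtype=bool)
    target = np.deg2rad(target_angle_deg)
    tol = np.deg2rad(tol_deg)
    for i, (x, y) in enumerate(pts):
        dx, dy = pts[:, 0] - x, pts[:, 1] - y
        dist = np.hypot(dx, dy)
        ring = (dist >= d_min) & (dist <= d_max)
        if not ring.any():
            continue
        bearing = np.arctan2(dy[ring], dx[ring])
        # angular distance to the target direction, wrapped to [-pi, pi]
        diff = np.abs(np.angle(np.exp(1j * (bearing - target))))
        diff = np.minimum(diff, np.pi - diff)  # fold opposite directions
        keep[i] = bool((diff <= tol).any())
    out = np.zeros_like(edges)
    out[ys[keep], xs[keep]] = 255
    return out

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
edges = cv2.Canny(img, 50, 150)
kept = filter_edges_by_direction(edges, target_angle_deg=30.0)
```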
You should also note that there are other methods; this method has some good aspects:
1- It's robust against noise and low-quality images/video.
2- It's CUDA compliant (i.e. easy to push into parallel processing).
3- It's fast and roughly more accurate than most basic line detectors.
I'm doing image processing and mathematical morphology using scipy.ndimage and really enjoy it. Our work involves simulating charges moving through various films, and we're trying to use image analysis tools to estimate why different morphologies work better than others.
I quickly was able to use ndimage.label and distance_transform_edt to find the connected components and get sizing on them. I also implemented a breadth-first search to find minimal paths between the components and the edges, which represent electrodes.
Now, I'd like to determine "bottleneck" or "narrow channel" regions. I'm not even sure if I'm searching for the right keywords, since my expertise isn't really in image processing. I've given two examples below. I want to find features like the red circles, count them, and determine their size distributions. (Consider that charges will move more easily through wider bottlenecks.)
The problem is that I can't label these, since they're not independent components. The distance transforms give me small numbers at the edges. I want something like the smallest distance through these bottlenecks.
Any advice where to look or general strategies?
One could use the medial axis transform to calculate the radius of a ball fitted at each point in the black set to obtain the nooks in the image. In the following example we use the watershed of the distance function, weighted by the distance function itself, to obtain contours which separate the minima (the white components in the image). This gives a path weighted by the maximum value of the distance function separating two white components. I have done this in MATLAB, but I think it's easy to replicate in the scikit-image toolbox.
Image 1:
Filling the holes since they aren't paths:
Distance function: (heat map)
Watershed of distance function (paths):
Watershed weighted by Distance function (final paths):
Image 2:
Distance function:
Watershed of distance function (paths):
Watershed weighted by Distance function (final paths):
Thus, as demonstrated, we have calculated a skeleton by zone of influence (SKIZ) using the watershed of the distance function (city-block metric used here). Note also that holes on the borders are not filled, since imfill ignores holes touching the border; if these need to be filled, one can pad the image with a frame first so that imfill can fill them later.
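A minimal sketch of that pipeline in Python with scipy/scikit-image, using a toy two-blob image as a stand-in for the real film data; everything here is an illustrative assumption, not a port of the original MATLAB code:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed, find_boundaries

# Toy stand-in: two components separated by a gap (the "bottleneck").
binary = np.zeros((60, 60), dtype=bool)
binary[5:25, 5:55] = True
binary[35:55, 5:55] = True

filled = ndi.binary_fill_holes(binary)  # holes aren't paths

# Distance of every background pixel to the nearest component
# (city-block metric, as in the demonstration above).
dist = ndi.distance_transform_cdt(~filled, metric='taxicab')

# Each white component seeds one watershed basin.
markers, _ = ndi.label(filled)

# Watershed of the distance function: the ridges where basins meet
# form the SKIZ, i.e. the separating paths between components.
labels = watershed(dist, markers)
skiz = find_boundaries(labels)

# Weight the SKIZ by the distance function: the minimum along a ridge
# segment is half the width of that bottleneck.
bottlenecks = np.where(skiz, dist, 0)
```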
I am trying to detect arcs inside an image. The one piece of information I know for certain is the radius of the arc. I can also try to obtain the centre of the circle whose arc I want to identify.
Is there any algorithm in OpenCV which can tell us whether a detected contour (or edge from Canny edge detection) is an arc or an approximation of one?
Any help on how this would be possible in OpenCV with Python, or even a general approach, would be very helpful.
Thanks
If you think that there will not be any change in the shape (I mean the arc won't become a line or something like that), then you can have a look at the Generalized Hough Transform (GHT), which can detect any shape you want.
Cons:
There is no direct function for GHT in the OpenCV library (at least there wasn't when this was written), but you can find several source-code implementations on the internet.
It is sometimes slow, but can become fast if you set the parameters properly.
It won't be able to detect the shape if the shape changes. For example, I tried to detect squares using GHT and got good results, but when the squares were not perfect (i.e. rectangles or something like that), it didn't detect them.
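For what it's worth, OpenCV releases newer than this answer do ship a built-in Ballard GHT. A minimal sketch, assuming your build exposes cv2.createGeneralizedHoughBallard and with the file names as hypothetical stand-ins:

```python
import cv2

template = cv2.imread("arc_template.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)            # hypothetical

ght = cv2.createGeneralizedHoughBallard()
ght.setTemplate(template)    # edges are extracted internally via Canny
ght.setMinDist(50)           # minimum distance between detections
ght.setVotesThreshold(40)    # lower -> more (noisier) detections

positions, votes = ght.detect(scene)
# each row of positions is (x, y, scale, rotation) of a template match
```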
You can do it this way:
Convert the image to edges using the Canny filter.
Make the image binary using the threshold function; there are options for regular, Otsu, or adaptive thresholding.
Find contours of sufficient length (findContours function).
Iterate over all the contours and try to fit an ellipse (fitEllipse function).
Validate the fitted ellipses by radius.
Check whether each detected ellipse is a good fit by checking how many of the contour pixels lie on it.
Select the best one.
You can try to increase the speed using RANSAC, each time selecting 6 points from the binarized image and trying to fit an ellipse (a sketch of the steps above follows).
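A minimal sketch of those steps, assuming OpenCV 4, a grayscale input, and a known target radius; the file name and all tolerances are illustrative assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("arcs.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
edges = cv2.Canny(img, 50, 150)                     # already binary

contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

target_radius = 40.0          # the known arc radius (assumption)
best_fit, best_score = None, 0.0
for cnt in contours:
    if len(cnt) < 20:         # fitEllipse needs >= 5 points; demand more
        continue
    (cx, cy), (w, h), angle = cv2.fitEllipse(cnt)
    r = (w + h) / 4.0         # mean semi-axis as an effective radius
    if abs(r - target_radius) > 0.15 * target_radius:
        continue              # validate by radius
    # Score the fit: fraction of contour points near the fitted circle
    # (valid here because the arcs come from circles of known radius).
    pts = cnt.reshape(-1, 2).astype(np.float64)
    d = np.abs(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r)
    score = float(np.mean(d < 2.0))
    if score > best_score:
        best_fit, best_score = ((cx, cy), (w, h), angle), score
```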
My math is rusty, but...
What about evaluating a contour by looping over its composite edge-nodes and finding those where the angle between the edges doesn't change too rapidly AND doesn't change sign?
A chain of angles (θ) where:
0 < θ_i < θ_max
with a number of edges (c) where:
c > d_const
would indicate an arc of:
radius ∝ 1/((θ_i + θ_{i+1} + ... + θ_n)/n)
or:
r ∝ 1/θ_avg
and:
arc length ∝ c
A way of finding these angles is discussed in "Get angle from OpenCV Canny edge detector".
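A rough sketch of that idea, assuming cnt is a contour from cv2.findContours; the θ_max threshold and the minimum chain length are illustrative:

```python
import numpy as np

def arc_segments(cnt, theta_max=0.3, min_edges=10):
    """Scan the chain of turning angles along a contour and return
    (start, end, approx_radius) for runs where the turn stays small
    and keeps the same sign."""
    pts = cnt.reshape(-1, 2).astype(np.float64)
    v = np.diff(pts, axis=0)                     # edge vectors
    heading = np.arctan2(v[:, 1], v[:, 0])
    turn = np.diff(heading)
    turn = (turn + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)

    segments, start = [], 0
    for i in range(1, len(turn)):
        ok = (turn[i] * turn[i - 1] >= 0) and (abs(turn[i]) < theta_max)
        if not ok:
            if i - start >= min_edges:           # arc length ~ chain length
                theta_avg = np.abs(turn[start:i]).mean()
                # r is proportional to 1/theta_avg, as derived above
                radius = 1.0 / theta_avg if theta_avg > 0 else np.inf
                segments.append((start, i, radius))
            start = i
    if len(turn) - start >= min_edges:           # close the final run
        theta_avg = np.abs(turn[start:]).mean()
        segments.append((start, len(turn),
                         1.0 / theta_avg if theta_avg > 0 else np.inf))
    return segments
```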