Finding contour points in subpixel in OpenCV - python

If we find contours for an image using cv2.findContours(), the coordinates in each contour have dtype int32.
Example: Printing contours[0] gives,
array([[[  0,   1]],
       [[  1,   1]],
       ...,
       [[400, 232]]])
Is there any way to find the coordinates in the contour with higher precision (subpixel)? Something like:
array([[[  0.11,   1.78]],
       [[  1.56,   1.92]],
       ...,
       [[400.79, 232.35]]])

In principle, in a properly sampled gray-scale image you do have information to guess at the location with sub-pixel precision. But it’s still a guess. I’m not aware of any existing algorithms that try to do this, I only know of algorithms that trace the contour in a binary image.
There have been a few papers published (that I'm aware of) that try to simplify the outline polygon in a way that gives a better representation of the underlying contour. These papers all make an assumption of smoothness that allows them to accomplish their task; without such an assumption, there is no more information in the binary image than what the simple pixel contour algorithm extracts. The simplest method in this category is smoothing the polygon: move each vertex a little closer to the line formed by its two neighboring vertices.
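In case it helps, here is a minimal NumPy sketch of that vertex-smoothing idea; the function name and parameters are illustrative, not from any library:

    import numpy as np

    def smooth_closed_contour(points, alpha=0.5, iterations=1):
        # Move each vertex a fraction `alpha` of the way toward the
        # midpoint of its two neighbours, i.e. toward the line segment
        # joining them. `points` is an (N, 2) array; the contour is
        # treated as closed, so np.roll wraps around the ends.
        pts = np.asarray(points, dtype=float)
        for _ in range(iterations):
            midpoints = 0.5 * (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0))
            pts = (1.0 - alpha) * pts + alpha * midpoints
        return pts

    # With an OpenCV contour of shape (N, 1, 2) and dtype int32:
    # smoothed = smooth_closed_contour(contours[0].reshape(-1, 2), iterations=3)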
If the goal however is to get more precise measurements of the object, then there are several different approaches to use the gray-value data to significantly improve the precision of these measurements. I’m familiar with two of them:
L.J. van Vliet, “Gray-scale measurements in multi-dimensional digitized images”, PhD thesis Delft University of Technology, 1993. PDF
N. Sladoje and J. Lindblad, “High Precision Boundary Length Estimation by Utilizing Gray-Level Information”, IEEE Transactions on Pattern Analysis and Machine Intelligence 31(2):357-363, 2009. DOI
The first one does area, perimeter and local curvature in 2D (higher dimensions lead to additional measurements), based on the assumption of a properly sampled, band-limited image. The second does length, and is the starting point of the "pixel coverage" model: the same authors also have papers on area and Feret diameters. This model assumes area sampling of a sharp boundary.

Related

Algorithm to create polygons (no Thiessen/Voronoi)

I have been trying to create custom regions for states. I want to fill the state map using the points' areas of influence.
The image below represents what I have been trying. The left image shows the points, and I just want to fill all the areas as in the right image. I have used Voronoi/Thiessen, but it leaves some points outside the area, since it just takes the centroid to color the polygon.
Is there any algorithm or process to achieve that? I am currently working in Python.
You've identified your basic problem: you used a cluster-unit Voronoi algorithm, which is too simplistic for your application. You need to apply that same geometry to the points themselves, not to each region reduced to a single statistic (its centroid).
To this end, I strongly recommend a multi-class SVM (Support Vector Machine), which will find the largest gaps between the identified regions (classes) of points. Use a Gaussian (RBF) kernel with a modest bandwidth to handle the non-linear boundaries. You will almost certainly get gentle curves instead of straight lines.
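As an illustration of that suggestion, here is a sketch using scikit-learn's SVC with an RBF kernel; the sample points, labels, grid extent and parameter values are all placeholders to be replaced with your state data:

    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical data: point coordinates and the state each belongs to.
    points = np.array([[0.0, 0.0], [1.0, 0.5], [3.0, 3.0], [3.5, 2.5]])
    labels = np.array([0, 0, 1, 1])

    # RBF (Gaussian) kernel; gamma controls how curved the boundary is.
    clf = SVC(kernel="rbf", gamma=0.5, C=10.0)
    clf.fit(points, labels)

    # Rasterize the decision regions over a grid covering the map extent,
    # then contour the result (e.g. with matplotlib's contourf) to get
    # the filled areas of influence.
    xx, yy = np.meshgrid(np.linspace(-1, 5, 300), np.linspace(-1, 5, 300))
    region = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)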

How to detect key intersection points on sports court using OpenCV

I'm trying to detect the key intersection points of lines on a sports court to allow me to use homography to map a player's position in the camera frame to the corresponding position on a 2D model of the court. The camera position and environment details such as lighting will vary.
My current approach has been to use Canny edge detection and a Hough transform to detect the lines, then k-means to group them into horizontal/vertical groups. Once I obtain the key points (court floor corners and intersection points of the service lines), I know I can use findHomography to obtain the perspective transform for the court. My problems are:
Even if I get the intersection points for all the lines, given that the camera position (and therefore frame rotation) is not fixed, how can I know which lines/intersection points are which? Is there a technique to take the intersections and try to fit them to a model court? I have the exact dimensions of the court floor, which is the area I care about.
There's a lot of "noise" created when using HoughLines and HoughLinesP - see image below. What would be the best way to merge these lines? Or, if fitting to a model, is that not a problem?
Is this the best modern approach, or would training a CV model to do segmentation be better?
Sample output from Input/HoughLines/HoughLinesP (L to R):
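For reference, a minimal sketch of the detection stage described in the question (Canny, probabilistic Hough, then clustering segment angles into two orientation groups); the file name and every threshold are assumptions that would need tuning per scene:

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    img = cv2.imread("court.png")                  # hypothetical input frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(gray, 50, 150)

    # Probabilistic Hough transform; thresholds are scene-dependent guesses.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=20)

    if lines is not None:
        segs = lines[:, 0]                         # (N, 4): x1, y1, x2, y2
        angles = np.arctan2(segs[:, 3] - segs[:, 1], segs[:, 2] - segs[:, 0])
        # Cluster on the doubled angle so directions 0 and pi coincide
        # (undirected lines are pi-periodic).
        feats = np.c_[np.cos(2 * angles), np.sin(2 * angles)]
        groups = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
        # `groups` splits the segments into the two dominant orientations;
        # near-collinear segments within a group can then be merged before
        # intersecting them and passing the points to cv2.findHomography.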

Find the most significant corner of a skeleton and segment the skeleton at that corner

I have images of ore seams which I have first skeletonised (medial axis multiplied by the distance transform), then extracted corners (see the green dots). It looks like this:
The problem is to find a turning point and then segment the seam by separating it at that turning point. Not all skeletons have turning points; some are quite linear, and the turning points can be in any orientation. But the above image shows a seam which does have a defined turning point. Other examples of turning points look like (using ASCII): "- /- _". "X" turning points don't really exist.
I've tried a number of methods, including downsampling the image, curve fitting, k-means clustering, and corner detection at various thresholds and window sizes, and I haven't figured it out yet. (I'm new to using scikit.)
The technique must be able to give me some value which I can use to heuristically determine whether there is a turning point or not.
What I'd like to do is some sort of two-line ("piecewise"?) regression to find an intersection, or some sort of rotated polynomial regression, then determine if a turning point exists and, if it does, the coordinate that best represents the turning point. Here is my work in progress: https://gist.github.com/anonymous/40eda19e50dec671126a
From there, I learned that a watershed segmentation with appropriate label coordinates should be able to segment the skeleton.
I found this resource: Fit a curve for data made up of two distinct regimes
But I wasn't able to figure out how to apply it to my current situation. More importantly, there's no way for me to guess a priori what the initial coefficients for the fitting function should be, since the skeletons can be in any orientation.
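One way to make the two-line idea concrete, without guessing coefficients: split the ordered skeleton path at every candidate index and fit each half with a total-least-squares line, which sidesteps the orientation problem because perpendicular residuals don't depend on slope. A brute-force sketch (function names illustrative; the points must already be ordered along the path):

    import numpy as np

    def line_fit_residual(pts):
        # Total-least-squares line fit: the squared smallest singular
        # value of the centered points equals the sum of squared
        # perpendicular distances to the best-fit line, so the result
        # is independent of the line's orientation.
        centered = pts - pts.mean(axis=0)
        return np.linalg.svd(centered, compute_uv=False)[-1] ** 2

    def find_turning_point(pts, min_seg=5):
        # Try every split of the ordered path into two runs, fit a line
        # to each, and keep the split with the lowest total residual.
        pts = np.asarray(pts, dtype=float)
        single = line_fit_residual(pts)
        best_idx, best_res = None, np.inf
        for i in range(min_seg, len(pts) - min_seg):
            res = line_fit_residual(pts[:i]) + line_fit_residual(pts[i:])
            if res < best_res:
                best_idx, best_res = i, res
        # A large ratio means two lines fit much better than one; this
        # is the heuristic "is there a turning point?" value.
        improvement = single / max(best_res, 1e-12)
        return best_idx, improvement

The returned ratio can be thresholded to decide whether a turning point exists at all, and pts[best_idx] gives the coordinate to split at.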

Determining "bottleneck" image regions using scipy

I'm doing image processing and mathematical morphology using scipy.ndimage and really enjoy it. Our work involves simulating charges moving through various films, and we're trying to use image analysis tools to estimate why different morphologies work better than others.
I quickly was able to use ndimage.label and distance_transform_edt to find the connected components and get sizing on them. I also implemented a breadth-first search to find minimal paths between the components and the edges, which represent electrodes.
Now, I'd like to determine "bottleneck" or "narrow channel" regions. I'm not even sure I'm searching for the right keywords, since my expertise isn't really in image processing. I've given two examples below. I want to find features like the red circles, count them, and determine their size distributions. (Consider that charges will move more easily through wider bottlenecks.)
The problem is that I can't label these, since they're not independent components. The distance transforms give me small numbers at the edges. I want something like the smallest distance through these bottlenecks.
Any advice where to look or general strategies?
One could use the medial axis transform to calculate the radius of the ball fitted at each point of the black set, which reveals the necks in the image. In the following example we use the watershed of the distance function, weighted by the distance function itself, to obtain contours which separate the minima (the white components in the image). This gives a path weighted by the maximum value of the distance function separating two white components. I have done this in MATLAB, but I think it's easy to replicate in the scikit-image toolbox.
Image 1:
Filling the holes, since they aren't paths:
Distance function (heat map):
Watershed of the distance function (paths):
Watershed weighted by the distance function (final paths):
Image 2:
Distance function:
Watershed of the distance function (paths):
Watershed weighted by the distance function (final paths):
Thus, as demonstrated, we have in effect computed a skeleton by zones of influence (SKIZ) using the watershed of the distance function (cityblock metric used here). Note also that holes touching the border are not filled, since imfill ignores holes on borders. If they need to be filled, one can add a frame around the image so that imfill can fill them later.
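A rough scikit-image translation of the steps above (the input file name is a placeholder, and parameters may need adjusting):

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.segmentation import watershed

    # Hypothetical input: a boolean array, True for the white components.
    binary = np.load("morphology.npy")

    # Fill interior holes so they aren't mistaken for paths.
    filled = ndi.binary_fill_holes(binary)

    # Distance of every black pixel to the nearest white component,
    # using the cityblock (taxicab) metric as in the MATLAB version.
    dist = ndi.distance_transform_cdt(~filled, metric="taxicab")

    # Flood the distance map from the labelled white components; the
    # watershed lines (label 0) form the SKIZ separating them, and the
    # distance values along those lines measure the bottleneck widths.
    markers, _ = ndi.label(filled)
    labels = watershed(dist, markers, watershed_line=True)
    ridge = labels == 0
    widths = dist[ridge]      # e.g. take the minimum per ridge segment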

How to perform image cross-correlation with subpixel accuracy with scipy

The image below shows two circles of the same radius, rendered with antialiasing, except that the left circle is shifted half a pixel horizontally (notice that the circle's horizontal center is in the middle of a pixel on the left, and on a pixel border on the right).
If I perform a cross-correlation, I can take the position of the maximum on the correlation array, and then calculate the shift. But since pixel positions are always integers, my question is:
"How can I obtain a sub-pixel (floating point) offset between two images using cross-correlation in Numpy/Scipy?"
In my scripts, I am using either scipy.signal.correlate2d or scipy.ndimage.filters.correlate, and they seem to produce identical results.
The circles here are just examples, but my domain-specific features tend to have sub-pixel shifts, and currently getting only integer shifts is giving results that are not so good...
Any help will be much appreciated!
The discrete cross-correlation (as implemented by those functions) can only give single-pixel precision. The only solution I can see is to interpolate your 2D arrays onto a finer grid (up-sampling).
Here's some discussion on DSP about upsampling before cross-correlation.
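If full upsampling is too heavy, a common lightweight alternative is to fit a parabola through the correlation peak and its immediate neighbours. A sketch along those lines (the helper is hypothetical, and it assumes the peak does not land on the border of the correlation array):

    import numpy as np
    from scipy import signal

    def subpixel_shift(a, b):
        # Shift of `b` relative to `a` (positive = toward larger indices),
        # estimated by fitting a parabola through the cross-correlation
        # peak and its neighbours. Mean-subtracting the inputs first
        # avoids a DC-dominated peak.
        a = a - a.mean()
        b = b - b.mean()
        corr = signal.correlate2d(a, b, mode="full")
        iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

        def vertex(m1, c, p1):
            # Vertex of the parabola through (-1, m1), (0, c), (+1, p1).
            denom = m1 - 2 * c + p1
            return 0.0 if denom == 0 else 0.5 * (m1 - p1) / denom

        dy = vertex(corr[iy - 1, ix], corr[iy, ix], corr[iy + 1, ix])
        dx = vertex(corr[iy, ix - 1], corr[iy, ix], corr[iy, ix + 1])
        # Zero lag sits at index (shape - 1) in 'full' mode.
        return (b.shape[0] - 1) - (iy + dy), (b.shape[1] - 1) - (ix + dx)

This gives a fractional offset from a single correlation pass; upsampling (or upsampling only around the peak, as the package below does) is more accurate when the correlation surface is not locally parabolic.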
I had a very similar issue, also with shifted circles, and stumbled upon a great Python package called image_registration by Adam Ginsburg. It gives you sub-pixel 2D image shifts and is fairly fast. I believe it's a Python implementation of a popular MATLAB module, which only upsamples the images around the peak of the cross-correlation.
Check it out: https://github.com/keflavich/image_registration
I've been using 'chi2_shifts.py' with good results.
