Follow image contour using marching squares in Python

I am new to Python and would be grateful if anyone could point me in the right direction, or better still to some examples.
I am trying to write a program that converts an image (JPEG or any image file) into G-code or x/y coordinates. Instead of scanning in the x and y directions, I need to follow the contours of the objects in the image: for example, a doughnut with an outer circle and an inner circle, or a face with the face outline and the inner contours of its features.
I know there is something called marching squares, but I am not sure how to do it in Python.
Thank you.

You can also have a look at OpenCV's findContours function, which performs the same operation very fast. It is not pure Python, but there is a very nice Python binding (the new "cv2" module) that makes use of numpy arrays.
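As a minimal sketch (assuming OpenCV 4.x, where findContours returns two values, and a hypothetical filename "doughnut.jpg"), it could look like this:

    import cv2

    img = cv2.imread("doughnut.jpg", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

    # RETR_TREE keeps the contour hierarchy, so the outer circle and the
    # inner hole of a doughnut come back as separate, nested contours.
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE,
                                           cv2.CHAIN_APPROX_SIMPLE)

    for contour in contours:
        # Each contour is an ordered array of (x, y) pixel coordinates
        # that you could translate into G-code moves.
        for x, y in contour.reshape(-1, 2):
            pass  # emit a G1 move to (x, y) here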

You can find an implementation of marching squares in scikit-image: https://scikit-image.org/docs/dev/auto_examples/edges/plot_contours.html
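For instance, here is a minimal sketch of find_contours (which implements marching squares), assuming a hypothetical filename "image.png":

    import numpy as np
    from skimage import io, measure

    img = io.imread("image.png", as_gray=True)

    # 0.5 is the iso-value halfway between background (0) and object (1).
    contours = measure.find_contours(img, 0.5)

    for contour in contours:
        # Each contour is an (N, 2) array of (row, col) coordinates that
        # traces one closed boundary in order -- ready to map to x/y.
        rows, cols = contour[:, 0], contour[:, 1]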

I would suggest the following procedure:
1) Convert your image to a binary image (an n x n numpy array): 1 for object pixels and 0 for background pixels.
2) Since you want to follow a contour, you can see this problem as: finding all the object pixels belonging to the same object. This can be solved with connected-component labeling.
3) Once you have your objects identified, run the marching squares algorithm over each object. Marching squares divides your image into squares and evaluates the value at each vertex of a given square. It finds the border by locating the squares in which at least one vertex has the value 0 while the other vertices have the value 1: the border/contour is contained in such a square. A sketch of this pipeline follows below.
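Here is a rough sketch of that procedure with scikit-image, assuming a grayscale array img is already loaded:

    import numpy as np
    from skimage import measure

    binary = (img > 0.5).astype(np.uint8)   # step 1: 1 = object, 0 = background
    labels = measure.label(binary)          # step 2: connected-component labeling

    for lab in range(1, labels.max() + 1):  # step 3: marching squares per object
        mask = (labels == lab).astype(float)
        # find_contours implements marching squares on the single-object mask.
        contours = measure.find_contours(mask, 0.5)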

Related

How to use Numpy to compute the outer contour of binary image and fill inner area?

I want to use Numpy (without any other packages) to find the outer contour of the 1st binary image and fill the inside area so it looks like the 2nd image, basically filling the holes of the wheels, but I don't know how to do it. Does anyone have any ideas?
You're looking to implement the flood fill algorithm. The high-level idea is:
Pick an origin point, say (0, 0).
Run a breadth-first or depth-first search from the origin to collect a list of points with the same RGB value. You pick a pixel (starting with the origin), find its horizontal and vertical neighbours, and if the colour is the same, repeat on the new pixel.
Set every pixel that wasn't reached by the search to white.
This operation has been implemented many times before. If you are not opposed to using a new library, take a look at findContours and drawContours in OpenCV. OpenCV operates on numpy arrays so you won't have to transform the data.
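A hedged sketch of that route (assuming OpenCV 4.x and that binary is a single-channel uint8 image with 255 for object pixels):

    import cv2
    import numpy as np

    # RETR_EXTERNAL keeps only the outermost contours, ignoring the holes.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    filled = np.zeros_like(binary)
    # thickness=cv2.FILLED paints the whole interior, which fills the holes.
    cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)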

How to separate monochromatic objects of different sizes in opencv

I want to separate a noiseless 1-bit (black and white) image with white circles based on the concave part of the outline.
Please refer to the picture below.
This is the white object to separate:
The target result is:
Here is my implementation with the watershed algorithm:
The above result is not what I want.
If the size of the separated objects is similar, my algorithm is fine, but if the size difference is large, a problem occurs as shown in the picture above.
I would like to implement an opencv algorithm that can segment a region like the second picture.
However, the input photo is not necessarily a perfect circle.
It can be oval like the picture below:
Or it can be squished:
However, I would like to separate it based on the concave parts of the outline in any case.
I think it can be implemented by making good use of the distanceTransform function, but I'm not sure how to approach it.
Please let me know which approach I should look into.
Thank you.
Here is an algorithm which should give you a good start.
Compute all contours.
For each contour, compute the convexity defects. If there are no defects, the contour is an isolated circle and you can segment it out.
After you have handled all the isolated circles, you can work out the remaining contours by counting the convexity defects: the number of circles N for each contour is the number of convexity defects divided by 2.
Use a clustering algorithm (a Gaussian mixture, https://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html, should do well given the shapes you have) and cluster the "white" points, using N as the number of clusters to be found. A sketch of these steps follows below.
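A sketch of these steps (assuming OpenCV 4.x, scikit-learn, a binary uint8 image binary, and a defect-depth threshold of 10 pixels, which is a guess to tune):

    import cv2
    import numpy as np
    from sklearn.mixture import GaussianMixture

    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    for cnt in contours:
        hull = cv2.convexHull(cnt, returnPoints=False)
        defects = cv2.convexityDefects(cnt, hull)
        if defects is None:
            continue  # no defects: an isolated circle, segment it directly

        # Defect depths are reported in 1/256-pixel units.
        depths = defects[:, 0, 3] / 256.0
        n_circles = max(1, int(np.sum(depths > 10) / 2))

        # Cluster the white pixels of this blob into n_circles components.
        mask = np.zeros_like(binary)
        cv2.drawContours(mask, [cnt], -1, 255, thickness=cv2.FILLED)
        pts = np.column_stack(np.nonzero(mask))        # (row, col) points
        gmm = GaussianMixture(n_components=n_circles).fit(pts)
        cluster_ids = gmm.predict(pts)                 # one cluster id per pixel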
If you want to find the minimal openings, you can use a medial axis based approach.
Pseudo code:

    compute contours of bitmap
    compute medial-axis of bitmap
    for each point on medial-axis:
        get minimal distance d from medial-axis algorithm
    for each local minimum of distance d:
        get two points on bitmap contours with minimal distance that are at least d apart from each other
        use these points for the dividing line
If you need a working implementation in Python, please let me know; I would use the skimage library. For other languages you might have to implement the medial axis on your own, but that shouldn't be a big deal. A rough sketch of the skimage route follows below.
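For the medial-axis step itself, a minimal sketch with scikit-image (assuming a boolean array binary with True for object pixels):

    import numpy as np
    from skimage.morphology import medial_axis

    # medial_axis can also return the distance transform, i.e. the
    # distance of each pixel to the nearest background pixel.
    skel, distance = medial_axis(binary, return_distance=True)

    # Distance values along the skeleton only; local minima along this
    # ridge mark the narrow "necks" where the dividing lines should go.
    ridge = distance * skel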

Triangulation of Polygons in a binary Mask

I'm currently working on a project to measure the surface area of plant leaves. So far I've successfully implemented an RCNN model to segment individual leaves, and I've also generated a depth map using stereo computer vision, which allows me to calculate the distance between any two points.
Now I'm stuck trying to connect everything together in order to calculate the area of a leaf/polygon.
I have the original RGB images, binary masks containing the leaves, and also the depth information for every pixel.
Can someone please point me in the right direction?
I reckon the right way would be to use Delaunay triangulation on the polygons in the binary masks and then calculate the surface area using the distances between the three points of each triangle. I haven't been able to find anything quite similar to my problem implemented in Python.
Thanks so much for your help in advance. I'll upload a picture of an RGB image with the masks plotted.
leaf instance segmentation
Count the pixels inside the outlines (by polygon filling) or use the shoelace formula.
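For the shoelace route, a small self-contained sketch (pure NumPy, assuming the polygon vertices are given in order as x and y arrays):

    import numpy as np

    def shoelace_area(x, y):
        # Area = 0.5 * |sum(x_i * y_{i+1} - x_{i+1} * y_i)|
        return 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))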

How to find neighbors in binary image with given horizontal and vertical distance (Python)

I have an image (or several hundred of them) that needs to be analyzed. The goal is to find all black spots close to each other.
For example, all black spots within a horizontal distance of 160 pixels and a vertical distance of 40 pixels of each other.
For now I just look at each pixel, and if there is a black pixel I call a recursive method to find its neighbours (I can post the code too if you want).
It works, but it's very slow. At the moment the script runs for about 3-4 minutes, depending on image size.
Is there some easy/fast way to accomplish this (ideally with a scikit-image method to help out here)? I'm using Python.
Edit: I tried skimage.measure.find_contours; now I have an array of arrays containing the contours of the black spots. Now I only need to find the contours in the neighbourhood of these contours.
When you get the coordinates of the different black spots, rather than computing the distances between all pairs of black pixels, you can use a cKDTree (in scipy.spatial, http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.html#scipy.spatial.cKDTree). The exact method of cKDTree to use depends on your exact criterion (you can, for example, use cKDTree.query_ball_tree to find out whether there exists a pair of points belonging to two different labels within a maximal distance that you give).
KDTrees are a great method to reduce the complexity of problems based on neighboring points. If you want to use KDTrees, you'll need to rescale the coordinates so that you can use one of the classical norms to compute the distance between points.
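A hedged sketch of that rescaling (assuming spots_a and spots_b are (N, 2) arrays of (x, y) pixel coordinates, one per spot):

    import numpy as np
    from scipy.spatial import cKDTree

    def close_pairs(spots_a, spots_b, dx=160, dy=40):
        # Dividing x by dx and y by dy turns "within 160 px horizontally
        # and 40 px vertically" into "within Chebyshev distance 1".
        scale = np.array([dx, dy], dtype=float)
        tree_a = cKDTree(spots_a / scale)
        tree_b = cKDTree(spots_b / scale)
        # p=inf selects the Chebyshev (max-coordinate) norm.
        return tree_a.query_ball_tree(tree_b, r=1.0, p=np.inf)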
Disclaimer: I'm not proficient with the scikit-image library at all, but I've tackled similar problems using MATLAB, so I've searched for the equivalent methods in scikit-image, and I hope my findings below help you.
First you can use skimage.measure.label, which returns label_image, i.e. an image where all connected regions are labelled with the same number. I believe you should call this function with background=255, because from your description it seems that the background in your images is the white region (hence the value 255).
This is essentially an image where the background pixels are assigned the value 0 and the pixels that make up each (connected) spot are assigned the value of an integer label, so all the pixels of one spot will be labelled with the value 1, the pixels of another spot will be labelled with the value 2, and so on. Below I'll refer to "spots" and "labelled regions" interchangeably.
You can then call skimage.measure.regionprops, that takes as input the label_image obtained in the previous step. This function returns a list of RegionProperties (one for each labelled region), which is a summary of properties of a labelled region.
Depending on your definition of "find all black spots close to each other", there are different fields of RegionProperties that you can use to help solve your problem:
bbox gives you the set of coordinates of the bounding box that contains that labelled region,
centroid gives you the coordinates of the centroid pixel of that labelled region,
local_centroid gives you the centroid relative to the bounding box bbox
(Note there are also area and bbox_area properties which you can use to decide whether to throw away very small spots that you might not be interested in, thus reducing computation time when it comes to comparing proximity of each pair of spots)
If you're looking for a coarse comparison, then comparing the centroid or local_centroid between each pair of labelled regions might be enough.
Otherwise you can use the bbox coordinates to measure the exact distance between the outer bounds of any two regions.
If you want to base the decision on the precise distance between the pixel(s) of each pair of regions that are closest to each other, then you'll likely have to use the coords property.
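A rough sketch of the coarse centroid comparison (assuming a grayscale array img whose background value is 255, and the 160/40 pixel thresholds from the question):

    import numpy as np
    from skimage import measure

    label_image = measure.label(img, background=255)
    regions = measure.regionprops(label_image)

    for i, r1 in enumerate(regions):
        for r2 in regions[i + 1:]:
            dy = abs(r1.centroid[0] - r2.centroid[0])   # row = vertical
            dx = abs(r1.centroid[1] - r2.centroid[1])   # col = horizontal
            if dx <= 160 and dy <= 40:
                pass  # these two spots count as "close"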
If your input image is binary, you could separate your regions of interest as follows:
"grow" all the regions by the expected distance (actually half of it, as you grow from "both sides of the gap") with binary_dilation, where the structure is a kernel (e.g. rectangular: http://scikit-image.org/docs/dev/api/skimage.morphology.html#skimage.morphology.rectangle) of, let's say, 20x80pixels;
use the resulting mask as an input to skimage.measure.label to assign different values for different regions' pixels;
multiply your input image by the mask created above to zero dilated pixels.
Here are the results of the proposed method on your image with kernel = rectangle(5, 5):
Dilated binary image (output of step 1):
Labeled version of the above (output of step 2):
Multiplication results (output of step 3):
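In code, the three steps might look like this sketch (assuming a boolean array binary with True for black-spot pixels; newer scikit-image versions call the second argument footprint, older ones selem, so it is passed positionally here):

    import numpy as np
    from skimage import measure
    from skimage.morphology import binary_dilation, rectangle

    grown = binary_dilation(binary, rectangle(20, 80))  # step 1: grow by half the gap
    labels = measure.label(grown)                       # step 2: label merged regions
    spots = labels * binary                             # step 3: keep original pixels only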

Getting pixel averages of a vector sitting atop a bitmap

I'm currently involved in a hardware project where I am mapping triangular shaped LED to traditional bitmap images. I'd like to overlay a triangle vector onto an image and get the average pixel data within the bounds of that vector. However, I'm unfamiliar with the math needed to calculate this. Does anyone have an algorithm or a link that could send me in the right direction? (I tagged this as Python, which is preferred, but I'd be happy with the general algorithm!)
I've created a basic image of what I'm trying to capture here: http://imgur.com/Isjip.gif
Will this work: http://www.blackpawn.com/texts/pointinpoly/default.html ?
You can rasterize the triangle's edges to determine, for each horizontal scanline, which pixels lie within your triangle. Sum their RGB values and divide by the pixel count to get the average.
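Alternatively, a half-plane test like the one from the blackpawn link can be vectorized over the whole image; a sketch, assuming img is an (H, W, 3) NumPy array and a, b, c are the (x, y) triangle corners:

    import numpy as np

    def triangle_average(img, a, b, c):
        h, w = img.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]

        def side(p1, p2):
            # Cross product telling which side of edge p1->p2 each pixel is on.
            return (xs - p2[0]) * (p1[1] - p2[1]) - (p1[0] - p2[0]) * (ys - p2[1])

        d1, d2, d3 = side(a, b), side(b, c), side(c, a)
        inside = ((d1 >= 0) & (d2 >= 0) & (d3 >= 0)) | \
                 ((d1 <= 0) & (d2 <= 0) & (d3 <= 0))
        return img[inside].mean(axis=0)  # average RGB over pixels in the triangle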
