Getting pixel averages of a vector sitting atop a bitmap - python

I'm currently involved in a hardware project where I am mapping triangular shaped LED to traditional bitmap images. I'd like to overlay a triangle vector onto an image and get the average pixel data within the bounds of that vector. However, I'm unfamiliar with the math needed to calculate this. Does anyone have an algorithm or a link that could send me in the right direction? (I tagged this as Python, which is preferred, but I'd be happy with the general algorithm!)
I've created a basic image of what I'm trying to capture here: http://imgur.com/Isjip.gif

Will this work: http://www.blackpawn.com/texts/pointinpoly/default.html ?

You can rasterize the triangle's edges to determine, for each horizontal scanline, which pixels lie within the triangle. Sum their RGB values and divide by the pixel count to get the average.
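A minimal sketch of that idea in Python, assuming Pillow and numpy are available: it walks the triangle's bounding box, keeps the pixels that pass a same-side point-in-triangle test (the test described at the blackpawn link above), and averages their RGB values. The file name and vertex coordinates are placeholders.

import numpy as np
from PIL import Image

def edge_sign(p, a, b):
    # Cross-product sign: which side of the line a->b the point p falls on.
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def point_in_triangle(p, v0, v1, v2):
    # Inside if p is not on strictly opposite sides of any two edges.
    d0, d1, d2 = edge_sign(p, v0, v1), edge_sign(p, v1, v2), edge_sign(p, v2, v0)
    has_neg = d0 < 0 or d1 < 0 or d2 < 0
    has_pos = d0 > 0 or d1 > 0 or d2 > 0
    return not (has_neg and has_pos)

def average_color_in_triangle(image_path, v0, v1, v2):
    img = np.asarray(Image.open(image_path).convert("RGB"), dtype=float)
    xs, ys = zip(v0, v1, v2)
    # Only scan the triangle's bounding box, clamped to the image.
    x_lo, x_hi = max(int(min(xs)), 0), min(int(max(xs)), img.shape[1] - 1)
    y_lo, y_hi = max(int(min(ys)), 0), min(int(max(ys)), img.shape[0] - 1)
    total, count = np.zeros(3), 0
    for y in range(y_lo, y_hi + 1):
        for x in range(x_lo, x_hi + 1):
            if point_in_triangle((x, y), v0, v1, v2):
                total += img[y, x]
                count += 1
    return total / count if count else None

# Example call with placeholder vertices:
# avg_rgb = average_color_in_triangle("leds.png", (10, 10), (60, 15), (30, 55))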

Related

How to separate monochromatic objects of different sizes in opencv

I want to separate the white circles in a noiseless 1-bit (black and white) image, splitting them at the concave parts of the outline.
Please refer to the picture below.
This is the white object to separate:
The target result is:
Here is my implementation with the watershed algorithm:
The above result is not what I want.
If the size of the separated objects is similar, my algorithm is fine, but if the size difference is large, a problem occurs as shown in the picture above.
I would like to implement an OpenCV algorithm that can segment the regions as in the second picture.
However, the input photo is not necessarily a perfect circle.
It can be oval like the picture below:
Or it can be squished:
However, I would like to separate it based on the concave part of the outline anyway.
I think it can be implemented using the distanceTransform function, but I'm not sure how to approach it.
Please point me in the right direction.
Thank you.
Here is an algorithm which should give you a good start.
Compute all contours.
For each contour compute the convexity defects. If there is no defect the contour is an isolated circle and you can segment it out.
After you handled all the isolated circles, you can work out the remaining contours by counting the convexity defects: the number of circles N for each contour is the number of convexity defects divided by 2.
Use a clustering algorithm (https://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html should do well given the shapes you have) and cluster the "white" points using N as the number of clusters to be found.
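A rough sketch of that recipe, assuming OpenCV 4 and scikit-learn; the file name, the defect-depth threshold, and the way clusters are written back into a label image are placeholders to adapt to your data.

import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

mask = cv2.imread("blobs.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

labels = np.zeros(mask.shape, dtype=np.int32)
next_label = 1

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for cnt in contours:
    hull = cv2.convexHull(cnt, returnPoints=False)
    defects = cv2.convexityDefects(cnt, hull)
    # Count only "deep" defects so small ragged edges are ignored (threshold is a guess).
    n_defects = 0 if defects is None else int(np.sum(defects[:, 0, 3] / 256.0 > 3.0))

    # Collect the pixels belonging to this blob.
    blob = np.zeros(mask.shape, dtype=np.uint8)
    cv2.drawContours(blob, [cnt], -1, 255, thickness=-1)
    ys, xs = np.nonzero(blob)

    if n_defects < 2:
        # No significant defects: an isolated circle, segment it out directly.
        labels[ys, xs] = next_label
        next_label += 1
        continue

    # Otherwise roughly n_defects / 2 circles are merged into this contour.
    n_circles = max(2, n_defects // 2)
    points = np.column_stack([xs, ys]).astype(float)
    assignment = GaussianMixture(n_components=n_circles, random_state=0).fit_predict(points)
    for k in range(n_circles):
        sel = assignment == k
        labels[ys[sel], xs[sel]] = next_label
        next_label += 1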
If you want to find the minimal openings, you can use a medial axis based approach.
Pseudo code:
    compute contours of the bitmap
    compute the medial axis of the bitmap
    for each point on the medial axis:
        get the minimal distance d from the medial-axis transform
    for each local minimum of the distance d:
        get the two points on the bitmap contours at minimal distance that are at least d apart from each other
        use these points for the dividing line
If you need a working implementation in Python, please let me know; I would use the skimage library. For other languages you might have to implement the medial axis yourself, but that shouldn't be a big deal.
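For reference, a partial sketch of the first half of that pseudo code with skimage; finding the local minima along the skeleton and drawing the actual dividing lines is left out, and the file name is a placeholder.

import numpy as np
from skimage.io import imread
from skimage.morphology import medial_axis

mask = imread("blobs.png", as_gray=True) > 0.5  # placeholder file name

# medial_axis returns the skeleton plus, for every pixel, the distance to the
# nearest background pixel, i.e. to the object contour.
skeleton, distance = medial_axis(mask, return_distance=True)

# Distances sampled only along the medial axis.
ys, xs = np.nonzero(skeleton)
d = distance[ys, xs]

# The "necks" between touching circles are skeleton points where this distance
# reaches a local minimum; the dividing line at such a point connects the two
# contour points closest to it on either side of the skeleton.
order = np.argsort(d)
print("narrowest skeleton points (y, x, distance):")
for i in order[:10]:
    print(ys[i], xs[i], d[i])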

Triangulation of Polygons in a binary Mask

I'm currently working on a project to measure the surface of plant leaves. So far I've successfully implemented an RCNN model to segment individual leaves and also generated a depth map using stereo computer vision, which allows me to calculate the distance between any two points.
Now I'm stuck trying to connect everything together in order to calculate the area of a leaf/polygon.
**I have the original RGB images, binary masks containing the leaves, and also the depth information for every pixel.
Can someone please point me in the right direction?**
I reckon the right way would be to use Delaunay triangulation on the polygons in the binary masks and then calculate the surface using the distances between the three points of each triangle. I haven't been able to find anything quite similar to my problem implemented in Python.
Thanks so much for your help in advance. I'll upload a picture of an RGB image with the masks plotted.
leaf instance segmentation
Count the pixels inside the outlines (by polygon filling) or use the shoelace formula.
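A minimal numpy sketch of the shoelace formula; note that it gives the area of the 2-D mask polygon in pixel units, so the depth map would still be needed to convert that into a real leaf surface.

import numpy as np

def shoelace_area(xs, ys):
    # Area of a simple polygon from its vertices listed in order (either winding).
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    return 0.5 * abs(np.dot(xs, np.roll(ys, -1)) - np.dot(ys, np.roll(xs, -1)))

# Example: a unit square has area 1.
print(shoelace_area([0, 1, 1, 0], [0, 0, 1, 1]))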

Is there a way to fill in a contour in a binary image?

I'm new to Python and I need help with what is, hopefully, an easy problem.
I have equations for all sides of a polygon.
I intend to use these equations to plot dots in a binary image in order to get the contour of the polygon.
I think that in Matlab I could use imfill.m to fill in this polygon. However, I don't know if this function recognizes a smaller contour inside a larger one.
When this happens, I want my filled image to have a hole in it correspondent to the smaller contour.
Will imfill.m ignore the smaller contour and fill the hole as well?
If not, is there a similar function in Python?
Basically, what I'm looking for is a function or algorithm in Python capable of filling multiple polygons in the same image while leaving any smaller interior contours as unfilled holes.
Thank you for your attention.
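No answer is recorded here, but as a hedged illustration of one possible approach (not from this thread), assuming OpenCV: rasterize each outer polygon with cv2.fillPoly and then carve the inner contours back out, which keeps the holes that imfill-style hole filling would otherwise close. All coordinates below are placeholders.

import numpy as np
import cv2

canvas = np.zeros((200, 200), dtype=np.uint8)

# Vertices obtained from the side equations (placeholder values).
outer = np.array([[20, 20], [180, 20], [180, 180], [20, 180]], dtype=np.int32)
hole = np.array([[80, 80], [120, 80], [120, 120], [80, 120]], dtype=np.int32)

cv2.fillPoly(canvas, [outer], 1)  # fill the outer polygon
cv2.fillPoly(canvas, [hole], 0)   # carve the smaller contour back out as a hole

# By contrast, scipy.ndimage.binary_fill_holes would fill the hole as well,
# which is the behaviour the question wants to avoid.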

How to perform image cross-correlation with subpixel accuracy with scipy

The image below shows two circles of the same radius, rendered with antialiasing, except that the left circle is shifted half a pixel horizontally (notice that the circle's horizontal center falls at the middle of a pixel on the left and at a pixel border on the right).
If I perform a cross-correlation, I can take the position of the maximum on the correlation array, and then calculate the shift. But since pixel positions are always integers, my question is:
"How can I obtain a sub-pixel (floating point) offset between two images using cross-correlation in Numpy/Scipy?"
In my scripts, I am using either scipy.signal.correlate2d or scipy.ndimage.filters.correlate, and they seem to produce identical results.
The circles here are just examples, but my domain-specific features tend to have sub-pixel shifts, and currently getting only integer shifts is giving results that are not so good...
Any help will be much appreciated!
The discrete cross-correlation (implemented by those functions) can only give single-pixel precision. The only solution I can see is to interpolate your 2D arrays onto a finer grid (up-sampling).
Here's some discussion on DSP about upsampling before cross-correlation.
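A hedged sketch of that up-sampling approach with plain scipy; the interpolation order, the up-sampling factor, and the sign convention of the returned shift are all things to check against your own data.

import numpy as np
from scipy.ndimage import zoom
from scipy.signal import correlate

def subpixel_shift(img_a, img_b, factor=10):
    # Up-sample both images, cross-correlate, and read the peak position
    # to roughly 1/factor of a pixel.
    a = zoom(img_a.astype(float), factor, order=3)
    b = zoom(img_b.astype(float), factor, order=3)
    # Subtract the means so the peak reflects structure rather than brightness.
    corr = correlate(a - a.mean(), b - b.mean(), mode="same", method="fft")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return (peak[0] - center[0]) / factor, (peak[1] - center[1]) / factor

# dy, dx = subpixel_shift(image_left, image_right)

Up-sampling by a factor of 10 multiplies the pixel count by 100, so for large images it may be worth cropping to the region of interest first.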
I had a very similar issue, also with shifted circles, and stumbled upon a great Python package called 'image registration' by Adam Ginsburg. It gives you sub-pixel 2D images shifts and is fairly fast. I believe it's a Python implementation of a popular MATLAB module, which only upsamples images around the peak of the x-correlation.
Check it out: https://github.com/keflavich/image_registration
I've been using 'chi2_shifts.py' with good results.

Follow image contour using marching square in Python

I am new to Python and would be grateful if anyone could point me in the right direction or, better still, to some examples.
I am trying to write a program to convert an image (jpeg or any image file) into gcode or x/y coordinates. Instead of scanning in the x and y directions, I need to follow the contours of the objects in the image. For example, a doughnut with an outer circle and an inner circle, or a face with the face outline and the inner contours of its features.
I know there is something called marching squares, but I'm not sure how to do it in Python.
Thank you.
You can also have a look at OpenCV's findContours function, which performs the same operation very fast. It is not pure Python, but there is a very nice Python binding that makes use of numpy arrays, etc. (the new "cv2" module).
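For example (assuming OpenCV 4, where findContours returns two values rather than the three of OpenCV 3, and a placeholder file name):

import cv2

img = cv2.imread("drawing.jpg", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)

# RETR_CCOMP keeps the outer/inner relationship (e.g. a doughnut's two circles);
# CHAIN_APPROX_NONE keeps every boundary pixel, which is useful for a toolpath.
contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)

for contour in contours:
    # Each contour is an (N, 1, 2) array of (x, y) points along one outline.
    for x, y in contour[:, 0, :]:
        pass  # emit a gcode move to (x, y) here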
You can find an implementation of marching squares in scikit-image: https://scikit-image.org/docs/dev/auto_examples/edges/plot_contours.html
I would suggest the following procedure (a sketch follows the list):
1) Convert your image to a binary image (a 2-D numpy array): 1 for object pixels and 0 for background pixels.
2) Since you want to follow a contour, you can see this problem as finding all the object pixels belonging to the same object. This can be solved by connected-components labelling.
3) Once your objects are identified, you can run the marching squares algorithm over each object. Marching squares divides the image into squares and evaluates the values at the vertices of each square. It finds the border by locating the squares in which at least one vertex is 0 while another is 1: the border/contour passes through those squares.
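A short sketch of that procedure with scikit-image, whose find_contours function implements marching squares; the file name is a placeholder.

import numpy as np
from skimage.io import imread
from skimage.measure import label, find_contours

# 1) Binary image: object pixels 1, background 0.
binary = (imread("shapes.png", as_gray=True) > 0.5).astype(np.uint8)

# 2) Connected-components labelling: each object gets its own integer label.
labelled = label(binary)

# 3) Marching squares over each object separately.
for obj_id in range(1, labelled.max() + 1):
    obj = (labelled == obj_id).astype(float)
    for contour in find_contours(obj, level=0.5):
        # contour is an (N, 2) array of (row, col) points along the border.
        print(f"object {obj_id}: contour with {len(contour)} points")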
