Smoothing the edges at a large scale - Python

I have images of eyes and eyebrows like the following one.
I want it to be processed so that the edges are smoother, like the following one, which is drawn by hand.
I've tried morphological opening, but with different sizes of the structuring element (SE) it either fills unexpected areas or leaves some rough edges. Here are the results with a circular SE of size 9 and 7, respectively.
Another idea is to calculate the convex hull of the eyebrow and fill it in. But since the eyebrow is usually curved, the convex hull becomes something like the following image, which is also not ideal.
Or should I make every pixel on the edge a vertex of a polygon and then smooth the polygon? Any specific ideas here?
Any idea how I can get the result shown in the second image?
I'm using Python OpenCV. Code or a general idea are both welcome. Thanks!
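
For reference, a rough sketch of the polygon-smoothing idea could look like the following: extract the outer contour, low-pass filter its coordinates, and refill the shape. The filename and the smoothing sigma are placeholders to tune.

    import cv2
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    # Load the binary eyebrow mask (placeholder filename).
    mask = cv2.imread("eyebrow_mask.png", cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

    # Take the largest external contour as the eyebrow outline.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).squeeze()  # shape (N, 2)

    # Smooth x and y coordinates independently; "wrap" keeps the curve closed.
    sigma = 5  # amount of smoothing, tune to taste
    x = gaussian_filter1d(contour[:, 0].astype(float), sigma, mode="wrap")
    y = gaussian_filter1d(contour[:, 1].astype(float), sigma, mode="wrap")
    smooth = np.stack([x, y], axis=1).round().astype(np.int32)

    # Redraw the filled, smoothed shape.
    out = np.zeros_like(mask)
    cv2.fillPoly(out, [smooth], 255)
    cv2.imwrite("eyebrow_smooth.png", out)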

Related

How to separate monochromatic objects of different sizes in opencv

I want to separate the white circles in a noiseless 1-bit (black-and-white) image based on the concave parts of the outline.
Please refer to the picture below.
This is the white object to separate:
The target result is:
Here is my implementation with the watershed algorithm:
The above result is not what I want.
If the size of the separated objects is similar, my algorithm is fine, but if the size difference is large, a problem occurs as shown in the picture above.
I would like to implement an opencv algorithm that can segment a region like the second picture.
However, the input photo is not necessarily a perfect circle.
It can be oval like the picture below:
Or it can be squished:
However, I would like to separate it based on the concave part of the outline anyway.
I think this could be implemented using the distanceTransform function, but I'm not sure how to approach it.
Please point me towards a suitable approach or reference.
Thank you.
Here is an algorithm which should give you a good start.
Compute all contours.
For each contour compute the convexity defects. If there are no defects, the contour is an isolated circle and you can segment it out.
After you have handled all the isolated circles, you can work out the remaining contours by counting the convexity defects: the number of circles N for each contour is the number of convexity defects divided by 2.
Use a clustering algorithm (https://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html should do well given the shapes you have) and cluster the "white" points using N as the number of clusters to be found.
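A rough sketch of these steps, assuming a binary input image; the defect-depth cutoff and the filename are placeholders you would need to tune:

    import cv2
    import numpy as np
    from sklearn.mixture import GaussianMixture

    img = cv2.imread("blobs.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename
    _, bw = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    labels = np.zeros(bw.shape, np.int32)
    next_label = 1

    for cnt in contours:
        hull = cv2.convexHull(cnt, returnPoints=False)
        defects = cv2.convexityDefects(cnt, hull)
        # Count only "deep" defects to ignore pixel-level noise (depth is in 1/256 px).
        n_defects = 0 if defects is None else int(np.sum(defects[:, 0, 3] > 256 * 5))

        # Mask of the white pixels belonging to this contour.
        blob = np.zeros(bw.shape, np.uint8)
        cv2.drawContours(blob, [cnt], -1, 255, -1)
        pts = np.column_stack(np.nonzero(blob))  # (row, col) coordinates

        if n_defects == 0:
            labels[blob > 0] = next_label             # isolated circle
            next_label += 1
        else:
            n_circles = max(2, n_defects // 2)        # heuristic from step 3
            gmm = GaussianMixture(n_components=n_circles, random_state=0).fit(pts)
            assignment = gmm.predict(pts)
            for k in range(n_circles):
                labels[tuple(pts[assignment == k].T)] = next_label
                next_label += 1

Each blob ends up with one label per estimated circle in the labels image.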
If you want to find the minimal openings, you can use a medial axis based approach.
Pseudo code:
compute contours of bitmap
compute medial-axis of bitmap
for each point on the medial axis:
    get the minimal distance d from the medial-axis algorithm
for each local minimum of the distance d:
    get the two points on the bitmap contours that are closest to the medial-axis point and at least d apart from each other
    use these two points for the dividing line
If you need a working implementation in python, please let me know. I would use skimage lib. For other languages you might have to implement medial-axis on your own. But that shouldn't be a big deal.
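To give a head start, here is a small sketch of the medial-axis part using skimage; the filename is a placeholder, and the dividing-line construction from the pseudo code is only hinted at by printing the candidate neck points:

    import cv2
    import numpy as np
    from skimage.morphology import medial_axis

    # Binary blob image (placeholder filename); skimage expects a boolean array.
    bw = cv2.imread("blobs.png", cv2.IMREAD_GRAYSCALE) > 127

    # medial_axis returns the skeleton and, with return_distance=True,
    # the distance from each pixel to the nearest background pixel.
    skel, dist = medial_axis(bw, return_distance=True)
    d = dist * skel  # distance sampled along the medial axis only

    # Skeleton pixels whose distance is a local minimum among their skeleton
    # neighbours are candidate "neck" points for a dividing line.
    candidates = []
    for r, c in zip(*np.nonzero(skel)):
        patch = d[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
        neigh = patch[patch > 0]
        if len(neigh) > 1 and d[r, c] <= neigh.min() and d[r, c] < neigh.max():
            candidates.append((r, c))

    print("candidate neck points:", candidates)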

Calculating the tangent on a transition point of a black and white image

I would like to calculate the angle of the tangent on a given white to black transition point on an image that consists entirely of black and white pixels and displays simple shapes such as squares, circles or triangles.
Zooming in on an image like that would look like this:
If you were to pick any of the black pixels next to a white one, my solution would be to follow the edge for a few pixels, then define a formula based on the curvature of the pixels and calculate the exact value at the defined point. Is there a simpler way of doing that? The resolution of the images is around 800x600 pixels, so a fairly accurate estimate of the angle at the provided point should be possible.
In my current approach I follow the edge line of the shape for about ten pixels, but I'm not sure where to go from there. Is there a library that already performs this kind of calculation for you? How many pixels would you need in order to be able to make an accurate judgement of the angle at that point?
Such a measurement is highly inaccurate on binary images, if not unusable.
If you measure on two neighboring pixels, the angle will be one of 0° or ±45°, so the angular resolution is very poor!
You can compute over several pixels to improve that resolution (five pixels correspond to roughly 11°), but then you can no longer be sure that the direction is the same, because the shape might be rounded.
If, in your case, the repertoire of shapes is known to be simple, you are better off fitting the whole shapes before querying the tangents.
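As an illustration of that last suggestion, here is a rough sketch that fits an ellipse to the whole contour with cv2.fitEllipse and reads the tangent off the fitted model; it assumes the shape is reasonably elliptical or circular, and the filename and query point are placeholders:

    import cv2
    import numpy as np

    img = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename
    _, bw = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cnt = max(contours, key=cv2.contourArea)

    # Fit an ellipse to the whole contour (needs at least 5 points).
    (cx, cy), (w, h), angle_deg = cv2.fitEllipse(cnt)
    a, b, phi = w / 2.0, h / 2.0, np.deg2rad(angle_deg)

    def tangent_angle(px, py):
        """Tangent direction (degrees) of the fitted ellipse near (px, py)."""
        # Rotate the query point into the ellipse's own frame.
        dx, dy = px - cx, py - cy
        xr = dx * np.cos(phi) + dy * np.sin(phi)
        yr = -dx * np.sin(phi) + dy * np.cos(phi)
        t = np.arctan2(yr / b, xr / a)  # approximate parameter of the nearby ellipse point
        # Derivative of the parametric ellipse, rotated back to image coordinates.
        dxdt = -a * np.sin(t) * np.cos(phi) - b * np.cos(t) * np.sin(phi)
        dydt = -a * np.sin(t) * np.sin(phi) + b * np.cos(t) * np.cos(phi)
        return np.degrees(np.arctan2(dydt, dxdt))

    print(tangent_angle(120, 80))  # angle at an example transition point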

How do I split a shape with connected pixels into two parts in a binary image

My goal is to draw a rectangular border around the face by removing the neck area connected to the whole face area. All positive values here represent skin-color pixels. Here is the binary image I have filtered out so far using OpenCV and Python. Code so far: skinid.py
Below is the test image.
Noise removals have also been applied to this binary image
Up to this point, I followed the paper Face segmentation using skin-color map in videophone applications. For most of it I used custom functions rather than built-in OpenCV functions, because I wanted to do it from scratch (although some erosion, opening and closing were used to tune it up).
I want to know a way to split the neck from the whole face area and remove it like this,
as I am quite new to the whole image processing area.
Perform a distance transform (built into OpenCV, or you could write it by hand; it is a pretty fun and easy one to write by applying the erode function iteratively and adding the result into another matrix each round. Slow, but conceptually easy). On the binary image you presented above (and, I suspect, fairly generally across mug shots), the highest value in the distance transform will be at the centre of the face. That pixel is the centre of your box, and its value after the distance transform also gives you a pretty solid approximation of the face size, since it is the pixel distance from the centre of the face to the horizontal edges of the face. Depending on what you are after, you may be able to simply multiply that distance by, say, 1.5 (work out a standard face width-to-height ratio to choose the best multiplier), set that as your circle radius (or half side length for a box), and call it a day. Comment if you need anything clarified; I am pretty confident in this answer and would be happy to write up some quick code (in C++ OpenCV) if it would help.
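A rough Python/OpenCV version of that idea might look like the sketch below; the filename and the 1.5 height-to-width multiplier are placeholders, and the box is centred on the distance-transform peak:

    import cv2
    import numpy as np

    # Binary skin mask (placeholder filename), white = skin.
    mask = cv2.imread("skin_mask.png", cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

    # Distance from every foreground pixel to the nearest background pixel.
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)

    # The peak of the distance transform sits near the centre of the face;
    # its value approximates half of the face width.
    _, max_val, _, (cx, cy) = cv2.minMaxLoc(dist)
    half_w = int(max_val)
    half_h = int(max_val * 1.5)  # rough face height-to-width guess

    out = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
    cv2.rectangle(out, (cx - half_w, cy - half_h), (cx + half_w, cy + half_h),
                  (0, 0, 255), 2)
    cv2.imwrite("face_box.png", out)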
(Alternative idea.) You could tweak your color filter a bit to reject darker areas. This will, at least in the image presented, create a nice separation between your face and neck due to the shadowing of the chin (you may have to dial back your dilate/closing ops, though).

Finding waters edge using OpenCV and Python accurately

I have been working on detecting the edge of the water using OpenCV/Python, and the results I am getting are fairly inaccurate and not robust.
This is what I have achieved so far:
Original Image, output image
Canny Edge detection
What I am currently doing is setting some variables (the level of Gaussian blur, the sigma used for the Canny edge detection, and the maximum deviation by which the measured level can change between points), performing an 'automatic' Canny edge detection (where the median pixel intensity is measured and used to form the lower and upper thresholds), then moving from the bottom left-hand corner upwards to find the first 'white' pixel. This is done at x intervals of five pixels across the entire width of the frame.
The average y value of the points is then calculated. Each point is then tested to see whether it deviates too far from the average, with the deviation limit set earlier. The remaining points are then drawn on the image as the blue line. The average value of the drawn pixels is recorded at each frame.
After 30 frames, the average of the averages is calculated and drawn as the red line, which is then assumed to be the 'real' water height.
Does anyone have any ideas on a better way to do this? What would make the edge of the water stand out more? This method runs on most of the footage I have recorded, but with poor results.
Thanks in advance.
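
For reference, the 'automatic' median-based Canny thresholding and the column scan described above typically look something like this sketch (the 0.33 sigma, the blur kernel and the filename are assumptions, not necessarily the values used here):

    import cv2
    import numpy as np

    def auto_canny(gray, sigma=0.33):
        """Canny with thresholds derived from the median pixel intensity."""
        v = np.median(gray)
        lower = int(max(0, (1.0 - sigma) * v))
        upper = int(min(255, (1.0 + sigma) * v))
        return cv2.Canny(gray, lower, upper)

    frame = cv2.imread("frame.png")  # placeholder filename
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    edges = auto_canny(gray)

    # Scan upwards from the bottom every 5 columns for the first edge pixel.
    points = []
    for x in range(0, edges.shape[1], 5):
        ys = np.nonzero(edges[:, x])[0]
        if len(ys):
            points.append((x, ys[-1]))  # lowest edge pixel in this column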
I have worked on a similar problem and I hope this advice can help you in some way:
Restrict your search area: can you make assumptions about where the water level should be? Also, once you have correctly detected the water level, is it safe to assume that in the following frames it will increase or decrease steadily? Will it change slowly? Crop your image so that you only consider the area where it is safe to assume the water level is present.
Change color space: you can try working in other color spaces such as HSV, in order to separate brightness from chromaticity.
Hough transform line detection: try using this algorithm to search for specific horizontal lines in the image, or other shapes (a rough sketch follows below).
Image undistortion: if necessary, correct the image to rectify curved lines, or cancel the perspective with an Inverse Perspective Mapping (IPM).
You can also consider changing the edge detection algorithm.
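For the Hough transform suggestion, a minimal sketch could look like this; the crop range, Canny thresholds and line parameters are placeholders to adapt to your footage:

    import cv2
    import numpy as np

    frame = cv2.imread("frame.png")  # placeholder filename
    roi = frame[200:400, :]          # crop to the band where the water level can plausibly be
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

    # Probabilistic Hough transform, keeping only near-horizontal segments.
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=20)
    horizontal = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
            if abs(angle) < 5:       # within 5 degrees of horizontal
                horizontal.append((x1, y1, x2, y2))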

Detect an arc from an image contour or edge

I am trying to detect arcs inside an image. The one piece of information I know for certain is the radius of the arc. I may also be able to get the centre of the circle whose arc I want to identify.
Is there any algorithm in OpenCV which can tell us whether a detected contour (or edge from the Canny edge detector) is an arc, or an approximation of one?
Any help on how this could be done in OpenCV with Python, or even a general approach, would be very welcome.
Thanks
If you think that the shape will not change (I mean the arc won't become a line or something like that), then you can have a look at the Generalized Hough Transform (GHT), which can detect any shape you want.
Cons:
There is no function for GHT directly in the OpenCV library, but you can find several source implementations on the internet.
It is sometimes slow but can become fast if you set the parameters properly.
It won't be able to detect the shape if the shape changes. For example, I tried to detect squares using GHT and got good results, but when the squares were not perfect (i.e. rectangles or something like that), it didn't detect them.
You can do it this way:
Convert the image to edges using the Canny filter.
Make the image binary using the threshold function; there are options for a regular threshold, Otsu, or adaptive thresholding.
Find contours with sufficient length (findContours function).
Iterate over all the contours and try to fit an ellipse (fitEllipse function).
Validate the fitted ellipses by radius.
Check whether a detected ellipse is a good fit by checking how many of the contour pixels lie on it.
Select the best one.
You can try to increase the speed using RANSAC, each time selecting 6 points from the binarized image and trying to fit.
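A rough sketch of that pipeline (without the RANSAC speed-up) might look like this; the known radius, the tolerances and the filename are placeholders:

    import cv2
    import numpy as np

    KNOWN_RADIUS = 50      # the radius you already know (placeholder value)
    RADIUS_TOL = 5         # accepted deviation in pixels
    MIN_CONTOUR_LEN = 30

    img = cv2.imread("arcs.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename
    edges = cv2.Canny(img, 50, 150)

    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    best, best_score = None, 0.0

    for cnt in contours:
        if len(cnt) < max(MIN_CONTOUR_LEN, 5):  # fitEllipse needs at least 5 points
            continue
        (cx, cy), (w, h), angle = cv2.fitEllipse(cnt)
        a, b = w / 2.0, h / 2.0
        # Keep only near-circular fits whose radius matches the known one.
        if abs(a - b) > RADIUS_TOL or abs((a + b) / 2 - KNOWN_RADIUS) > RADIUS_TOL:
            continue
        # Score: fraction of contour points lying close to the fitted circle.
        pts = cnt[:, 0, :].astype(float)
        r = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
        score = np.mean(np.abs(r - (a + b) / 2) < 2.0)
        if score > best_score:
            best, best_score = ((cx, cy), (a + b) / 2), score

    print("best arc:", best, "inlier fraction:", best_score)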
My math is rusty, but...
What about evaluating a contour by looping over its composite edge-nodes and finding those where the angle between the edges doesn't change too rapidly AND doesn't change sign?
A chain of angles (θ) where:
0 < θ_i < θ_max
with a number of edges (c) where:
c > d_const
would indicate an arc of:
radius ∝ 1 / ((θ_i + θ_(i+1) + ... + θ_n) / n)
or:
r ∝ 1 / θ_ave
and:
arc length ∝ c
A way of finding these angles is discussed at Get angle from OpenCV Canny edge detector
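To make the idea concrete, here is a rough sketch that walks each contour, computes the signed turning angle between consecutive segments, and reports runs of small, same-sign turns as arc candidates; the step size, angle limit and minimum run length are placeholder thresholds:

    import cv2
    import numpy as np

    img = cv2.imread("arcs.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename
    edges = cv2.Canny(img, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

    STEP = 5         # sample every 5th contour point to reduce pixel noise
    THETA_MAX = 0.3  # maximum turning angle per step (radians)
    MIN_RUN = 8      # minimum number of consecutive edges (c > d_const)

    for cnt in contours:
        pts = cnt[::STEP, 0, :].astype(float)
        if len(pts) < MIN_RUN + 2:
            continue
        vecs = np.diff(pts, axis=0)
        headings = np.arctan2(vecs[:, 1], vecs[:, 0])
        # Signed turning angle between consecutive segments, wrapped to (-pi, pi].
        turn = np.angle(np.exp(1j * np.diff(headings)))

        run = [0]
        for i in range(1, len(turn)):
            same_sign = np.sign(turn[i]) == np.sign(turn[run[0]])
            if 0 < abs(turn[i]) < THETA_MAX and same_sign:
                run.append(i)
            else:
                run = [i]
            if len(run) >= MIN_RUN:
                mean_turn = np.mean(np.abs(turn[run]))
                print("arc candidate, approx radius:", STEP / mean_turn)
                run = [i]  # start looking for the next run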
