Simple python region growing image segmentation tool?

I was wondering if there was a simple python toolkit for region-based image segmentation. I have a grayscale image, and my goal is to efficiently find a complete segmentation such that the pixel values in each region are similar (presumably the definition of "similar" will be determined by some tolerance parameter). I am looking for a segmentation where every pixel belongs to exactly one region.
I have looked at the scikit-image segmentation module (https://scikit-image.org/docs/dev/api/skimage.segmentation.html), but the tools there didn't seem to do what I was looking for. For instance, skimage.segmentation.watershed looked attractive, but gave poor results using markers=None.

The flood fill algorithm from scikit-image (skimage.segmentation.flood) seems close to what you want; it has a tolerance parameter as well.
For more fine-grained control you can check out OpenCV.
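For example, one way to turn flood into a complete segmentation is to repeatedly pick an unlabeled pixel as a seed and grow a region around it. This is only a rough sketch; the toy image and the tolerance value are assumptions to adapt to your data:
import numpy as np
from skimage.segmentation import flood

# Toy image with two flat regions (placeholder for your grayscale image).
image = np.zeros((80, 80), dtype=float)
image[10:40, 10:40] = 100.0
image[50:70, 50:70] = 200.0

labels = np.zeros(image.shape, dtype=int)
current_label = 0

# Repeatedly pick an unlabeled pixel as a seed and grow a region around it.
while (labels == 0).any():
    seed = tuple(np.argwhere(labels == 0)[0])
    current_label += 1
    mask = flood(image, seed, tolerance=10)
    labels[mask & (labels == 0)] = current_label

# Every pixel now belongs to exactly one region in `labels`.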

Related

Finding contour around a cluster of pixels

I have a set of images that look like this:
Using python need a way to find a contour around the yellow shape that ignores the isolated points and is not too complex. Something looking a bit like this :
I tried some methods such as the find_contours function from skimage, which gives this after keeping only the biggest contour:
which is not what I am looking for. I also tried active contour (snake), which had the problem of paying too much attention to isolated pixels. Is there a particular method that would help me in this situation?
Thank you
Assuming the yellow blob is slightly different across your images, I recommend you look into either using Morphological Operations, or using Contour Approximation.
I've never used scikit-image, but it appears to have Morphological functionalities included.
You can take a look at this OpenCV tutorial for a quick guideline of the different operations.
But I think all you need is to use the "Opening" operation to preprocess your yellow shape, making it smoother and removing the random speckles.
Another approach is to approximate the contour you've extracted to make it smoother. For scikit-image, that is the measure.approximate_polygon function. There is also another OpenCV tutorial for reference on how Contour Approximation works (it is the same algorithm as in scikit-image).
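As a rough illustration of both ideas in scikit-image (the toy mask, the disk radius and the tolerance below are placeholders to tune on your images):
import numpy as np
from skimage import measure, morphology

# Toy binary mask of the yellow shape, with an isolated speckle.
binary = np.zeros((200, 200), dtype=bool)
binary[60:140, 60:140] = True
binary[20, 20] = True

# "Opening" removes small isolated pixels and smooths the blob's outline.
opened = morphology.binary_opening(binary, morphology.disk(3))

# Keep only the longest contour, then simplify it (Douglas-Peucker).
contours = measure.find_contours(opened.astype(float), 0.5)
longest = max(contours, key=len)
simplified = measure.approximate_polygon(longest, tolerance=2.0)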

yolo v8: does segment contain point?

I'm using yolo v8 to detect subjects in pictures. It's working well, and can create quite precise masks over subjects.
from ultralytics import YOLO

model = YOLO('yolov8x-seg.pt')
for output in model('image.jpg', return_outputs=True):
    for segment in output['segment']:
        print(segment)
The code above works, and generates a series of "segments", which are a list of points that define the shape of subjects on my image. That shape is not convex (for example horses).
I need to figure out if a random coordinate on the image falls within these segments, and I'm not sure how to do it.
My first approach was to build an image mask using PIL. That roughly worked, but not always, depending on the shape of the segments. I also thought about using shapely, but it has restrictions on the Polygon classes, which I think will be a problem in some cases.
In any case, this really feels like a problem that could easily be solved with the tools I'm already using (yolo, pytorch, numpy...), but to be honest I'm too new to all this to figure out how to do it properly.
Any suggestion is appreciated :)
You should be able to get a segmentation mask from your model: imagine a binary image where black (zeros) represents the background and white (or other non-zero values) represents an instance of a segmentation class.
Once you have the binary image you can use OpenCV's findContours function to get the largest outer contour.
Once you have that path you can use pointPolygonTest() to check if a point is inside that contour or not.
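A rough sketch of that idea (the mask here is a placeholder blob standing in for the mask you would get from the model):
import cv2
import numpy as np

# Placeholder binary mask (0 = background, 255 = instance); in practice this
# would come from the model's segmentation output.
mask = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(mask, (320, 240), 100, 255, -1)

# Largest outer contour of the mask.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)

# pointPolygonTest returns > 0 inside, 0 on the edge, < 0 outside.
point = (300.0, 250.0)
inside = cv2.pointPolygonTest(largest, point, False) >= 0
print(inside)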

How to calculate a facial point grid using OpenCV and Python?

I'm trying to automatically draw a mesh or grid over a face, similar to the image below, to use the result in a blog post that I'm writing. However, my knowledge of computer vision is not enough to recognize which model or algorithm is behind these types of cool visualizations.
Could someone point me to some link to read or a starting point?
Using Python, OpenCV and dlib, the closest thing I found is something called Delaunay triangulation, but seeing the results I'm not sure if that's exactly what I'm looking for.
Putting it in a few words, what I have so far is:
Detect all faces in the image and calculate their landmarks using the dlib.get_frontal_face_detector() and dlib.shape_predictor() methods from dlib.
Use the cv2.Subdiv2D() method from OpenCV to compute a 2D subdivision based on my landmarks. In particular, I'm getting the Delaunay subdivision using the getTriangleList() method of the resulting subdivision.
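For reference, a rough sketch of those two steps (the landmark model file and image paths are placeholders, and this is not the author's full code):
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

img = cv2.imread('face.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
h, w = gray.shape

for face in detector(gray):
    shape = predictor(gray, face)
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]

    # Delaunay subdivision over the landmark points.
    subdiv = cv2.Subdiv2D((0, 0, w, h))
    for (x, y) in points:
        subdiv.insert((float(np.clip(x, 0, w - 1)), float(np.clip(y, 0, h - 1))))

    # Draw each triangle whose vertices fall inside the image.
    for x1, y1, x2, y2, x3, y3 in subdiv.getTriangleList():
        pts = np.array([(x1, y1), (x2, y2), (x3, y3)], dtype=np.int32)
        if (pts >= 0).all() and (pts[:, 0] < w).all() and (pts[:, 1] < h).all():
            cv2.polylines(img, [pts.reshape(-1, 1, 2)], True, (0, 255, 0), 1)

cv2.imwrite('mesh.jpg', img)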
The complete code is available here.
However, the result is not so attractive, perhaps because the subdivision uses triangles instead of polygons, and I want to check if I can improve it!

opencv-python object detection

I'm a beginner with OpenCV in Python. I have many 16-bit grayscale images and need to detect the same object in each of the different images. I tried template matching in OpenCV Python, but it needed different templates for different images, which is not desirable. Can anyone suggest an algorithm in Python to do this efficiently?
Your question is way too general. Feature matching is a very vast field.
The type of algorithm to be used totally depends on the object you want to detect, its environment etc.
So if your object won't change its size or angle in the image, then use Template Matching.
If the object can change its size and orientation, you can use SIFT or SURF.
If your object has unique color features that are different from its background, you can use an HSV threshold.
If you have to detect a whole class of objects, for example all cricket bats, then you can train with a number of positive images to tell the computer what the object looks like and negative images to tell it what it doesn't look like; this can be done using Haar cascade training.
You can try out the sliding window method, if your object is the same in all samples.
One way to do this is to look for known colors, shapes, and sizes.
You could start by performing an HSV threshold on your image, by converting your image to HSV colorspace and then calling
cv2.inRange(source, (minHue, minSat, minVal), (maxHue, maxSat, maxVal))
Next, you could use cv2.findContours to find all the areas in your image that meet your color requirements. Then, you could use methods such as boundingRect and contourArea to find specific attributes of the object that you want.
What you will end up with is essentially a 'pipeline' that can take a frame, and look for a shape that fits the criteria you have set. Depending on the complexity of what you want to do (you didn't say what you're looking for), this may or may not work, but I have used it with reasonable success.
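A rough sketch of such a pipeline (the HSV bounds, file names and the minimum area are placeholders you would tune for your object):
import cv2

img = cv2.imread('frame.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Keep only pixels within the chosen hue/saturation/value range.
mask = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))

# Find candidate regions and filter them by size.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 500:          # ignore small speckles
        x, y, w, h = cv2.boundingRect(c)  # location and size of the candidate
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite('detections.jpg', img)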
GRIP is an application that allows you to threshold things in a visual way, and it will also generate Python code for you if you want. I don't really recommend using the generated code as-is because I've run into some problems that way. Here's the link to GRIP: https://github.com/WPIRoboticsProjects/GRIP
If the object you want to detect has a different size in every image and also varies slightly in shape, then I recommend you use a Haar cascade of that object. If the object is very general then you can easily find a Haar cascade for it online. Otherwise it is not very difficult to make Haar cascades (it can be a little time consuming though).
You can use this tutorial by sentdex to make a Haar cascade here.
Or if you want to know how to use Haar cascades, you can get that at this link here.
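Once you have a trained cascade XML, using it looks roughly like this (the cascade file name, image path and detectMultiScale parameters are placeholders):
import cv2

# Placeholder: a cascade you trained or downloaded for your object.
cascade = cv2.CascadeClassifier('my_object_cascade.xml')

img = cv2.imread('frame.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect objects at multiple scales and draw their bounding boxes.
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)

cv2.imwrite('detected.jpg', img)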

"hard" supervision in image segmentation with python

There are several packages and methods for segmentation in Python. However, if I know a priori that certain pixels (and no others) correspond to a particular object, how can I use that to segment other objects?
Which methods implemented in python would lend themselves to this approach?
Thanks.
You'll want to take a look at semi-automated image segmentation. Image segmentation from a semi-automated perspective means that you know beforehand what class certain pixels belong to - either foreground or background. Given this a priori information, the goal is to minimize an energy function that best segments the rest of the pixels into foreground and background.
The best two methods that I know of are Graph Cuts and Random Walks. If you want to study the fundamentals of both of them, you should read the canonical papers by Boykov (Graph Cuts) and Grady (Random Walks) respectively:
Graph Cuts - Boykov: http://www.csd.uwo.ca/~yuri/Papers/ijcv06.pdf
Random Walks - Grady: http://webdocs.cs.ualberta.ca/~nray1/CMPUT615/MRF/grady2006random.pdf
For Graph Cuts, OpenCV uses the GrabCut algorithm, which is an extension of the original Graph Cuts algorithm: http://en.wikipedia.org/wiki/GrabCut. Essentially, you draw a box around the object you want segmented, Gaussian Mixture Models are used to model the foreground and background, and the object is segmented from the background inside this box. Additionally, you can add foreground and background markers inside the box to further constrain the solution and ensure you get a good result.
Take a look at this official OpenCV tutorial for more details: http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_grabcut/py_grabcut.html
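A rough sketch of GrabCut with a user-supplied box (the image path and rectangle coordinates are placeholders for the box around your object):
import cv2
import numpy as np

img = cv2.imread('image.jpg')

# mask values: 0/2 = background, 1/3 = foreground. rect is (x, y, w, h).
mask = np.zeros(img.shape[:2], dtype=np.uint8)
rect = (50, 50, 300, 300)

bgd_model = np.zeros((1, 65), dtype=np.float64)
fgd_model = np.zeros((1, 65), dtype=np.float64)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep pixels marked as definite or probable foreground.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
segmented = img * fg[:, :, None]
cv2.imwrite('grabcut_result.jpg', segmented)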
For Random Walks, this is implemented in the scikit-image library and here's a great tutorial on how to get the segmentation up and running off of their official website: http://scikit-image.org/docs/dev/auto_examples/plot_random_walker_segmentation.html
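A rough sketch of the random walker on a toy image (the seed coordinates stand in for the pixels whose class you already know, and beta is a parameter to tune):
import numpy as np
from skimage.segmentation import random_walker

# Toy noisy image with a bright square (placeholder for your data).
rng = np.random.default_rng(0)
image = np.zeros((100, 100))
image[30:70, 30:70] = 1.0
image += 0.3 * rng.standard_normal(image.shape)

# Label map: 0 = unknown, 1 = known foreground, 2 = known background.
labels = np.zeros(image.shape, dtype=np.uint8)
labels[50, 50] = 1   # a priori foreground pixel
labels[5, 5] = 2     # a priori background pixel

segmentation = random_walker(image, labels, beta=10, mode='bf')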
Good luck!
