I use OpenCV 2.4.8 in Python to segment images and find objects.
I want to use findContours to list the objects and analyse their area, shape and so on. But if I have two objects that are separated only by a thin (1 px wide) diagonal line, or even just touch diagonally at the corners, they will be recognised as one object.
This image illustrates the problem:
There are obviously two objects, but they are recognised as one.
In Matlab one can specify a connectivity parameter (a neighbourhood of 4 or 8) to solve this problem. Can this also be done in some way using OpenCV, perhaps using the hierarchy of the contours or some other workaround?
I know that I could use morphological erosion or opening to separate the two objects, but I have already tried this, and it can cause problems in other parts of my image.
If your objects are circular, you can try using circular Hough transform.
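For illustration, a minimal sketch of the Hough approach, assuming OpenCV 3.x or later (in 2.4 the constant is cv2.cv.CV_HOUGH_GRADIENT); all parameter values here are placeholders to tune for your image:

import cv2
import numpy as np

gray = cv2.imread('objects.png', cv2.IMREAD_GRAYSCALE)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=50, param2=30, minRadius=5, maxRadius=100)
if circles is not None:
    # circles[0] is an array of (x, y, radius) triples
    for x, y, r in np.round(circles[0]).astype(int):
        print('circle at (%d, %d) with radius %d' % (x, y, r))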
If such an image is represented as a single contour, then it is bound to have defects. You can search for convexity defects and proceed from there. But this again depends on the objects in your image.
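As a rough sketch of the convexity-defect idea (assuming OpenCV 4.x return signatures; the depth threshold is an illustrative value), a pair of deep defects at the junction would hint at two touching objects:

import cv2

binary = cv2.imread('objects.png', cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for contour in contours:
    hull = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull)
    if defects is None:
        continue
    for start, end, farthest, depth in defects[:, 0]:
        # depth is a fixed-point value; divide by 256 to get pixels
        if depth / 256.0 > 5:
            print('deep defect near point', tuple(contour[farthest][0]))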
I'm not sure what kind of objects your image contains, so it's hard to come to a definitive answer.
I have a set of images that look like this:
Using Python, I need a way to find a contour around the yellow shape that ignores the isolated points and is not too complex. Something looking a bit like this:
I tried some methods, such as the find_contours function from skimage, which gives this after keeping only the biggest contour:
which is not what I am looking for. I also tried active contours (snakes), which had the problem of paying too much attention to isolated pixels. Is there a particular method that would help me in this situation?
Thank you
Assuming the yellow blob is slightly different across your images, I recommend you look into either using Morphological Operations, or using Contour Approximation.
I've never used scikit-image, but it appears to include morphological functionality.
You can take a look at this OpenCV tutorial for a quick guideline of the different operations.
But I think all you need is the "Opening" operation to preprocess your yellow shape, making it smoother and removing the random speckles.
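A minimal sketch with scikit-image; the file name, the yellow-mask thresholds and the disk size are illustrative placeholders:

from skimage.io import imread
from skimage.morphology import binary_opening, disk

# Hypothetical mask extraction; replace with however the yellow
# shape is actually segmented.
img = imread('yellow.png')
mask = (img[..., 0] > 150) & (img[..., 1] > 150) & (img[..., 2] < 100)

# Opening (erosion followed by dilation) removes specks smaller than
# the structuring element while preserving the main blob.
cleaned = binary_opening(mask, disk(3))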
Another approach is to approximate the contour you've extracted to make it smoother. In scikit-image, that is the measure.approximate_polygon function. There is also another OpenCV tutorial for reference on how contour approximation works (the same algorithm as in scikit-image).
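Continuing from the cleaned mask in the sketch above, the contour-approximation route might look like this (the tolerance is a starting value to tune):

from skimage.measure import find_contours, approximate_polygon

contours = find_contours(cleaned.astype(float), 0.5)
largest = max(contours, key=len)

# Douglas-Peucker simplification: a higher tolerance gives a coarser polygon
simplified = approximate_polygon(largest, tolerance=2.0)
print('%d points reduced to %d' % (len(largest), len(simplified)))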
I'm using YOLOv8 to detect subjects in pictures. It's working well and can create quite precise masks over subjects.
from ultralytics import YOLO
model = YOLO('yolov8x-seg.pt')
for output in model('image.jpg', return_outputs=True):
    for segment in output['segment']:
        print(segment)
The code above works and generates a series of "segments": lists of points that define the shapes of the subjects in my image. Those shapes are not convex (horses, for example).
I need to figure out if a random coordinate on the image falls within these segments, and I'm not sure how to do it.
My first approach was to build an image mask using PIL. That roughly works, but not always, depending on the shape of the segments. I also thought about using shapely, but it has restrictions on its Polygon classes which I think will be a problem in some cases.
In any case, this really feels like a problem that could easily be solved with the tools I'm already using (yolo, pytorch, numpy...), but to be honest I'm too new to all this to figure out how to do it properly.
Any suggestion is appreciated :)
You should be able to get a segmentation mask from your model: imagine a binary image where black (zeros) represents the background and white (or any other non-zero value) represents an instance of a segmentation class.
Once you have the binary image, you can use OpenCV's findContours function to get the largest outer contour.
Once you have that contour, you can use pointPolygonTest() to check whether a point is inside it or not.
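A minimal sketch of those two steps, assuming OpenCV 4.x and a binary mask already obtained from the model; the file name and the test point are placeholders:

import cv2

mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)

# Largest outer contour of the mask
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contour = max(contours, key=cv2.contourArea)

# pointPolygonTest returns +1 inside, 0 on the edge, -1 outside;
# measureDist=False skips computing the exact distance
point = (150.0, 200.0)
inside = cv2.pointPolygonTest(contour, point, False) >= 0
print('inside' if inside else 'outside')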
I'm making some photo-editing tools in python using PIL (Python Imaging Library), and I was trying to make a program which converts a photo to its 'painted' version.
I've managed to make a program which converts a photo into its distinct colours, but the problem is that the algorithm I'm using operates on every pixel independently, meaning that the resulting image has very jagged boundaries between colours.
Ideally, I'd like to smoothen out these edges, but I don't know how!
I've checked out this site for some help, but the method there produces quite different results to what I need.
My Starting Image:
My Image with Distinct Colours:
I would like to smoothen the edges in the image above.
Results of using the method which doesn't quite work:
As you can see, using the technique doesn't smoothen the edges into natural-looking curves; instead it creates jagged edges.
I know I should provide sample output, but surprisingly, I haven't actually got it, so I'll describe it as best as I can. Simply put, I want to smoothen the edges between the different colours.
I've seen something called a Gaussian blur, but I'm not quite sure how to apply it here, as the answers I've seen always mention some sort of threshold and usually deal with binary images, so I don't think it applies here.
Edge enhancement does the opposite of edge smoothing, so this is certainly not the tool you should use.
Unfortunately, there is little that you can do, because edge smoothing will indeed smooth the jaggies, but it will also destroy the true edges, resulting in a blurred image. Edge-preserving smoothing is also a dead end.
You should have a look at the methods to extract the "cartoon part" of an image. There is a lot of literature on this topic, though often pretty sophisticated.
You can enhance the quality of your "Image with Distinct Colours" by applying a median filter with a radius of 2:
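A minimal sketch with PIL's built-in filter; MedianFilter takes an edge length rather than a radius, so size=5 corresponds roughly to a radius of 2:

from PIL import Image, ImageFilter

img = Image.open('distinct_colours.png')
# The median filter replaces each pixel by the median of its
# neighbourhood, smoothing jagged colour boundaries without blurring
smoothed = img.filter(ImageFilter.MedianFilter(size=5))
smoothed.save('smoothed.png')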
If you want to get "comic-like" dark edges, you can calculate the edges of the original image using a Sobel filter, convert the edge map to grayscale, multiply the resulting edge map by 2, invert the map, and add each non-white pixel of the edge map to the original image. This will result in:
This is of course only a starting point as the result leaves much to be desired, but it should give you a good idea about the basic concept.
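For reference, a rough PIL sketch of that edge-overlay step. PIL has no built-in Sobel filter, so FIND_EDGES is used here as a stand-in, and the scale factor is just a starting point:

from PIL import Image, ImageFilter, ImageOps, ImageChops

original = Image.open('photo.jpg').convert('RGB')

# Edge map of the original image, converted to grayscale
edges = original.filter(ImageFilter.FIND_EDGES).convert('L')

# Amplify the edges, then invert so they become dark lines on white
edges = edges.point(lambda value: min(255, value * 2))
edges = ImageOps.invert(edges)

# Multiplying darkens the original wherever the edge map is non-white
result = ImageChops.multiply(original, edges.convert('RGB'))
result.save('comic.jpg')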
My question is rather about the feasibility of a task.
Note that I have read the solution to this question; however, as you can guess, I am not dealing with rectangles and cameras here.
Situation:
I need to save a lot of pictures in a folder, all of them obeying these rules:
In each picture, there is ONLY one object.
The object can be anything (car, horse, human hand ...)
The size and the format of the picture belong to a certain set.
The background of the object is ALWAYS white.
The color of the object itself can be anything else (including, why not, areas of white pixels)
Goal:
I want to detect if the object of each image is CENTERED.
Development environment:
Python
OpenCV
Do you think this is feasible ?
I hope my question is not too broad. I am just asking whether this can be done automatically, without human intervention on the pictures; I have thousands of them. The program will save the pictures in which the object is not centered to a separate folder.
EDIT:
Following the comments and answer above: for me, an object is centered if, when I draw a rectangle around it, the left and right edges of the rectangle are equally distant from the left and right sides of the image, and the top and bottom of the object are equally distant from the top and bottom of the picture.
Yep, this is very feasible. However, depending on the type of objects the images contain, there are different ways to accomplish this. Assuming the objects in the images all have a uniform color, you can easily apply a color-detection algorithm, find the centre point of the object in terms of pixels, and find its position using the image resolution as the reference.
As the background is always white, as specified, this is probably your best method, as you can just extract all the non-white (or off-white) objects within the image.
If you do decide to go with this approach, I should be able to point you to some relevant code.
Although written in C++, more information on this can be found in the link below.
http://opencv-srf.blogspot.co.uk/2010/09/object-detection-using-color-seperation.html
The link is about object detection in a video, but as a video is just a series of images, the same concept can be applied here.
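A minimal sketch of this approach, assuming near-pure-white backgrounds; the 240 threshold and 5 px tolerance are illustrative values to tune:

import cv2

img = cv2.imread('picture.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Everything darker than near-white counts as the object
_, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY_INV)

# Bounding rectangle around all non-white pixels
points = cv2.findNonZero(mask)
x, y, w, h = cv2.boundingRect(points)

# Compare the margins on opposite sides of the bounding rectangle
img_h, img_w = gray.shape
left, right = x, img_w - (x + w)
top, bottom = y, img_h - (y + h)

tolerance = 5  # pixels
centered = abs(left - right) <= tolerance and abs(top - bottom) <= tolerance
print('centered' if centered else 'not centered')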
Using Python, what might be the best algorithm or strategy to detect the presence of colored bands, as in the image?
The image is scanned and cropped, but the problem is that the cropping is not precise, so I cannot rely on a check based on fixed Cartesian coordinates to determine whether the lines are present.
The strips may or may not be present.
You have a number of options at your disposal:
If the strips are going to be the same size, and their orientation is known, then you can use cross-correlation (with working Python source). Your template image could be a single stripe, or a multiple strip pattern if you know the number of strips and their spacing.
More generally, you could go with morphological image processing and look for rectangles. You'd first have to threshold your image (using Otsu's method or some empirically determined threshold) and then perform contour detection. Here's an example that does something similar, but for ellipses -- it's trivial to modify it to look for rectangles. This time the source is in C, but it uses OpenCV like the first example, so it should be trivial to port.
There are other approaches such as edge detection and Fourier analysis, but I really think that the first two are going to be more than enough for you.
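As a rough sketch of the second option (Otsu threshold plus contour detection), assuming OpenCV 4.x; the aspect-ratio and area filters are illustrative values to adapt to the actual strip geometry:

import cv2

img = cv2.imread('scan.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu's method picks the threshold automatically
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

strips = []
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    # Keep elongated, reasonably large shapes; tune these for your strips
    if w > 5 * h and cv2.contourArea(contour) > 500:
        strips.append((x, y, w, h))

print('%d candidate strips found' % len(strips))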