I'm trying to extract the boundary between two regions in an image programmatically. I've got the hard bits figured out, so that I have a binary image that contains the boundary and plenty of noise.
Cropping the areas outside isn't an issue.
The boundary in the image is afflicted both by noise (bottom-left for example) and some areas of discontinuity. That means I can't simply select the shape based on one known pixel.
The problem left to me is pretty simple - I only really need to fill the gaps in the boundary and smooth it out, so that I am left with something smooth and continuous that I can extract afterwards. That doesn't sound like a particularly hard problem for images like this, but I'm completely lost. What algorithms or strategies could I possibly use in order to turn this image into something useful?
The output I'm looking for is something that can be cropped down to just the clean, continuous boundary.
A common practice is to use a Gaussian blur. This will filter out the noise in the image, depending on the intensity of the blur. At the bottom of the article there is a GIF with a cat image showing what you want.
After that, there are contour-finding algorithms which could help you extract the boundaries as pixel chains.
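A rough OpenCV sketch of that pipeline (the kernel size, threshold and file name are placeholders to tune):

import cv2

img = cv2.imread("boundary.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# Blur to suppress the speckle noise; the kernel size controls the blur intensity.
blurred = cv2.GaussianBlur(img, (9, 9), 0)

# Re-binarize, then extract the remaining shapes as pixel chains (contours).
_, binary = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # OpenCV 4.x signature

# Keep the largest contour as the boundary candidate.
boundary = max(contours, key=cv2.contourArea)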
Related
I am dealing with some images which contain tables, and there are 1 or 2 stickers on them. What I am trying to do is get rid of those stickers. Using color thresholding (in HSV) and contour detection I am able to create a mask for those stickers. Now I want those stickers to "dissolve" out from there (I don't know the correct term for this), while keeping the table lines intact, so that my line detection (which I have to do after this cleaning) works well.
I tried OpenCV's inpaint, but it doesn't work well here because the sticker is too big.
See this example:
Part of the whole image where the sticker is sticking (inside contents are censored by me). It can be over horizontal lines, or vertical lines, or both. Basically, it's stuck somewhere on the table (maybe over some text too, but that can't be recovered anyway). The background won't necessarily be whitish; it can be pink/orange/other colors.
This is the thresholded image, creating a mask of the sticker. We can also get the contour of this if required.
This is the result of cv.inpaint() with radius 3.
What I want is to reconstruct those lines.
My solution
Now my approach is to interpolate the colors across the sticker contour to fill it in. For each pixel inside the contour, I will do a vertical interpolation and a horizontal interpolation (interpolating the boundary colors) and then fill that pixel with the average of both. I am hoping that this will at least preserve my vertical and horizontal lines (it might fail if the sticker is on a corner of the table). This will also keep the background smooth; my background can have some different colors.
Now my problem is how to implement this. What I have are the contours I find using OpenCV's findContours(). I don't know how to get the colors on the contour boundary or how to interpolate the in-between colors.
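For illustration, a rough (slow, unvectorized) sketch of the interpolation idea described above; img is assumed to be the BGR image and mask the binary sticker mask, both hypothetical names:

import numpy as np

def fill_by_interpolation(img, mask):
    # For every masked pixel, interpolate linearly between the nearest clean
    # pixels above/below and left/right, then average the two estimates.
    out = img.astype(np.float32).copy()
    ys, xs = np.where(mask > 0)
    for y, x in zip(ys, xs):
        top, bottom, left, right = y, y, x, x
        while top > 0 and mask[top, x] > 0:
            top -= 1
        while bottom < mask.shape[0] - 1 and mask[bottom, x] > 0:
            bottom += 1
        while left > 0 and mask[y, left] > 0:
            left -= 1
        while right < mask.shape[1] - 1 and mask[y, right] > 0:
            right += 1
        tv = (y - top) / max(bottom - top, 1)
        vert = (1 - tv) * img[top, x] + tv * img[bottom, x]
        th = (x - left) / max(right - left, 1)
        horiz = (1 - th) * img[y, left] + th * img[y, right]
        out[y, x] = (vert + horiz) / 2.0
    return out.astype(np.uint8)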
Any help is appreciated. Thanks in advance.
Due to confidentiality, I cannot share the whole image.
EDIT
I tried the seam-carving method (implementation). Here are the results:
Vertical seaming
Horizontal seaming
It works well once I know which one to use, but I am not sure how well it will do when there are both horizontal and vertical lines.
PS. Don't suggest a solution which needs to find lines and then work. Because there will be many lines in my whole image.
You can make synthetic example images to better explain your issue.
As I understand it, you could use Poisson image editing: take a patch of clean paper from the image and paste it over the sticker using Poisson blending and the mask you extracted.
Check this GitHub repo, for instance, for examples with code.
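A rough OpenCV sketch of that idea; seamlessClone is OpenCV's Poisson-blending entry point, and the file names plus the "clean paper just left of the sticker" choice are placeholders:

import cv2
import numpy as np

img = cv2.imread("table_with_sticker.png")                    # hypothetical file
mask = cv2.imread("sticker_mask.png", cv2.IMREAD_GRAYSCALE)   # the mask you extracted

# Bounding box of the sticker, and a clean patch of paper of the same size
# taken from just left of it (placeholder choice).
x, y, w, h = cv2.boundingRect(mask)
clean_patch = img[y:y + h, x - w:x]

# Put the clean patch where the sticker is, then Poisson-blend it in using the mask.
src = np.zeros_like(img)
src[y:y + h, x:x + w] = clean_patch
center = (x + w // 2, y + h // 2)
result = cv2.seamlessClone(src, img, mask, center, cv2.NORMAL_CLONE)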
My goal is to draw a rectangular border around the face by removing the neck area connected to the whole face area. All positive values here represent skin-color pixels. This is the binary image I have filtered out so far using OpenCV and Python. Code so far: skinid.py
Below is the test image.
Noise removal has also been applied to this binary image.
Up to this point, I followed this paper, Face segmentation using skin-color map in videophone applications. For most of it, I used custom functions rather than built-in OpenCV functions because I wanted to do it from scratch (although some erosion, opening and closing operations were used to tune it up).
I want to know a way to split the neck from the whole face area and remove it like this,
as I am quite new to the whole image processing area.
Perform a distance transform (it's built into OpenCV, or you could write it by hand; it's a pretty fun and easy one to write using the erode function iteratively, adding the result into another matrix each round, which is slow but conceptually easy). On the binary image you presented above (and honestly I think this generalizes across most mug shots), the highest value in the distance transform will be at the center of the face. That pixel is the center of your box, and its value after the distance transform gives you a pretty solid approximation of the face size, since it is the pixel distance from the center of the face to the horizontal edges of the face.

Depending on what you are after, you may just be able to multiply that distance by, say, 1.5 (figure out a standard face width-to-height ratio to choose your best multiplier), set that as your circle radius (or half the side width for a box) and call it a day. Comment if you need anything clarified; I am pretty confident in this answer and would be happy to write up some quick code (in C++ OpenCV) if it would help.
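A rough Python/OpenCV sketch of the idea (the 1.5 multiplier and file name are placeholders):

import cv2

binary = cv2.imread("skin_mask.png", cv2.IMREAD_GRAYSCALE)  # your binary skin mask

# Distance transform: each foreground pixel gets its distance to the nearest background pixel.
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)

# The global maximum sits roughly at the face centre; its value approximates half the face width.
_, max_val, _, max_loc = cv2.minMaxLoc(dist)
cx, cy = max_loc
half_w = int(max_val)
half_h = int(max_val * 1.5)   # rough width-to-height multiplier

out = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
cv2.rectangle(out, (cx - half_w, cy - half_h), (cx + half_w, cy + half_h), (0, 0, 255), 2)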
(Alternative idea) You could tweak your color filter a bit to reject darker areas; at least in the image presented, this will create a nice separation between your face and neck due to the shadowing of the chin (you may have to dial back your dilate/closing op though).
Here's the deal. I want to create a mask that visualizes all the changes between two images (GeoTIFFs which are converted to 2D numpy arrays).
For that I simply subtract the pixel values and normalize the absolute value of the subtraction:
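A minimal sketch of that step, assuming img1 and img2 are float NumPy arrays of the same shape:

import numpy as np

def normalized_abs_diff(img1, img2):
    # Absolute difference, scaled into the 0..1 range.
    diff = np.abs(img1.astype(np.float32) - img2.astype(np.float32))
    return diff / diff.max() if diff.max() > 0 else diff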
Since the result will be covered in noise, I use a threshold and remove all pixels with a value below a certain limit.
def threshold(array, thresholdLimit):
    print("Threshold...")
    # Zero out every pixel at or below the limit; keep the rest unchanged.
    result = (array > thresholdLimit) * array
    return result
This works without a problem. Now comes the issue: when applying the threshold, outliers remain, which is not intended:
What is a good way to remove those outliers?
Sometimes the outliers are small chunks of pixels, like 5-6 pixels together. How could those be removed?
Additionally, the images I use are about 10000x10000 pixels.
I would appreciate all advice!
EDIT:
Both images are Landsat satellite images, covering the exact same area.
The difference here is that one image shows cloud coverage and the other one is free of clouds.
The bright snakey line in the top right is part of a river that has been covered by a cloud. Since water bodies like the ocean or rivers are depicted black in those images, the difference between the bright cloud and the dark river results in the river showing a high degree of change.
I hope the following images make this clear:
Source tiffs :
Subtraction result:
I also tried to smooth the result of the thresholding by using a median filter, but the result was still covered in outliers:
import numpy as np
from scipy.ndimage import median_filter

def filter(array, limit):
    print("Median-Filter...")
    # Square median filter of side `limit`, cast back to float32.
    filteredImg = median_filter(array, size=limit).astype(np.float32)
    return filteredImg
I would suggest the following (a rough code sketch of the whole pipeline follows this list):
Before proceeding, please double-check that the two images are 100% registered. To check that, you could overlay them using e.g. different color channels. Even minimal registration errors can render your task impossible.
Smooth both input images slightly (before the subtraction). For that I would suggest you use standard implementations. Play around with the filter parameters to find an acceptable compromise between smoothness (or reduction of graininess of source image 1) and resolution.
Then try to match the image statistics by applying histogram normalization, using the histogram of image 2 as a target for the histogram of image 1. For this you can also use e.g. the OpenCV implementation.
Subtract the images
If you then still observe obvious noise, look at the histogram of the subtraction result and see if you can relate the noise to intensity outliers. If you can clearly separate signal and noise based on intensity, apply thresholding again (informed by your histogram). Alternatively (or additionally), if the noise is structurally different from your signal (e.g. clustered), you could look into morphological operations to remove it.
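A rough sketch of that pipeline (kernel sizes, the threshold factor and the minimum blob area are placeholders to tune; match_histograms comes from scikit-image rather than OpenCV):

import cv2
import numpy as np
from skimage.exposure import match_histograms

# img1, img2: the two registered single-band images as float32 arrays (assumption).

# 1) slight smoothing of both inputs
img1_s = cv2.GaussianBlur(img1, (5, 5), 0)
img2_s = cv2.GaussianBlur(img2, (5, 5), 0)

# 2) match the statistics of image 1 to image 2
img1_m = match_histograms(img1_s, img2_s)

# 3) subtract and threshold (informed by the histogram of the difference)
diff = np.abs(img1_m - img2_s)
mask = (diff > 0.2 * diff.max()).astype(np.uint8)

# 4) remove small clusters of outlier pixels (e.g. the 5-6 pixel chunks)
num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
cleaned = np.zeros_like(mask)
for i in range(1, num):                     # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] >= 50:    # minimum blob size, placeholder
        cleaned[labels == i] = 1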
I am working on an application where I need a feature like CamScanner, where a document is to be detected in an image. For that I am using Canny edge detection followed by a Hough transform.
The results look promising but the text in the document is creating issues as explained via images below:
Original Image
After canny edge detection
After hough transform
My issue lies in the third image: the text in the original image near the bottom has forced the Hough transform to detect a horizontal line (2nd cluster from the bottom).
I know I can take the largest quadrilateral and that would work fine in most cases, but I would still like to know other ways to reduce the effect of the text on the detected edges.
Any help would be appreciated.
I solved the issue of the text with the help of a median filter of size 15 (square) on an image of 500x700.
The median filter doesn't affect the boundaries of the paper, but it can eliminate the text completely.
Using that I was able to get much more effective boundaries.
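Something along these lines; the filter size and the Canny/Hough parameters are just starting points:

import cv2
import numpy as np

img = cv2.imread("document.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file

# A large median filter wipes out the thin text strokes but keeps the strong paper edges.
smoothed = cv2.medianBlur(img, 15)

edges = cv2.Canny(smoothed, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)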
Another approach you could try is to use thresholding to find the paper boundaries. This would create a binary image. You can then examine the blobs of white pixels and see if any are large enough to be the paper and have the right dimensions. If it fits the criteria, you can find the min/max points of this blob to represent the paper.
There are several ways to do the thresholding, including iterative, Otsu, and adaptive.
Also, for best results you may have to dilate the binary image to close the black lines in the table as shown in your example.
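A rough sketch of that approach (the area and aspect-ratio checks are placeholder heuristics):

import cv2

img = cv2.imread("document.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file

# Otsu threshold to get a binary image, then dilate to close the table lines.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
binary = cv2.dilate(binary, cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature

paper = None
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    big_enough = cv2.contourArea(c) > 0.2 * img.shape[0] * img.shape[1]   # large blob
    plausible_shape = 0.5 < w / float(h) < 2.0                            # roughly paper-like
    if big_enough and plausible_shape:
        paper = (x, y, w, h)   # min/max extents of the blob
        break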
I'm a newbie in computer vision. My goal is to distinguish individual cells on a set of pictures like this: Example
Basically, I blur the whole image, find the regional maxima on it and use them as seeds for a watershed algorithm on the distance transform of the thresholded, blurred image. In fact, I'm following the tutorial which you can find here:
github/luispedro/python-image-tutorial
(sorry, can't post more than 2 links).
My problem is that some cells in my set have a very distinguishable dark nucleus (which you can see in the example), and my algorithm produces results like this, which are clearly wrong.
Of course it's possible to fix this by increasing the strength of the Gaussian blur, but it will merge some other cells together, which is even worse.
What can be done to solve this problem? What are other possibilities if watershed just isn't suitable for this case (keeping in mind that my set is pretty small and learning seems impossible)?
The watershed tends to over-segment if you don't use a watershed with markers.
Usually, we start with DNA/DAPI segmentation that is easy, and it provides the number of cells and the inner markers for the watershed.
If you blur the images, you smooth away all the patterns. You should use an alternate sequential filter (opening/closing) in order to simplify each zone, and then try an ultimate erosion in order to find the inner seeds for your watershed.
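A rough marker-based watershed sketch in Python/OpenCV; the distance-transform peaks stand in for the ultimate erosion (or DAPI) markers, the threshold values are placeholders, and the binarization polarity may need flipping depending on whether cells are brighter or darker than the background:

import cv2
import numpy as np

img = cv2.imread("cells.png")                              # hypothetical file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Inner markers: keep only pixels far from the background, a crude stand-in
# for the ultimate erosion / nucleus markers.
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, seeds = cv2.threshold(dist, 0.6 * dist.max(), 255, cv2.THRESH_BINARY)
seeds = seeds.astype(np.uint8)

# Label each seed, reserve 0 for the "unknown" region, and flood from the seeds.
num, markers = cv2.connectedComponents(seeds)
markers = markers + 1
unknown = cv2.subtract(binary, seeds)
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)                      # cell boundaries end up labelled -1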