I want to extract the outline of an object within a furnace, here is the image:
I have tried various techniques to process the image, but without success. The technique that gives the best image of the object is CLAHE, as seen here:
Simple normalization:
I have tried Canny, Sobel, dilation, erosion and other morphological operations, but I cannot seem to get them to work harmoniously to let me extract the contour I want (the contour surrounding the object in the furnace).
Any suggestions would be appreciated.
Histogram equalization followed by strong Gaussian filtering in the horizontal or vertical directions will enhance the near-horizontal and near-vertical edges (separately). That's about the best you can do. (Maybe try Hough on these.)
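A minimal OpenCV sketch of that idea (the filename, kernel sizes and Sobel parameters are my own assumptions, not tuned on your image):

```python
import cv2

img = cv2.imread("furnace.png", cv2.IMREAD_GRAYSCALE)  # hypothetical filename

# Histogram equalization (swap in cv2.createCLAHE().apply(img) if CLAHE
# gives you better contrast, as it seems to here).
eq = cv2.equalizeHist(img)

# Strong Gaussian smoothing along ONE axis: averaging along an edge keeps
# it sharp while washing out noise, so a tall kernel favours near-vertical
# edges and a wide kernel favours near-horizontal ones.
blur_v = cv2.GaussianBlur(eq, (1, 31), 0)   # tall kernel
blur_h = cv2.GaussianBlur(eq, (31, 1), 0)   # wide kernel

# Gradients perpendicular to the blur direction pick up the surviving edges;
# these maps can then be thresholded and fed to cv2.HoughLinesP.
edges_v = cv2.convertScaleAbs(cv2.Sobel(blur_v, cv2.CV_64F, 1, 0, ksize=3))
edges_h = cv2.convertScaleAbs(cv2.Sobel(blur_h, cv2.CV_64F, 0, 1, ksize=3))
```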
Also notice that the specialized edge fitters as found in typical gauging libraries can help you if the geometry is roughly known.
I'm making some photo-editing tools in python using PIL (Python Imaging Library), and I was trying to make a program which converts a photo to its 'painted' version.
I've managed to make a program which converts a photo into its distinct colours, but the problem is that the algorithm I'm using operates on every pixel independently, meaning that the resulting image has very jagged boundaries between colours.
Ideally, I'd like to smoothen out these edges, but I don't know how!
I've checked out this site for some help, but the method there produces quite different results to what I need.
My Starting Image:
My Image with Distinct Colours:
I would like to smoothen the edges in the image above.
Results of using the method which doesn't quite work:
As you can see, using the technique doesn't smoothen the edges into natural-looking curves; instead it creates jagged edges.
I know I should provide sample output, but surprisingly, I haven't actually got it, so I'll describe it as best as I can. Simply put, I want to smoothen the edges between the different colours.
I've seen something called a Gaussian blur, but I'm not quite sure as to how to apply it here as the answers I've seen always mention some sort of threshold, and are usually to do with binary images, so I don't think it can apply here.
Edge enhancement does the opposite of edge smoothing, so this is certainly not the tool you should use.
Unfortunately, there is little that you can do because edge smoothing will indeed smoothen the jaggies, but it will also destroy the true edges, resulting in a blurred image. Edge-preserving smoothing is also a dead-end.
You should have a look at the methods to extract the "cartoon part" of an image. There is a lot of literature on this topic, though often pretty sophisticated.
You can enhance the quality of your "Image with Distinct Colours" by applying a median filter with a radius of 2:
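A minimal PIL sketch of that step (the filename is hypothetical; PIL's MedianFilter takes a window size rather than a radius, so radius 2 corresponds to size 5):

```python
from PIL import Image, ImageFilter

# Hypothetical filename: your posterised "distinct colours" image.
img = Image.open("distinct_colours.png")

# A radius-2 median corresponds to a 5x5 window (size = 2 * radius + 1).
# The median rounds off jagged colour boundaries without blending the
# colours the way a Gaussian blur would.
smoothed = img.filter(ImageFilter.MedianFilter(size=5))
smoothed.save("distinct_colours_median.png")
```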
If you want to get "comic-like" dark edges, you can calculate the edges of the original image using a Sobel filter, convert the edge map to grayscale, multiply the resulting edge map by 2, invert the map, and then composite each non-white pixel of the edge map onto the original image. This will result in:
This is of course only a starting point as the result leaves much to be desired, but it should give you a good idea about the basic concept.
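Here is a hedged sketch of that edge-overlay recipe in PIL/NumPy. PIL has no built-in Sobel filter, so FIND_EDGES stands in for it here, a multiply blend approximates the "composite the dark pixels" step, and the filenames are placeholders:

```python
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("starting_image.png").convert("RGB")  # hypothetical filename

# PIL has no Sobel filter built in; FIND_EDGES (a 3x3 edge kernel) stands in.
edges = img.filter(ImageFilter.FIND_EDGES).convert("L")

# Strengthen the edge map (x2), then invert it: edges become dark on white.
e = np.asarray(edges, dtype=np.int32)
e = 255 - np.clip(e * 2, 0, 255)

# Multiply blend: white areas of the inverted map leave the photo alone,
# non-white (edge) pixels darken it, giving the comic-like outlines.
out = np.asarray(img, dtype=np.float32) * (e[..., None] / 255.0)
Image.fromarray(out.astype(np.uint8)).save("comic_edges.png")
```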
I am working on an application where I need a feature like CamScanner, where a document is to be detected in an image. For that I am using Canny edge detection followed by a Hough transform.
The results look promising but the text in the document is creating issues as explained via images below:
Original Image
After canny edge detection
After hough transform
My issue lies in the third image: the text near the bottom of the original image has forced the Hough transform to detect a horizontal line (2nd cluster from the bottom).
I know I can take the largest quadrilateral, and that would work fine in most cases, but I would still like to know of other ways to ignore the effect of the text on the edges during this processing.
Any help would be appreciated.
I solved the issue of the text with the help of a median filter of size 15 (square) on an image of 500x700.
The median filter doesn't affect the boundaries of the paper, but it helps eliminate the text completely.
Using that, I was able to get much more effective boundaries.
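For reference, a minimal OpenCV sketch of that pipeline (the filename and the Canny/Hough parameters are placeholder assumptions):

```python
import cv2
import numpy as np

img = cv2.imread("document.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
img = cv2.resize(img, (500, 700))

# A 15x15 median wipes out thin strokes such as text, while long straight
# structures like the paper boundary survive almost untouched.
cleaned = cv2.medianBlur(img, 15)

edges = cv2.Canny(cleaned, 50, 150)
lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
```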
Another approach you could try is to use thresholding to find the paper boundaries. This would create a binary image. You can then examine the blobs of white pixels and see if any are large enough to be the paper and have the right dimensions. If a blob fits the criteria, you can find the min/max points of that blob to represent the paper.
There are several ways to do the thresholding, including iterative, Otsu, and adaptive.
Also, for best results you may have to dilate the binary image to close the black lines in the table as shown in your example.
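A rough sketch of that approach with OpenCV, using Otsu here (the filename, the 20% area criterion and the kernel size are assumptions, and the findContours return shape assumes OpenCV 4):

```python
import cv2
import numpy as np

img = cv2.imread("document.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical filename

# Otsu picks the threshold automatically; the bright paper becomes white.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Dilate to close the dark table lines that would otherwise split the blob.
binary = cv2.dilate(binary, np.ones((5, 5), np.uint8))

# Keep only blobs large enough to be the page, then take their extents.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
min_area = 0.2 * img.shape[0] * img.shape[1]  # assumed: paper covers >= 20% of the frame
for c in contours:
    if cv2.contourArea(c) >= min_area:
        x, y, w, h = cv2.boundingRect(c)      # min/max points of the blob
        print("paper candidate:", x, y, w, h)
```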
I'm a newbie in computer vision. My goal is to distinguish individual cells in a set of pictures like this: Example
Basically, I blur the whole image, find the regional maxima on it and use them as seeds in a watershed algorithm on the distance transform of the thresholded blurred image. In fact I'm following the tutorial which you can find here:
github/luispedro/python-image-tutorial
(sorry, can't post more than 2 links).
My problem is that some cells in my set have a very distinguishable dark nucleus (which you can see in the example) and my algorithm produces results like this, which are clearly wrong.
Of course it's possible to fix it by increasing the strength of the Gaussian blur, but that will merge some other cells together, which is even worse.
What can be done to solve this problem? What are the other possibilities if watershed just isn't suitable for this case (keeping in mind that my set is pretty small and learning seems impossible)?
The watershed tends to over-segment if you don't use a watershed with markers.
Usually, we start with DNA/DAPI segmentation that is easy, and it provides the number of cells and the inner markers for the watershed.
If you blur the images, you smooth all the patterns. You should use an alternate sequential filter (opening / closing) in order to simplify each zone, and then try an ultimate erosion in order to find the inner seeds for your watershed.
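A rough scikit-image sketch of that pipeline (the filename, filter sizes and h-maxima depth are assumptions, and h-maxima of the distance transform stand in for the ultimate erosion here):

```python
from scipy import ndimage as ndi
from skimage import io, filters, morphology, segmentation

img = io.imread("cells.png", as_gray=True)  # hypothetical filename

# Alternate sequential filter: openings and closings of increasing size
# flatten each cell's interior (dark nuclei included) without blurring
# the boundaries between touching cells.
simplified = img
for r in (1, 2, 3):
    selem = morphology.disk(r)
    simplified = morphology.closing(morphology.opening(simplified, selem), selem)

binary = simplified > filters.threshold_otsu(simplified)

# Inner seeds: h-maxima of the distance transform give roughly one marker
# per cell, in place of the ultimate erosion.
dist = ndi.distance_transform_edt(binary)
seeds, _ = ndi.label(morphology.h_maxima(dist, 2))

# Marker-based watershed avoids the over-segmentation of the plain version.
labels = segmentation.watershed(-dist, seeds, mask=binary)
print(labels.max(), "cells found")
```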
I'm trying to extract the boundary between two regions in an image programmatically. I've got the hard bits figured out, so that I have a binary image that contains the boundary and plenty of noise.
Cropping the areas outside isn't an issue.
The boundary in the image is afflicted both by noise (bottom-left for example) and some areas of discontinuity. That means I can't simply select the shape based on one known pixel.
The problem left to me is pretty simple - I only really need to fill the gaps in the boundary and smooth it out, so that I am left with something smooth and continuous that I can extract afterwards. That doesn't sound like a particularly hard problem for images like this, but I'm completely lost. What algorithms or strategies could I possibly use in order to turn this image into something useful?
The output I'm looking for is something that can be cropped to give:
A common practice is to use a Gaussian blur. This will filter out the noise in the image, depending on the intensity of the blur. At the bottom of the article there is a gif with a cat image showing what you want.
After that, there are contour-finding algorithms which could help you extract the boundaries as pixel chains.
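A hedged OpenCV sketch combining both steps: a morphological closing first bridges the gaps in the boundary (a blur alone won't do that), a Gaussian blur plus re-threshold then smooths the outline, and findContours returns it as a pixel chain. Filenames and kernel sizes are assumptions:

```python
import cv2

# Hypothetical filename: the noisy binary image containing the boundary.
binary = cv2.imread("boundary.png", cv2.IMREAD_GRAYSCALE)

# Closing with a generous kernel bridges the discontinuities in the
# boundary; a smaller opening then removes isolated noise specks.
close_k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
open_k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
cleaned = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, close_k)
cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_OPEN, open_k)

# Gaussian blur followed by re-thresholding smooths the jagged outline
# while keeping the image binary.
smooth = cv2.GaussianBlur(cleaned, (0, 0), sigmaX=3)
_, smooth = cv2.threshold(smooth, 127, 255, cv2.THRESH_BINARY)

# The largest external contour is the boundary, as a pixel chain.
contours, _ = cv2.findContours(smooth, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
boundary = max(contours, key=cv2.contourArea)
```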
I would like to eliminate the keypoints detected around the frame of an image (an artwork in a museum gallery). In other words, I want to separate out the actual artwork from its frame. Each artwork has a different type of frame.
(Image: keypoints detected using SIFT)
I have already written a Python wrapper for David Lowe's SIFT implementation to detect keypoints as well as to compute descriptors.
However, my question is: what is the best approach to solve this problem? Any of the following, or something else?
Using Hough transformation (using Python Image Library)
Template matching
Your help is highly appreciated
Thanks again
I'd go with Hough transform and try to detect lines which form a quadrilateral.
You might get into trouble if the painting actually does contain a square or something. I'd look for some assumptions like: acceptable aspect ratio, acceptable size. Also find the outermost quadrilateral, and work your way towards the center of the image picking up inner quadrilaterals, if applicable. This would give you the frame and its thickness, so you can disregard any keypoints here or beyond the frame.
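A crude OpenCV sketch of that idea (the filename, Hough parameters and the 5-pixel orientation tolerance are assumptions; it only estimates an axis-aligned box from the outermost detected lines rather than a full quadrilateral):

```python
import cv2
import numpy as np

img = cv2.imread("artwork.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
edges = cv2.Canny(img, 50, 150)

# Probabilistic Hough gives line segments; long near-horizontal and
# near-vertical ones are candidates for the four frame sides.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                        minLineLength=img.shape[1] // 3, maxLineGap=20)

xs, ys = [], []
for x1, y1, x2, y2 in lines[:, 0]:
    if abs(y1 - y2) < 5:    # near-horizontal segment
        ys.append((y1 + y2) // 2)
    elif abs(x1 - x2) < 5:  # near-vertical segment
        xs.append((x1 + x2) // 2)

# Outermost detected lines on each side give a crude frame estimate.
left, right, top, bottom = min(xs), max(xs), min(ys), max(ys)

# Drop keypoints on or outside that frame (OpenCV's SIFT used here in
# place of the Lowe-implementation wrapper from the question).
keypoints = cv2.SIFT_create().detect(img, None)
inside = [k for k in keypoints
          if left < k.pt[0] < right and top < k.pt[1] < bottom]
```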
For each artwork, do you have a clean, properly framed reference image?
If so, another solution to remove the background clutter is:
to use the ratio test algorithm to compute keypoints correspondences between your frame and the reference image,
to perform a geometric consistency check to filter out false matches.
In addition, the geometric check will provide you with the homography matrix, which you can use to warp your input frame or, alternatively, to project the corners of the reference image.
That way you will natively obtain the artwork area within your frame.
Here's an example of how you can do that with opensift's match tool; below is an illustration.
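If you don't have opensift at hand, roughly the same pipeline can be sketched with OpenCV's SIFT (the 0.75 ratio and the RANSAC threshold are conventional values, not something taken from the match tool; needs an OpenCV build that ships SIFT):

```python
import cv2
import numpy as np

ref = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)      # clean reference image
query = cv2.imread("gallery_shot.jpg", cv2.IMREAD_GRAYSCALE) # hypothetical filenames

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref, None)
kp2, des2 = sift.detectAndCompute(query, None)

# Lowe's ratio test: keep a match only if it is clearly better than the
# second-best candidate for the same keypoint.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# Geometric consistency check: a RANSAC homography rejects false matches.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Project the reference corners into the gallery shot: the resulting
# quadrilateral is the artwork area, frame and background excluded.
h, w = ref.shape
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
artwork_quad = cv2.perspectiveTransform(corners, H)
```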