I want to segment defective areas in images using MATLAB/Python-OpenCV.
Original image:
With Defects:
http://imgur.com/fyDkpcZ
The defect can be seen in the 3rd rectangle.
What I tried so far:
Extract the borders of the rectangles with a LoG filter / gray-level threshold (this doesn't help much because of shadows)
Trace their boundaries
Get centroid
Find the distance between the boundary points and the centroid as a function of angle (incrementing the angle by 0.5 degrees for better resolution)
Find a good template rectangle and save it
Find the difference between template rectangle and candidate rectangle
Based on that result I can find the faulty regions, but the false-alarm rate increases when I try to raise the sensitivity of the algorithm.
I need the boundaries to be much more precise and less noisy. Because of the shadows, the detected edges of a rectangle can vary widely.
How can I make the rectangle edges more robust to shadows?
What can be done instead of what I tried so far?
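For reference, here is a simplified sketch of the boundary-to-centroid distance signature described above; the file name and the binarization are placeholders for my real pipeline:

    import cv2
    import numpy as np

    # Simplified sketch of the distance-signature step; "rect_mask.png"
    # is a placeholder for a binary mask of one rectangle.
    mask = cv2.imread("rect_mask.png", cv2.IMREAD_GRAYSCALE)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cnt = max(contours, key=cv2.contourArea)[:, 0, :].astype(np.float32)

    cx, cy = cnt.mean(axis=0)                       # centroid of the boundary
    angles = np.degrees(np.arctan2(cnt[:, 1] - cy, cnt[:, 0] - cx)) % 360
    radii = np.hypot(cnt[:, 0] - cx, cnt[:, 1] - cy)

    # Resample onto a 0.5 degree grid so template and candidate line up
    grid = np.arange(0, 360, 0.5)
    signature = np.interp(grid, angles, radii, period=360)

    # Defects show up as large |signature - template_signature| values.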
Thanks for your help!
A Laplacian of Gaussian filter is a zero-mean operation. If you feed it an 8-bit image with intensities centered on 127, it will return image data centered on zero. You must add a filter bias, usually half the container's maximum value (so in this 8-bit example, the bias would be 127). You can also adjust the filter strength by multiplying the result pixels by a constant; this makes the LoG filter's effect more apparent.
The LoG filter will produce one white and one black edge for every strong transition. In the horizontal or vertical direction, finding the actual edge is easy: you need only take the average of the positions of the two. This gives you sub-pixel resolution if integrated over a small distance.
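A minimal sketch of the biased LoG, assuming an 8-bit grayscale input; the kernel sizes and the strength factor are tuning assumptions:

    import cv2
    import numpy as np

    img = cv2.imread("rectangles.png", cv2.IMREAD_GRAYSCALE)

    blurred = cv2.GaussianBlur(img, (5, 5), 0)          # Gaussian part of the LoG
    log = cv2.Laplacian(blurred, cv2.CV_32F, ksize=3)   # Laplacian part

    strength = 4.0   # scale responses to make the edges more apparent
    bias = 127.0     # half the 8-bit range, so zero response maps to mid-gray
    out = np.clip(log * strength + bias, 0, 255).astype(np.uint8)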
If the illumination of these images is very similar, you can use registration and subtraction:
Normalize both the image suspected to contain defects and a reference image to the same intensity range.
Register (align) them; you could do this by detecting three points on a rectangle and then moving and rotating one of the images.
Subtract the suspect image from the reference image. This gives you an error map. You can apply a small blur and then a tight LoG filter to it to remove noise and make detection more accurate.
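A hedged sketch of these three steps; the corner correspondences (pts_sus, pts_ref) are placeholders that would come from your own rectangle/corner detection:

    import cv2
    import numpy as np

    ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
    sus = cv2.imread("suspect.png", cv2.IMREAD_GRAYSCALE)

    # 1. Normalize both images to the same intensity range
    ref = cv2.normalize(ref, None, 0, 255, cv2.NORM_MINMAX)
    sus = cv2.normalize(sus, None, 0, 255, cv2.NORM_MINMAX)

    # 2. Align the suspect image to the reference via three matched points
    pts_sus = np.float32([[14, 8], [204, 14], [15, 152]])   # placeholder corners
    pts_ref = np.float32([[10, 10], [200, 12], [12, 150]])
    M = cv2.getAffineTransform(pts_sus, pts_ref)
    aligned = cv2.warpAffine(sus, M, (ref.shape[1], ref.shape[0]))

    # 3. Subtract to get an error map, then blur lightly to suppress noise
    error = cv2.absdiff(ref, aligned)
    error = cv2.GaussianBlur(error, (3, 3), 0)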
Related
I would like to calculate the angle of the tangent at a given white-to-black transition point on an image that consists entirely of black and white pixels and displays simple shapes such as squares, circles or triangles.
Zooming in on an image like that would look like this:
If you were to pick any of the black pixels next to a white one, my solution would be to follow the edge for a few pixels, then fit a formula to the curvature of those pixels and calculate the exact value at the defined point. Is there a simpler way of doing this? The resolution of the images is around 800x600 pixels, so a fairly accurate estimate of the angle at the provided point should be possible.
In my current approach I follow the edge line of the shape for about ten pixels, but I'm not sure where to go from there. Is there a library that already performs this kind of calculation for you? How many pixels would you need in order to make an accurate judgement of the angle at that point?
Such a measurement is highly inaccurate on binary images, if not unusable.
If you measure on two neighboring pixels, the angle will be one of 0° or ±45°, so the angular resolution is very poor!
You can compute over several pixels to improve that resolution (five pixels corresponds to roughly 11°), but then you can no longer be sure that the direction is constant, because the shape might be rounded.
If, in your case, the repertoire of shapes is known to be simple, you would do better to fit the whole shapes first and then query the tangents.
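A sketch of that idea for polygonal shapes, assuming OpenCV; circles would instead be fitted with, e.g., cv2.fitEllipse, and the 0.01 epsilon and file name are assumptions:

    import cv2
    import numpy as np

    # Fit the whole shape first, then read the tangent off the fit.
    mask = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cnt = max(contours, key=cv2.contourArea)

    poly = cv2.approxPolyDP(cnt, 0.01 * cv2.arcLength(cnt, True), True)

    def tangent_angle_at(pt, poly):
        """Angle (degrees) of the fitted polygon edge nearest to pt."""
        best, angle = np.inf, 0.0
        n = len(poly)
        for i in range(n):
            a, b = poly[i][0].astype(float), poly[(i + 1) % n][0].astype(float)
            d = np.linalg.norm((a + b) / 2.0 - pt)   # distance to edge midpoint
            if d < best:
                best, angle = d, np.degrees(np.arctan2(b[1] - a[1], b[0] - a[0]))
        return angle

    print(tangent_angle_at(np.array([120.0, 45.0]), poly))  # placeholder point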
The left image is my result image after some processing; the right image is after running peak detection on the result image.
How do I filter the image to keep only the middle peak, which approximates a 2D Gaussian or circle, as opposed to the line on the right or the spread-out peaks on the left?
I need the filter to generalize across different peak sizes and imperfect Gaussians/circles to some degree, and to be very fast.
I tried eroding with different shapes (e.g. circle, vertical/horizontal line), but that only created circles instead of removing the non-circles.
I thought about running a 2D Gaussian-like filter with small positive numbers in the middle and large negative numbers on the edges, but that would be sensitive to the size of the filter.
Maybe I could use the detected peaks as starting points and "descend" each peak, checking an increasing radius around it for an almost uniform decrease until it reaches a certain threshold? But I fear manually coding that would be very slow.
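A rough sketch of the filter idea above as a difference of Gaussians (positive in the middle, negative around it), which responds strongly to compact, roughly Gaussian peaks; the sigmas and file name are just guesses:

    import cv2
    import numpy as np

    img = cv2.imread("result.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

    small = cv2.GaussianBlur(img, (0, 0), sigmaX=3)
    large = cv2.GaussianBlur(img, (0, 0), sigmaX=9)
    dog = small - large                  # compact peaks score high, lines less so

    # Varying the sigma pair (scale space) reduces sensitivity to peak size.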
I have a number of lobster images, as shown in the photo. My goal is to identify the edge between the carapace and the body, and its coordinates. However, methods of finding contours based on HSV thresholds or on Canny edge detection output didn't work well when I applied them to these images.
My idea is to find a way to 'amplify' the color difference between the two areas of the image, to make it easier to find a mask/contours based on a color threshold.
For example, if I could make the yellow of the edge in the image stronger, then I could easily find the mask of this area using a color threshold. Can we do that?
Thanks.
This color is closer to the rest of the picture than one might think (in the HLS space).
IMO, the best way to enhance it is by means of the Euclidean distance to that particular orange in RGB space.
For instance, the pixels within a distance of 32√3:
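A sketch of this distance-based enhancement; the reference orange (B, G, R below) is an assumption and should be sampled from the carapace edge in your own image:

    import cv2
    import numpy as np

    img = cv2.imread("lobster.png")                     # OpenCV loads as BGR
    target = np.array([40, 120, 230], np.float32)       # placeholder orange

    # Euclidean distance to the reference color in RGB space, per pixel
    dist = np.linalg.norm(img.astype(np.float32) - target, axis=2)
    mask = np.where(dist <= 32 * np.sqrt(3), 255, 0).astype(np.uint8)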
I have been working on detecting the edge of the water using OpenCV/Python, and the results I am getting are fairly inaccurate and not at all robust.
This is what I have achieved so far:
Original image, output image
Canny edge detection
What I am currently doing is setting some variables (the level of Gaussian blur, the sigma used for the Canny edge detection, and the maximum deviation by which the measured level can change between points), performing an 'automatic' Canny edge detection (where the median pixel intensity is measured and used to form the lower and upper thresholds), then moving from the bottom-left corner upwards to find the first 'white' pixel. This is done at x-intervals of five pixels across the entire width of the frame.
The average y-value of the points is then calculated. Each point is then tested to see if it deviates too far from the average, with the deviation limit set earlier. The remaining points are drawn on the image as the blue line, and the average value of the drawn pixels is recorded for each frame.
After 30 frames, the average of the averages is calculated and drawn as the red line, which is then assumed to be the 'real' water height.
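For reference, this is roughly my 'automatic' Canny step plus the bottom-up scan; the 0.33 sigma is just the usual convention and the file name is a placeholder:

    import cv2
    import numpy as np

    def auto_canny(gray, sigma=0.33):
        # Median-based lower/upper thresholds for Canny
        v = np.median(gray)
        return cv2.Canny(gray, int(max(0, (1 - sigma) * v)),
                         int(min(255, (1 + sigma) * v)))

    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder frame
    edges = auto_canny(cv2.GaussianBlur(frame, (5, 5), 0))

    points = []
    for x in range(0, edges.shape[1], 5):      # x-intervals of five pixels
        col = np.nonzero(edges[:, x])[0]
        if len(col):
            points.append((x, col[-1]))        # first white pixel from the bottom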
Does anyone have any ideas for a better way to do this? What would make the edge of the water stand out more? This method runs on most footage I have recorded, but the results are poor.
Thanks in advance.
I have worked on a similar problem, and I hope these suggestions can help you in some way:
Try to restrict your search area: can you make assumptions about where the water level should be? Also, once the water level has been correctly detected, is it safe to assume that in the next frames it will decrease/increase only gradually? Crop your image so that you only consider the area where the water level can safely be assumed to be present.
Change color space: you can try working in other color spaces such as HSV, in order to separate brightness from chromaticity.
Hough transform line detection: try using this algorithm to search for specific horizontal lines in the image, or for other shapes (see the sketch after this list).
Image undistortion: if necessary, try to correct the image to rectify curved lines, or cancel the perspective with an Inverse Perspective Mapping (IPM).
You can also consider changing the edge detection algorithm.
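As a hedged illustration of the Hough-transform suggestion, a sketch that keeps only near-horizontal line candidates; all thresholds, the angle tolerance, and the cropped file name are assumptions to tune:

    import cv2
    import numpy as np

    band = cv2.imread("water_band.png", cv2.IMREAD_GRAYSCALE)  # cropped search area
    edges = cv2.Canny(band, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=band.shape[1] // 3, maxLineGap=10)

    horizontal = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
            if abs(angle) < 5:             # keep nearly horizontal lines only
                horizontal.append((x1, y1, x2, y2))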
I have an image (or several hundred of them) that needs to be analyzed. The goal is to find all black spots close to each other.
For example, all black spots within a horizontal distance of 160 pixels and a vertical distance of 40 pixels of each other.
For now I just look at each pixel, and if there is a black pixel I call a recursive method to find its neighbours (I can post the code too if you want).
It works, but it's very slow. At the moment the script runs for about 3-4 minutes, depending on image size.
Is there some easy/fast way to accomplish this? A scikit-image method would be ideal. I'm using Python.
Edit: I tried skimage.measure.find_contours; now I have an array of arrays containing the contours of the black spots. Now I only need to find the contours in the neighbourhood of these contours.
When you get the coordinates of the different black spots, rather than computing the distances between all pairs of black pixels, you can use a cKDTree (in scipy.spatial, http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.html#scipy.spatial.cKDTree). The exact cKDTree method to use depends on your criterion (you can, for example, use cKDTree.query_ball_tree to know whether there exists a pair of points belonging to two different labels within a maximal distance that you give).
KDTrees are a great way to reduce the complexity of problems based on neighboring points. If you want to use KDTrees here, you'll need to rescale the coordinates so that one of the classical norms can be used to compute the distance between points.
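A minimal sketch combining the rescaling with query_ball_tree; the coordinate arrays are placeholders, and the 160 px horizontal / 40 px vertical criterion is the one from the question:

    import numpy as np
    from scipy.spatial import cKDTree

    # (row, col) pixel coordinates of two labelled spots (placeholders)
    coords_a = np.array([[10, 30], [12, 33]], float)
    coords_b = np.array([[45, 180], [48, 185]], float)

    # Rescale so a single Euclidean radius expresses both distance limits
    scale = np.array([1 / 40.0, 1 / 160.0])        # vertical, horizontal
    tree_a = cKDTree(coords_a * scale)
    tree_b = cKDTree(coords_b * scale)

    # Any pair within radius 1.0 in the rescaled space satisfies the criterion
    pairs = tree_a.query_ball_tree(tree_b, r=1.0)
    close = any(len(p) for p in pairs)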
Disclaimer: I'm not proficient with the scikit image library at all, but I've tackled similar problems using MATLAB so I've searched for the equivalent methods in scikit, and I hope my findings below help you.
First you can use skimage.measure.label, which returns label_image, i.e. an image where all pixels of each connected region are labelled with the same number. I believe you should call this function with background=255, because from your description it seems that the background in your images is the white region (hence the value 255).
This is essentially an image where the background pixels are assigned the value 0 and the pixels that make up each (connected) spot are assigned an integer label: all the pixels of one spot are labelled 1, the pixels of another spot are labelled 2, and so on. Below I'll refer to "spots" and "labelled regions" interchangeably.
You can then call skimage.measure.regionprops, which takes as input the label_image obtained in the previous step. This function returns a list of RegionProperties (one for each labelled region), each of which is a summary of the properties of that region.
Depending on your definition of
"The goal is to find all black spots close to each other."
there are different fields of the RegionProperties that you can use to help solve your problem:
bbox gives you the set of coordinates of the bounding box that contains that labelled region,
centroid gives you the coordinates of the centroid pixel of that labelled region,
local_centroid gives you the centroid relative to the bounding box bbox.
(Note there are also area and bbox_area properties, which you can use to throw away very small spots you might not be interested in, reducing computation time when it comes to comparing the proximity of each pair of spots.)
If you're looking for a coarse comparison, then comparing the centroid or local_centroid between each pair of labelled regions might be enough.
Otherwise you can use the bbox coordinates to measure the exact distance between the outer bounds of any two regions.
If you want to base the decision on the precise distance between the closest pixel(s) of each pair of regions, then you'll likely have to use the coords property.
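A minimal sketch of this label/regionprops pipeline; the file name and the area cutoff are assumptions, and it presumes the image is effectively binary (0 = spot, 255 = background):

    from skimage import io, measure

    img = io.imread("spots.png")               # 8-bit image, white background
    label_image = measure.label(img, background=255)
    regions = measure.regionprops(label_image)

    for r in regions:
        if r.area < 5:                         # drop tiny spots to save time
            continue
        print(r.label, r.bbox, r.centroid)     # coords is available for exact
                                               # pixel-to-pixel distances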
If your input image is binary, you could separate your regions of interest as follows:
"grow" all the regions by the expected distance (actually half of it, as you grow from "both sides of the gap") with binary_dilation, where the structure is a kernel (e.g. rectangular: http://scikit-image.org/docs/dev/api/skimage.morphology.html#skimage.morphology.rectangle) of, let's say, 20x80pixels;
use the resulting mask as an input to skimage.measure.label to assign different values for different regions' pixels;
multiply your input image by the mask created above to zero out the dilated pixels.
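A minimal sketch of these three steps; the file name and the binarization threshold are assumptions:

    from skimage import io, measure
    from skimage.morphology import binary_dilation, rectangle

    img = io.imread("spots.png", as_gray=True)
    binary = img < 0.5                          # True where the black spots are

    # Step 1: grow by half the allowed gap (40/2 x 160/2 -> 20x80 kernel)
    grown = binary_dilation(binary, rectangle(20, 80))

    # Step 2: label the merged regions
    labels = measure.label(grown)

    # Step 3: keep labels only on the original (undilated) spot pixels
    result = labels * binary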
Here are the results of the proposed method on your image with kernel = rectangle(5,5):
Dilated binary image (output of step 1):
Labeled version of the above (output of step 2):
Multiplication results (output of step 3):