Remove outliers in an image after applying threshold - python

Here's the deal. I want to create a mask that visualizes all the changes between two images (GeoTiffs which are converted to 2D numpy arrays).
For that I simply subtract the pixel values and normalize the absolute value of the subtraction.
Since the result will be covered in noise, I use a threshold and remove all pixels with a value below a certain limit.
def threshold(array, thresholdLimit):
    print("Threshold...")
    # Keep only pixels above the limit; everything else becomes 0
    result = (array > thresholdLimit) * array
    return result
This works without a problem. Now comes the issue: when applying the threshold, outliers remain, which is not intended:
What is a good way to remove those outliers?
Sometimes the outliers are small clusters of 5-6 pixels together; how could those be removed?
Additionally, the images I use are about 10000x10000 pixels.
I would appreciate all advice!
EDIT:
Both images are Landsat satellite images, covering the exact same area.
The difference here is that one image shows cloud coverage and the other one is free of clouds.
The bright snakey line in the top right is part of a river that has been covered by a cloud. Since water bodies like the ocean or rivers appear black in these images, the difference between the bright cloud and the dark river results in the river showing a high degree of change.
I hope the following images make this clear:
Source tiffs:
Subtraction result:
I also tried to smooth the result of the thresholding by using a median filter, but the result was still covered in outliers:
import numpy as np
from scipy.ndimage import median_filter

def filter(array, limit):
    print("Median-Filter...")
    # Apply a limit x limit median filter and cast to float32
    filteredImg = median_filter(array, size=limit).astype(np.float32)
    return filteredImg

I would suggest the following:
1. Before proceeding, please double-check that the two images are 100% registered. To do that, overlay them using e.g. different color channels. Even minimal registration errors can render your task impossible.
2. Smooth both input images slightly (before the subtraction). For that I would suggest you use standard implementations. Play around with the filter parameters to find an acceptable compromise between smoothness (i.e. reduction of graininess in source image 1) and resolution.
3. Then try to match the image statistics by applying histogram normalization, using the histogram of image 2 as the target for the histogram of image 1. For this you can also use e.g. the OpenCV implementation.
4. Subtract the images.
5. If you then still observe obvious noise, look at the histogram of the subtraction result and see if you can relate the noise to intensity outliers. If you can clearly separate signal and noise based on intensity, apply thresholding again (informed by your histogram). Alternatively (or additionally), if the noise is structurally different from your signal (e.g. clustered), look into morphological operations to remove it. A sketch of this pipeline is given below.
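Here is a minimal sketch of that pipeline in Python, assuming scipy and scikit-image are available; the function name and all parameter values (sigma, limit, min_size) are illustrative and need tuning on your data:
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.exposure import match_histograms
from skimage.morphology import remove_small_objects

def change_mask(img1, img2, sigma=2.0, limit=0.2, min_size=10):
    # 1. Slightly smooth both inputs to suppress grain
    a = gaussian_filter(img1.astype(np.float32), sigma)
    b = gaussian_filter(img2.astype(np.float32), sigma)
    # 2. Match the histogram of image 1 to that of image 2
    a = match_histograms(a, b)
    # 3. Subtract and normalize the absolute difference to [0, 1]
    diff = np.abs(a - b)
    diff /= diff.max()
    # 4. Threshold, then drop small connected clusters (e.g. the 5-6 pixel outliers)
    mask = diff > limit
    return remove_small_objects(mask, min_size=min_size)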

Related

Get absolute value derivative of image with python or Halcon

I think what I would get from 1 in the picture with a Gaussian filter is 2. I want to get number 3. I want the derivative, but just its absolute amount.
I basically want to filter out the gradient of the big triangle and just get the irregularities (arrow in the image).
I could use a sine instead of the triangle as well; would this make it easier?
How can I implement this in Python or Halcon? What should I look into to get a better understanding of what I want and how to achieve it?
Edit: I want to find the "defects" and get rid of the pattern.
theory:
Real Image with real defects:
A Gaussian filter does not give you a derivative. It's a weighted average.
Your assumption that a Gaussian would give you 2 for input 1 is incorrect.
Just suppress the low frequencies of your background, with a notch filter for example.
Quick and dirty example:
Also see Find proper notch filter to remove pattern from image
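For illustration, here is a rough Python sketch of suppressing the low-frequency background in the Fourier domain (a simple radial high-pass rather than a true notch; the cutoff value is an assumption to tune for your pattern):
import numpy as np

def highpass_fft(img, cutoff=5):
    f = np.fft.fftshift(np.fft.fft2(img.astype(np.float32)))
    rows, cols = img.shape
    cy, cx = rows // 2, cols // 2
    y, x = np.ogrid[:rows, :cols]
    # Zero out a small disc around the DC component (the low frequencies)
    mask = (y - cy) ** 2 + (x - cx) ** 2 > cutoff ** 2
    return np.abs(np.fft.ifft2(np.fft.ifftshift(f * mask)))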
Another simple approach is to use a row-wise threshold or background subtraction if the background is always aligned like that.
Agree with Piglet that if the frequency of your pattern is substantially lower than that of your defects, a notch filter is the tool of first choice.
Also agree that if you have multiple frames of calibrated fringe patterns, then you have an array of options available. Recent versions of Halcon have built-in deflectometry operators.
For quick-n-dirty, you could also exploit the general orientation of your pattern using rectangular kernels. This is equivalent to an orthotropic high-pass filter.
read_image (imgInput, 'C:/Users/jpeyton/Documents/zzz_temp/FringePat_raw.jpg')
* Smooth input image with mean using a vertically oriented rectangular kernel
mean_image (imgInput, imgMean, 3, 15)
* Subtract smoothed image from raw image to get local / high-frequency residuals
abs_diff_image (imgMean, imgInput, imgAbsDiff, 1)
* Threshold away the background
threshold (imgAbsDiff, Regions, 8, 255)
Smooth with the mean operator, using a vertically oriented kernel (3x15 in this case):
Subtract the smoothed image from the raw image and threshold:
From there, you can run a connection operator and use region features to further accentuate defects. You'll notice this approach doesn't provide as strong a signal for the lower-frequency defects (dents?).
So the tradeoffs are: an FFT/DFT filter doesn't exploit the direction of the pattern and leaves behind edge/harmonic artifacts, while a highpass filter approach (like the above) loses sensitivity to defects as they approach or exceed the fringe frequency.
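If you want to prototype the same idea outside Halcon, a rough OpenCV equivalent might look like this (the file name is a placeholder; the kernel size and threshold simply mirror the 3x15 and 8 used above):
import cv2

img = cv2.imread('FringePat_raw.jpg', cv2.IMREAD_GRAYSCALE)
# Mean-smooth with a vertically oriented 3x15 rectangular kernel
mean = cv2.blur(img, (3, 15))
# The absolute difference picks out the high-frequency residuals
residual = cv2.absdiff(mean, img)
# Threshold away the background
_, regions = cv2.threshold(residual, 8, 255, cv2.THRESH_BINARY)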

Object (simple shapes) Detection in Image

I've got the following image.
Other Samples
I want to detect the six square-shaped green portions and the one circular portion above them. I basically want a binary image with these portions marked 1 (white) and everything else 0 (black).
What have I done so far?
I found a range of H, S, and V within which these colors fall which works fine for a single image, but I've got multiple such images, some under different illumination (brightness) conditions and the ranges do not work in those cases. What should I do to make the thresholding as invariant to brightness as possible? Is there a different approach I should take for thresholding?
What you did was manually analyze the values you need for thresholding for a specific image, and then apply that. What you see is that an analysis done on one image doesn't necessarily fit other images.
The solution is to do the analysis automatically for each image. This can be achieved by creating a histogram for each of the channels; if you're working in HSV, I'm guessing that the H channel would be pretty much useless in this case.
Anyway, once you have the histograms, you should find the threshold using something like Lloyd-Max, which is basically a K-Means-type clustering of intensities. This should give you the centroids for the intensity of the white background and of the other colors. Then you choose the threshold based on the cluster standard deviations.
For example, in the image you gave above, the histogram of the S channel looks like:
You can see the large blob near 0 is the white background that has the lowest saturation.
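As an illustration of the Lloyd-Max / K-Means idea (not the answerer's code; the file name and all parameters are assumptions), you could cluster the saturation values per image and threshold between the two cluster centers:
import cv2
import numpy as np

img = cv2.imread('input.png')
s = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 1]

# Cluster the saturation values into two groups (background vs. colored regions)
samples = s.reshape(-1, 1).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(samples, 2, None, criteria, 5, cv2.KMEANS_PP_CENTERS)

# Threshold halfway between the two cluster centers
t = centers.mean()
mask = (s > t).astype(np.uint8) * 255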

Cell segmentation

I'm a newbie in computer vision. My goal is to distinguish individual cells on a set of pictures like this: Example
Basically, I blur the whole image, find the regional maxima on it and use them as seeds in a watershed algorithm on the distance transform of the thresholded blurred image. In fact, I'm following the tutorial which you can find here:
github/luispedro/python-image-tutorial
(sorry, can't post more than 2 links).
My problem is that some cells in my set have a very distinguishable dark nucleus (which you can see in the example) and my algorithm produces results like this, which are clearly wrong.
Of course it's possible to fix it by increasing the strength of the Gaussian blur, but it will merge some other cells together, which is even worse.
What can be done to solve this problem? What are the other possibilities if watershed just isn't suitable for this case (keeping in mind that my set is pretty small and learning seems impossible)?
The watershed tends to over-segment if you don't use a watershed with markers.
Usually, we start with DNA/DAPI segmentation, which is easy, and it provides the number of cells and the inner markers for the watershed.
If you blur the images, you smooth all the patterns. You should use an alternate sequential filter (opening / closing) in order to simplify each zone, and then try an ultimate erosion in order to find the inner seeds for your watershed.
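An illustrative sketch of a marker-controlled watershed along these lines (this is not the answerer's exact pipeline; the function choices and all parameters are assumptions to adapt):
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.morphology import opening, closing, disk
from skimage.segmentation import watershed

def segment_cells(img):
    # Simplify structures with an opening followed by a closing
    simplified = closing(opening(img, disk(3)), disk(3))
    binary = simplified > threshold_otsu(simplified)
    # Distance transform; its strong maxima act as inner markers (seeds)
    dist = ndi.distance_transform_edt(binary)
    markers, _ = ndi.label(dist > 0.6 * dist.max())
    # Flood from the markers over the inverted distance map
    return watershed(-dist, markers, mask=binary)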

What is the best approach to enhance blacked-out areas to make the text inside them readable?

I am trying to enhance old hand-drawn maps which were digitized by scanning, and this process has caused some blacked-out areas in the image, making the text inside them very hard to read.
I tried adaptive histogram equalization and a couple of other histogram-based approaches using MATLAB, but nothing gives me the desired result. I could probably lighten the darker shades of grey and make it look a bit better using adaptive histogram equalization, but it doesn't really help with the text.
Specifically, I tried adapthisteq() with different variations which is a function available in MATLAB.
Something like this:
A = adapthisteq(I,'NumTiles',X,'clipLimit',0.01,'Distribution','uniform');
... and I also tried to change the pixel values directly after having a look at the image, something like this:
% Crude three-level remapping: crush near-black, darken low greys,
% and push everything else to white
I(10 > I & I > 0) = 0;
I(30 > I & I > 10) = 10;
I(255 > I & I > 30) = 255;
Can I enhance the image and get an end result which has only black and white, where the lines and text (basically all the information) turn into black (0) and the shades of grey and whiter regions turn into white (255 or 1)?
Is this even possible? If not, how close can I get, and what is the best solution to get as close as possible to the desired result? Any help is appreciated.
Here's what the original image looks like:
Here's what the result looks like after I tried out my solution using adaptive histogram equalization:
Sounds like a classic case for adaptive thresholding. Adaptive thresholding, in a general sense, works by looking at local image pixel neighbourhoods, computing the mean intensity, and seeing whether a certain percentage of pixels exceed this mean intensity. If they do, we set the output to white, and if not, we set it to black.
One classic approach is to use the Bradley-Roth algorithm.
If you'd like to see an explanation of the algorithm, you can take a look at a previous answer that I wrote up about it:
Bradley Adaptive Thresholding -- Confused (questions)
However, if you want the gist of it: an integral image of the grayscale version of the image is computed first. The integral image is important because it allows you to calculate the sum of pixels within a window in O(1) complexity. The calculation of the integral image itself is usually O(n^2), but you only have to do that once. With the integral image, you scan s x s pixel neighbourhoods, and if a pixel's intensity is less than t% of the average intensity within its s x s window, the pixel is classified as background; if it's larger, it's classified as foreground. This is adaptive because the thresholding is done using local pixel neighbourhoods rather than a single global threshold.
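To make that concrete, here is a compact NumPy sketch of the scheme just described (my own illustrative code, not the implementation linked below; the s and t defaults are arbitrary):
import numpy as np

def bradley_roth(gray, s=12, t=25):
    img = gray.astype(np.float64)
    h, w = img.shape
    # Integral image padded with a zero row/column so window sums are O(1)
    ii = np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    half = s // 2
    y0 = np.clip(np.arange(h) - half, 0, h)
    y1 = np.clip(np.arange(h) + half + 1, 0, h)
    x0 = np.clip(np.arange(w) - half, 0, w)
    x1 = np.clip(np.arange(w) + half + 1, 0, w)
    # Window sum via four integral-image lookups per pixel
    sums = (ii[y1][:, x1] - ii[y0][:, x1]
            - ii[y1][:, x0] + ii[y0][:, x0])
    counts = np.outer(y1 - y0, x1 - x0)
    # Pixel goes black if it is below (100 - t)% of its local mean
    return np.where(img * counts <= sums * (100.0 - t) / 100.0,
                    0, 255).astype(np.uint8)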
On this post: Extract a page from a uniform background in an image, there is MATLAB code I wrote that is an implementation of the Bradley-Roth algorithm, so you're more than welcome to use it.
However, for your image, the parameters I used to get some OK results were s = 12 and t = 25.
After running the algorithm, I get this image:
Be advised that it isn't perfect... but you can start to see some text that you didn't see before. Specifically at the bottom, I see Lemont Library - Built 1948.... and we couldn't see that before in the original image.
Play around with the code and the parameters, read up on the algorithm, and just try things out yourself.
Hope this helps!

Blob detection using machine learning?

I have a large stack of images showing a bar with some dark blobs whose position changes with time (see Figure, b). To detect the blobs I am currently using an intensity threshold (c in Figure, where all intensity values below the threshold are set to 1), and then I search for blobs in the binary image using the MATLAB code below. As you can see, the binary image is quite noisy, which complicates the blob detection process. Do you have any suggestions on how to improve the shape detection, maybe including some machine learning algorithm? Thanks!
Code:
se = strel('disk',1);
se_1 = strel('disk',3);
pw2 = imclose(IM,se);    % close small gaps
pw3 = imopen(pw2,se_1);  % remove small specks
pw4 = imfill(pw3, 'holes');
% Consider only the blobs with more than threshold pixels
[L,num] = bwlabel(pw4);
counts = sum(bsxfun(@eq,L(:),1:num));
number_valid_counts = length(find(counts>threshold));
This might help:
Extract texture features of the boundary of the blobs you want to detect. This can be done using local binary patterns. There are many other texture features; you can find a detailed survey here.
Then use them to train a binary classifier.
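For illustration, a sketch of that idea in Python (an SVM stands in for whatever binary classifier you prefer; all names and parameters are assumptions):
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(patch, P=8, R=1):
    # Uniform LBP gives P + 2 possible codes; the normalized histogram
    # of those codes is the texture feature vector
    lbp = local_binary_pattern(patch, P, R, method='uniform')
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def train_blob_classifier(patches_pos, patches_neg):
    # patches_pos / patches_neg: grayscale patches cropped from labelled
    # images (blob boundary vs. background)
    X = np.array([lbp_histogram(p) for p in patches_pos + patches_neg])
    y = np.array([1] * len(patches_pos) + [0] * len(patches_neg))
    return SVC(kernel='rbf').fit(X, y)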
It seems that the data come as pulses in the lower part of the image. I suggest taking some images and slicing vertical lines of pixels perpendicular to the pulse direction. Each strip is one pixel wide, and its height is a little larger than the pulse so it includes some of the lighter values below and above it; you might start around pixels 420-490, saving 70 grey values each time. These form the feature vector. Also take lines from the non-blob areas to save as class 2. Do this on several images, with several lines from each image.
Now you have your training data, and you can use any machine learning algorithm to train the computer on pulses and non-pulses.
In the test step, scan the image, reading 70 pixels vertically at a time, and test them against the trained model. Create a new black output image; if a strip belongs to the "blob" class, draw a white vertical line starting a little below the tested pixel, otherwise draw nothing on the output image.
At the end of scanning the image, check for isolated white lines, which you may delete as false acceptances. If you find a dark line within a group of white lines, convert it to white, treating it as a false rejection.
You may use my classifier: https://www.researchgate.net/publication/265168466_Solving_the_Problem_of_the_K_Parameter_in_the_KNN_Classifier_Using_an_Ensemble_Learning_Approach
If you decide to, I will send you code to do it. The distance metric is a problem: because the values vary between 0 and 255, the light values will dominate the distance. To solve this, you may use the Hassanat distance metric at https://www.researchgate.net/publication/264995324_Dimensionality_Invariant_Similarity_Measure
It is invariant to scale in the data, as each feature outputs a value between 0 and 1 at most, so the highest values will not dominate the final distance.
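A rough sketch of this strip-based scheme (plain k-NN stands in for the ensemble classifier above, and scaling the grey values to [0, 1] sidesteps the dominance problem; the strip position and all names are assumptions):
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

STRIP_TOP, STRIP_LEN = 420, 70  # rows covering the pulse band

def strips(img, cols):
    # One-pixel-wide vertical strips of STRIP_LEN grey values,
    # scaled to [0, 1] so bright values do not dominate the distance
    return np.array([img[STRIP_TOP:STRIP_TOP + STRIP_LEN, c] for c in cols],
                    dtype=np.float32) / 255.0

def train(img_train, blob_cols, bg_cols):
    # blob_cols / bg_cols: column indices crossing a blob vs. background,
    # taken from labelled training images
    X = np.vstack([strips(img_train, blob_cols), strips(img_train, bg_cols)])
    y = np.array([1] * len(blob_cols) + [0] * len(bg_cols))
    return KNeighborsClassifier(n_neighbors=3).fit(X, y)

def classify_columns(model, img_test):
    # One prediction per image column: 1 = pulse, 0 = background
    return model.predict(strips(img_test, range(img_test.shape[1])))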
Good luck
