How do I remove saturated pixels from an image? - python

My goal with the code is to print an image where I have removed all the saturated pixels (the pixels that give off their maximum value). This is because I am analyzing data from a .fit image of a star.
The instruction I got was:
"What you need to do is figure out what the maximum pixel value is in the 2d array from the data. Then write code that will remove those values, while still keeping the image. Basically, what I want to see is a star with a hole in the middle, where you have removed the saturated pixels."
I have succeeded in finding the maximum pixel value (65535), and now I need to print my image without the pixels that have that particular value.
This is my code so far, but I do not know how to remove the pixels from my image.
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt
fits_image_filename = "Acturus_V_2s.fit"
hdul = fits.open(fits_image_filename)
data = hdul[0].data
datacut = data[610:710,755:855]
plt.imshow(datacut, origin="lower")
MaxPixelValue = np.amax(datacut)
print(MaxPixelValue)
And this gives the output 65535, followed by my image.
How am I supposed to remove those pixels?

As the comments have pointed out, it's not generally possible to "remove" pixels from an image. But there are a few ways of dealing with some pixels in an image while leaving the others alone. In general, to do this you'll end up creating a boolean mask -- an array with the same shape as your image but containing values that are either True or False.
mask = datacut >= MaxPixelValue
Now the mask array contains True wherever the original image is saturated and False everywhere else.
You can set the values of the saturated pixels to NaN, which Matplotlib will handle as blank pixels (i.e., they do not get any color when using imshow()). Note that NaN only exists for floating-point types, so if your FITS data is an integer type you must convert it first:
datacut = datacut.astype(float)  # integer arrays cannot hold NaN
datacut[mask] = np.nan
Alternatively, you can set the saturated pixels in your image to some indicative value (e.g. 0, which shows up as the low end of the colormap):
datacut[mask] = 0
Or you can create a numpy masked array, which is a sort of combination of your original image and the mask:
masked_image = np.ma.masked_array(datacut, mask)
The numpy documentation has a brief description of how to use masked arrays.
https://numpy.org/doc/stable/reference/maskedarray.generic.html
I also found some documentation from the astrophysics community that might be useful, since you're also using fits and astropy.
https://prancer.physics.louisville.edu/astrowiki/index.php/Image_processing_with_Python_and_SciPy#Masked_Image_Operations
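Putting the pieces together, here is a minimal sketch of the whole workflow (the filename and crop indices are taken from your code; the cast to float is needed because integer arrays cannot hold NaN):
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt

hdul = fits.open("Acturus_V_2s.fit")
# crop as in your code, and cast to float so NaN can be stored
datacut = hdul[0].data[610:710, 755:855].astype(float)

mask = datacut >= np.amax(datacut)  # True wherever pixels are saturated
datacut[mask] = np.nan              # blank out the saturated pixels

plt.imshow(datacut, origin="lower")  # NaN pixels are rendered without color
plt.show()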

Related

matplotlib imshow gray colormap additional preprocess

I have a bunch of images, and I figured out that the best preprocessing for my images is using matplotlib's imshow with cmap='gray'. This is my RGB image (I can't publish the original images; this is a sample I created to make my point, so the original images are not noiseless and perfect like this):
When I use plt.imshow(img, cmap='gray') the image will be:
I wanted to implement this process in OpenCV. I tried to use OpenCV colormaps, but there wasn't a gray one there. I used these solutions, but the result is like the first image, not the second one (result here).
So I was wondering besides changing colormaps, what preprocessing does matplotlib apply on images when we call imshow?
P.S.: You might suggest binarization; I've tested both techniques, but on my data binarization ruins some of the samples, which this method (matplotlib) does not.
cv::normalize with NORM_MINMAX should help you. It can map intensity values so the darkest becomes black and the lightest becomes white, regardless of what the absolute values were.
This section of the OpenCV docs contains example code (it's a permalink):
or so that min_I dst(I) = alpha, max_I dst(I) = beta when normType=NORM_MINMAX (for dense arrays only)
That means, for NORM_MINMAX, alpha=0 and beta=255. These two parameters have different meanings for different normTypes; for NORM_MINMAX the code automatically swaps them if needed, so the smaller of the two is used as the lower bound.
Further, the full range for uint8 data is 0..255; giving beta=1 only makes sense for float data.
Example:
import numpy as np
import cv2 as cv
im = cv.imread("m78xj.jpg")
normalized = cv.normalize(im, dst=None, alpha=0, beta=255, norm_type=cv.NORM_MINMAX)
cv.imshow("normalized", normalized)
cv.waitKey(-1)
cv.destroyAllWindows()
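Since beta=1 only makes sense for float output, a variant that explicitly requests floats might look like this (a sketch, using the same test image as above):
import cv2 as cv
im = cv.imread("m78xj.jpg")
# map intensities into the 0..1 range as 32-bit floats
normalized_f = cv.normalize(im, dst=None, alpha=0.0, beta=1.0,
                            norm_type=cv.NORM_MINMAX, dtype=cv.CV_32F)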
Apply a median blur to remove noisy pixels (those that go beyond the average gray of the text):
blurred = cv.medianBlur(im, ksize=5)
# ...normalize...
Or do the scaling manually: apply the median blur, find the maximum value in the blurred version, then apply that scaling to the original image.
output = im.astype(np.uint16) * 255 / blurred.max()
output = np.clip(output, 0, 255).astype(np.uint8)
# ...

Best way to determine an "empty" image/slide

I'm dealing with lots of image files - particularly with tissue samples. Often when you magnify the image and divide it into tiles, there are "blank" tiles. I need to identify these "blank" tiles and remove them. Unfortunately, they are not all one homogeneous color, but as you can see in my examples, I have one real tile (the obvious one) and three "blank" ones (in quotes because to the eye they are empty, but from a pixel perspective they do not have a uniform value). What's the best way in Python (using Pillow?) to determine that these 3 are blank?
You could try something with numpy: check the standard deviation, or count the number of unique values.
The standard deviation of an empty image should be close to zero:
(to adapt)
import numpy as np
from PIL import Image
image = Image.open('img.jpeg').convert('LA')
# convert image to numpy array
data = np.asarray(image)
std_dev = np.std(data)
if std_dev < 1:
    pass  # treat the tile as blank
With the unique count: (to adapt)
image = Image.open('img.jpeg').convert('LA')
# convert image to numpy array
data = np.asarray(image)
u, count_unique = np.unique(data, return_counts=True)
if count_unique.size < 10:
    pass  # treat the tile as blank
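As a rough sketch, both heuristics could be combined into one helper; the is_blank_tile name and the thresholds 1.0 and 10 are my own inventions to adapt to your data:
import numpy as np
from PIL import Image

def is_blank_tile(path, std_threshold=1.0, unique_threshold=10):
    # a tile is considered "blank" if its pixel values barely vary
    data = np.asarray(Image.open(path).convert('L'))  # grayscale keeps the statistics simple
    _, counts = np.unique(data, return_counts=True)
    return np.std(data) < std_threshold or counts.size < unique_threshold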

Confused about how to properly add a white border to my numpy image array with numpy.pad

I've used OpenCV (cv2) to load a grayscale image, which I then converted to a numpy.array. Now I want to pad that array with a "frame" around the image. However, I'm having some trouble dissecting what the numpy manual wants me to do exactly. I tried googling and searching for padding examples, but none came up that were relevant for my case.
My current code looks like this:
import numpy as np
import cv2
img = cv2.imread('Lena.png')
imgArray = np.array(img)
imgArray = np.pad(imgArray, pad_width=1, mode='constant', constant_values=0)
cv2.imshow('Padded', imgArray)
cv2.waitKey(0)
Check out the OpenCV documentation here: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_core/py_basic_ops/py_basic_ops.html
My best guess is to use something like constant = cv2.copyMakeBorder(img, 10, 10, 10, 10, cv2.BORDER_CONSTANT, value=BLUE)
You can do as follows:
import numpy as np
import cv2
img = cv2.imread('Lena.png', 0)
img = np.pad(img, pad_width=4, mode='constant', constant_values=0)
cv2.imshow('Padded', img)
cv2.waitKey(0)
From the documentation of cv2.imread:
cv2.imread(filename[, flags]) → retval
Parameters:
filename – Name of the file to be loaded.
flags – Flags specifying the color type of the loaded image:
CV_LOAD_IMAGE_ANYDEPTH – If set, return a 16-bit/32-bit image when the input has the corresponding depth; otherwise convert it to 8-bit.
CV_LOAD_IMAGE_COLOR – If set, always convert the image to a color one.
CV_LOAD_IMAGE_GRAYSCALE – If set, always convert the image to a grayscale one.
>0 – Return a 3-channel color image. Note: in the current implementation the alpha channel, if any, is stripped from the output image. Use a negative value if you need the alpha channel.
=0 – Return a grayscale image.
<0 – Return the loaded image as is (with alpha channel).
With the above code we got the following result:
And another option using np.pad:
As you can see here, you need to tell np.pad which axes you want to pad. Simply using:
imgArray = np.pad(imgArray, pad_width=1, mode='constant', constant_values=0)
also adds values to the third axis (i.e. the RGB channels), so that you cannot plot the image any more.
As described in the referenced question, you would need to use the following arguments in your code:
imgArray = np.pad(imgArray, pad_width=((1,1), (1,1), (0,0)), mode='constant', constant_values=0)
Also see the np.pad documentation:
Number of values padded to the edges of each axis. ((before_1, after_1), ... (before_N, after_N)) yields unique pad widths for each axis. ((before, after),) yields the same before and after pad for each axis. (pad,) or int is a shortcut for before = after = pad width for all axes.
This means the first tuple pads the first axis (for an image, the top and bottom borders) and the second tuple pads the second axis (the left and right borders) with one "0" each.
You do not want to pad the last dimension, as this is the dimension storing the RGB information.
And since you stated in your question that you want a white border, constant_values should be set to 255 (or 1.0, depending on the value range of your image); using 0 results in a black border.
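For example, a sketch of adding a one-pixel white border to a uint8 color image (assuming Lena.png is loaded as a 3-channel BGR image):
import numpy as np
import cv2

img = cv2.imread('Lena.png')
# pad height and width by 1 pixel on each side, leave the channel axis untouched;
# 255 is white for uint8 data
padded = np.pad(img, pad_width=((1, 1), (1, 1), (0, 0)),
                mode='constant', constant_values=255)
cv2.imshow('Padded', padded)
cv2.waitKey(0)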
Whilst I see you already have an answer, I wanted to show the general case where you want to pad with something other than black or white, i.e. you want to add a coloured border. I couldn't get any of the methods suggested in the other answers to do that, so...
Say you have lena.png as follows:
Then you can do:
from PIL import Image, ImageOps
import numpy as np
# Load the image - you could just as well use OpenCV `imread()`
img = Image.open('lena.png')
# Pad 20px to all sides with magenta
padded = ImageOps.expand(img, border=20, fill=(255,0,255))
# Save to disk
padded.save('result.png')
Before anyone decides to downvote because the OP asked how to add white borders, please note you can just as easily add white with this method if you use:
padded = ImageOps.expand(img, border=20, fill=(255,255,255))
If you are using numpy arrays to manipulate your images, you can convert from numpy array to PIL Image with:
pil_image = Image.fromarray(numpy_array)
and the other way with:
numpy_array = np.array(pil_image)
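For instance, a minimal round trip (the random array here is just a stand-in for a real image):
import numpy as np
from PIL import Image, ImageOps

# synthetic stand-in for a real RGB image
numpy_array = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)

pil_image = Image.fromarray(numpy_array)                    # numpy -> PIL
padded = ImageOps.expand(pil_image, border=20, fill=(255, 0, 255))
numpy_back = np.array(padded)                               # PIL -> numpy, now (104, 104, 3)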

How to create mask for outer pixels when using skimage.transform.rotate

The skimage rotate function creates "outer" pixels; no matter how these pixels are extrapolated (wrap, mirror, constant, etc.), they are fake and can affect statistical analysis of the image. How can I get a mask of these pixels so I can ignore them in the analysis?
mask_val = 2
rotated = skimage.transform.rotate(img, 15, resize=True, cval=mask_val,
                                   preserve_range=False)
mask = rotated == mask_val
Idea: pick a value for the mask which doesn't appear in the image, then obtain the mask by checking for equality with this value. This works well when the image pixels are normalized floats. rotate transforms the image pixels to normalized floats internally thanks to preserve_range=False (this is the default value; I specified it only to make the point that without it this would not work).
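Downstream statistics can then be computed with a numpy masked array so the fake outer pixels are skipped. A sketch with a synthetic image (your real img would come from your data):
import numpy as np
import skimage.transform

img = np.random.rand(100, 100)  # synthetic float image with values in [0, 1)
mask_val = 2                    # cannot occur among normalized float pixels
rotated = skimage.transform.rotate(img, 15, resize=True, cval=mask_val)
mask = rotated == mask_val

# masked statistics ignore the fake outer pixels
mean = np.ma.masked_array(rotated, mask).mean()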

Remove spurious small islands of noise in an image - Python OpenCV

I am trying to get rid of background noise from some of my images. This is the unfiltered image.
To filter, I used this code to generate a mask of what should remain in the image:
# mask is an initial binary mask of the image, obtained beforehand (e.g. by thresholding)
element = cv2.getStructuringElement(cv2.MORPH_RECT, (2,2))
mask = cv2.erode(mask, element, iterations=1)
mask = cv2.dilate(mask, element, iterations=1)
mask = cv2.erode(mask, element)
With this code and when I mask out the unwanted pixels from the original image, what I get is:
As you can see, all the tiny dots in the middle area are gone, but a lot of those coming from the denser area are also gone. To reduce the filtering, I tried changing the second parameter of getStructuringElement() to (1,1), but doing this gives me the first image, as if nothing had been filtered.
Is there any way where I can apply some filter that is between these 2 extremes?
In addition, can anyone explain to me what exactly getStructuringElement() does? What is a "structuring element"? What does it do, and how does its size (the second parameter) affect the level of filtering?
A lot of your questions stem from the fact that you're not sure how morphological image processing works, but we can put your doubts to rest. You can interpret the structuring element as the "base shape" to compare to. 1 in the structuring element corresponds to a pixel that you want to look at in this shape and 0 is one you want to ignore. There are different shapes, such as rectangular (as you have figured out with MORPH_RECT), ellipse, circular, etc.
As such, cv2.getStructuringElement returns a structuring element for you. The first parameter specifies the type you want and the second parameter specifies the size you want. In your case, you want a 2 x 2 "rectangle"... which is really a square, but that's fine.
In a more bastardized sense, you use the structuring element and scan from left to right and top to bottom of your image and you grab pixel neighbourhoods. Each pixel neighbourhood has its centre exactly at the pixel of interest that you're looking at. The size of each pixel neighbourhood is the same size as the structuring element.
Erosion
For an erosion, you examine all of the pixels in a pixel neighbourhood that are touching the structuring element. If every structuring element pixel that is 1 touches a non-zero pixel in the neighbourhood, then the output pixel at the corresponding centre position is 1. If at least one structuring element pixel that is 1 touches a zero pixel, then the output is 0.
In terms of the rectangular structuring element, you need to make sure that every pixel in the structuring element is touching a non-zero pixel in your image for a pixel neighbourhood. If it isn't, then the output is 0, else 1. This effectively eliminates small spurious areas of noise and also decreases the area of objects slightly.
Size matters here: the larger the rectangle, the more shrinking is performed. The size of the structuring element is a baseline; any object smaller than this rectangular structuring element can be considered filtered out, and it will not appear in the output. Basically, choosing a 1 x 1 rectangular structuring element returns the input image itself, because a single pixel is the smallest representation of information possible in an image. (A toy example of both erosion and dilation follows the Dilation section below.)
Dilation
Dilation is the opposite of erosion. If there is at least one non-zero pixel that touches a pixel in the structuring element that is 1, then the output is 1, else the output is 0. You can think of this as slightly enlarging object areas and making small islands bigger.
The implication of size here is that the larger the structuring element, the larger the areas of the objects will be, and the larger the isolated islands become.
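As a toy illustration of both operations (my own minimal example, not from the original post):
import numpy as np
import cv2

# a lone white pixel and a 3 x 3 white block on a black background
img = np.zeros((7, 7), dtype=np.uint8)
img[1, 1] = 255        # isolated pixel: smaller than the structuring element
img[3:6, 3:6] = 255    # 3 x 3 block: exactly fits the structuring element

se = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
eroded = cv2.erode(img, se)      # lone pixel vanishes; the block shrinks to its centre
opened = cv2.dilate(eroded, se)  # the block grows back to 3 x 3; the noise stays gone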
What you're doing is an erosion first followed by a dilation. This is what is known as an opening operation. The purpose of this operation is to remove small islands of noise while (trying to) maintain the areas of the larger objects in your image. The erosion removes those islands while the dilation grows back the larger objects to their original sizes.
You follow this with an erosion again for some reason, which I can't quite understand, but that's ok.
What I would personally do is perform a closing operation first, which is a dilation followed by an erosion. Closing helps group areas that are close together into a single object. As you can see, there are some larger areas that are close to each other that should probably be joined before we do anything else. As such, I would do a closing first, then an opening after, so that we can remove the isolated noisy areas. Take note that I'm going to make the closing structuring element larger, as I want to make sure I catch nearby pixels, and the opening structuring element smaller, so that I don't mistakenly remove any of the larger areas.
Once you do this, I would mask out any extra information with the original image so that you leave the larger areas intact while the small islands go away.
Instead of chaining an erosion followed by a dilation, or a dilation followed by an erosion, use cv2.morphologyEx, where you can specify MORPH_OPEN and MORPH_CLOSE as the flags.
As such, I would personally do this, assuming your image is called spots.png:
import cv2
import numpy as np
img = cv2.imread('spots.png')
img_bw = 255*(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) > 5).astype('uint8')
se1 = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
se2 = cv2.getStructuringElement(cv2.MORPH_RECT, (2,2))
mask = cv2.morphologyEx(img_bw, cv2.MORPH_CLOSE, se1)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, se2)
mask = np.dstack([mask, mask, mask]) / 255
out = (img * mask).astype('uint8')  # cast back to uint8 for display and saving
cv2.imshow('Output', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.imwrite('output.png', out)
The above code is pretty self-explanatory. First, I read in the image, then I convert it to grayscale and threshold with an intensity of 5 to create a mask of what is considered object pixels. This is a rather clean image, so anything larger than 5 seems to have worked. For the morphology routines, I need to convert the image to uint8 and scale the mask to 255. Next, we create two structuring elements: one that is a 5 x 5 rectangle for the closing operation and another that is 2 x 2 for the opening operation. I run cv2.morphologyEx twice, for the closing and opening operations respectively, on the thresholded image.
Once I do that, I stack the mask so that it becomes a 3D matrix and divide by 255 so that it becomes a mask of values in [0,1]; then we multiply this mask with the original image so that we can grab the original pixels back while maintaining what is considered a true object from the mask output.
The rest is just for illustration. I show the image in a window, and I also save the image to a file called output.png, and its purpose is to show you what the image looks like in this post.
I get this:
Bear in mind that it isn't perfect, but it's much better than how you had it before. You'll have to play around with the structuring element sizes to get something that you consider as a good output, but this is certainly enough to get you started.
C++ version
There have been some requests to translate the code I wrote above into the C++ version using OpenCV. I have finally gotten around to writing a C++ version of the code and this has been tested on OpenCV 3.1.0. The code for this is below. As you can see, the code is very similar to that seen in the Python version. However, I used cv::Mat::setTo on a copy of the original image and set whatever was not part of the final mask to 0. This is the same thing as performing an element-wise multiplication in Python.
#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char *argv[])
{
    // Read in the image
    Mat img = imread("spots.png", IMREAD_COLOR);

    // Convert to black and white
    Mat img_bw;
    cvtColor(img, img_bw, COLOR_BGR2GRAY);
    img_bw = img_bw > 5;

    // Define the structuring elements
    Mat se1 = getStructuringElement(MORPH_RECT, Size(5, 5));
    Mat se2 = getStructuringElement(MORPH_RECT, Size(2, 2));

    // Perform closing then opening
    Mat mask;
    morphologyEx(img_bw, mask, MORPH_CLOSE, se1);
    morphologyEx(mask, mask, MORPH_OPEN, se2);

    // Filter the output
    Mat out = img.clone();
    out.setTo(Scalar(0), mask == 0);

    // Show image and save
    namedWindow("Output", WINDOW_NORMAL);
    imshow("Output", out);
    waitKey(0);
    destroyWindow("Output");
    imwrite("output.png", out);

    return 0;
}
The results should be the same as what you get in the Python version.
One can also remove small pixel clusters using the remove_small_objects function in skimage:
import matplotlib.pyplot as plt
from skimage import morphology
import numpy as np
import skimage
# read the image, grayscale it, binarize it, then remove small pixel clusters
im = plt.imread('spots.png')
grayscale = skimage.color.rgb2gray(im)
binarized = np.where(grayscale > 0.1, 1, 0)
processed = morphology.remove_small_objects(binarized.astype(bool), min_size=2, connectivity=2).astype(int)
# black out pixels
mask_x, mask_y = np.where(processed == 0)
im[mask_x, mask_y, :3] = 0
# plot the result
plt.figure(figsize=(10,10))
plt.imshow(im)
This displays:
To retain only larger clusters, try increasing min_size (smallest size of retained clusters) and decreasing connectivity (size of pixel neighborhood when forming clusters). Using just those two parameters, one can retain only pixel clusters of an appropriate size.
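For instance, continuing the snippet above, keeping only clusters of at least 64 pixels with 4-connectivity would look like this (the values are purely illustrative):
processed = morphology.remove_small_objects(binarized.astype(bool),
                                            min_size=64, connectivity=1).astype(int)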
