I'm dealing with lots of image files, particularly tissue samples. Often, when you magnify an image and divide it into tiles, there are "blank" tiles. I need to identify these "blank" tiles and remove them. Unfortunately, they are not all one homogeneous color, but as you can see in my examples, I have one real tile (the obvious one) and the other three are "blank" (in quotes because to the eye they are empty, but from a pixel perspective they are not a uniform value). What's the best way in Python (using Pillow?) to determine that these 3 are blank?
You could try something with NumPy: either check the standard deviation or count the number of unique pixel values.
The standard deviation of a "blank" image should be close to zero (adapt the threshold to your data):
import numpy as np
from PIL import Image

image = Image.open('img.jpeg').convert('LA')
# convert image to a numpy array
data = np.asarray(image)
# a near-uniform ("blank") tile has a standard deviation close to zero
std_dev = np.std(data)
if std_dev < 1:
    print('blank tile')
With a unique-value count (again, adapt the threshold to your data):
image = Image.open('img.jpeg').convert('LA')
# convert image to a numpy array
data = np.asarray(image)
# a "blank" tile only contains a handful of distinct pixel values
unique_values, counts = np.unique(data, return_counts=True)
if unique_values.size < 10:
    print('blank tile')
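Putting both checks together, a minimal helper might look like this (is_blank_tile and the thresholds are just illustrative, to adapt to your tiles):
import numpy as np
from PIL import Image

def is_blank_tile(path, std_threshold=1.0, unique_threshold=10):
    # open the tile as greyscale + alpha and turn it into an array
    data = np.asarray(Image.open(path).convert('LA'))
    # near-uniform tiles have a tiny standard deviation and few unique values
    return np.std(data) < std_threshold or np.unique(data).size < unique_threshold

tiles = ['tile1.png', 'tile2.png', 'tile3.png', 'tile4.png']
print([t for t in tiles if is_blank_tile(t)])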
Related
I have a black image that I need to fill with a new color.
I want to generate new images starting from 1% to 100% (generating an
image for every 1% filled).
Examples for 4 fill ratios: heart image filled to 1%, 5%, 10% and 15%.
Research I did
I did a lot of research on the internet and the closest I came was this link:
Fill an image with color but keep the alpha (Color overlay in PIL)
However, as I don't have much experience with Python for image editing, I couldn't move forward or modify the code as needed.
Edit:
I was trying this code from the link:
from PIL import Image
import numpy as np
# Open image
im = Image.open('2746646.png')
# Make into Numpy array
n = np.array(im)
# Set first three channels to red
n[..., 0:3] = [ 255, 0, 0 ]
# Convert back to PIL Image and save
Image.fromarray(n).save('result.png')
But it only generates a single image (as if it were 100% filled); I need 100 images, each with 1% more filled in than the last.
Updated Answer
Now that you have shared your actual starting image, it seems you don't really want to replace black pixels, but rather opaque pixels. If you split your image into its constituent RGBA channels and lay them out left-to-right as R, G, B then A, you can see you want to fill where the alpha (rightmost) channel is white, rather than where the RGB channels are black.
That changes the code to this:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Load image, ensure not palettised, and make into Numpy array
im = Image.open('muscle.png').convert('RGBA')
# Make Numpy array
RGBA = np.array(im)
# Get RGB part
RGB = RGBA[..., :3]
# Get alpha channel as Numpy array
alpha = RGBA[..., 3]
# Find X,Y coordinates of all fully opaque pixels in image
blkY, blkX = np.where(alpha==255)
# Just take one entry per row, even if multiple opaque pixels in it
uniqueRows = np.unique(blkY)
# How many rows are there with opaque pixels in?
nUniqueRows = len(uniqueRows)
for percent in range(2,101):
    # Work out filename based on percentage
    filename = f'result-{percent:03d}.png'
    # How many rows do we need to fill?
    nRows = int(nUniqueRows * percent/100.0)
    # Which rows are they? Negative index because filling bottom-up.
    rows = uniqueRows[-nRows:]
    print(f'DEBUG: filename: {filename}, percent: {percent}, nRows: {nRows}, rows: {rows}')
    # What are the indices onto blkY, blkX ?
    indices = np.argwhere(np.isin(blkY, rows))
    # Make those pixels green
    RGB[blkY[indices.ravel()], blkX[indices.ravel()], :3] = [0,255,0]
    res = Image.fromarray(RGBA).save(filename)
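Note that RGB is a NumPy view onto the first three channels of RGBA, so writing into RGB also updates RGBA, which is what gets saved on each pass through the loop.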
Original Answer
That was fun! This seems to work - though it's not that efficient. It is not a true "floodfill", see note at end.
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Load image, ensure not palettised, and make into Numpy array
im = Image.open('heart.png').convert('RGB')
# Make Numpy array
na = np.array(im)
# Get greyscale version of image as Numpy array
grey = np.array(im.convert('L'))
# Find X,Y coordinates of all black pixels in image
blkY, blkX = np.where(grey==0)
# Just take one entry per row, even if multiple black pixels in it
uniqueRows = np.unique(blkY)
# How many rows are there with black pixels in?
nUniqueRows = len(uniqueRows)
for percent in range(1,101):
    # Work out filename based on percentage
    filename = f'result-{percent:03d}.png'
    # How many rows do we need to fill?
    nRows = int(nUniqueRows * percent/100.0)
    # Which rows are they? Negative index because filling bottom-up.
    rows = uniqueRows[-nRows:]
    # print(f'DEBUG: filename: {filename}, percent: {percent}, nRows: {nRows}, rows: {rows}')
    # What are the indices onto blkY, blkX ?
    indices = np.argwhere(np.isin(blkY, rows))
    # Make those pixels green
    na[blkY[indices.ravel()], blkX[indices.ravel()], :] = [0,255,0]
    res = Image.fromarray(na).save(filename)
Note that this isn't actually a true "flood fill" (it is more naïve than that) because a real flood fill doesn't seem necessary for your image. If you add another shape, it will fill that too.
My goal with the code is to print an image where I have removed all the saturated pixels (the pixels that gives off their maximum value). This is because I am analyzing data from an .fit image of a star.
The instruction I got was:
"What you need to do is figure out what the maximum pixel value is in the 2d array from the data. Then write code that will remove those values, while still keeping the image. Basically, what I want to see is a star with a hole in the middle, where you have removed the saturated pixels."
I have succeeded in finding the maximum pixel value (65535) and now I need to display my image without the pixels that have that particular value.
This is my code so far but now I do not know how to remove the pixels from my image.
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt
fits_image_filename = "Acturus_V_2s.fit"
hdul = fits.open(fits_image_filename)
data = hdul[0].data
datacut = data[610:710,755:855]
plt.imshow(datacut, origin="lower")
MaxPixelValue = np.amax(datacut)
print(MaxPixelValue)
And this outputs 65535 and displays my image.
How am I supposed to remove those pixels?
As the comments have pointed out, it's not generally possible to "remove" pixels from an image. But there are a few ways of dealing with some pixels in an image while leaving the others alone. In general, to do this you'll end up creating a boolean mask -- an array with the same shape as your image but containing values that are either True or False.
mask = datacut >= MaxPixelValue
Now the mask array contains True wherever the original image is saturated and False everywhere else.
You can set the values of the saturated pixels to NaN, which Matplotlib treats as blank (they get no color when using imshow()). Since NaN is a float value, convert the array to float first if your data is an integer type:
datacut = datacut.astype(float)
datacut[mask] = np.nan
Alternatively, you can set the saturated pixels in your image to some indicative color (e.g. white or black):
datacut[mask] = 0
Or you can create a numpy maskedarray, which is a sort of combination of your original image and the mask:
masked_image = np.ma.masked_array(datacut, mask)
The numpy documentation has a brief description of how to use masked arrays.
https://numpy.org/doc/stable/reference/maskedarray.generic.html
I also found some documentation from the astrophysics community that might be useful, since you're also using fits and astropy.
https://prancer.physics.louisville.edu/astrowiki/index.php/Image_processing_with_Python_and_SciPy#Masked_Image_Operations
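Putting it together, here is a minimal sketch using the NaN approach, reusing the filename and cutout from your question:
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt

hdul = fits.open("Acturus_V_2s.fit")
# convert to float so the saturated pixels can be set to NaN
datacut = hdul[0].data[610:710, 755:855].astype(float)

# boolean mask of the saturated pixels
mask = datacut >= np.amax(datacut)

# blank them out, leaving a hole in the middle of the star
datacut[mask] = np.nan
plt.imshow(datacut, origin="lower")
plt.show()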
I am trying to convert an image into an array of pixels.
Here is my current code.
im = Image.open("beeleg.png")
pixels = im.load()
im.getdata() # doesn't work
print(pixels # doesn't work
Ideally, my end goal is to convert the image into a vector of just pixels, so for instance if I have an image of dimensions 100x100, then I want a vector of dimensions 1x10000, where each value is between [0, 255]. Then, divide each of the values in the array by 256 and add a bias of 1 in the front of the vector. However, I am not able to proceed with all this without being able to obtain an array. How to proceed?
SciPy's ndimage library is generally the go-to library for working with pixels as data (arrays). You can load an image from file (most common formats are supported) using scipy.ndimage.imread into a NumPy array, which can be easily reshaped and mathematically operated on. The mode keyword can be used to specify a colorspace transformation on load (e.g. convert an RGB image to black and white). In your case you asked for single-channel pixels from 0-255 (8-bit greyscale), so you would use mode='L'. See the documentation for usage and more useful functions. Note that scipy.ndimage.imread has since been deprecated and removed from newer SciPy releases; imageio.imread (or Pillow itself) is the suggested replacement.
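As a rough sketch of the end goal you describe (flatten to a single vector, scale, and prepend a bias of 1), here is the equivalent with Pillow and NumPy, since scipy.ndimage.imread is gone from current SciPy; the filename is the one from your snippet:
import numpy as np
from PIL import Image

# load as 8-bit greyscale, values 0-255 (same as mode='L')
im = Image.open("beeleg.png").convert("L")
pixels = np.array(im, dtype=np.float64).reshape(-1)  # e.g. 100x100 -> 10000

pixels /= 256.0                            # scale each value by 1/256
vector = np.concatenate(([1.0], pixels))   # bias of 1 at the front
print(vector.shape)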
If you use OpenCV, gray = cv2.imread(image, 0) will return the greyscale image as a single-channel NumPy array of n rows x m cols. rows, cols = gray.shape will give you the height and width of the image.
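For completeness, a similar sketch of the same flattening with OpenCV (assuming the same file as above):
import cv2
import numpy as np

gray = cv2.imread("beeleg.png", 0)   # single-channel greyscale array
rows, cols = gray.shape              # height and width
vector = np.concatenate(([1.0], gray.reshape(-1) / 256.0))
print(rows, cols, vector.shape)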
I am new to python so I really need help with this one.
I have an image greyscaled and thresholded so that the only colors present are black and white.
I'm not sure how to go about writing an algorithm that will give me a list of coordinates (x,y) on the image array that correspond to the white pixels only.
Any help is appreciated!
Surely you must already have the image data in the form of a list of intensity values? If you're using Anaconda, you can use the PIL Image module and call getdata() to obtain this intensity information. Some people advise to use NumPy methods, or others, instead, which may improve performance. If you want to look into that then go for it, my answer can apply to any of them.
If you already have a function to convert a greyscale image to B&W, then you should have the intensity information for that output image: a list of 0's and 1's, starting from the top left corner to the bottom right. If you have that, you already have your location data, it just isn't in (x, y) form. To do that, use something like this:
data = image.getdata()
height = image.getHeight()
width = image.getWidth()
pixelList = []
for i in range(height):
    for j in range(width):
        stride = (width * i) + j
        pixelList.append((j, i, data[stride]))
Where data is a list of 0's and 1's (B&W), and I assume you have written getWidth() and getHeight(). Don't just copy what I've written; understand what the loops are doing. That will result in a list, pixelList, of tuples, each tuple containing location and intensity information in the form (x, y, intensity). That may be a messy form for what you are doing, but that's the idea. It would be much cleaner and more accessible to pass the three values (x, y, intensity) to a Pixel object or something, instead of making a list of tuples. Then you can get any of those values from anywhere. I would encourage you to do that, for better organization and so you can write the code on your own, as sketched below.
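For example, a minimal sketch of that Pixel-object idea (the class and field names are just illustrative, and it reuses data, width and height from above):
class Pixel:
    def __init__(self, x, y, intensity):
        self.x = x
        self.y = y
        self.intensity = intensity

pixelList = [Pixel(j, i, data[(width * i) + j])
             for i in range(height) for j in range(width)]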
In either case, having the intensity and location stored together makes sorting out the white pixels very easy. Here it is using the list of tuples:
whites = []
for pixel in pixelList:
    if pixel[2] == 1:
        whites.append(pixel[0:2])
Then you have a list of white pixel coordinates.
You can use PIL and np.where to get the results efficiently and concisely:
from PIL import Image
import numpy as np
img = Image.open('/your_pic.png')
pixel_mat = np.array(img.getdata())
width = img.size[0]
pixel_ind = np.where((pixel_mat[:, :3] > 0).any(axis=1))[0]
coordinate = np.concatenate(
    [
        (pixel_ind % width).reshape(-1, 1),
        (pixel_ind // width).reshape(-1, 1),
    ],
    axis=1,
)
Pick the required pixels and get their indices, then calculate the coordinates from them. Because it avoids Python-level loops, this approach should be faster.
PIL is only used to get the pixel matrix and the image width; you can replace it with any library you are familiar with.
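Note that the condition above keeps every non-black pixel; if you want strictly white pixels only, a small tweak should do (assuming an RGB or RGBA image):
pixel_ind = np.where((pixel_mat[:, :3] == 255).all(axis=1))[0]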
I would like to go from an image filename to a list of coordinates of the white pixels in the image.
I know it involves PIL. I have tried using Image.load() but this doesn't help because the output is not indexable (to use in a for loop).
You can dump an image as a numpy array and manipulate the pixel values that way.
from PIL import Image
import numpy as np
im=Image.open("someimage.png")
pixels=np.asarray(im.getdata())
npixels,bpp=pixels.shape
This will give you an array whose dimensions will depend on how many bands you have per pixel (bpp above) and the number of rows times the number of columns in the image -- shape will give you the size of the resulting array. Once you have the pixel values, it ought to be straightforward to filter out those whose values are 255.
To convert a NumPy array back to an image, use Image.fromarray(); note that pixels here is flattened to shape (npixels, bpp), so you need to reshape it back to (rows, cols, bpp) and convert to uint8 first:
im = Image.fromarray(pixels.reshape(im.height, im.width, bpp).astype(np.uint8))
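As a minimal sketch of that filtering step, assuming an RGB image where white pixels are 255 in every band:
from PIL import Image
import numpy as np

im = Image.open("someimage.png").convert("RGB")
pixels = np.asarray(im)                     # shape: (rows, cols, 3)

# rows and columns of the pixels that are 255 in every band
ys, xs = np.where((pixels == 255).all(axis=2))
coords = list(zip(xs, ys))                  # (x, y) pairs
print(coords[:10])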