histogram equalization skimage - python

I'm trying to get some statistics from some images, and when I tried to perform histogram equalization I got confused.
Because I tried this:
img = io.imread(file);
img = exposure.equalize_hist(img);
And I get the warning: warn("This might be a color image. The histogram will be
Then I tried to perform the equalization on each channel, like this:
img = io.imread(file);
#img = exposure.equalize_hist(img);
height, width = len(img), len(img[0]);
r1 = [];
g1 = [];
b1 = [];
for i in range(height):
    for j in range(width):
        pixel = img[i, j];
        r1.append(pixel[0]);
        g1.append(pixel[1]);
        b1.append(pixel[2]);
r = exposure.equalize_hist(r1);
g = exposure.equalize_hist(g1);
b = exposure.equalize_hist(b1);
And I get the error
AttributeError: 'list' object has no attribute 'shape'
So how should I do histogram equalization on a color image? And if I want to do it on an image in HSV or CIELAB, is it done the same way?

To equalize each channel separately:
from skimage import io, exposure, img_as_float
img = img_as_float(io.imread(img_path))  # work in float so the equalized values in [0, 1] are not truncated
for channel in range(img.shape[2]):  # equalizing each channel
    img[:, :, channel] = exposure.equalize_hist(img[:, :, channel])
That's because img[:, :, channel] already gives you the 2d array that equalize_hist supports, so you don't need to build three lists (which would be considerably inefficient, by the way). The code assumes that you have a colour image (3d array) with the channels on the last dimension (which is the case if you load it with skimage.io.imread).
Also, it should work the same with RGB, HSV or Lab (skimage conversions keep the channels on the last dimension). For example img = color.rgb2hsv(img) or img = color.rgb2lab(img).
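A common variant (a sketch, assuming an RGB input loaded with io.imread) is to equalize only the value channel in HSV, so the colours themselves are not shifted:

from skimage import io, color, exposure

img = color.rgb2hsv(io.imread(img_path))  # float HSV image, channels on the last axis
img[:, :, 2] = exposure.equalize_hist(img[:, :, 2])  # equalize V only
img = color.hsv2rgb(img)  # back to RGB floats in [0, 1]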
If you load a grey-scale image (already a 2d array), then your commented-out line should work (you can handle both cases with a simple if condition, as sketched below).
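A minimal sketch of that if condition (assuming img comes straight from skimage.io.imread):

from skimage import io, exposure, img_as_float

img = img_as_float(io.imread(img_path))
if img.ndim == 2:
    # grey-scale: a single 2d array, equalize directly
    img = exposure.equalize_hist(img)
else:
    # colour: equalize each channel along the last axis
    for channel in range(img.shape[2]):
        img[:, :, channel] = exposure.equalize_hist(img[:, :, channel])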
Just something else: you can drop the semicolons.

Related

Combining 3 channel numpy array to form an rgb image

I was trying to combine 3 grey-scale images into a single overlapping image, with a different colour for each.
For that, I added each one into a 3 channel numpy array.
But when plotting with im.show I don't get a colourful image. Up to adding the 2nd channel it works, but when I add the third channel, it doesn't work. The final image has only red and blue colour.
It is supposed to be red, green and blue, corresponding to the three overlapping images.
Why would that be?
image1 = Image.open("E:/imaging/04102022_Bronze/Copper_4_2/10.tif")  # opening image 1
image1_norm = (np.array(image1) - np.array(image1).min()) / (np.array(image1).max() - np.array(image1).min())  # normalising image 1
image2 = Image.open("E:/imaging/04102022_Bronze/Oxygen_1_2/10.tif")  # opening image 2
image2_norm = (np.array(image2) - np.array(image2).min()) / (np.array(image2).max() - np.array(image2).min())  # normalising image 2
image3 = Image.open("E:/imaging/04102022_Bronze/Oxygen_1_2/10.tif")  # opening image 3
image3_norm = (np.array(image3) - np.array(image3).min()) / (np.array(image3).max() - np.array(image3).min())  # normalising image 3
im = np.array(image2)
new_image = np.zeros(im.shape + (3,))  # creating an empty 3 channel numpy array; .shape of this array is (255, 1024, 3)
new_image[:,:,0] = image1_norm  # adding the three images into three channels
new_image[:,:,1] = image2_norm
new_image[:,:,2] = image3_norm
new_image1 = new_image * 255.999
new_image2 = new_image1.astype(np.uint8)
final_image = Image.fromarray(new_image2, mode='RGB')
A few possible issues...
When you open an image in PIL, if you want to be sure it is single-channel greyscale, and not accidentally 3-channel RGB, or a palette image, force it to greyscale:
im = Image.open('image.png').convert('L')
Try not to repeat complicated calculations or expressions several times - it just makes for a maintenance nightmare. Maybe use a function instead:
def normalize(im):
    # Normalise image to range 0..1
    min, max = im.min(), im.max()
    return (im.astype(float)-min)/(max-min)
You can use Numpy's dstack() to merge channels - it means "depth"-stack, as opposed to np.vstack() which stacks images vertically above/below each other and np.hstack() which stacks images side-by-side horizontally. It is a lot simpler than creating an image of the right size and individually pushing each channel into it.
result = np.dstack((im1, im2, im3))
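As a quick shape check (with hypothetical 2x2 arrays):

import numpy as np

a = np.zeros((2, 2))
print(np.dstack((a, a, a)).shape)  # (2, 2, 3) - merged as the channels of one image
print(np.vstack((a, a, a)).shape)  # (6, 2)    - stacked vertically
print(np.hstack((a, a, a)).shape)  # (2, 6)    - stacked horizontally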
That would make the overall code more like this:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
def normalize(im):
    # Normalise image to range 0..1
    min, max = im.min(), im.max()
    return (im.astype(float)-min)/(max-min)
# Load images as single channel Numpy arrays
im1 = np.array(Image.open('ch1.png').convert('L'))
im2 = np.array(Image.open('ch2.png').convert('L'))
im3 = np.array(Image.open('ch3.png').convert('L'))
# Normalize and scale
n1 = normalize(im1) * 255.999
n2 = normalize(im2) * 255.999
n3 = normalize(im3) * 255.999
# Merge channels to RGB
result = np.dstack((n1,n2,n3))
result = Image.fromarray(result.astype(np.uint8))
result.save('result.png')
That makes these three input images:
into this merged image:

How to merge two RGBA images

I'm trying to merge two RGBA images (with a shape of (h,w,4)), taking into account their alpha channels.
Example:
What I've tried
I tried to do this using opencv, but I'm getting some strange pixels on the output image.
Images Used:
and
import cv2
import numpy as np
import matplotlib.pyplot as plt
image1 = cv2.imread("image1.png", cv2.IMREAD_UNCHANGED)
image2 = cv2.imread("image2.png", cv2.IMREAD_UNCHANGED)
mask1 = image1[:,:,3]
mask2 = image2[:,:,3]
mask2_inv = cv2.bitwise_not(mask2)
mask2_bgra = cv2.cvtColor(mask2, cv2.COLOR_GRAY2BGRA)
mask2_inv_bgra = cv2.cvtColor(mask2_inv, cv2.COLOR_GRAY2BGRA)
# output = image2*mask2_bgra + image1
output = cv2.bitwise_or(cv2.bitwise_and(image2, mask2_bgra), cv2.bitwise_and(image1, mask2_inv_bgra))
output[:,:,3] = cv2.bitwise_or(mask1, mask2)
plt.figure(figsize=(12,12))
plt.imshow(cv2.cvtColor(output, cv2.COLOR_BGRA2RGBA))
plt.axis('off')
Output:
What I figured out is that I'm getting those weird pixels because I used the cv2.bitwise_and function (which, by the way, works perfectly with binary alpha channels).
I tried using different approaches
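To see why bitwise masking misbehaves with fractional alpha, here is a small numeric sketch (hypothetical values): a bitwise AND keeps only shared bits instead of scaling the value.

import numpy as np

pixel = np.uint8(200)  # a colour channel value (0b11001000)
alpha = np.uint8(128)  # 50% opacity stored as 128 (0b10000000)

masked = pixel & alpha                       # 200 & 128 = 128: only the shared bit survives
blended = np.uint8(pixel * (alpha / 255.0))  # 200 * 128/255 = ~100: the intended scaling
print(masked, blended)  # 128 100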
Question
Is there an approach to do this (while keeping the output image as an 8-bit image)?
I was able to obtain the expected result in 2 stages.
# Read both images preserving the alpha channel
hh1 = cv2.imread(r'C:\Users\524316\Desktop\Stack\house.png', cv2.IMREAD_UNCHANGED)
hh2 = cv2.imread(r'C:\Users\524316\Desktop\Stack\memo.png', cv2.IMREAD_UNCHANGED)
# store the alpha channels only
m1 = hh1[:,:,3]
m2 = hh2[:,:,3]
# invert the alpha channel and obtain a 4-channel (BGRA) mask of float data type
m1i = cv2.bitwise_not(m1)
alpha1i = cv2.cvtColor(m1i, cv2.COLOR_GRAY2BGRA)/255.0
m2i = cv2.bitwise_not(m2)
alpha2i = cv2.cvtColor(m2i, cv2.COLOR_GRAY2BGRA)/255.0
# Perform blending and limit pixel values to 0-255 (convert to 8-bit)
b1i = cv2.convertScaleAbs(hh2*(1-alpha2i) + hh1*alpha2i)
Note: in the above, we are using only the inverted alpha channel of the memo image.
But I guess this is not the expected result. So moving on...
# Finding common ground between both the inverted alpha channels
mul = cv2.multiply(alpha1i,alpha2i)
# converting to 8-bit
mulint = cv2.normalize(mul, dst=None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
# again create a 4-channel (BGRA) mask of float data type
alpha = cv2.cvtColor(mulint[:,:,2], cv2.COLOR_GRAY2BGRA)/255.0
# perform blending using previous output and multiplied result
final = cv2.convertScaleAbs(b1i*(1-alpha) + mulint*alpha)
Sorry for the weird variable names. I would request you to analyze the result in each line. I hope this is the expected output.
You could use the PIL library to achieve this:
from PIL import Image
def merge_images(im1, im2):
    bg = Image.open(im1).convert("RGBA")
    fg = Image.open(im2).convert("RGBA")
    x, y = ((bg.width - fg.width) // 2, (bg.height - fg.height) // 2)
    bg.paste(fg, (x, y), fg)
    # convert to 8 bits (palette mode)
    return bg.convert("P")
We can test it using the provided images:
result_image = merge_images("image1.png", "image2.png")
result_image.save("image3.png")
Here's the result:

What is wrong with my focus stacking algorithm?

You can see in the final focus stacked image that the whole image is in focus. However, pieces of the image are missing and I have no clue why. The basic steps of my algorithm are:
Access the images. Convert them to grayscale, blur the gray images a bit, then find the Laplacian of each. I store all the Laplacian images in a list.
Cycle through the pixels of a blank image using for loops. Every iteration creates a list containing the pixel intensities of the gray, blurred, Laplacian images at that pixel. Find the max pixel intensity. Then look at the BGR value of the ORIGINAL image where the max pixel intensity came from, and set the BGR value of the blank pixel equal to it.
Here is my code:
images = glob2.glob("Pics\\step*") # Accesses images in the Pics folder
laps = [] # A list to contain Laplacians of images in Pics
i=0
for image in images:
    img = cv.imread(image)  # Reads image in Pics
    images[i] = img  # glob2 only gives a list of image NAMES (ie strings), so this replaces each string with the loaded image
    img = cv.cvtColor(img, cv.COLOR_BGR2GRAY)  # Converts image to grayscale
    gauss = cv.GaussianBlur(img, (3,3), 0)  # Blurs grayed image a bit
    lap = cv.Laplacian(gauss, cv.CV_64F)  # Takes the Laplacian of the blurred, gray image
    lap = np.uint8(np.absolute(lap))  # Converts to absolute values as uint8
    laps.append(lap)  # Adds Laplacian to laps
    i += 1
sample = laps[0] # Arbitrarily accesses the first image in laps so that we can get dimensions for the blank image below
fs = np.zeros((sample.shape[0], sample.shape[1], 3), dtype='uint8') # Creates a blank image with the dimensions of sample
for x in range(sample.shape[0]): # The for loops go through every x and y value
    for y in range(sample.shape[1]):
        intensities = [laps[0][x,y], laps[1][x,y], laps[2][x,y], laps[3][x,y], laps[4][x,y], laps[5][x,y]] # List of intensities of laplacian images
        color = images[intensities.index(max(intensities))][x,y] # Finds BGR value of the x,y pixel in the ORIGINAL image corresponding to the highest intensity
        fs[x, y] = color # Sets pixel of blank fs image to the color of the pixel with the strongest intensity
cv.imshow('FS', fs)
Here is what the code produces:
Broken Focus Stacked Image
I was inspired by your code and made this simple script, which seems to work fine. (I do not need to align the images.) Using a mask to select the pixels in focus may be faster, but I haven't tried to compare the two versions. I would appreciate any advice on how to improve it.
from pathlib import Path
from imageio import imread, imwrite
import numpy as np
import matplotlib.pyplot as plt
from skimage.color import rgb2hsv, rgb2gray
from skimage import img_as_float, img_as_ubyte
from scipy.ndimage.filters import gaussian_filter
from skimage.filters.rank import gradient
from skimage.morphology import disk
im_dir = Path("test")
sigma = 3
print("_____ load images _____")
fps = [f for f in im_dir.glob("*.jpg")]
print([f.name for f in fps])
images_rgb = [imread(f) for f in fps]
images_rgb_cube = np.array(images_rgb)
print("images_rgb_cube", images_rgb_cube.shape, images_rgb_cube.dtype)
print("_____ images to grey _____")
#images_grey = [rgb2hsv(im)[:,:,2] for im in images_rgb] # slow
images_grey = [rgb2gray(im) for im in images_rgb] # faster
print("_____ get gradients _____")
selection_element = disk(sigma) # matrix of n pixels with a disk shape
grads = [gradient(im, selection_element) for im in images_grey]
grads = np.array(grads)
print("grads", grads.shape, grads.dtype)
print("_____ get mask _____")
mask_grey = grads.max(axis=0, keepdims=1) == grads # https://stackoverflow.com/questions/47678252/mask-from-max-values-in-numpy-array-specific-axis
mask_rgb = np.repeat(mask_grey[:, :, :, np.newaxis], 3, axis=3)
print("mask_rgb", mask_rgb.shape, mask_rgb.dtype)
print("_____ apply mask _____")
image_sharp = images_rgb_cube * mask_rgb
image_sharp = image_sharp.max(axis=0)
print("image_sharp", image_sharp.shape, image_sharp.dtype)
print("_____ save image _____")
imwrite(im_dir / "stacked.jpeg", image_sharp)
plt.imshow(image_sharp)
plt.show()
print("_____ save masks _____")
print("mask_grey", mask_grey.shape, mask_grey.dtype)
for i in range(mask_grey.shape[0]):
    mask = mask_grey[i]
    fp = im_dir / "{}_mask.jpeg".format(fps[i].stem)
    imwrite(fp, img_as_ubyte(mask))
    print("saved", fp, mask.shape, mask.dtype)

What is right way to combine RGBY images to single image which has all color channels?

I am trying to combine multiple colour channel images into a single image. Like:
img1 = cv2.imread('This image is red channel image')
img2 = cv2.imread('This image is green channel image')
img3 = cv2.imread('This image is blue channel image')
img4 = cv2.imread('This image is yellow channel image')
Now, I have researched online and found 2 different answers. First way:
finalImage = np.stack((img1, img2, img3, img4), axis=-1)
The other way I found is:
finalImage = img1/4 + img2/4 + img3/4 + img4/4
Which one is the right way?
Thanks in advance.
It depends what you want.
If you want to read 4 distinct, single channel images and stack them along the final axis to get a 4-channel image, use:
finalImage = np.stack((img1, img2, img3, img4), axis=-1)
and the shape of finalImage will be
(H, W, 4)
If you want to read 4 distinct, single channel images and average them to get a 1-channel image, use:
finalImage = img1/4 + img2/4 + img3/4 + img4/4
and the shape of finalImage will be
(H, W)
I doubt you want this. IMHO it seems unlikely you would want to lump the Y channel in with the colour channels, but stranger things have happened.
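A quick shape check of both options (with hypothetical 480x640 single-channel arrays):

import numpy as np

img1 = img2 = img3 = img4 = np.zeros((480, 640), dtype=np.uint8)

stacked = np.stack((img1, img2, img3, img4), axis=-1)
print(stacked.shape)  # (480, 640, 4) - one channel per image

averaged = img1/4 + img2/4 + img3/4 + img4/4
print(averaged.shape)  # (480, 640) - a single blended channel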
You can combine separate channels like this:
# images are loaded in "bgr" format
img_r = cv2.imread("red.png")
img_g = cv2.imread("green.png")
img_b = cv2.imread("blue.png")
# split and extract relevant channel
_, _, r = cv2.split(img_r)
_, g, _ = cv2.split(img_g)
b, _, _ = cv2.split(img_b)
# merge together as one image (cv2.merge takes a list of channels)
bgr = cv2.merge([b, g, r])
You can also load them in as grayscale and merge without doing the split, but then you need to remember to compensate for each channel's relative scaling factor, as in the sketch below. OpenCV's grayscale conversion for BGR -> Gray is:
Y = 0.299 R + 0.587 G + 0.114 B
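So, as a rough sketch of that grayscale route (assuming each file really contains one pure channel, e.g. red.png is pure red, so its grey value is just 0.299 * R):

import cv2
import numpy as np

# load as grayscale; for a pure red image this yields 0.299 * R,
# so divide the coefficient back out to recover the channel
r = cv2.imread("red.png", cv2.IMREAD_GRAYSCALE) / 0.299
g = cv2.imread("green.png", cv2.IMREAD_GRAYSCALE) / 0.587
b = cv2.imread("blue.png", cv2.IMREAD_GRAYSCALE) / 0.114

# clip any rounding overshoot and merge back into one BGR image
bgr = cv2.merge([np.clip(c, 0, 255).astype(np.uint8) for c in (b, g, r)])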

How can I soften just the edges of this image

I am using python and opencv to cut an image using a mask. The mask itself is quite jagged and so the resulting image becomes a bit jagged around the edges like below
Jagged image
Is there a way I can smooth out the edges so they look more like this without affecting the rest of the image?
Smoothed edge
Thanks
SoS
** UPDATE **
Added the original jagged image without the annotation
Original Jagged image
Here is one way using OpenCV, Numpy and Skimage. I assume you actually have an image with a transparent background and not just a checkerboard pattern.
Input:
import cv2
import numpy as np
import skimage.exposure
# load image with alpha channel
img = cv2.imread('lena_circle.png', cv2.IMREAD_UNCHANGED)
# extract only bgr channels
bgr = img[:, :, 0:3]
# extract alpha channel
a = img[:, :, 3]
# blur alpha channel
ab = cv2.GaussianBlur(a, (0,0), sigmaX=2, sigmaY=2, borderType = cv2.BORDER_DEFAULT)
# stretch so that 255 -> 255 and 127.5 -> 0
aa = skimage.exposure.rescale_intensity(ab, in_range=(127.5,255), out_range=(0,255))
# replace alpha channel in input with new alpha channel
out = img.copy()
out[:, :, 3] = aa
# save output
cv2.imwrite('lena_circle_antialias.png', out)
# Display various images to see the steps
# NOTE: In and Out show heavy aliasing. This seems to be an artifact of imshow(), which did not display transparency for me. However, the saved image looks fine
cv2.imshow('In',img)
cv2.imshow('BGR', bgr)
cv2.imshow('A', a)
cv2.imshow('AB', ab)
cv2.imshow('AA', aa)
cv2.imshow('Out', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
I am by no means an expert with OpenCV. I looked at cv2.normalize(), but it did not look like I could provide my own sets of input and output values. So I also tried the following, adding clipping to be sure there were no overflows or underflows:
aa = a*2.0 - 255.0
aa[aa<0] = 0
aa[aa>255] = 255
where I computed that by solving simultaneous equations such that in=255 becomes out=255 and in=127.5 becomes out=0, doing a linear stretch between:
C = A*X+B
255 = A*255+B
0 = A*127.5+B
Thus A=2 and B=-127.5
But that does not work nearly as well as skimage rescale_intensity.
These are some effects you can do with the PIL image library:
from PIL import Image, ImageFilter
im_1 = Image.open("/constr/pics1/russian_doll.png")
im_2 = im_1.filter(ImageFilter.BLUR)
im_3 = im_1.filter(ImageFilter.CONTOUR)
im_4 = im_1.filter(ImageFilter.DETAIL)
im_5 = im_1.filter(ImageFilter.EDGE_ENHANCE)
im_6 = im_1.filter(ImageFilter.EDGE_ENHANCE_MORE)
im_7 = im_1.filter(ImageFilter.EMBOSS)
im_8 = im_1.filter(ImageFilter.FIND_EDGES)
im_9 = im_1.filter(ImageFilter.SMOOTH)
im_10 = im_1.filter(ImageFilter.SMOOTH_MORE)
im_11 = im_1.filter(ImageFilter.SHARPEN)
# now save the images
im_2.save("/constr/picsx/russian_doll_BLUR.png")
im_3.save("/constr/picsx/russian_doll_CONTOUR.png")
im_4.save("/constr/picsx/russian_doll_DETAIL.png")
im_5.save("/constr/picsx/russian_doll_EDGE_ENHANCE.png")
im_6.save("/constr/picsx/russian_doll_EDGE_ENHANCE_MORE.png")
im_7.save("/constr/picsx/russian_doll_EMBOSS.png")
im_8.save("/constr/picsx/russian_doll_FIND_EDGES.png")
im_9.save("/constr/picsx/russian_doll_SMOOTH.png")
im_10.save("/constr/picsx/russian_doll_SMOOTH_MORE.png")
im_11.save("/constr/picsx/russian_doll_SHARPEN.png")
