I am trying to separate the CMYK channels from an RGB image to be used later.
The question np.dstack() does not recreate original image in OpenCV is my earlier question, using a test image. I decided to run the same code again with a different image, and the results it returns leave me unsure what is happening.
The issue is that the results do not match the original image as they are supposed to.
Left: Original Image
Top Right: Result as .jpg
Bottom Right: Result as .tif
Code:
import cv2
import numpy as np
# Load image
bgr = cv2.imread('xpwallpaper.jpg')
# Make float and divide by 255 to give BGRdash
bgrdash = bgr.astype(float)/255.
# Calculate K as (1 - whatever is biggest out of Rdash, Gdash, Bdash)
K = 1 - np.max(bgrdash, axis=2)
# Calculate C
C = (1-bgrdash[...,2] - K)/(1-K)
# Calculate M
M = (1-bgrdash[...,1] - K)/(1-K)
# Calculate Y
Y = (1-bgrdash[...,0] - K)/(1-K)
# Combine 4 channels into single image and re-scale back up to uint8
CMYK = (np.dstack((C,M,Y,K))*255).astype(np.uint8)
cv2.imwrite("CMYK.jpg", CMYK)
Related
I was trying to combine 3 greyscale images into a single overlapping image, with a different colour for each.
For that, I added each one into a 3-channel numpy array.
But when plotting with im.show() I don't get a colourful image. It works while adding the second channel, but when I add the third channel it doesn't work: the final image has only red and blue colour.
It is supposed to show red, green and blue, corresponding to the three overlapping images.
Why would that be?
from PIL import Image
import numpy as np

image1 = Image.open("E:/imaging/04102022_Bronze/Copper_4_2/10.tif")  # opening image 1
image1_norm = (np.array(image1) - np.array(image1).min()) / (np.array(image1).max() - np.array(image1).min())  # normalising image 1
image2 = Image.open("E:/imaging/04102022_Bronze/Oxygen_1_2/10.tif")  # opening image 2
image2_norm = (np.array(image2) - np.array(image2).min()) / (np.array(image2).max() - np.array(image2).min())  # normalising image 2
image3 = Image.open("E:/imaging/04102022_Bronze/Oxygen_1_2/10.tif")  # opening image 3
image3_norm = (np.array(image3) - np.array(image3).min()) / (np.array(image3).max() - np.array(image3).min())  # normalising image 3
im = np.array(image2)
new_image = np.zeros(im.shape + (3,))  # creating an empty 3-channel numpy array; its shape is (255, 1024, 3)
new_image[:,:,0] = image1_norm  # adding the three images into three channels
new_image[:,:,1] = image2_norm
new_image[:,:,2] = image3_norm
new_image1 = new_image * 255.999
new_image2 = new_image1.astype(np.uint8)
final_image = Image.fromarray(new_image2, mode='RGB')
A few possible issues...
When you open an image in PIL, if you want to be sure it is single-channel greyscale, and not accidentally 3-channel RGB or a palette image, force it to greyscale:
im = Image.open('image.png').convert('L')
Try not to repeat complicated calculations or expressions several times - it just makes for a maintenance nightmare. Maybe use a function instead:
def normalize(im):
    # Normalise image to range 0..1
    lo, hi = im.min(), im.max()
    return (im.astype(float) - lo) / (hi - lo)
You can use Numpy's dstack() to merge channels - it means "depth"-stack, as opposed to np.vstack() which stacks images vertically above/below each other and np.hstack() which stacks images side-by-side horizontally. It is a lot simpler than creating an image of the right size and individually pushing each channel into it.
result = np.dstack((im1, im2, im3))
That would make the overall code more like this:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
def normalize(im):
    # Normalise image to range 0..1
    lo, hi = im.min(), im.max()
    return (im.astype(float) - lo) / (hi - lo)
# Load images as single channel Numpy arrays
im1 = np.array(Image.open('ch1.png').convert('L'))
im2 = np.array(Image.open('ch2.png').convert('L'))
im3 = np.array(Image.open('ch3.png').convert('L'))
# Normalize and scale
n1 = normalize(im1) * 255.999
n2 = normalize(im2) * 255.999
n3 = normalize(im3) * 255.999
# Merge channels to RGB
result = np.dstack((n1,n2,n3))
result = Image.fromarray(result.astype(np.uint8))
result.save('result.png')
That makes these three input images:
into this merged image:
I'm trying to merge two RGBA images (with a shape of (h,w,4)), taking into account their alpha channels.
Example:
What I've tried
I tried to do this using OpenCV, but I'm getting some strange pixels on the output image.
Images Used:
and
import cv2
import numpy as np
import matplotlib.pyplot as plt
image1 = cv2.imread("image1.png", cv2.IMREAD_UNCHANGED)
image2 = cv2.imread("image2.png", cv2.IMREAD_UNCHANGED)
mask1 = image1[:,:,3]
mask2 = image2[:,:,3]
mask2_inv = cv2.bitwise_not(mask2)
mask2_bgra = cv2.cvtColor(mask2, cv2.COLOR_GRAY2BGRA)
mask2_inv_bgra = cv2.cvtColor(mask2_inv, cv2.COLOR_GRAY2BGRA)
# output = image2*mask2_bgra + image1
output = cv2.bitwise_or(cv2.bitwise_and(image2, mask2_bgra), cv2.bitwise_and(image1, mask2_inv_bgra))
output[:,:,3] = cv2.bitwise_or(mask1, mask2)
plt.figure(figsize=(12,12))
plt.imshow(cv2.cvtColor(output, cv2.COLOR_BGRA2RGBA))
plt.axis('off')
Output :
What I figured out is that I'm getting those weird pixels because I used the cv2.bitwise_and function (which, by the way, works perfectly with binary alpha channels).
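A quick way to see why: bitwise AND combines bit patterns rather than scaling magnitudes, so it only behaves like masking when the mask values are exactly 0 or 255:
import numpy as np

# AND with 255 (all bits set) keeps a value intact ...
print(np.bitwise_and(200, 255))   # 200
# ... but AND with a partial alpha value scrambles the bits
print(np.bitwise_and(200, 127))   # 72, not a proportionally scaled 200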
I tried using different approaches
Question
Is there an approach to do this, while keeping the output image as an 8-bit image?
I was able to obtain the expected result in two stages.
# Read both images preserving the alpha channel
hh1 = cv2.imread(r'C:\Users\524316\Desktop\Stack\house.png', cv2.IMREAD_UNCHANGED)
hh2 = cv2.imread(r'C:\Users\524316\Desktop\Stack\memo.png', cv2.IMREAD_UNCHANGED)
# store the alpha channels only
m1 = hh1[:,:,3]
m2 = hh2[:,:,3]
# invert the alpha channel and obtain a 4-channel mask of float data type
m1i = cv2.bitwise_not(m1)
alpha1i = cv2.cvtColor(m1i, cv2.COLOR_GRAY2BGRA)/255.0
m2i = cv2.bitwise_not(m2)
alpha2i = cv2.cvtColor(m2i, cv2.COLOR_GRAY2BGRA)/255.0
# Perform blending and limit pixel values to 0-255 (convert to 8-bit)
b1i = cv2.convertScaleAbs(hh2*(1-alpha2i) + hh1*alpha2i)
Note: in the above we are using only the inverse alpha channel of the memo image.
But I guess this is not the expected result. So moving on...
# Finding common ground between both the inverted alpha channels
mul = cv2.multiply(alpha1i,alpha2i)
# converting to 8-bit
mulint = cv2.normalize(mul, dst=None, alpha=0, beta=255,norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
# again create a 4-channel mask of float data type
alpha = cv2.cvtColor(mulint[:,:,2], cv2.COLOR_GRAY2BGRA)/255.0
# perform blending using previous output and multiplied result
final = cv2.convertScaleAbs(b1i*(1-alpha) + mulint*alpha)
Sorry for the weird variable names. I would suggest analyzing the result at each line. I hope this is the expected output.
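To make analyzing each line easier, a small display helper might look like this (a sketch, assuming matplotlib is available as in the question, and that the argument is an 8-bit BGRA array such as b1i or final):
import matplotlib.pyplot as plt

def show(img8, title=""):
    # display an 8-bit BGRA intermediate with matplotlib
    plt.imshow(cv2.cvtColor(img8, cv2.COLOR_BGRA2RGBA))
    plt.title(title)
    plt.axis('off')
    plt.show()

show(b1i, "stage 1")
show(final, "stage 2")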
You could use the PIL library to achieve this:
from PIL import Image
def merge_images(im1, im2):
    bg = Image.open(im1).convert("RGBA")
    fg = Image.open(im2).convert("RGBA")
    x, y = ((bg.width - fg.width) // 2, (bg.height - fg.height) // 2)
    bg.paste(fg, (x, y), fg)
    # convert to 8 bits (palette mode)
    return bg.convert("P")
We can test it using the provided images:
result_image = merge_images("image1.png", "image2.png")
result_image.save("image3.png")
Here's the result:
How can I downscale a TIFF image of 10 m resolution and create a new 50 m image where each pixel holds statistics computed from the first image?
The initial TIFF image is a binary classification map, meaning each pixel (10 m) belongs either to class "water" (value = 0) or class "ice" (value = 1).
I would like to create a new image where each pixel is the percentage of water in a 5x5 block of the initial map. Each pixel of the new image will therefore have a 50 m resolution and represent the ratio or percentage of "water" pixels in the corresponding 5x5 block of the former map. You can see the example here: Example
Here is an image sample (can be downloaded from google drive):
https://drive.google.com/uc?export=download&id=19hWQODERRsvoESiUZuL0GQHg4Mz4RbXj
Your image is saved in a rather odd format, using a 32-bit float to represent just two classes of data which could be represented in a single bit, so I converted it to PNG with ImageMagick using:
magick YOURIMAGE.TIF -alpha off image.png
Many Python libraries will struggle with your actual TIFF, so maybe think about using a different way of writing it.
Once that is done, the code might look something like this:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Set size of tiles for subsampling
tileX = tileY = 5
# Open image and convert to greyscale and thence to Numpy array
im = Image.open('image.png').convert('L')
na = np.array(im)
# Round height and width down to next lower multiple of tile sizes
h = (na.shape[0] // tileY) * tileY
w = (na.shape[1] // tileX) * tileX
# Create empty output array to fill
res = np.empty((h//tileY,w//tileX), np.uint8)
pxPerTile = tileX * tileY
for yoffset in range(0, h, tileY):
    for xoffset in range(0, w, tileX):
        # Count ice pixels in this 5x5 tile
        nonZero = np.count_nonzero(na[yoffset:yoffset+tileY, xoffset:xoffset+tileX])
        # Water percentage is the complement of the ice count
        percent = int((100.0 * (pxPerTile - nonZero)) / pxPerTile)
        res[yoffset//tileY, xoffset//tileX] = percent
# Make Numpy array back into PIL Image and save
Image.fromarray(res.astype(np.uint8)).save('result.png')
On reflection, you can probably do it faster and more simply with cv2.resize(), using a scale factor of 0.2 on both axes and cv2.INTER_AREA interpolation.
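That alternative might look something like this minimal sketch, assuming image.png is the converted map with non-zero pixels meaning ice:
import cv2
import numpy as np

# Load the converted classification map and make it a 0/1 float map
na = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
ice = (na > 0).astype(np.float32)

# INTER_AREA with a 0.2 scale factor averages each 5x5 block,
# giving the mean ice fraction per output pixel
mean_ice = cv2.resize(ice, None, fx=0.2, fy=0.2, interpolation=cv2.INTER_AREA)

# Water percentage is the complement of the ice fraction
res = (100.0 * (1.0 - mean_ice)).astype(np.uint8)
cv2.imwrite('result.png', res)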
I did a version in pyvips:
#!/usr/bin/python3
import sys
import pyvips
image = pyvips.Image.new_from_file(sys.argv[1])
# label (0 == water, 1 == ice) is in the first band
label = image[0]
# average 5x5 areas
label = label.shrink(5, 5)
# turn into a percentage of water
water_percent = 100 * (1 - label)
# ... and save
water_percent.write_to_file(sys.argv[2])
I can run it on your test image like this:
$ ./average.py ~/pics/meltPondClassifiedAndS12.tif x.png
To make this (rather dark) output:
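If the darkness is only a display problem, one option (an untested tweak, not part of the answer above) is to stretch the 0-100 percentage range across the full 0-255 range before saving:
# stretch 0-100 percentages up to 0-255 for a brighter preview
preview = (water_percent * 2.55).cast('uchar')
preview.write_to_file('preview.png')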
My program breaks an image into its 4 CMYK channels. When I go to restack these channels, something isn't working: the result does not look like the original image, although it does have colour.
Input Image:
Code:
import cv2
import numpy as np
# Load image
bgr = cv2.imread('xpwallpaper.jpg')
# Make float and divide by 255 to give BGRdash
bgrdash = bgr.astype(float)/255.
# Calculate K as (1 - whatever is biggest out of Rdash, Gdash, Bdash)
K = 1 - np.max(bgrdash, axis=2)
# Calculate C
C = (1-bgrdash[...,2] - K)/(1-K)
# Calculate M
M = (1-bgrdash[...,1] - K)/(1-K)
# Calculate Y
Y = (1-bgrdash[...,0] - K)/(1-K)
# Combine 4 channels into single image and re-scale back up to uint8
CMYK = (np.dstack((C,M,Y,K))*255).astype(np.uint8)
cv2.imwrite("CMYK.jpg", CMYK)
Resulting Image:
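Since the goal is to restack the channels later, a quick round-trip check can show whether the split itself preserves the image. This is a sketch using the algebraic inverse of the formulas above (ignoring the uint8 rounding); pixels corrupted by the 0/0 division for pure black will show up as wrong colours here:
# Invert the conversion above to check the CMYK split round-trips to BGR
C, M, Y, K = cv2.split(CMYK.astype(float) / 255)
B = (1 - Y) * (1 - K) * 255
G = (1 - M) * (1 - K) * 255
R = (1 - C) * (1 - K) * 255
bgr_back = np.dstack((B, G, R)).astype(np.uint8)
cv2.imwrite("roundtrip.png", bgr_back)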
This is an oddly specific issue that I am having trouble searching for.
Is there any OpenCV function to convert three channel binary image to 3 channel RGB image?
Here is my code, where disparitySGB.jpg is a grayscale (3-channel) image.
import cv2
import numpy as np
image = cv2.imread("disparitySGB.jpg")
thresh = cv2.inRange(image, np.array([89, 89, 89]), np.array([140, 140, 140]))
cv2.rectangle(np.array(thresh), (100, 100), (120, 120), (255, 255, 0), 3)
cv2.imshow("thresh",thresh)
cv2.waitKey()
inRange returns an 8U image. My concern is that the rectangle drawn must be coloured; in this case it is white. (I think this is because the image is a three-channel binary image.)
# my dummy channels:
r = np.ones((100,100),np.uint8) * 100
g = np.ones((100,100),np.uint8) * 70
b = np.ones((100,100),np.uint8) * 10
#now, just merge them:
rgb = cv2.merge((r,g,b))
You can only draw coloured things on 3-channel mats. You probably need to do thresh_col = cv2.cvtColor(thresh, cv2.COLOR_GRAY2BGR) and then draw into that.
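Putting that together, a minimal sketch of the suggested fix, using the same file and threshold values as the question:
import cv2
import numpy as np

image = cv2.imread("disparitySGB.jpg")
thresh = cv2.inRange(image, np.array([89, 89, 89]), np.array([140, 140, 140]))

# inRange returns a single-channel 8U mask; convert it to 3 channels
# so a coloured rectangle can be drawn on it
thresh_col = cv2.cvtColor(thresh, cv2.COLOR_GRAY2BGR)
cv2.rectangle(thresh_col, (100, 100), (120, 120), (255, 255, 0), 3)

cv2.imshow("thresh", thresh_col)
cv2.waitKey()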