Conversion of three channel binary image to three channel RGB image - python

Is there any OpenCV function to convert a three-channel binary image to a 3-channel RGB image?
Here is my code, where disparitySGB.jpg is a grayscale image (read in as 3 channels):
import cv2
import numpy as np
image = cv2.imread("disparitySGB.jpg")
thresh = cv2.inRange(image, np.array([89, 89, 89]), np.array([140, 140, 140]))
cv2.rectangle(np.array(thresh), (100, 100), (120, 120), (255, 255, 0), 3)
cv2.imshow("thresh", thresh)
cv2.waitKey()
inRange returns an 8U image. My concern is that the rectangle drawn should be coloured; in this case it is white. (I think it is because the image is a three-channel binary image.)

# my dummy channels:
r = np.ones((100,100),np.uint8) * 100
g = np.ones((100,100),np.uint8) * 70
b = np.ones((100,100),np.uint8) * 10
#now, just merge them:
rgb = cv2.merge((r,g,b))
You can only draw coloured things on 3-channel mats. You probably need thresh_col = cv2.cvtColor(thresh, cv2.COLOR_GRAY2BGR) and then draw into that.
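Putting the question's code together with that advice, a minimal sketch might look like this:
import cv2
import numpy as np

image = cv2.imread("disparitySGB.jpg")
thresh = cv2.inRange(image, np.array([89, 89, 89]), np.array([140, 140, 140]))

# inRange returns a single-channel 8U mask; convert it to 3-channel BGR
# so a coloured rectangle can be drawn on it
thresh_col = cv2.cvtColor(thresh, cv2.COLOR_GRAY2BGR)
cv2.rectangle(thresh_col, (100, 100), (120, 120), (255, 255, 0), 3)

cv2.imshow("thresh", thresh_col)
cv2.waitKey()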

Related

Combining 3 channel numpy array to form an rgb image

I was trying to combine 3 grayscale images into a single overlapping image, with a different colour for each. For that, I added each into a 3-channel numpy array.
But when displaying it with im.show() I don't get a colourful image. It works up to the second channel, but when I add the third channel it doesn't: the final image has only red and blue colour.
It is supposed to be red, green and blue, corresponding to the overlapping images.
Why would that be?
image1 = Image.open("E:/imaging/04102022_Bronze/Copper_4_2/10.tif")  # opening image 1
image1_norm = (np.array(image1) - np.array(image1).min()) / (np.array(image1).max() - np.array(image1).min())  # normalising image 1
image2 = Image.open("E:/imaging/04102022_Bronze/Oxygen_1_2/10.tif")  # opening image 2
image2_norm = (np.array(image2) - np.array(image2).min()) / (np.array(image2).max() - np.array(image2).min())  # normalising image 2
image3 = Image.open("E:/imaging/04102022_Bronze/Oxygen_1_2/10.tif")  # opening image 3
image3_norm = (np.array(image3) - np.array(image3).min()) / (np.array(image3).max() - np.array(image3).min())  # normalising image 3
im = np.array(image2)
new_image = np.zeros(im.shape + (3,))  # creating an empty 3-channel numpy array; .shape of this array is (255, 1024, 3)
new_image[:, :, 0] = image1_norm  # adding the three images into three channels
new_image[:, :, 1] = image2_norm
new_image[:, :, 2] = image3_norm
new_image1 = new_image * 255.999
new_image2 = new_image1.astype(np.uint8)
final_image = Image.fromarray(new_image2, mode='RGB')
A few possible issues...
When you open an image in PIL, if you want to be sure it is single-channel greyscale, and not accidentally 3-channel RGB, or a palette image, force it to greyscale:
im = Image.open('image.png').convert('L')
Try not to repeat complicated calculations or expressions several times - it just makes for a maintenance nightmare. Maybe use a function instead:
def normalize(im):
    # Normalise image to range 0..1
    min, max = im.min(), im.max()
    return (im.astype(float) - min) / (max - min)
You can use Numpy's dstack() to merge channels - it means "depth"-stack, as opposed to np.vstack() which stacks images vertically above/below each other and np.hstack() which stacks images side-by-side horizontally. It is a lot simpler than creating an image of the right size and individually pushing each channel into it.
result = np.dstack((im1, im2, im3))
That would make the overall code more like this:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
def normalize(im):
    # Normalise image to range 0..1
    min, max = im.min(), im.max()
    return (im.astype(float) - min) / (max - min)
# Load images as single channel Numpy arrays
im1 = np.array(Image.open('ch1.png').convert('L'))
im2 = np.array(Image.open('ch2.png').convert('L'))
im3 = np.array(Image.open('ch3.png').convert('L'))
# Normalize and scale
n1 = normalize(im1) * 255.999
n2 = normalize(im2) * 255.999
n3 = normalize(im3) * 255.999
# Merge channels to RGB
result = np.dstack((n1,n2,n3))
result = Image.fromarray(result.astype(np.uint8))
result.save('result.png')
That combines the three greyscale input images into a single merged RGB image.

How to change the colour of pixels in an image depending on their initial luminosity?

The aim is to take a coloured image, and change any pixels within a certain luminosity range to black. For example, if luminosity is the average of a pixel's RGB values, any pixel with a value under 50 is changed to black.
I've attempted to begin with PIL, converting to grayscale, but I'm having trouble finding a solution that can identify a pixel's luminosity and use that information to manipulate a pixel map.
There are many ways to do this, but the simplest and probably fastest is with Numpy, which you should get accustomed to using with image processing in Python:
from PIL import Image
import numpy as np
# Load image and ensure RGB, not palette image
im = Image.open('start.png').convert('RGB')
# Make into Numpy array
na = np.array(im)
# Make all pixels of "na" where the mean of the R,G,B channels is less than 50 into black (0)
na[np.mean(na, axis=-1)<50] = 0
# Convert back to PIL Image to save or display
result = Image.fromarray(na)
result.show()
That turns the input image into one where all the sufficiently dark pixels are black.
Another slightly different way would be to convert the image to a more conventional greyscale, rather than averaging for the luminosity:
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Calculate greyscale version
grey = im.convert('L')
# Point process over pixels to make mask of darker ones
mask = grey.point(lambda p: 255 if p<50 else 0)
# Paste black (i.e. 0) into image where mask indicates it is dark
im.paste(0, mask=mask)
Notice that the blue channel is given considerably less significance in the ITU-R 601-2 luma transform that PIL uses (see the lower 114 weighting for Blue versus 299 for Red and 587 for Green) in the formula:
L = R * 299/1000 + G * 587/1000 + B * 114/1000
so the blue shades are considered darker and become black. A pure blue pixel (0, 0, 255), for example, has luma 255 * 114/1000 ≈ 29, well below the threshold of 50.
Another way would be to make a greyscale and a mask as above, but then choose the darker pixel at each location when comparing the original and the mask:
from PIL import Image, ImageChops
im = Image.open('start.png').convert('RGB')
grey = im.convert('L')
mask = grey.point(lambda p: 0 if p<50 else 255)
res = ImageChops.darker(im, mask.convert('RGB'))
That gives the same result as above.
Another way, pure PIL and probably closest to what you actually asked, would be to derive a luminosity value by averaging the channels:
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Calculate greyscale version by averaging R,G and B
grey = im.convert('L', matrix=(0.333, 0.333, 0.333, 0))
# Point process over pixels to make mask of darker ones
mask = grey.point(lambda p: 255 if p<50 else 0)
# Paste black (i.e. 0) into image where mask indicates it is dark
im.paste(0, mask=mask)
Another approach could be to split the image into its constituent RGB channels, evaluate a mathematical function over the channels and mask with the result:
from PIL import Image, ImageMath
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Split into RGB channels
(R, G, B) = im.split()
# Evaluate mathematical function over channels
dark = ImageMath.eval('(((R+G+B)/3) <= 50) * 255', R=R, G=G, B=B)
# Paste black (i.e. 0) into image where mask indicates it is dark;
# ImageMath returns a mode "I" image, so convert it to "L" to use as a mask
im.paste(0, mask=dark.convert('L'))
I created a function that returns a list with True if the pixel has a luminosity below a given threshold, and False if it doesn't. It works for both RGB and RGBA images, since the alpha channel is ignored when averaging:
def get_avg_lum(pic, avg=50):
    # True where the average of the R, G and B values is below the
    # threshold; the [:3] slice drops the alpha channel of RGBA pixels
    li = [[False for y in range(0, pic.size[1])] for x in range(0, pic.size[0])]
    for x in range(0, pic.size[0]):
        for y in range(0, pic.size[1]):
            li[x][y] = sum(pic.getpixel((x, y))[:3]) / 3 < avg
    return li
a = get_avg_lum(im)
The pixels match in the list, so (0,10) on the image is [0][10] in the list.
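As a hypothetical usage sketch (reusing the 'start.png' name from the earlier answers), the list can then be used to blacken the flagged pixels:
from PIL import Image

im = Image.open('start.png').convert('RGB')
dark = get_avg_lum(im)
# blacken every pixel the list flags as dark
for x in range(im.size[0]):
    for y in range(im.size[1]):
        if dark[x][y]:
            im.putpixel((x, y), (0, 0, 0))
im.show()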
Hopefully this helps. The function works on standard PIL image objects.

Simplest way to convert a low-resolution black and white picture to a matrix

I have a set of very low-resolution pictures (in .png but I can easily convert them to something else). They all only have black or white pixels, like QR codes.
What I want is to be able to read them as binary matrix (a 1 for a black pixel and a zero for a white one).
I don't need anything more fancy than that, what should I use?
Hi, you can use PIL to read the image, and then Numpy to convert it to a matrix:
from PIL import Image
import numpy as np
im = Image.open("imageName.ext")
im_mat = np.asarray(im)
Alternatively you can do all in one step with opencv
import cv2
img = cv2.imread("imageName.ext")
In both cases you will have a matrix of size H x W x C, with H the height in pixels, W the width, and C the number of channels (3 or 4, depending on whether there is an alpha channel for transparency).
If your image is black and white and you only want a matrix of size H x W, take one channel with
img = im_mat[:, :, 0]  # 8-bit matrix
and lastly you can binarise it by choosing a threshold, or just by comparing:
bin = img > 128
or
bin = img == 255
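Putting it together for the black-and-white case, a minimal sketch (the threshold of 128 is an assumption) would be:
from PIL import Image
import numpy as np

# read directly as single-channel greyscale
im = np.array(Image.open("imageName.ext").convert("L"))

# 1 for black pixels, 0 for white ones, as asked
matrix = (im < 128).astype(np.uint8)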

How can I soften just the edges of this image

I am using python and opencv to cut an image using a mask. The mask itself is quite jagged and so the resulting image becomes a bit jagged around the edges like below
Jagged image
Is there a way I can smooth out the edges so they look more like this without affecting the rest of the image?
Smoothed edge
Thanks
SoS
** UPDATE **
Added the original jagged image without the annotation
Original Jagged image
Here is one way using OpenCV, Numpy and Skimage. I assume you actually have an image with a transparent background and not just a checkerboard pattern.
Input:
import cv2
import numpy as np
import skimage.exposure
# load image with alpha channel
img = cv2.imread('lena_circle.png', cv2.IMREAD_UNCHANGED)
# extract only bgr channels
bgr = img[:, :, 0:3]
# extract alpha channel
a = img[:, :, 3]
# blur alpha channel
ab = cv2.GaussianBlur(a, (0,0), sigmaX=2, sigmaY=2, borderType = cv2.BORDER_DEFAULT)
# stretch so that 255 -> 255 and 127.5 -> 0
aa = skimage.exposure.rescale_intensity(ab, in_range=(127.5,255), out_range=(0,255))
# replace alpha channel in input with new alpha channel
out = img.copy()
out[:, :, 3] = aa
# save output
cv2.imwrite('lena_circle_antialias.png', out)
# Display various images to see the steps
# NOTE: In and Out show heavy aliasing. This seems to be an artifact of imshow(), which did not display transparency for me. However, the saved image looks fine
cv2.imshow('In',img)
cv2.imshow('BGR', bgr)
cv2.imshow('A', a)
cv2.imshow('AB', ab)
cv2.imshow('AA', aa)
cv2.imshow('Out', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
I am by no means an expert with OpenCV. I looked at cv2.normalize(), but it did not look like I could provide my own sets of input and output values. So I also tried the following, adding clipping to be sure there were no overflows or underflows:
aa = a*2.0 - 255.0
aa[aa<0] = 0
aa[aa>255] = 255
where I computed that by solving simultaneous equations, such that in=255 becomes out=255 and in=127.5 becomes out=0, doing a linear stretch between:
C = A*X + B
255 = A*255 + B
0 = A*127.5 + B
Thus A=2 and B=-255
But that does not work nearly as well as skimage rescale_intensity.
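For reference, the same stretch-and-clip can be written more compactly with np.clip (a sketch, assuming a is the alpha channel from the code above):
import numpy as np

# map 127.5 -> 0 and 255 -> 255, clipping anything that falls outside
aa = np.clip(a * 2.0 - 255.0, 0, 255).astype(np.uint8)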
These are some effects you can do with the PIL image library:
from PIL import Image, ImageFilter
im_1 = Image.open("/constr/pics1/russian_doll.png")
im_2 = im_1.filter(ImageFilter.BLUR)
im_3 = im_1.filter(ImageFilter.CONTOUR)
im_4 = im_1.filter(ImageFilter.DETAIL)
im_5 = im_1.filter(ImageFilter.EDGE_ENHANCE)
im_6 = im_1.filter(ImageFilter.EDGE_ENHANCE_MORE)
im_7 = im_1.filter(ImageFilter.EMBOSS)
im_8 = im_1.filter(ImageFilter.FIND_EDGES)
im_9 = im_1.filter(ImageFilter.SMOOTH)
im_10 = im_1.filter(ImageFilter.SMOOTH_MORE)
im_11 = im_1.filter(ImageFilter.SHARPEN)
# now save the images
im_2.save("/constr/picsx/russian_doll_BLUR.png")
im_3.save("/constr/picsx/russian_doll_CONTOUR.png")
im_4.save("/constr/picsx/russian_doll_DETAIL.png")
im_5.save("/constr/picsx/russian_doll_EDGE_ENHANCE.png")
im_6.save("/constr/picsx/russian_doll_EDGE_ENHANCE_MORE.png")
im_7.save("/constr/picsx/russian_doll_EMBOSS.png")
im_8.save("/constr/picsx/russian_doll_FIND_EDGES.png")
im_9.save("/constr/picsx/russian_doll_SMOOTH.png")
im_10.save("/constr/picsx/russian_doll_SMOOTH_MORE.png")
im_11.save("/constr/picsx/russian_doll_SHARPEN.png")

Reduce opacity of image using Opencv in Python

In the program given below I am adding an alpha channel to a 3-channel image to control its opacity. But no matter what value of alpha channel I give, there is no effect on the image! Could anyone explain why?
import numpy as np
import cv2
image = cv2.imread('image.jpg')
print(image)
b_channel,g_channel,r_channel = cv2.split(image)
a_channel = np.ones(b_channel.shape, dtype=b_channel.dtype)*10
image = cv2.merge((b_channel,g_channel,r_channel,a_channel))
print(image)
cv2.imshow('img',image)
cv2.waitKey(0)
cv2.destroyAllWindows()
I can see in the terminal that the alpha channel is added and that its value changes as I change it in the program, but there is no effect on the opacity of the image itself!
I am new to OpenCV so I might be missing something simple. Thanks for the help!
Alpha is a channel that is used to control the opacity of an image. An alpha channel typically doesn't do anything unless you perform an action on it. It doesn't make an image transparent on its own.
Alpha is usually used either to remove unimportant areas of an image or to combine one image with another image. In the first case the image is usually simply multiplied by its alpha. This is sometimes referred to as premultiplying. In this case the dark areas of the alpha channel darken the RGB and the bright areas leave the RGB untouched.
R = R*A
G = G*A
B = B*A
Here is a version of your code that might do what you want (note: I converted to 32-bit float because it's easier to use alpha channels when they range from 0 to 1):
import numpy as np
import cv2
i = cv2.imread('image.jpg')
img = np.array(i, dtype=np.float32)
img /= 255.0
cv2.imshow('img',img)
cv2.waitKey(0)
#pre-multiplication
a_channel = np.ones(img.shape, dtype=np.float32)/2.0
image = img*a_channel
cv2.imshow('img',image)
cv2.waitKey(0)
cv2.destroyAllWindows()
The second case is used when trying to overlay an image over another image. This is a compositing operation that is often referred to as an "over" merge or a "blend" merge. In this case there is a foreground image "A" and a background image "B" and an alpha channel which could be included in the RGB images or on its own. In this case you can place A over B using:
output = (A * alpha) + (B * (1-alpha))
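A minimal Numpy sketch of that formula (assuming A and B are float colour images in the 0-1 range and alpha is a single-channel float mask, also 0-1):
import numpy as np

def over_merge(A, B, alpha):
    # broadcast the single-channel mask across the colour channels
    alpha = alpha[..., None]
    return A * alpha + B * (1 - alpha)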
Actually, the answer is simple. OpenCV's imshow() function ignores the alpha channel.
If you want to see the effect of your alpha channel, save your image in PNG format (because that supports alpha channel) and display in a different viewer.
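For example, a one-line sketch using the 4-channel image built in the question's code:
import cv2

# PNG preserves the alpha channel that imshow() ignores
cv2.imwrite('image_with_alpha.png', image)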
I also wrote a decorator/enhancement for imshow() here that helps visualise transparent images.
