Reduce opacity of image using OpenCV in Python

In the program given below I am adding an alpha channel to a 3-channel image to control its opacity. But no matter what value I give the alpha channel, there is no effect on the image! Could anyone explain why?
import numpy as np
import cv2
image = cv2.imread('image.jpg')
print(image)
b_channel,g_channel,r_channel = cv2.split(image)
a_channel = np.ones(b_channel.shape, dtype=b_channel.dtype)*10
image = cv2.merge((b_channel,g_channel,r_channel,a_channel))
print(image)
cv2.imshow('img',image)
cv2.waitKey(0)
cv2.destroyAllWindows()
I can see in the terminal that the alpha channel is added and that its value changes as I change it in the program, but there is no effect on the opacity of the image itself!
I am new to OpenCV, so I might be missing something simple. Thanks for the help!

Alpha is a channel used to control the opacity of an image. An alpha channel typically doesn't do anything unless you perform an action with it; it doesn't make an image transparent on its own.
Alpha is usually used either to remove unimportant areas of an image or to combine one image with another. In the first case the image is simply multiplied by its alpha; this is sometimes referred to as premultiplying. In this case the dark areas of the alpha channel darken the RGB and the bright areas leave the RGB untouched.
R = R*A
G = G*A
B = B*A
Here is a version of your code that might do what you want (note: I converted to 32-bit float because it's easier to work with alpha channels when they range from 0 to 1):
import numpy as np
import cv2
i = cv2.imread('image.jpg')
img = np.array(i, dtype=np.float32)  # use an explicit 32-bit float; np.float was removed from NumPy
img /= 255.0
cv2.imshow('img',img)
cv2.waitKey(0)
#pre-multiplication
a_channel = np.ones(img.shape, dtype=np.float32)/2.0
image = img*a_channel
cv2.imshow('img',image)
cv2.waitKey(0)
cv2.destroyAllWindows()
The second case is used when overlaying one image on another. This is a compositing operation often referred to as an "over" merge or a "blend" merge. In this case there is a foreground image "A", a background image "B", and an alpha channel, which can be included in the RGB images or kept on its own. You can place A over B using:
output = (A * alpha) + (B * (1-alpha))
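For instance, a minimal NumPy sketch of an "over" merge (the filenames are hypothetical; it assumes a 4-channel BGRA foreground and a matching-size 3-channel background):
import cv2
import numpy as np
# Load a BGRA foreground and a BGR background, scaled to 0-1 floats
fg = cv2.imread('fg.png', cv2.IMREAD_UNCHANGED).astype(np.float64)/255.0
bg = cv2.imread('bg.png', cv2.IMREAD_COLOR).astype(np.float64)/255.0
# Keep alpha as (h, w, 1) so it broadcasts over the colour channels
alpha = fg[...,3:4]
A = fg[...,:3]
B = bg
# output = (A * alpha) + (B * (1-alpha))
out = A*alpha + B*(1.0-alpha)
cv2.imwrite('composited.png', (out*255).astype(np.uint8))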

Actually, the answer is simple. OpenCV's imshow() function ignores the alpha channel.
If you want to see the effect of your alpha channel, save your image in PNG format (because PNG supports an alpha channel) and display it in a different viewer.
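For example (a minimal sketch, reusing the 4-channel image variable merged in the question):
# PNG supports an alpha channel, so the transparency survives on disk
cv2.imwrite('image_with_alpha.png', image)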
I also wrote a decorator/enhancement for imshow() that helps visualise transparent images; see the wrapper later on this page.

Related

Python Pillow (PIL) not completely recoloring gradients?

I am having an issue where I'm using Pillow to recolor an image that has a lot of soft gradients, but it seems not to completely recolor the most translucent parts of those gradients: the recolored image's gradient is not as smooth. Is there a way to fix this? Example images and current code are below.
Original gradient: https://i.stack.imgur.com/VFi75.png
Recolored gradient: https://i.stack.imgur.com/e5iNa.png
Here is the Original transparent PNG of the image
import random
import Owl_Attributes
from PIL import Image, ImageColor
# I create the image here and convert the color code to RGBA
RGB_im = image_base_accent3.convert("RGBA")
datas = RGB_im.getdata()
newData = []
for item in datas:
    if item[0] == 208 and item[1] == 231 and item[2] == 161:
        newData.append((255, 0, 0, item[3]))
    else:
        newData.append(item)
RGB_im.putdata(newData)
RGB_im.save('Owl_project_pictures\_final_RGB.png')
First, a couple of things to consider:
Inspect your images before you start work. Yours has an alpha channel that is pretty much pointless and irrelevant, so I would discard it to save space and processing time.
Using for loops over lists of pixels is slow, inefficient, and error-prone in Python. Try to use built-in functions based on C code, or vectorised Numpy operations - see the sketch below.
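For example, the per-pixel loop in the question could be replaced with a vectorised NumPy version along these lines (a sketch, assuming the same input image and target colour):
import numpy as np
from PIL import Image
im = Image.open('VFi75.png').convert('RGBA')
np_im = np.array(im)
# Boolean mask of pixels matching the target RGB, ignoring alpha
match = (np_im[...,0]==208) & (np_im[...,1]==231) & (np_im[...,2]==161)
# Recolour matching pixels red, keeping their original alpha
np_im[match, 0:3] = (255, 0, 0)
Image.fromarray(np_im).save('recoloured.png')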
On to your image. There are a whole load of shades and gradations of tone in your image, and dealing with each one separately through if statements is going to be difficult. I would suggest using HSV colourspace instead.
I think you want the basic result to be a very saturated red with the lightness dictated by the lightness of the original image.
So, I would make an image with:
Hue=0 (see lower part of this diagram), and
Saturation=255 (i.e. fully saturated), and
Value (i.e. brightness) of the original image.
In code that might look like this:
#!/usr/bin/env python3
# ImageMagick command-line "equivalent"
# magick -size 599x452 xc:black xc:white \( VFi75.png -colorspace gray +level 0,60% \) +combine HSL result.png
from PIL import Image
# Load image and create HSV version
im = Image.open('VFi75.png')
HSV = im.convert('HSV')
# Split into separate channels for processing, discarding Hue and Saturation
_, _, V = HSV.split()
# Synthesize Hue channel, same size as input image, filled with 0, to make Red
H = Image.new('L', (im.width, im.height), 0)
# Synthesize Saturation channel, same size as input image, filled with 255, to make fully saturated
S = Image.new('L', (im.width, im.height), 255)
# Recombine synthesized H, S and V (based on original image brightness) back into a recombined image
RGB = Image.merge('HSV', (H,S,V)).convert('RGB')
# Save processed result
RGB.save('result.png')
If you wanted to make it lime green, you would change the Hue angle like this:
# Synthesize Hue channel, same size as input image, filled with 120, to make Lime Green
H = Image.new('L', (im.width, im.height), 120)
If you wanted to make it less saturated, you would change the saturation like this:
# Synthesize Saturation channel, same size as input image, filled with 64, to make less saturated
S = Image.new('L', (im.width, im.height), 64)

How to add an alpha channel of a particular value to a BGR image

I tried the code below; it doesn't show any error and runs properly, but changing the value of the alpha channel doesn't show any change in the image.
img3 = cv2.cvtColor(img2, cv2.COLOR_BGR2BGRA)
img3[:,:,3] = 100
cv2.imshow('img1',img2)
cv2.imshow('img',img3)
cv2.waitKey(0)
It works OK, but the output of both images is the same; there is no visible change after applying the alpha channel.
Your code is actually correct.
The simple answer is that OpenCV's imshow() ignores transparency, so if you want to see its effect, save your image as a PNG/TIFF (both of which support transparency) and view it with a different viewer - such as GIMP, Photoshop or feh.
As an alternative, I made a wrapper/decorator for OpenCV's imshow() that displays images with transparency overlaid on a chessboard like Photoshop does. So, starting with this RGBA Paddington image and this grey+alpha Paddington image:
#!/usr/bin/env python3
import cv2
import numpy as np
def imshow(title, im):
    """Decorator for OpenCV "imshow()" to handle images with transparency"""
    # Check we got np.uint8, 2-channel (grey + alpha) or 4-channel RGBA image
    if (im.dtype == np.uint8) and (len(im.shape) == 3) and (im.shape[2] in (2, 4)):
        # Pick up the alpha channel and delete from original
        alpha = im[..., -1]/255.0
        im = np.delete(im, -1, -1)
        # Promote greyscale image to RGB to make coding simpler
        if len(im.shape) == 2:
            im = np.stack((im, im, im), axis=2)
        h, w, _ = im.shape
        # Make a checkerboard background image same size, dark squares are grey(102), light squares are grey(152)
        f = lambda i, j: 102 + 50*((i + j) % 2)
        bg = np.fromfunction(np.vectorize(f), (16, 16)).astype(np.uint8)
        # Resize to square same length as longer side (so squares stay square), then trim
        longer = max(h, w)
        bg = cv2.resize(bg, (longer, longer), interpolation=cv2.INTER_NEAREST)
        # Trim to correct size
        bg = bg[:h, :w]
        # Blend, using result = alpha*overlay + (1-alpha)*background
        im = (alpha[..., None]*im + (1.0 - alpha[..., None])*bg[..., None]).astype(np.uint8)
    cv2.imshow(title, im)

if __name__ == "__main__":
    # Open RGBA image
    im = cv2.imread('paddington.png', cv2.IMREAD_UNCHANGED)
    imshow("Paddington (RGBA)", im)
    key = cv2.waitKey(0)
    cv2.destroyAllWindows()
    # Open grey + alpha image
    im = cv2.imread('paddington-ga.png', cv2.IMREAD_UNCHANGED)
    imshow("Paddington (grey + alpha)", im)
    key = cv2.waitKey(0)
    cv2.destroyAllWindows()
And you will get this:
and this:

Edit image pixel by pixel - Python

I have two images, one overlay and one background.
I want to create a new image by editing the overlay image so that it shows only the pixels where the background image is blue.
I don't want to add the background; it is only used for selecting pixels.
The rest should be transparent.
Any hints or ideas, please? PS: I edited the result image with Paint, so it's not perfect.
Image 1 is background image.
Image 2 is overlay image.
Image 3 is the check I want to perform. (to find out which pixels have blue in background and making remaining pixels transparent)
Image 4 is the result image after editing.
I renamed your images according to my way of thinking, so I took this as image.png:
and this as mask.png:
Then I did what I think you want as follows. I wrote it quite verbosely so you can see all the steps along the way:
#!/usr/local/bin/python3
from PIL import Image
import numpy as np
# Open input images
image = Image.open("image.png")
mask = Image.open("mask.png")
# Get dimensions - note that PIL's Image.size is (width, height)
w, h = image.size
# Resize mask to match image, taking care not to introduce new colours (Image.NEAREST)
mask = mask.resize((w, h), Image.NEAREST)
mask.save('mask_resized.png')
# Convert both images to numpy equivalents
npimage = np.array(image)
npmask = np.array(mask)
# Make image transparent where mask is not blue
# Blue pixels in mask seem to show up as RGB(163 204 255)
npimage[:,:,3] = np.where((npmask[:,:,0]<170) & (npmask[:,:,1]<210) & (npmask[:,:,2]>250),255,0).astype(np.uint8)
# Identify grey pixels in image, i.e. R=G=B, and make transparent also
RequalsG=np.where(npimage[:,:,0]==npimage[:,:,1],1,0)
RequalsB=np.where(npimage[:,:,0]==npimage[:,:,2],1,0)
grey=(RequalsG*RequalsB).astype(np.uint8)
npimage[:,:,3] *= 1-grey
# Convert numpy image to PIL image and save
PILrgba=Image.fromarray(npimage)
PILrgba.save("result.png")
And this is the result:
Notes:
a) Your image already has an (unused) alpha channel present.
b) Any lines starting:
npimage[:,:,3] = ...
are just modifying the 4th channel, i.e. the alpha/transparency channel of the image

How to apply brightness from one image onto another in Python

I'm trying to make a texture using an image with 3 colors, and a Perlin noise grayscale image.
This is the original image:
This is the grayscale Perlin noise image:
What I need to do is apply the original image's brightness to the grayscale image, so that the darkest and lightest brightness in the Perlin noise image are no longer 100% black (0) and 100% white (1) but are instead taken from the original image. Then, apply the new brightness mapping from the grayscale Perlin noise image back to the original image.
This is what I tried:
from PIL import Image
alpha = 0.5
im = Image.open(filename1).convert("RGBA")
new_img = Image.open(filename2).convert("RGBA")
new_img = Image.blend(im, new_img, alpha)
new_img.save("foo.png","PNG")
And this is the output that I get:
This is wrong; imagine the dark and light orange and the bright color following the same gradient as the grayscale image, BUT with no 100% black or 100% white.
I believe I need to:
Convert the original image to HSV properly (I've tried a few functions from colorsys and matplotlib and they give me weird numbers).
Get highest and lowest V value from the original image.
Convert grayscale image to HSV.
Transform or normalize (I think that's what it's called) the grayscale HSV using the V values from the original HSV image.
Remap all the original V values with the new transformed/normalized grayscale V values.
🤕 Why is it not working?
The approach you are using will not work as expected because, instead of keeping the color and saturation information from one image and taking the other image's lightness information (totally or partially), you are interpolating all the channels of both images at the same time, based on a constant alpha, as stated in the docs:
PIL.Image.blend(im1, im2, alpha)
Creates a new image by interpolating between two input images, using a constant alpha: out = image1 * (1.0 - alpha) + image2 * alpha
[...]
alpha – The interpolation alpha factor. If alpha is 0.0, a copy of the first image is returned. If alpha is 1.0, a copy of the second image is returned. There are no restrictions on the alpha value. If necessary, the result is clipped to fit into the allowed output range.
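In other words, the blend call in the question is roughly equivalent to this NumPy computation (a sketch to illustrate why hue and saturation get mixed along with lightness; the filenames are hypothetical):
import numpy as np
from PIL import Image
im1 = np.asarray(Image.open('f1.png').convert('RGBA'), dtype=np.float64)
im2 = np.asarray(Image.open('f2.png').convert('RGBA'), dtype=np.float64)
alpha = 0.5
# Every channel (R, G, B and A) is interpolated by the same constant factor
out = im1*(1.0 - alpha) + im2*alpha
Image.fromarray(out.astype(np.uint8)).save('blended.png')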
🔨 Basic working example
First, let's get a basic example working. I'm going to use cv2 instead of PIL, just because I'm more familiar with it and I already have it installed on my machine.
I will also use HSL (HLS in cv2) instead of HSV, as I think that will produce an output that is closer to what you might be looking for.
import cv2
filename1 = './f1.png'
filename2 = './f2.png'
# Load both images and convert them from BGR to HLS:
img1 = cv2.cvtColor(cv2.imread(filename1, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HLS)
img2 = cv2.cvtColor(cv2.imread(filename2, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HLS)
# Copy img1, the one with relevant color and saturation information:
texture = img1.copy()
# Replace its lightness information with the one from img2:
texture[:,:,1] = img2[:,:,1]
# Convert the image back from HLS to BGR and save it:
cv2.imwrite('./texture.png', cv2.cvtColor(texture, cv2.COLOR_HLS2BGR))
This is the final output:
🎛️ Adjust lightness
Ok, so we have a simple case working, but you might not want to replace img1's lightness with img2's completely. In that case, just replace this line:
texture[:,:,1] = img2[:,:,1]
With these two:
alpha = 0.25
texture[:,:,1] = alpha * img1[:,:,1] + (1.0 - alpha) * img2[:,:,1]
Now, you will retain 25% lightness from img1 and 75% from img2, and you can adjust it as needed.
For alpha = 0.25, the output will look like this:
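As an aside, the channel mix above is plain NumPy arithmetic on uint8 slices; if you would rather let OpenCV handle the rounding and saturation, cv2.addWeighted computes the same weighted sum (a sketch, assuming the imports and variables from the example above):
import numpy as np
# cv2 prefers contiguous arrays, so copy the channel slices first
l1 = np.ascontiguousarray(img1[:,:,1])
l2 = np.ascontiguousarray(img2[:,:,1])
texture[:,:,1] = cv2.addWeighted(l1, alpha, l2, 1.0 - alpha, 0)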
🌈 HSL & HSV
Although HSL and HSV look quite similar, there are a few differences, mainly regarding how they represent pure white and light colors, that will make this script generate slightly different images when using one or the other.
We just need to change a couple of things to make it work with HSV:
import cv2
filename1 = './f1.png'
filename2 = './f2.png'
# Load both images and convert them from BGR to HSV:
img1 = cv2.cvtColor(cv2.imread(filename1, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HSV)
img2 = cv2.cvtColor(cv2.imread(filename2, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HSV)
# Copy img1, the one with relevant color and saturation information:
texture = img1.copy()
# Merge img1 and img2's value channel:
alpha = 0.25
texture[:,:,2] = alpha * img1[:,:,2] + (1.0 - alpha) * img2[:,:,2]
# Convert the image back from HSV to BGR and save it:
cv2.imwrite('./texture.png', cv2.cvtColor(texture, cv2.COLOR_HSV2BGR))
This is how the first example looks when using HSV:
And this is the second example (with alpha = 0.25):
You can see the most noticeable differences are in the lightest areas.

What's the fastest way to increase color image contrast with OpenCV in python (cv2)?

I'm using OpenCV to process some images, and one of the first steps I need to perform is increasing the image contrast on a color image. The fastest method I've found so far uses this code (where np is the numpy import) to multiply and add as suggested in the original C-based cv1 docs:
if self.array_alpha is None:
    self.array_alpha = np.array([1.25])
    self.array_beta = np.array([-100.0])
# add a beta value to every pixel
cv2.add(new_img, self.array_beta, new_img)
# multiply every pixel value by alpha
cv2.multiply(new_img, self.array_alpha, new_img)
Is there a faster way to do this in Python? I've tried using numpy's scalar multiply instead, but the performance is actually worse. I also tried using cv2.convertScaleAbs (the OpenCV docs suggested using convertTo, but cv2 seems to lack an interface to this function), but again the performance was worse in testing.
Simple arithmetic in numpy arrays is the fastest, as Abid Rahaman K commented.
Use this image for example: http://i.imgur.com/Yjo276D.png
Here is a bit of image processing that resembles brightness/contrast manipulation:
'''
Simple and fast image transforms to mimic:
- brightness
- contrast
- erosion
- dilation
'''
import cv2
from pylab import array, plot, show, axis, arange, figure, uint8
# Image data
image = cv2.imread('imgur.png',0) # load as 1-channel 8bit grayscale
cv2.imshow('image',image)
maxIntensity = 255.0 # depends on dtype of image data
x = arange(maxIntensity)
# Parameters for manipulating image data
phi = 1
theta = 1
# Increase intensity such that
# dark pixels become much brighter,
# bright pixels become slightly bright
newImage0 = (maxIntensity/phi)*(image/(maxIntensity/theta))**0.5
newImage0 = array(newImage0,dtype=uint8)
cv2.imshow('newImage0',newImage0)
cv2.imwrite('newImage0.jpg',newImage0)
y = (maxIntensity/phi)*(x/(maxIntensity/theta))**0.5
# Decrease intensity such that
# dark pixels become much darker,
# bright pixels become slightly dark
newImage1 = (maxIntensity/phi)*(image/(maxIntensity/theta))**2
newImage1 = array(newImage1,dtype=uint8)
cv2.imshow('newImage1',newImage1)
z = (maxIntensity/phi)*(x/(maxIntensity/theta))**2
# Plot the figures
figure()
plot(x,y,'r-') # Increased brightness
plot(x,x,'k:') # Original image
plot(x,z, 'b-') # Decreased brightness
#axis('off')
axis('tight')
show()
# Close figure window and click on other window
# Then press any keyboard key to close all windows
closeWindow = -1
while closeWindow < 0:
    closeWindow = cv2.waitKey(1)
cv2.destroyAllWindows()
Original image in grayscale:
Brightened image that appears to be dilated:
Darkened image that appears to be eroded, sharpened, with better contrast:
How the pixel intensities are being transformed:
If you play with the values of phi and theta you can get really interesting outcomes. You can also apply this trick to multichannel image data, as sketched below.
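For example, a sketch of the brightening curve applied to a 3-channel colour image (assuming the same phi and theta values as above):
import cv2
import numpy as np
color = cv2.imread('imgur.png', cv2.IMREAD_COLOR) # 3-channel 8-bit BGR
maxIntensity = 255.0
phi = theta = 1
# The same curve broadcasts across all three channels at once
brighter = (maxIntensity/phi)*(color/(maxIntensity/theta))**0.5
cv2.imwrite('newImage0_colour.jpg', brighter.astype(np.uint8))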
--- EDIT ---
Have a look at the concepts of 'levels' and 'curves' in this YouTube video showing image editing in Photoshop. The equation for a linear transform creates the same amount, i.e. 'level', of change on every pixel. If you write an equation that can discriminate between types of pixel (e.g. those which are already of a certain value), then you can change the pixels based on the 'curve' described by that equation.
Try this code:
import cv2
img = cv2.imread('sunset.jpg', 1)
cv2.imshow("Original image",img)
# CLAHE (Contrast Limited Adaptive Histogram Equalization)
clahe = cv2.createCLAHE(clipLimit=3., tileGridSize=(8,8))
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB) # convert from BGR to LAB color space
l, a, b = cv2.split(lab) # split on 3 different channels
l2 = clahe.apply(l) # apply CLAHE to the L-channel
lab = cv2.merge((l2,a,b)) # merge channels
img2 = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR) # convert from LAB to BGR
cv2.imshow('Increased contrast', img2)
#cv2.imwrite('sunset_modified.jpg', img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
Sunset before:
Sunset after increased contrast:
Use the cv2.addWeighted function. It will be faster than any of the other methods presented so far. It's designed to work on two images:
dst = cv.addWeighted( src1, alpha, src2, beta, gamma[, dst[, dtype]] )
But if you use the same image twice AND set beta to zero, you can get the effect you want:
dst = cv.addWeighted( src1, alpha, src1, 0, gamma)
The big advantage of using this function is that you will not have to worry about what happens when values go below 0 or above 255. In numpy, you have to figure out how to do all of the clipping yourself. The OpenCV function does all of the clipping for you, and it's fast.
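A minimal sketch of that trick, using the alpha and beta values from the question (the filename is hypothetical):
import cv2
img = cv2.imread('image.jpg')
# dst = saturate(img*1.25 + img*0 - 100); OpenCV clips the result to [0, 255]
dst = cv2.addWeighted(img, 1.25, img, 0, -100)
cv2.imwrite('contrast.jpg', dst)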
