I am taking two images in Python and overlaying the first image onto the second image. What I would like to do is blend the images where they overlap. Is there a way to do this in Python other than with a for loop?
PIL has a blend function which combines two RGB images with a fixed alpha:
out = image1 * (1.0 - alpha) + image2 * alpha
However, to use blend, image1 and image2 must be the same size. So to prepare your images, you'll need to paste each of them into a new image of the appropriate (combined) size.
Since blending with alpha=0.5 averages the RGB values from both images equally, we need to make two versions of the panorama -- one with img1 on top and one with img2 on top. Then regions with no overlap have RGB values which agree (so their averages will remain unchanged) and regions of overlap will get blended as desired.
import operator
from PIL import Image
# suppose img1 and img2 are your two images
img1 = Image.new('RGB', size=(100, 100), color=(255, 0, 0))
img2 = Image.new('RGB', size=(120, 130), color=(0, 255, 0))
# suppose img2 is to be shifted by `shift` amount
shift = (50, 60)
# compute the size of the panorama
nw, nh = map(max, map(operator.add, img2.size, shift), img1.size)
# paste img1 on top of img2
newimg1 = Image.new('RGBA', size=(nw, nh), color=(0, 0, 0, 0))
newimg1.paste(img2, shift)
newimg1.paste(img1, (0, 0))
# paste img2 on top of img1
newimg2 = Image.new('RGBA', size=(nw, nh), color=(0, 0, 0, 0))
newimg2.paste(img1, (0, 0))
newimg2.paste(img2, shift)
# blend with alpha=0.5
result = Image.blend(newimg1, newimg2, alpha=0.5)
img1:
img2:
result:
If you have two RGBA images, here is a way to perform alpha compositing.
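For reference, here is a minimal sketch of such compositing with PIL's Image.alpha_composite (the file names are just placeholders):

from PIL import Image

# both inputs must be RGBA images of the same size
base = Image.open('base.png').convert('RGBA')
overlay = Image.open('overlay.png').convert('RGBA')

# composite overlay onto base using their alpha channels
result = Image.alpha_composite(base, overlay)
result.save('composited.png')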
If you'd like a soft edge when stitching two images together, you could blend them with a sigmoid function.
Here is a simple grayscale example:
import numpy as np
import matplotlib.image
import math
def sigmoid(x):
    y = np.zeros(len(x))
    for i in range(len(x)):
        y[i] = 1 / (1 + math.exp(-x[i]))
    return y
sigmoid_ = sigmoid(np.arange(-1, 1, 1/50))
alpha = np.repeat(sigmoid_.reshape((len(sigmoid_), 1)), repeats=100, axis=1)
image1_connect = np.ones((100, 100))
image2_connect = np.zeros((100, 100))
out = image1_connect * (1.0 - alpha) + image2_connect * alpha
matplotlib.image.imsave('blend.png', out, cmap='gray')
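As an aside, since the original question asks for a way to avoid for loops, the sigmoid above can also be written as a vectorized NumPy expression with the same behavior:

def sigmoid(x):
    # operates elementwise on the whole array at once
    return 1 / (1 + np.exp(-x))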
If you blend white and black squares, the result will look something like this:
I'd like to know whether it is possible with OpenCV to take, for instance, this image as input:
and, based on the input image, generate another 3D image with the same shape as the input, but just a 3D empty room.
I tried the following steps:
Load the image
Convert to grayscale
Apply Gaussian blur to reduce noise
Apply the Canny edge detector
Draw white (255, 255, 255) contours on the detected edges
Fill the rest of the space with a gray (43, 43, 43) color
Add white (255, 255, 255) borders on the corners
I thought that if I was able to detect all the edges in the correct order, I could just connect them with lines or contours and that would produce the 3D image (but I didn't achieve that, because I couldn't sort the edges in the correct order).
I know that in order to achieve a 3D image I need x, y, z coordinates, but I don't know the correct way to obtain them.
As a result I need to generate something like this image (of course the shape will depend on the input image).
helper.py
import cv2
import numpy as np

def apply_canny_edge_detector(sobel_gray_image_with_gaussian_blur, threshold1=0.33, threshold2=0.5):
    return cv2.Canny(sobel_gray_image_with_gaussian_blur, threshold1, threshold2)

def calc_gaussian_kernel(img_size, sigma=1):
    size = int(img_size) // 2
    x, y = np.mgrid[-size:size + 1, -size:size + 1]
    normal = 1 / (2.0 * np.pi * sigma ** 2)
    g = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2)) * normal
    return g

def apply_contours(original_img, gray_img, color=(0, 0, 255)):
    _, threshold = cv2.threshold(gray_img, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(threshold, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    # approximate each contour and draw it on the original image
    for contour in contours:
        approx = cv2.approxPolyDP(
            contour, 0.01 * cv2.arcLength(contour, True), True
        )
        cv2.drawContours(original_img, [approx], -1, color, 1, cv2.LINE_8)
    return original_img

def fill_space_with_color(pixels: np.ndarray, color=(197, 194, 199)):
    pixels[np.all(pixels != (10, 255, 255), axis=-1)] = color
    return pixels

def apply_borders(image, color=(53, 11, 248)):
    bordered = cv2.copyMakeBorder(image, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=color)
    return bordered
main.py
from cv2_scripts.helper import *

if __name__ == '__main__':
    for i in range(10, 11):
        # Read the image
        img_file = "p/0000{0}.jpg".format(str(i))
        original_img = cv2.imread(img_file)
        sobel_gray_image = cv2.cvtColor(original_img, cv2.COLOR_BGR2GRAY)
        gaussian_kernel = calc_gaussian_kernel(11, 2)
        sobel_gray_image_with_gaussian_blur = cv2.filter2D(sobel_gray_image, -1, gaussian_kernel)
        canny_image = apply_canny_edge_detector(sobel_gray_image_with_gaussian_blur, 100)
        contoured_image = apply_contours(original_img, canny_image, (255, 255, 255))
        filled_image = fill_space_with_color(contoured_image, [43, 43, 43])
        room_shape = apply_borders(filled_image, [255, 255, 255])
        cv2.imshow("Room Shape", room_shape)
        cv2.waitKey(0)
        cv2.destroyAllWindows()
I have two binary images. The first one is like this:
and the second one is like this:
They don't have the same size of curve. I want to add the second image's two white zones, which are contained in its black zone, to the first image's black zone.
My code runs like this, but it gives a wrong answer:
The question is illustrated like this, and I have drawn the final image I want to obtain:
How can I achieve this task?
Assuming img1 is your first array (larger solid blob) and img2 is the second (smaller blob with holes), you need a method to identify and remove the outer region of the second image. The flood fill algorithm is a good candidate. It is implemented in OpenCV as cv2.floodFill.
The easiest thing to do would be to fill the outer edge, then just add the results together:
import cv2
import numpy as np

# flood-fill the outer white region of img2 with black, then combine the images
mask = np.zeros((img2.shape[0] + 2, img2.shape[1] + 2), dtype=np.uint8)
cv2.floodFill(img2, mask, (0, 0), 0, 0)
result = img1 + img2
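One caveat: with uint8 arrays, img1 + img2 wraps around wherever white (255) regions overlap, so if that can happen with your images, cv2.bitwise_or(img1, img2) or np.maximum(img1, img2) is a safer way to combine them.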
Here is a toy example that shows mini-images topologically equivalent to your originals:
img1 = np.full((9, 9), 255, dtype=np.uint8)
img1[1:-1, 1:-1] = 0
img2 = np.full((9, 9), 255, dtype=np.uint8)
img2[2:-2, 2:-2] = 0
img2[3, 3] = img2[5, 5] = 255
The images look like this:
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(img1)
ax2.imshow(img2)
plt.show()
After the flood fill, the images look like this:
Adding the resulting images together looks like this:
Keep in mind that floodFill operates in-place, so you may want to make a copy of img2 before going down this road.
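For example, a copy-first variant of the snippet above:

img2_filled = img2.copy()  # preserve the original
mask = np.zeros((img2_filled.shape[0] + 2, img2_filled.shape[1] + 2), dtype=np.uint8)
cv2.floodFill(img2_filled, mask, (0, 0), 0, 0)
result = img1 + img2_filled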
I think you want this:
#!/usr/local/bin/python3
from PIL import Image, ImageDraw, ImageColor, ImageChops
# Load images
im1 = Image.open('im1.jpg')
im2 = Image.open('im2.jpg')
# Flood fill white edges of image 2 with black
seed = (0, 0)
black = ImageColor.getrgb("black")
ImageDraw.floodfill(im2, seed, black, thresh=127)
# Now select lighter pixel of image1 and image2 at each pixel location and save it
result = ImageChops.lighter(im1, im2)
result.save('result.png')
If you prefer OpenCV, it might look like this:
#!/usr/local/bin/python3
import cv2
import numpy as np
# Load images
im1 = cv2.imread('im1.jpg', cv2.IMREAD_GRAYSCALE)
im2 = cv2.imread('im2.jpg', cv2.IMREAD_GRAYSCALE)
# Threshold, because JPEG is dodgy!
ret, im1 = cv2.threshold(im1, 127, 255, cv2.THRESH_BINARY)
ret, im2 = cv2.threshold(im2, 127, 255, cv2.THRESH_BINARY)
# Flood fill white edges of image 2 with black
h, w = im2.shape[:2]
mask = np.zeros((h+2, w+2), np.uint8)
cv2.floodFill(im2, mask, (0,0), 0)
# Now select lighter of image1 and image2 and save it
result = np.maximum(im1, im2)
cv2.imwrite('result.png', result)
How can I write the same code from PIL in OpenCV?
img3 = ImageChops.screen(im1, im2)
You can implement it with the formula used by ImageChops.screen:
out = MAX - ((MAX - image1) * (MAX - image2) / MAX) (source)
The code:
import cv2
import numpy as np
im1 = cv2.imread('im1.png').astype(np.uint16)
im2 = cv2.imread('im2.png').astype(np.uint16)
im3 = (255 - ((255 - im1) * (255 - im2) / 255)).astype(np.uint8)
cv2.imwrite('im3.png', im3)
The promotion to uint16 is necessary because of the multiplication of two uint8 values (255 * 255 = 65025, which overflows uint8 but fits in uint16); at the end I've cast the result back to uint8 because the values are guaranteed to be < 256 again.
Screen superimposes two inverted images on top of each other (source)
You can do this too (without NumPy):
import cv2
# read the input images, they can be color (RGB) images too
im1 = cv2.imread('im1.jpg')
im2 = cv2.imread('im2.jpg')
# images must be of same size, if not resize one of the images
if im1.shape != im2.shape:
    im2 = cv2.resize(im2, im1.shape[:2][::-1], interpolation=cv2.INTER_AREA)
# invert and normalize first image
im1 = cv2.normalize(cv2.bitwise_not(im1), None, 0, 1, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
# invert and normalize second image
im2 = cv2.normalize(cv2.bitwise_not(im2), None, 0, 1, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
# superimpose two images, re-normalize and invert
im = cv2.bitwise_not(cv2.normalize(cv2.multiply(im1, im2), None, 0, 255, cv2.NORM_MINMAX, dtype=cv2.CV_8U))
# write the output image
cv2.imwrite('im.jpg', im)
I have an image and a circular zone. I need to blur everything except the circular zone, and I also need to make the border of the circle smooth.
The input:
The output (made in an image editor with a mask, but I think OpenCV uses only bitmap masks):
For now I have code in Python which doesn't blur the border of the circle.
def blur_image(cv_image, radius, center, gaussian_core, sigma_x):
    blurred = cv.GaussianBlur(cv_image, gaussian_core, sigma_x)
    h, w, d = cv_image.shape
    # masks
    circle_mask = np.ones((h, w), cv_image.dtype)
    cv.circle(circle_mask, center, radius, (0, 0, 0), -1)
    circle_not_mask = np.zeros((h, w), cv_image.dtype)
    cv.circle(circle_not_mask, center, radius, (2, 2, 2), -1)
    # Computing
    blur_around = cv.bitwise_and(blurred, blurred, mask=circle_mask)
    image_in_circle = cv.bitwise_and(cv_image, cv_image, mask=circle_not_mask)
    res = cv.bitwise_or(blur_around, image_in_circle)
    return res
Current version:
How can I blur the border of the circle? In the example output I used a gradient mask in an image editor. Is there something similar in OpenCV?
UPDATE 04.03
So, I've tried the formula from this answered topic, and here is what I have:
Code:
def blend_with_mask_matrix(src1, src2, mask):
    res = src2 * (1 - cv.divide(mask, 255.0)) + src1 * cv.divide(mask, 255.0)
    return res
This code should work like the previous version, but it doesn't. The image inside the circle is slightly different; it has some problems with color. The question is still open.
I think maybe you want something like this.
This is the source image:
The source and blurred pair:
The mask and alpha-blended pair:
The code, with descriptions in the code comments:
#!/usr/bin/python3
# 2018.01.16 13:07:05 CST
# 2018.01.16 13:54:39 CST
import cv2
import numpy as np

def alphaBlend(img1, img2, mask):
    """ alphaBlend img1 and img2 (of CV_8UC3) with mask (CV_8UC1 or CV_8UC3)
    """
    if mask.ndim == 3 and mask.shape[-1] == 3:
        alpha = mask / 255.0
    else:
        alpha = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR) / 255.0
    blended = cv2.convertScaleAbs(img1 * (1 - alpha) + img2 * alpha)
    return blended

img = cv2.imread("test.png")
H, W = img.shape[:2]

mask = np.zeros((H, W), np.uint8)
cv2.circle(mask, (325, 350), 40, (255, 255, 255), -1, cv2.LINE_AA)
mask = cv2.GaussianBlur(mask, (21, 21), 11)

blured = cv2.GaussianBlur(img, (21, 21), 11)
blended1 = alphaBlend(img, blured, mask)
blended2 = alphaBlend(img, blured, 255 - mask)

cv2.imshow("blended1", blended1)
cv2.imshow("blended2", blended2)
cv2.waitKey()
cv2.destroyAllWindows()
Some useful links:
Alpha Blending in OpenCV C++ : Combining 2 images with transparent mask in opencv
Alpha Blending in OpenCV Python:
Gradient mask blending in opencv python
So the main problem with (mask/255) * blur + (1 - mask/255) * another_img was the operators: they were working on only one channel. The next problem was working with floating-point numbers for the "smoothing".
I've changed the alpha-channel blending code to this:
1) Take every channel of the source images and the mask
2) Apply the formula
3) Merge the channels
def blend_with_mask_matrix(src1, src2, mask):
    res_channels = []
    for c in range(0, src1.shape[2]):
        a = src1[:, :, c]
        b = src2[:, :, c]
        m = mask[:, :, c]
        res = cv.add(
            cv.multiply(b, cv.divide(np.full_like(m, 255) - m, 255.0, dtype=cv.CV_32F), dtype=cv.CV_32F),
            cv.multiply(a, cv.divide(m, 255.0, dtype=cv.CV_32F), dtype=cv.CV_32F),
            dtype=cv.CV_8U)
        res_channels += [res]
    res = cv.merge(res_channels)
    return res
And as a gradient mask I'm just using a blurred circle.
def blur_image(cv_image, radius, center, gaussian_core, sigma_x):
    blurred = cv.GaussianBlur(cv_image, gaussian_core, sigma_x)
    circle_not_mask = np.zeros_like(cv_image)
    cv.circle(circle_not_mask, center, radius, (255, 255, 255), -1)
    # Smoothing borders
    cv.GaussianBlur(circle_not_mask, (101, 101), 111, dst=circle_not_mask)
    # Computing
    res = blend_with_mask_matrix(cv_image, blurred, circle_not_mask)
    return res
Result:
It runs a bit slower than the very first version without smooth borders, but it's OK.
Closing question.
You can easily mask an image using the following function:
import cv2

def transparentOverlay(src, overlay, pos=(0, 0), scale=1):
    overlay = cv2.resize(overlay, (0, 0), fx=scale, fy=scale)
    h, w, _ = overlay.shape  # size of foreground
    rows, cols, _ = src.shape  # size of background image
    y, x = pos[0], pos[1]  # position of foreground/overlay image
    # loop over all pixels and apply the blending equation
    for i in range(h):
        for j in range(w):
            if x + i >= rows or y + j >= cols:
                continue
            alpha = float(overlay[i][j][3] / 255.0)  # read the alpha channel
            src[x + i][y + j] = alpha * overlay[i][j][:3] + (1 - alpha) * src[x + i][y + j]
    return src
You need to pass the source image, then the overlay mask, and the position where you want to place the mask.
You can even set the overlay scale, by calling it like this:
transparentOverlay(face_cigar_roi_color, cigar, (int(w/2), int(sh_cigar/2)))
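As a side note, the per-pixel loop can be slow on large images. Here is a sketch of an equivalent vectorized blend with NumPy, under the assumption that the BGRA overlay fits entirely inside the source at pos:

import numpy as np

def transparentOverlayFast(src, overlay, pos=(0, 0)):
    # assumes overlay is H x W x 4 (BGRA) and lies fully inside src at pos
    x, y = pos[1], pos[0]  # match the row/column convention of the loop version
    h, w = overlay.shape[:2]
    alpha = overlay[:, :, 3:4] / 255.0  # H x W x 1, broadcasts over the BGR channels
    roi = src[x:x + h, y:y + w].astype(float)
    src[x:x + h, y:y + w] = (alpha * overlay[:, :, :3] + (1 - alpha) * roi).astype(src.dtype)
    return src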
For details you can look at this link: Face masking and Overlay using OpenCV python
Output:
You can try using a function from the PIL library.
Example:
from PIL import Image, ImageFilter

blur_factor = 3  # for smooth borders, as you have mentioned

# mask: your circle = 255, background = 0
blurred_mask = mask.filter(ImageFilter.GaussianBlur(blur_factor))

# blurred_img is the image you have already blurred;
# original_img is your sharp, non-blurred image
final_img = Image.composite(blurred_img, original_img, blurred_mask)
I want to use paste from the Python PIL library to paste an image onto a black background.
I know I can use the image itself as an alpha mask, but I only want the parts of the image where the alpha value is 255.
How is this possible?
Here is my code so far:
from PIL import Image

img = Image.open('in.png')
background = Image.new('RGBA', (825, 1125), (0, 0, 0, 255))
offset = (50, 50)
background.paste(img, offset, img)  # image as alpha mask as third param
background.save('out.png')
I can't find anything about this in the official, but rather poor, documentation.
If I understand your question correctly, then this is a possible solution. It generates a dedicated mask, which is used for the paste:
from PIL import Image
img = Image.open('in.png')
# Extract alpha band from img
mask = img.split()[-1]
width, height = mask.size
# Iterate through alpha pixels,
# perform desired conversion
pixels = mask.load()
for x in range(0, width):
    for y in range(0, height):
        if pixels[x, y] < 255:
            pixels[x, y] = 0
# Paste image with converted alpha mask
background = Image.new('RGBA', (825, 1125), (0, 0, 0, 255))
background.paste(img, (50, 50), mask)
background.save('out.png')
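As a side note, the same thresholding can be done without an explicit loop using Image.point, which maps a function over every pixel of the band:

# keep only fully opaque pixels; everything below 255 becomes 0
mask = img.split()[-1].point(lambda p: 255 if p == 255 else 0)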
As a note, the alpha channel of the background image is fairly useless. If you don't need it later on, you could also create the background with:
background = Image.new('RGB', (825, 1125), (0, 0, 0))