I am working on a panorama with Python OpenCV. Can someone show me how to get rid of the black lines in my final images? I am thinking maybe I should first check for the color, i.e. (0,0,0), before copying it to the atlas image, but I am not quite sure how to do that.
import cv2
import numpy as np

def warpTwoImages(img1, img2, H):
    '''warp img2 to img1 with homography H'''
    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]
    pts1 = np.float32([[0,0],[0,h1],[w1,h1],[w1,0]]).reshape(-1,1,2)
    pts2 = np.float32([[0,0],[0,h2],[w2,h2],[w2,0]]).reshape(-1,1,2)
    # project the corners of img2 into img1's coordinate frame
    pts2_ = cv2.perspectiveTransform(pts2, H)
    pts = np.concatenate((pts1, pts2_), axis=0)
    [xmin, ymin] = np.int32(pts.min(axis=0).ravel() - 0.5)
    [xmax, ymax] = np.int32(pts.max(axis=0).ravel() + 0.5)
    t = [-xmin, -ymin]
    Ht = np.array([[1,0,t[0]],[0,1,t[1]],[0,0,1]])  # translation to keep everything in view
    result = cv2.warpPerspective(img2, Ht.dot(H), (xmax-xmin, ymax-ymin))
    result[t[1]:h1+t[1], t[0]:w1+t[0]] = img1
    return result
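A minimal, untested sketch of that idea (checking for (0,0,0) before copying), assuming img1 contains no legitimate pure-black pixels:

# inside warpTwoImages, replace the blind overwrite with a masked copy
roi = result[t[1]:h1+t[1], t[0]:w1+t[0]]
nonblack = (img1 > 0).any(axis=2)  # True where any channel is non-zero
roi[nonblack] = img1[nonblack]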
This answer depends on cv2.warpPerspective working with RGBA images.
You can try to use the alpha channel of each image.
Before warping, convert each image to RGBA (see the code below), where the alpha channel will be 0 for the black lines and 255 for all other pixels.
import cv2
import numpy as np

# Read image
img = cv2.imread('i.jpg')

# Create a mask that is 255 on the black lines and 0 elsewhere ...
mask = cv2.inRange(img, (0,0,0), (1,1,1))
# ... then invert it: 0 for the black lines, 255 for all other pixels
mask = cv2.bitwise_not(mask)

b_channel, g_channel, r_channel = cv2.split(img)

# Create a new image with 4 channels; the fourth channel (alpha) gives the opacity of each pixel
newImage = cv2.merge((b_channel, g_channel, r_channel, mask))
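A minimal sketch of the final compositing step, assuming `newImage` from above has been warped with cv2.warpPerspective (the `panorama`, `H`, `w`, and `h` names here are placeholders, not from the original code):

# warp the RGBA image; the alpha channel is transformed along with the colors
warped = cv2.warpPerspective(newImage, H, (w, h))

# copy only the pixels whose alpha is non-zero into the 3-channel panorama
visible = warped[:, :, 3] > 0
panorama[visible] = warped[visible][:, :3]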
I have the function below:
import os
import cv2 as cv

def alphaMerge(small_foreground, background, top, left):
    result = background.copy()
    fg_b, fg_g, fg_r, fg_a = cv.split(small_foreground)
    print(fg_b, fg_g, fg_r, fg_a)
    fg_a = fg_a / 255.0
    label_rgb = cv.merge([fg_b * fg_a, fg_g * fg_a, fg_r * fg_a])
    height, width = small_foreground.shape[0], small_foreground.shape[1]
    part_of_bg = result[top:top + height, left:left + width, :]
    bg_b, bg_g, bg_r = cv.split(part_of_bg)
    part_of_bg = cv.merge([bg_b * (1 - fg_a), bg_g * (1 - fg_a), bg_r * (1 - fg_a)])
    cv.add(label_rgb, part_of_bg, part_of_bg)
    result[top:top + height, left:left + width, :] = part_of_bg
    return result

if __name__ == '__main__':
    folder_dir = r"C:\photo_datasets\products_small"
    logo = cv.imread(r"C:\Users\PiotrSnella\photo_datasets\discount.png", cv.IMREAD_UNCHANGED)
    for images in os.listdir(folder_dir):
        input_path = os.path.join(folder_dir, images)
        image_size = os.stat(input_path).st_size
        if image_size < 8388608:
            img = cv.imread(input_path, cv.IMREAD_UNCHANGED)
            height, width, channels = img.shape
            if height > 500 and width > 500:
                result = alphaMerge(logo, img, 0, 0)
                cv.imwrite(r'C:\photo_datasets\products_small_output_cv\{}.png'.format(images), result)
I want to combine two pictures, one with the logo, which I would like to apply to the full dataset from the folder products_small. I'm getting an error at the line part_of_bg = cv.merge([bg_b * (1 - fg_a), bg_g * (1 - fg_a), bg_r * (1 - fg_a)]): ValueError: operands could not be broadcast together with shapes (720,540) (766,827)
I tried other combining options and still get the shape-mismatch error. Could the photo be the problem, or is it something in the code?
Thank you for your help guys :)
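As a side note: the broadcast error says the logo (766 rows by 827 columns) is larger than the 720x540 background, so the ROI sliced from the background is smaller than the foreground. A minimal guard (a sketch, not part of the original script) could shrink the logo first:

# hypothetical guard: shrink the logo if it does not fit the background
fh, fw = logo.shape[:2]
bh, bw = img.shape[:2]
if fh > bh or fw > bw:
    scale = min(bh / fh, bw / fw)
    logo_fit = cv.resize(logo, (0, 0), fx=scale, fy=scale)
else:
    logo_fit = logo
result = alphaMerge(logo_fit, img, 0, 0)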
Here is one way to do that in Python/OpenCV. I will place a 20% resized logo onto the pants image at coordinates 660,660 on the right side pocket.
Read the background image (pants)
Read the foreground image (logo) unchanged to preserve the alpha channel
Resize the foreground (logo) to 20%
Create a transparent image the size of the background image
Insert the resized foreground (logo) into the transparent image at the desired location
Extract the alpha channel from the inserted, resized foreground image
Extract the base BGR channels from the inserted, resized foreground image
Blend the background image and the base BGR image with np.where(), using the alpha channel as the controlling mask. Note that all images must have the same dimensions and 3 channels
Save the result
Background Image:
Foreground Image:
import cv2
import numpy as np
# read background image
bimg = cv2.imread('pants.jpg')
hh, ww = bimg.shape[:2]
# read foreground image
fimg = cv2.imread('flashsale.png', cv2.IMREAD_UNCHANGED)
# resize foreground image
fimg_small = cv2.resize(fimg, (0,0), fx=0.2, fy=0.2)
ht, wd = fimg_small.shape[:2]
# create transparent image
fimg_new = np.full((hh,ww,4), (0,0,0,0), dtype=np.uint8)
# insert resized image into transparent image at desired coordinates
fimg_new[660:660+ht, 660:660+wd] = fimg_small
# extract alpha channel from foreground image as mask and make 3 channels
alpha = fimg_new[:,:,3]
alpha = cv2.merge([alpha,alpha,alpha])
# extract bgr channels from foreground image
base = fimg_new[:,:,0:3]
# blend the two images using the alpha channel as controlling mask
result = np.where(alpha==(0,0,0), bimg, base)
# save result
cv2.imwrite("pants_flashsale.png", result)
# show result
cv2.imshow("RESULT", result)
cv2.waitKey(0)
Result:
This just requires some multiplication and subtraction.
Your overlay has an actual alpha channel, not just a boolean mask. You should use it. It makes edges look better than just a hard boolean mask.
I see one issue with your overlay: it doesn't have any "shadow" to give the white text contrast against a potentially white background.
Resizing RGBA data is not trivial. You'd be better off exporting the graphic from your vector graphics program at the desired resolution in the first place. Resizing after the fact requires care so that partially transparent pixels (neither 100% opaque nor 100% transparent) are calculated properly, and undefined "background" color from the fully transparent areas of the overlay image is not mixed into them.
import cv2 as cv
import numpy as np

base = cv.imread("U3hRd.jpg")
overlay = cv.imread("OBxGQ.png", cv.IMREAD_UNCHANGED)
(bheight, bwidth) = base.shape[:2]
(oheight, owidth) = overlay.shape[:2]
print("base:", bheight, bwidth)
print("overlay:", oheight, owidth)
# place overlay in center
#ox = (bwidth - owidth) // 2
#oy = (bheight - oheight) // 2
# place overlay in top left
ox = 0
oy = 0
overlay_color = overlay[:,:,:3]
overlay_alpha = overlay[:,:,3] * np.float32(1/255)
# "unsqueeze" (insert 1-sized color dimension) so numpy broadcasting works
overlay_alpha = np.expand_dims(overlay_alpha, axis=2)
composite = base.copy()
base_roi = base[oy:oy+oheight, ox:ox+owidth]
composite_roi = composite[oy:oy+oheight, ox:ox+owidth]
composite_roi[:,:] = overlay_color * overlay_alpha + base_roi * (1 - overlay_alpha)
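On the resizing caveat mentioned above: if you do have to resize RGBA data after the fact, one common approach is to premultiply the color channels by alpha before resizing and unpremultiply afterwards. A rough sketch (my own illustration; "overlay.png" is a placeholder):

import cv2 as cv
import numpy as np

rgba = cv.imread("overlay.png", cv.IMREAD_UNCHANGED).astype(np.float32)

# premultiply so fully transparent pixels contribute no color during interpolation
alpha = rgba[:, :, 3:4] / 255.0
rgba[:, :, :3] *= alpha

small = cv.resize(rgba, (0, 0), fx=0.5, fy=0.5, interpolation=cv.INTER_AREA)

# unpremultiply, guarding against division by zero in fully transparent areas
small_alpha = small[:, :, 3:4] / 255.0
small[:, :, :3] /= np.maximum(small_alpha, 1e-6)
small = np.clip(small, 0, 255).astype(np.uint8)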
This is what you wanted in the top left corner. Notice that the logo on a white foreground doesn't work against the background of pant.jpg.
Just 17 lines of code, compared to:
import cv2
import numpy as np
img1 = cv2.imread('pant.jpg')
overlay_img1 = np.ones(img1.shape,np.uint8)*255
img2 = cv2.imread('logo3.png')
rows,cols,channels = img2.shape
overlay_img1[0:rows, 0:cols ] = img2
img2gray = cv2.cvtColor(overlay_img1,cv2.COLOR_BGR2GRAY)
ret, mask = cv2.threshold(img2gray,220,255,cv2.THRESH_BINARY_INV)
mask_inv = cv2.bitwise_not(mask)
temp1 = cv2.bitwise_and(img1,img1,mask = mask_inv)
temp2 = cv2.bitwise_and(overlay_img1,overlay_img1, mask = mask)
result = cv2.add(temp1,temp2)
cv2.imshow("Result",result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
Resized logo (320x296):
I'm trying to merge two RGBA images (with a shape of (h,w,4)), taking into account their alpha channels.
Example :
What I've tried
I tried to do this using OpenCV, but I'm getting some strange pixels in the output image.
Images Used:
and
import cv2
import numpy as np
import matplotlib.pyplot as plt
image1 = cv2.imread("image1.png", cv2.IMREAD_UNCHANGED)
image2 = cv2.imread("image2.png", cv2.IMREAD_UNCHANGED)
mask1 = image1[:,:,3]
mask2 = image2[:,:,3]
mask2_inv = cv2.bitwise_not(mask2)
mask2_bgra = cv2.cvtColor(mask2, cv2.COLOR_GRAY2BGRA)
mask2_inv_bgra = cv2.cvtColor(mask2_inv, cv2.COLOR_GRAY2BGRA)
# output = image2*mask2_bgra + image1
output = cv2.bitwise_or(cv2.bitwise_and(image2, mask2_bgra), cv2.bitwise_and(image1, mask2_inv_bgra))
output[:,:,3] = cv2.bitwise_or(mask1, mask2)
plt.figure(figsize=(12,12))
plt.imshow(cv2.cvtColor(output, cv2.COLOR_BGRA2RGBA))
plt.axis('off')
Output :
So what I figured out is that I'm getting those weird pixels because I used the cv2.bitwise_and function (which, by the way, works perfectly with binary alpha channels).
I tried using different approaches.
Question
Is there an approach to do this while keeping the output image as an 8-bit image?
I was able to obtain the expected result in 2 stages.
import cv2
import numpy as np

# Read both images preserving the alpha channel
hh1 = cv2.imread(r'C:\Users\524316\Desktop\Stack\house.png', cv2.IMREAD_UNCHANGED)
hh2 = cv2.imread(r'C:\Users\524316\Desktop\Stack\memo.png', cv2.IMREAD_UNCHANGED)
# store the alpha channels only
m1 = hh1[:,:,3]
m2 = hh2[:,:,3]
# invert the alpha channels and obtain 4-channel masks of float data type
m1i = cv2.bitwise_not(m1)
alpha1i = cv2.cvtColor(m1i, cv2.COLOR_GRAY2BGRA)/255.0
m2i = cv2.bitwise_not(m2)
alpha2i = cv2.cvtColor(m2i, cv2.COLOR_GRAY2BGRA)/255.0
# Perform blending and limit pixel values to 0-255 (convert to 8-bit)
b1i = cv2.convertScaleAbs(hh2*(1-alpha2i) + hh1*alpha2i)
Note: in the above, we are only using the inverse alpha channel of the memo image.
But I guess this is not the expected result. So moving on ...
# Finding common ground between both the inverted alpha channels
mul = cv2.multiply(alpha1i,alpha2i)
# converting to 8-bit
mulint = cv2.normalize(mul, dst=None, alpha=0, beta=255,norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
# again create a 4-channel mask of float data type
alpha = cv2.cvtColor(mulint[:,:,2], cv2.COLOR_GRAY2BGRA)/255.0
# perform blending using previous output and multiplied result
final = cv2.convertScaleAbs(b1i*(1-alpha) + mulint*alpha)
Sorry for the weird variable names. I would encourage you to examine the result at each line. I hope this is the expected output.
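As an aside, the standard Porter-Duff "over" formula can also be written directly in NumPy. A minimal sketch (the variable handling is mine, assuming both images are 8-bit BGRA of the same size):

import cv2
import numpy as np

fg = cv2.imread("image2.png", cv2.IMREAD_UNCHANGED).astype(np.float64) / 255.0
bg = cv2.imread("image1.png", cv2.IMREAD_UNCHANGED).astype(np.float64) / 255.0

fa = fg[:, :, 3:4]   # foreground alpha, kept 3-dimensional for broadcasting
ba = bg[:, :, 3:4]   # background alpha

# composite alpha and color according to the "over" operator
out_a = fa + ba * (1.0 - fa)
safe_a = np.where(out_a == 0, 1.0, out_a)  # avoid dividing by zero
out_rgb = (fg[:, :, :3] * fa + bg[:, :, :3] * ba * (1.0 - fa)) / safe_a

out = np.dstack([out_rgb, out_a])
cv2.imwrite("merged.png", (out * 255).round().astype(np.uint8))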
You could use the PIL library to achieve this:
from PIL import Image

def merge_images(im1, im2):
    bg = Image.open(im1).convert("RGBA")
    fg = Image.open(im2).convert("RGBA")
    x, y = ((bg.width - fg.width) // 2, (bg.height - fg.height) // 2)
    # paste fg onto bg, using fg itself as the transparency mask
    bg.paste(fg, (x, y), fg)
    # convert to 8 bits (palette mode)
    return bg.convert("P")
We can test it using the provided images:
result_image = merge_images("image1.png", "image2.png")
result_image.save("image3.png")
Here's the result:
I have an image of a human body showing skin. How can I change the color of the skin, assuming I have another skin color and a mask of the exposed skin in the body image?
Here is one way to do that in Python/OpenCV. I am not sure how robust it is.
Basically, we get the average color of the face. Then we get the difference (in each channel) between that and the desired color. Then we add the difference to the input image. Finally, we use the mask to combine the original and new images.
Input:
Facemask:
import cv2
import numpy as np
import skimage.exposure
# specify desired bgr color for new face and make into array
desired_color = (180, 128, 200)
desired_color = np.asarray(desired_color, dtype=np.float64)
# create swatch
swatch = np.full((200,200,3), desired_color, dtype=np.uint8)
# read image
img = cv2.imread("zelda1.jpg")
# read face mask as grayscale and threshold to binary
facemask = cv2.imread("zelda1_facemask.png", cv2.IMREAD_GRAYSCALE)
facemask = cv2.threshold(facemask, 128, 255, cv2.THRESH_BINARY)[1]
# get average bgr color of face
ave_color = cv2.mean(img, mask=facemask)[:3]
print(ave_color)
# compute difference colors and make into an image the same size as input
diff_color = desired_color - ave_color
diff_color = np.full_like(img, diff_color, dtype=np.uint8)
# shift input image color
# cv2.add clips automatically
new_img = cv2.add(img, diff_color)
# antialias mask, convert to float in range 0 to 1 and make 3-channels
facemask = cv2.GaussianBlur(facemask, (0,0), sigmaX=3, sigmaY=3, borderType = cv2.BORDER_DEFAULT)
facemask = skimage.exposure.rescale_intensity(facemask, in_range=(100,150), out_range=(0,1)).astype(np.float32)
facemask = cv2.merge([facemask,facemask,facemask])
# combine img and new_img using mask
result = (img * (1 - facemask) + new_img * facemask)
result = result.clip(0,255).astype(np.uint8)
# save result
cv2.imwrite('zelda1_swatch.png', swatch)
cv2.imwrite('zelda1_recolor.png', result)
cv2.imshow('swatch', swatch)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Desired color swatch:
Result:
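Another way is to detect the skin range in HSV space, recolor it, and then blend the recolored image back into the original, as in the following script: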
import cv2
import numpy as np
import skimage.exposure
#usage
#put this script and the image face.jpg in the same directory /dir
#run these 2 commands inside bash
#cd /dir
#python change_skin_v1.py
#script_name= change_skin_v1.py
#you can change the 3 parameters: alpha, skincolor_low, skincolor_high
#path file
path_face="./face.jpg"
result_partial="./result_partial.png"
result_final="./result_final.png"
#blending parameter
alpha = 0.7
# Define lower and upper limits of what we call "skin color"
skincolor_low=np.array([0,10,60])
skincolor_high=np.array([180,150,255])
#specify desired bgr color (brown) for the new face.
#this value is approximate
desired_color_bgr = (2, 70, 140)
# read face
img_main_face = cv2.imread(path_face)
# face.jpg has by default the BGR format, convert BGR to HSV
hsv=cv2.cvtColor(img_main_face,cv2.COLOR_BGR2HSV)
#create the HSV mask
mask=cv2.inRange(hsv,skincolor_low,skincolor_high)
# Change the image to brown where we found skin
img_main_face[mask>0]=desired_color_bgr
cv2.imwrite(result_partial,img_main_face)
#blending block start
#alpha range for blending is 0-1
# load images for blending
src1 = cv2.imread(result_partial)
src2 = cv2.imread(path_face)
if src1 is None:
    print("Error loading src1")
    exit(-1)
elif src2 is None:
    print("Error loading src2")
    exit(-1)
# actually blend_images
result_final = cv2.addWeighted(src1, alpha, src2, 1-alpha, 0.0)
cv2.imwrite('./result_final.png', result_final)
#blending block end
Hello, I want to reflect an object in the image, as in this image: https://i.stack.imgur.com/N9J3I.jpg
How can I get this kind of result?
OpenCV may not have a good solution for this; take a closer look at Pillow.
from PIL import Image, ImageFilter

def drop_shadow(image, iterations=3, border=8, offset=(5,5), background_colour=0xffffff, shadow_colour=0x444444):
    shadow_width = image.size[0] + abs(offset[0]) + 2 * border
    shadow_height = image.size[1] + abs(offset[1]) + 2 * border
    shadow = Image.new(image.mode, (shadow_width, shadow_height), background_colour)
    shadow_left = border + max(offset[0], 0)
    shadow_top = border + max(offset[1], 0)
    shadow.paste(shadow_colour, [shadow_left, shadow_top, shadow_left + image.size[0], shadow_top + image.size[1]])
    for i in range(iterations):
        shadow = shadow.filter(ImageFilter.BLUR)
    img_left = border - min(offset[0], 0)
    img_top = border - min(offset[1], 0)
    shadow.paste(image, (img_left, img_top))
    return shadow
drop_shadow(Image.open('boobs.jpg')).save('shadowed_boobs.png')
Here is one way to do the reflection in Python/OpenCV.
Flip the image. Then make a vertical ramp (gradient) image and put it into the alpha channel of the flipped copy. Finally, concatenate the original and flipped images.
Input:
import cv2
import numpy as np
# set top and bottom opacity percentages
top = 85
btm = 15
# load image
img = cv2.imread('bear2.png')
hh, ww = img.shape[:2]
# flip the input
flip = np.flip(img, axis=0)
# add opaque alpha channel to input
img = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
# make vertical gradient that is bright at top and dark at bottom as alpha channel for the flipped image
gtop = 255*top//100
gbtm = 255*btm//100
grady = np.linspace(gbtm, gtop, hh, dtype=np.uint8)
gradx = np.linspace(1, 1, ww, dtype=np.uint8)
grad = np.outer(grady, gradx)
grad = np.flip(grad, axis=0)
# alternate method
#grad = np.linspace(0, 255, hh, dtype=np.uint8)
#grad = np.tile(grad, (ww,1))
#grad = np.transpose(grad)
#grad = np.flip(grad, axis=0)
# put the gradient into the alpha channel of the flipped image
flip = cv2.cvtColor(flip, cv2.COLOR_BGR2BGRA)
flip[:,:,3] = grad
# concatenate the original and the flipped versions
result = np.vstack((img, flip))
# save output
cv2.imwrite('bear2_vertical_gradient.png', grad)
cv2.imwrite('bear2_reflection.png', result)
# Display various images to see the steps
cv2.imshow('flip',flip)
cv2.imshow('grad',grad)
cv2.imshow('result',result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Ramped (Gradient) Image:
Result:
I am using Python and OpenCV to cut out an image using a mask. The mask itself is quite jagged, and so the resulting image becomes a bit jagged around the edges, like below:
Jagged image
Is there a way I can smooth out the edges so they look more like this, without affecting the rest of the image?
Smoothed edge
Thanks
SoS
** UPDATE **
Added the original jagged image without the annotation
Original Jagged image
Here is one way using OpenCV, Numpy and Skimage. I assume you actually have an image with a transparent background and not just a checkerboard pattern.
Input:
import cv2
import numpy as np
import skimage.exposure
# load image with alpha channel
img = cv2.imread('lena_circle.png', cv2.IMREAD_UNCHANGED)
# extract only bgr channels
bgr = img[:, :, 0:3]
# extract alpha channel
a = img[:, :, 3]
# blur alpha channel
ab = cv2.GaussianBlur(a, (0,0), sigmaX=2, sigmaY=2, borderType = cv2.BORDER_DEFAULT)
# stretch so that 255 -> 255 and 127.5 -> 0
aa = skimage.exposure.rescale_intensity(ab, in_range=(127.5,255), out_range=(0,255))
# replace alpha channel in input with new alpha channel
out = img.copy()
out[:, :, 3] = aa
# save output
cv2.imwrite('lena_circle_antialias.png', out)
# Display various images to see the steps
# NOTE: In and Out show heavy aliasing. This seems to be an artifact of imshow(), which did not display transparency for me. However, the saved image looks fine
cv2.imshow('In',img)
cv2.imshow('BGR', bgr)
cv2.imshow('A', a)
cv2.imshow('AB', ab)
cv2.imshow('AA', aa)
cv2.imshow('Out', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
I am by no means an expert with OpenCV. I looked at cv2.normalize(), but it did not look like I could provide my own sets of input and output values. So I also tried the following, adding clipping to be sure there were no overflows or underflows:
aa = a*2.0 - 255.0
aa[aa<0] = 0
aa[aa>255] = 255
where I computed that by solving simultaneous equations such that in=255 becomes out=255 and in=127.5 becomes out=0, doing a linear stretch between:
C = A*X + B
255 = A*255 + B
0 = A*127.5 + B
Thus A=2 and B=-255.
But that does not work nearly as well as skimage rescale_intensity.
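For reference, the same stretch-with-clipping can be written in one line with np.clip, equivalent to the equations above (my own phrasing, not from the original answer):

# linear stretch: 127.5 -> 0, 255 -> 255, clipped to the 8-bit range
aa = np.clip(a.astype(np.float32) * 2.0 - 255.0, 0, 255).astype(np.uint8)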
These are some effects you can do with the PIL image library:
from PIL import Image, ImageFilter
im_1 = Image.open("/constr/pics1/russian_doll.png")
im_2 = im_1.filter(ImageFilter.BLUR)
im_3 = im_1.filter(ImageFilter.CONTOUR)
im_4 = im_1.filter(ImageFilter.DETAIL)
im_5 = im_1.filter(ImageFilter.EDGE_ENHANCE)
im_6 = im_1.filter(ImageFilter.EDGE_ENHANCE_MORE)
im_7 = im_1.filter(ImageFilter.EMBOSS)
im_8 = im_1.filter(ImageFilter.FIND_EDGES)
im_9 = im_1.filter(ImageFilter.SMOOTH)
im_10 = im_1.filter(ImageFilter.SMOOTH_MORE)
im_11 = im_1.filter(ImageFilter.SHARPEN)
# now save the images
im_2.save("/constr/picsx/russian_doll_BLUR.png")
im_3.save("/constr/picsx/russian_doll_CONTOUR.png")
im_4.save("/constr/picsx/russian_doll_DETAIL.png")
im_5.save("/constr/picsx/russian_doll_EDGE_ENHANCE.png")
im_6.save("/constr/picsx/russian_doll_EDGE_ENHANCE_MORE.png")
im_7.save("/constr/picsx/russian_doll_EMBOSS.png")
im_8.save("/constr/picsx/russian_doll_FIND_EDGES.png")
im_9.save("/constr/picsx/russian_doll_SMOOTH.png")
im_10.save("/constr/picsx/russian_doll_SMOOTH_MORE.png")
im_11.save("/constr/picsx/russian_doll_SHARPEN.png")