I have this image:
And for the beard, I have this mask:
I want to cut the beard out using the mask with a transparent background like this:
I followed this SO post's attempt. Here it is:
for img in input_images:
    gaberiel = Image.open(path + '/gaberiel-images/' + img)
    beard_mask = imread(path + '/gaberiel-masks/' + 'beard_binary_' + img[:-4] + '.png', cv2.IMREAD_GRAYSCALE)
    gaberiel_x, gaberiel_y = gaberiel.size
    beard_mask_x, beard_mask_y, _ = beard_mask.shape
    x_beard_mask = min(gaberiel_x, beard_mask_x)
    x_half_beard_mask = beard_mask.shape[0] // 2
    mask_beard = beard_mask[x_half_beard_mask - x_beard_mask // 2: x_half_beard_mask + x_beard_mask // 2 + 1, :gaberiel_y]
    gaberiel_width_half = gaberiel.size[1] // 2
    gaberiel_to_mask = gaberiel[:, gaberiel_width_half - x_half_beard_mask:gaberiel_width_half + x_half_beard_mask]
    masked = cv2.bitwise_and(gaberiel_to_mask, gaberiel_to_mask, mask=mask_beard)
    tmp = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
    _, alpha = cv2.threshold(tmp, 0, 255, cv2.THRESH_BINARY)
    b, g, r = cv2.split(masked)
    rgba = [b, g, r, alpha]
    masked_tr = cv2.merge(rgba, 4)
    plt.axis('off')
    plt.imshow(masked_tr)
But this is the error I am getting:
gaberiel_to_mask = gaberiel[:, gaberiel_width_half - x_half_beard_mask:gaberiel_width_half + x_half_beard_mask]
TypeError: 'PngImageFile' object is not subscriptable
I think that my attempt is overall bad. Is there a way I can simplify this process?
Make sure your mask is the same size as your image, then use the function below:
import numpy as np

def apply_mask(image, mask):
    # Convert to numpy arrays
    image = np.array(image)
    mask = np.array(mask)
    # Stack the single-channel mask into 3 channels
    mask = np.stack((mask,) * 3, axis=-1)
    # Multiply arrays: masked-out pixels become 0
    resultant = image * mask
    return resultant

image = ...
mask = ...
resultant = apply_mask(image, mask)
Result:
For the code to work, the image array must be in the range 0-255 and the mask array must be binary (either 0 or 1). In the mask array, the beard area must have a pixel value of 1 and the rest 0, so that multiplying the image by the mask gives the desired image shown above.
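If the mask is stored as a regular 0-255 PNG rather than as 0/1 values, a minimal sketch of how it might be binarized before calling apply_mask (the file names here are placeholders):

import numpy as np
from PIL import Image

# hypothetical file names; substitute your own
image = Image.open('gaberiel.png').convert('RGB')
mask = Image.open('beard_mask.png').convert('L')

# threshold the 0-255 grayscale mask down to binary 0/1
mask = (np.array(mask) > 127).astype(np.uint8)

resultant = apply_mask(image, mask)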
Save the image as a PNG with a transparent figure background:
import matplotlib.pyplot as plt
plt.imshow(resultant)
plt.axis('off')
plt.savefig('image.png', transparent=True)
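Note that transparent=True only makes the figure background transparent; the masked-out pixels of the image itself stay black. A sketch of writing a genuine alpha channel with PIL instead, assuming resultant and the binary mask from the sketch above:

from PIL import Image
import numpy as np

# alpha is 255 where the mask is 1, fully transparent elsewhere
rgba = np.dstack([resultant, mask * 255]).astype(np.uint8)
Image.fromarray(rgba, mode='RGBA').save('image.png')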
I'm trying to merge two RGBA images (with a shape of (h,w,4)), taking into account their alpha channels.
Example:
What I've tried
I tried to do this using OpenCV, but I'm getting some strange pixels in the output image.
Images Used:
and
import cv2
import numpy as np
import matplotlib.pyplot as plt
image1 = cv2.imread("image1.png", cv2.IMREAD_UNCHANGED)
image2 = cv2.imread("image2.png", cv2.IMREAD_UNCHANGED)
mask1 = image1[:,:,3]
mask2 = image2[:,:,3]
mask2_inv = cv2.bitwise_not(mask2)
mask2_bgra = cv2.cvtColor(mask2, cv2.COLOR_GRAY2BGRA)
mask2_inv_bgra = cv2.cvtColor(mask2_inv, cv2.COLOR_GRAY2BGRA)
# output = image2*mask2_bgra + image1
output = cv2.bitwise_or(cv2.bitwise_and(image2, mask2_bgra), cv2.bitwise_and(image1, mask2_inv_bgra))
output[:,:,3] = cv2.bitwise_or(mask1, mask2)
plt.figure(figsize=(12,12))
plt.imshow(cv2.cvtColor(output, cv2.COLOR_BGRA2RGBA))
plt.axis('off')
Output:
What I figured out is that I'm getting those weird pixels because I used the cv2.bitwise_and function (which, by the way, works perfectly with binary alpha channels). I tried different approaches as well.
Question
Is there a way to do this (while keeping the output as an 8-bit image)?
I was able to obtain the expected result in 2 stages.
# Read both images preserving the alpha channel
hh1 = cv2.imread(r'C:\Users\524316\Desktop\Stack\house.png', cv2.IMREAD_UNCHANGED)
hh2 = cv2.imread(r'C:\Users\524316\Desktop\Stack\memo.png', cv2.IMREAD_UNCHANGED)
# store the alpha channels only
m1 = hh1[:,:,3]
m2 = hh2[:,:,3]
# invert the alpha channel and obtain 3-channel mask of float data type
m1i = cv2.bitwise_not(m1)
alpha1i = cv2.cvtColor(m1i, cv2.COLOR_GRAY2BGRA)/255.0
m2i = cv2.bitwise_not(m2)
alpha2i = cv2.cvtColor(m2i, cv2.COLOR_GRAY2BGRA)/255.0
# Perform blending and limit pixel values to 0-255 (convert to 8-bit)
b1i = cv2.convertScaleAbs(hh2*(1-alpha2i) + hh1*alpha2i)
Note: in the above we are using only the inverted alpha channel of the memo image.
But I guess this is not the expected result, so moving on...
# Finding common ground between both the inverted alpha channels
mul = cv2.multiply(alpha1i,alpha2i)
# converting to 8-bit
mulint = cv2.normalize(mul, dst=None, alpha=0, beta=255,norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
# again create 3-channel mask of float data type
alpha = cv2.cvtColor(mulint[:,:,2], cv2.COLOR_GRAY2BGRA)/255.0
# perform blending using previous output and multiplied result
final = cv2.convertScaleAbs(b1i*(1-alpha) + mulint*alpha)
Sorry for the weird variable names. I would suggest analyzing the result at each line. I hope this is the expected output.
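For reference, the standard Porter-Duff "over" operator achieves a similar composite in one pass. A minimal sketch (not the method above, just the textbook formula), assuming both inputs are 8-bit BGRA images of equal size with straight (non-premultiplied) alpha:

import cv2
import numpy as np

def over(fg, bg):
    # straight-alpha "over": fg composited on top of bg
    a_fg = fg[:, :, 3:4] / 255.0
    a_bg = bg[:, :, 3:4] / 255.0
    # combined coverage: a_out = a_fg + a_bg * (1 - a_fg)
    a_out = a_fg + a_bg * (1 - a_fg)
    # blend colors, guarding against division by zero where a_out == 0
    rgb = (fg[:, :, :3] * a_fg + bg[:, :, :3] * a_bg * (1 - a_fg)) / np.maximum(a_out, 1e-6)
    return cv2.convertScaleAbs(np.dstack([rgb, a_out * 255.0]))

With the variables above, over(hh2, hh1) should put the memo on top of the house.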
You could use the PIL library to achieve this:
from PIL import Image

def merge_images(im1, im2):
    bg = Image.open(im1).convert("RGBA")
    fg = Image.open(im2).convert("RGBA")
    x, y = ((bg.width - fg.width) // 2, (bg.height - fg.height) // 2)
    bg.paste(fg, (x, y), fg)
    # convert to 8 bits (palette mode)
    return bg.convert("P")
We can test it using the provided images:
result_image = merge_images("image1.png", "image2.png")
result_image.save("image3.png")
Here's the result:
Hello, I want to reflect an object in an image, as in this example: https://i.stack.imgur.com/N9J3I.jpg
How can I get this kind of result?
OpenCV may not have a good solution for this; take a closer look at Pillow.
from PIL import Image, ImageFilter

def drop_shadow(image, iterations=3, border=8, offset=(5, 5), background_colour=0xffffff, shadow_colour=0x444444):
    shadow_width = image.size[0] + abs(offset[0]) + 2 * border
    shadow_height = image.size[1] + abs(offset[1]) + 2 * border
    shadow = Image.new(image.mode, (shadow_width, shadow_height), background_colour)
    shadow_left = border + max(offset[0], 0)
    shadow_top = border + max(offset[1], 0)
    shadow.paste(shadow_colour, [shadow_left, shadow_top, shadow_left + image.size[0], shadow_top + image.size[1]])
    for i in range(iterations):
        shadow = shadow.filter(ImageFilter.BLUR)
    img_left = border - min(offset[0], 0)
    img_top = border - min(offset[1], 0)
    shadow.paste(image, (img_left, img_top))
    return shadow

drop_shadow(Image.open('boobs.jpg')).save('shadowed_boobs.png')
Here is one way to do the reflection in Python/OpenCV.
Flip the image, make a vertical ramp (gradient) image and put it into the alpha channel of the flipped image, then concatenate the original and flipped images.
Input:
import cv2
import numpy as np
# set top and bottom opacity percentages
top = 85
btm = 15
# load image
img = cv2.imread('bear2.png')
hh, ww = img.shape[:2]
# flip the input
flip = np.flip(img, axis=0)
# add opaque alpha channel to input
img = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
# make vertical gradient that is bright at top and dark at bottom as alpha channel for the flipped image
gtop = 255*top//100
gbtm = 255*btm//100
grady = np.linspace(gbtm, gtop, hh, dtype=np.uint8)
gradx = np.linspace(1, 1, ww, dtype=np.uint8)
grad = np.outer(grady, gradx)
grad = np.flip(grad, axis=0)
# alternate method
#grad = np.linspace(0, 255, hh, dtype=np.uint8)
#grad = np.tile(grad, (ww,1))
#grad = np.transpose(grad)
#grad = np.flip(grad, axis=0)
# put the gradient into the alpha channel of the flipped image
flip = cv2.cvtColor(flip, cv2.COLOR_BGR2BGRA)
flip[:,:,3] = grad
# concatenate the original and the flipped versions
result = np.vstack((img, flip))
# save the gradient and the result
cv2.imwrite('bear2_vertical_gradient.png', grad)
cv2.imwrite('bear2_reflection.png', result)
# Display various images to see the steps
cv2.imshow('flip',flip)
cv2.imshow('grad',grad)
cv2.imshow('result',result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Ramped (Gradient) Image:
Result:
First of all, I am not asking anyone to do my homework; I would like an explanation or clarification of my difficulties in understanding the following question.
I just finished my image processing test, but there was one question that I could not solve due to my confusion.
The question is:
Write the code to detect the red eye in a given image in RGB color space using the following formula for HSL color space:
LS_ratio = L / S
eye_pixel = (L >= 64) and (S >= 100) and (LS_ratio > 0.5) and (LS_ratio < 1.5) and ((H <= 7) or (H >= 162))
Please note that in above formula, H, S and L represent a single pixel value for the image in HSL color space and the value of ‘eye_pixel’ will be either True or False depending on the values of H, S and L (i.e. it will be either a red eye color pixel or not).
Your task is to write the code to check all pixels in the image. Store the result as a numpy array and display the resulted image.
My code is:
from __future__ import print_function
import numpy as np
import argparse
import cv2
#argument parser
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required = True, help = "Path to the image")
args = vars(ap.parse_args())
#load the image
image = cv2.imread(args["image"])
#Convert image to HLS
hls = cv2.cvtColor(image, cv2.COLOR_BGR2HLS)
#Split HLS Channels
H = hls[:, :, 0]
S = hls[:, :, 1]
L = hls[:, :, 2]
LS_ratio = L / S
#eye_pixel = (L >= 64) and (S >= 100) and (LS_ratio > 0.5) and (LS_ratio < 1.5) and ((H <= 7) or (H >= 162))
#if HSL pixel
#eye pixel either red or not
#show the image
#cv2.imshow("Image", np.hstack([image, red_eye]))
#debug
print("Lightness is: {}".format(L))
print("Saturation is: {}".format(S))
print("Hue is: {}".format(H))
#print("LS ratio: {}", LS_ratio)
cv2.waitKey(0)
Suppose that the image is:
I am genuinely confused about what needs to be done. I would highly appreciate it if anyone could explain what should be done.
Thank you.
All you need to do is implement the formula in terms of the entire H, L, S images.
#Convert image to HLS
hls = cv2.cvtColor(image, cv2.COLOR_BGR2HLS)
#Split HLS Channels
H = hls[:, :, 0]
L = hls[:, :, 1]
S = hls[:, :, 2]
LS_ratio = L/(S + 1e-6)
redeye = ((L>=64) * (S>=100) * np.logical_or(H<=7, H>=162) * (LS_ratio>0.5) * (LS_ratio<1.5)).astype(bool)
Here redeye is a bool array the same size as your original image, where each pixel contains True or False, representing whether or not it's a red-eye pixel. To display the image:
redeye = cv2.cvtColor(redeye.astype(np.uint8)*255, cv2.COLOR_GRAY2BGR)
cv2.imshow('image-redeye', np.hstack([image, redeye]))
Forgive me if I am unable to explain well; I am not a native speaker.
I am working on blurring part of an image according to the white part of a segmentation map. For example, here is my segmentation image (a .bmp image).
Now what I want is to blur the part of the original image where the pixels are white in the segmentation map. I just wrote the following code to do so.
mask = mask >= 0.5
mask = np.reshape(mask, (512, 512))
mh, mw = 512, 512
mask_n = np.ones((mh, mw, 3))
mask_n[:,:,0] *= mask
mask_n[:,:,1] *= mask
mask_n[:,:,2] *= mask
# discard padded area
ih, iw, _ = image_n.shape
delta_h = mh - ih
delta_w = mw - iw
top = delta_h // 2
bottom = mh - (delta_h - top)
left = delta_w // 2
right = mw - (delta_w - left)
mask_n = mask_n[top:bottom, left:right, :]
# addWeighted
image_n = image_n *1 + cv2.blur(mask_n * 0.8, (800, 800))
Please help me, Thanks.
You can do it in the following steps:

1. Load the original image and the mask image.
2. Blur the whole original image and save it in a different variable.
3. Use np.where() to take the blurred values where the mask is white and the original values elsewhere.

See the sample code below:
import cv2
import numpy as np
img = cv2.imread("./image.png")
blurred_img = cv2.GaussianBlur(img, (21, 21), 0)
mask = cv2.imread("./mask.png")
output = np.where(mask==np.array([255, 255, 255]), blurred_img, img)
cv2.imwrite("./output.png", output)
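One caveat: np.where(mask == np.array([255, 255, 255]), ...) compares channel by channel, so mask pixels that are not exactly white (e.g. anti-aliased edges) fall back to the original per channel. If that matters, a thresholded boolean mask is more robust; a sketch, assuming the same img, blurred_img and mask as above:

# treat any sufficiently bright mask pixel as "blur here"
mask_bool = (cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY) > 127)[:, :, None]
output = np.where(mask_bool, blurred_img, img)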
Here's an alternative to the solution proposed by @Chris Henri. It relies on scipy.ndimage.filters.gaussian_filter and NumPy's boolean indexing:
from skimage import io
import numpy as np
from scipy.ndimage.filters import gaussian_filter
import matplotlib.pyplot as plt
mask = io.imread('https://i.stack.imgur.com/qJiKf.png')
img = np.random.random(size=mask.shape[:2])
idx = mask.min(axis=-1) == 255
blurred = gaussian_filter(img, sigma=3)
blurred[~idx] = 0
fig, axs = plt.subplots(1, 3, figsize=(12, 4))
for ax, im in zip(axs, [img, mask, blurred]):
    ax.imshow(im, cmap='gray')
    ax.set_axis_off()
plt.show(fig)
Here is yet another alternative, useful when you have a 2D segmentation array indicating the (mutually exclusive) segmented object class of the pixel at every index (i, j), and a 3D image to which you want to apply the blur.
def gaussian_blur(image: np.ndarray,
                  segmentation: np.ndarray,
                  classes_of_interest: list,
                  gaussian_variance: float = 10) -> np.ndarray:
    '''
    Function that applies a gaussian filter to the image,
    specifically to the pixels contained in the possible segmented classes.
    Returns an image (np.ndarray) where the gaussian blur intensity is
    regulated by the parameter gaussian_variance.
    '''
    # Apply masking to select only the indices where the specific class is present
    mask = np.isin(segmentation, classes_of_interest)
    # Create a 3D mask covering all the channels, stacked along the channel axis
    mask_3d = np.stack([mask, mask, mask], axis=2)
    # Mask the image according to the 3D mask
    img_masked = np.where(mask_3d, image, 0).astype(np.uint8)

    # Define the gaussian noise function
    def noisy(image):
        row, col, ch = image.shape
        mean = 0
        var = gaussian_variance
        sigma = np.sqrt(var)
        gauss = np.random.normal(mean, sigma, (row, col, ch))
        gauss = gauss.reshape(row, col, ch)
        # Add gaussian noise to the image
        noisy = image + gauss
        return noisy.astype(np.uint8)

    # Apply the noise to the masked segmentation
    img_masked_noisy = noisy(img_masked)
    # Put the noisy part back in the original image as a substitution
    image[mask_3d] = img_masked_noisy[mask_3d]
    return image
And here is a toy example:
import numpy as np

possible_classes = [1, 2, 3]

# Set up a toy example with a small image, shape (N, N, 3)
img = np.floor(np.random.random(size=(8, 8, 3)) * 256).astype(np.uint8)

# Set up a fake segmentation with 3 mutually exclusive possible classes, shape (N, N)
segmentation = np.random.choice(possible_classes, size=(8, 8))

new_img_blurred = gaussian_blur(img,
                                segmentation=segmentation,
                                classes_of_interest=possible_classes[:2])
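Note that noisy() adds Gaussian noise to the masked pixels rather than blurring them in the strict sense. If an actual blur is wanted, the same masking logic could be kept and the noisy() step swapped for scipy's gaussian_filter; a sketch of the replacement lines inside gaussian_blur, assuming SciPy is available:

from scipy.ndimage import gaussian_filter

# instead of calling noisy(): blur spatially, with sigma 0 on the channel axis
blurred = gaussian_filter(image, sigma=(3, 3, 0))
image[mask_3d] = blurred[mask_3d]
return image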
Like the image above suggests, how can I convert the image on the left into an array that represents the darkness of the image, between 0 for white and values closer to 1 for darker colours, as shown in the image, using Python 3?
Update:
I have tried to work a bit more on this. There are good answers below too.
import numpy
import tensorflow as tf
from IPython.display import Image  # assuming a Jupyter/IPython session for Image()

# Load image
filename = tf.constant("one.png")
image_file = tf.read_file(filename)

# Show image
Image("one.png")

# Convert method
def convertRgbToWeight(rgbArray):
    arrayWithPixelWeight = []
    for i in range(int(rgbArray.size / rgbArray[0].size)):
        for j in range(int(rgbArray[0].size / 3)):
            lum = 255 - ((rgbArray[i][j][0] + rgbArray[i][j][1] + rgbArray[i][j][2]) / 3)  # Reversed luminosity
            arrayWithPixelWeight.append(lum / 255)  # Map values from range 0-255 to 0-1
    return arrayWithPixelWeight

# Convert image to numbers and print them
image_decoded_png = tf.image.decode_png(image_file, channels=3)
image_as_float32 = tf.cast(image_decoded_png, tf.float32)

numpy.set_printoptions(threshold=numpy.nan)
sess = tf.Session()
squeezedArray = sess.run(image_as_float32)

convertedList = convertRgbToWeight(squeezedArray)
print(convertedList)  # This will give me an array of numbers.
I would recommend reading in images with OpenCV. Its biggest advantage is that it supports multiple image formats and automatically transforms the image into a numpy array. For example:
import cv2
import numpy as np
img_path = '/YOUR/PATH/IMAGE.png'
img = cv2.imread(img_path, 0) # read image as grayscale. Set second parameter to 1 if rgb is required
Now img is a numpy array with values between 0 and 255. By default, 0 equals black and 255 equals white. To change this you can use OpenCV's built-in function bitwise_not:
img_reverted= cv2.bitwise_not(img)
We can now scale the array with:
new_img = img_reverted / 255.0  # now all values range from 0 to 1, where white equals 0.0 and black equals 1.0
Load the image and then just invert and divide by 255.
Here is the image ('Untitled.png') that I used for this example: https://ufile.io/h8ncw
import numpy as np
import cv2
import matplotlib.pyplot as plt
my_img = cv2.imread('Untitled.png')
inverted_img = (255.0 - my_img)
final = inverted_img / 255.0
# Visualize the result
plt.imshow(final)
plt.show()
print(final.shape)
(661, 667, 3)
Results (final object represented as image):
You can use the PIL package to manage images. Here's an example of how it can be done.
from PIL import Image

image = Image.open('sample.png')
width, height = image.size
pixels = image.load()

# Check if the image has alpha, to avoid a "too many values to unpack" error
has_alpha = len(pixels[0, 0]) == 4

# Create an empty 2D list
fill = 1
array = [[fill for x in range(width)] for y in range(height)]

for y in range(height):
    for x in range(width):
        if has_alpha:
            r, g, b, a = pixels[x, y]
        else:
            r, g, b = pixels[x, y]
        lum = 255 - ((r + g + b) / 3)  # Reversed luminosity
        array[y][x] = lum / 255  # Map values from range 0-255 to 0-1
I think it works, but please note that the only test I did was checking that the values are in the desired range:
# Test max and min values
h, l = 0, 1
for row in array:
    h = max([max(row), h])
    l = min([min(row), l])
print(h, l)
You have to load the image from the path and then transform it to a numpy array.
The values of the image will be between 0 and 255. The next step is to standardize the numpy array.
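A minimal sketch of those steps with PIL and NumPy (the file name is a placeholder):

import numpy as np
from PIL import Image

# load as grayscale: 0 (black) to 255 (white)
img = np.array(Image.open('sample.png').convert('L'), dtype=np.float64)

# invert and scale so white -> 0.0 and black -> 1.0
darkness = (255.0 - img) / 255.0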
Hope it helps.