I want to add color to a black and white image; I assume that changing the values of the pixels should work.
for rows in rgb:
    for e in rows:
        for i in range(len(e)):
            max_val = e.max()
            min_val = e.min()
            if e[i] == max_val:
                e[i] * 2.5
            if e[i] == min_val:
                e[i] * 0.75
            else:
                e[i] * 1.5
The code doesn't return an error, but it also doesn't change the values. I want the numbers to be multiplied and reassigned in the same array.
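For reference, the immediate bug is that e[i] * 2.5 computes a product and throws it away; nothing is written back into the array. Augmented assignment stores the result in place, so the inner loop body could read (a sketch; note the elif, since the original's separate if/else also re-scales the maximum pixel by 1.5):
if e[i] == max_val:
    e[i] *= 2.5   # writes the product back (on a uint8 image, results above 255 will overflow)
elif e[i] == min_val:
    e[i] *= 0.75
else:
    e[i] *= 1.5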
Instead of manually iterating through each pixel, which has an inefficient O(n^3) run-time, we can take advantage of NumPy's broadcasting.
We first split the image into individual B, G, and R channels using cv2.split(); since the source is grayscale, all three channels hold the same values. Next we multiply each channel by a scalar value using np.multiply(). Finally we combine the individual channels back into a color image using cv2.merge(), giving a single multi-channel array.
Before
>>> print(before.shape)
(331, 500, 3)
You might be wondering why the image has three channels even though it's obviously grayscale. Well, it's because each channel has the same values, ranging from [0 ... 255].
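A quick check confirms it (assuming before is the loaded image):
>>> (before[..., 0] == before[..., 1]).all()
True
>>> (before[..., 1] == before[..., 2]).all()
True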
After
>>> print(after.shape)
(331, 500, 3)
Again, same number of channels, but we modified each individual channel
TLDR: To add color to a black and white image, we have to extract each individual BGR channel, modify each channel, then reconstruct the image
import cv2
import numpy as np

before = cv2.imread('2.png')

# Split into individual B, G, R channels (all identical for a grayscale image)
b, g, r = cv2.split(before)

# Scale each channel in place; "unsafe" casting truncates the float result
# back into the uint8 array (values above 255 will overflow, so keep the factors modest)
np.multiply(b, 1.5, out=b, casting="unsafe")
np.multiply(g, .75, out=g, casting="unsafe")
np.multiply(r, 1.25, out=r, casting="unsafe")

# Recombine the modified channels into a single multi-channel image
after = cv2.merge([b, g, r])

cv2.imshow('before', before)
cv2.imshow('after', after)
cv2.waitKey()
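As an aside, since broadcasting was mentioned above, the same per-channel scaling can be written as a single broadcasted expression. A sketch, clipping explicitly rather than relying on the unsafe cast:
# scale B, G, R by 1.5, 0.75, 1.25 across the last axis, clip, and convert back to uint8
after = np.clip(before * np.array([1.5, 0.75, 1.25]), 0, 255).astype(np.uint8)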
Here is one way to apply a gradient color to a grayscale image.
Load the grayscale image
Convert it to 3 equal channels
Create a 1 pixel red image
Create a 1 pixel blue image
Concatenate the two
Resize to 256 values to form a Lookup Table (LUT)
Apply the LUT
Input:
import cv2
import numpy as np
# load image as grayscale
img = cv2.imread('lena_gray.png', cv2.IMREAD_GRAYSCALE)
# convert to 3 equal channels
img = cv2.merge((img, img, img))
# create 1 pixel red image
red = np.zeros((1, 1, 3), np.uint8)
red[:] = (0,0,255)
# create 1 pixel blue image
blue = np.zeros((1, 1, 3), np.uint8)
blue[:] = (255,0,0)
# append the two images
lut = np.concatenate((red, blue), axis=0)
# resize lut to 256 values
lut = cv2.resize(lut, (1,256), interpolation=cv2.INTER_CUBIC)
# apply lut
result = cv2.LUT(img, lut)
# save result
cv2.imwrite('lena_red_blue_lut_mapped.png', result)
# display result
cv2.imshow('RESULT', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Concatenate as many different colored pixels as you want to form a rainbow LUT, if desired.
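For instance, a hypothetical three-colour version (red at the dark end, green in the middle, blue at the bright end) needs only one extra pixel:
# sketch: red -> green -> blue gradient LUT
green = np.zeros((1, 1, 3), np.uint8)
green[:] = (0, 255, 0)
lut = np.concatenate((red, green, blue), axis=0)
lut = cv2.resize(lut, (1, 256), interpolation=cv2.INTER_CUBIC)
result = cv2.LUT(img, lut)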
Related
It's my first time using np.where to select pixels from a BGR image, and I have no idea how to select the pixels where r > g using np.where. I tried code like this:
bgr = cv2.imread('im.jpg')
bgr = np.where(bgr[1]>bgr[2],np.full_like(bgr,[255,255,255]),bgr)
cv2.imshow('result',bgr)
cv2.waitKey(0)
but it didn't seem to work. Can anybody help me?
I think it doesn't work because the channels are the last dimension of your image. Rewrite the slicing in np.where, something like this:
bgr = cv2.imread('im.jpg')
print(bgr.shape) # (h, w, 3)
bgr = np.where(bgr[..., 2:3] > bgr[..., 1:2],  # r > g in BGR order; slicing keeps the arrays 3-D so they broadcast
               np.full_like(bgr, 255), bgr)
cv2.imshow('result', bgr)
cv2.waitKey(0)
Note that the ellipsis bgr[..., 1:2] means bgr[:, :, 1:2] here.
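A quick shape check makes the difference concrete:
print(bgr[..., 1].shape)    # (h, w)    - an integer index drops the channel axis
print(bgr[..., 1:2].shape)  # (h, w, 1) - a slice keeps it, so it broadcasts against (h, w, 3)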
You seem to want to make all pixels white where red exceeds green and retain the original colours elsewhere. If so, I find it clearer this way:
import cv2
# Load image
im = cv2.imread('image.png')
# Make simple synonyms for red and green
R = im[...,2]
G = im[...,1]
# Make True/False Boolean of pixels where red exceeds green
MoreRedThanGreen = R > G
# Make the image white wherever it is more red than green
im[MoreRedThanGreen,:] = [255,255,255]
# Save result
cv2.imwrite('result.png', im)
That makes this start image:
into this:
Note: Read the colon (:) in this line as meaning "all the channels":
im[MoreRedThanGreen,:] = [255,255,255]
The aim is to take a coloured image, and change any pixels within a certain luminosity range to black. For example, if luminosity is the average of a pixel's RGB values, any pixel with a value under 50 is changed to black.
I've attempted to begin by using PIL and converting to grayscale, but I'm having trouble finding a solution that can identify a pixel's luminosity value and use that information to manipulate a pixel map.
There are many ways to do this, but the simplest and probably fastest is with Numpy, which you should get accustomed to using with image processing in Python:
from PIL import Image
import numpy as np
# Load image and ensure RGB, not palette image
im = Image.open('start.png').convert('RGB')
# Make into Numpy array
na = np.array(im)
# Make all pixels of "na" where the mean of the R,G,B channels is less than 50 into black (0)
na[np.mean(na, axis=-1)<50] = 0
# Convert back to PIL Image to save or display
result = Image.fromarray(na)
result.show()
That turns this:
Into this:
Another slightly different way would be to convert the image to a more conventional greyscale, rather than averaging for the luminosity:
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Calculate greyscale version
grey = im.convert('L')
# Point process over pixels to make mask of darker ones
mask = grey.point(lambda p: 255 if p<50 else 0)
# Paste black (i.e. 0) into image where mask indicates it is dark
im.paste(0, mask=mask)
Notice that the blue channel is given considerably less weight in the ITU-R 601-2 luma transform that PIL uses (a weighting of 114 for blue versus 299 for red and 587 for green) in the formula:
L = R * 299/1000 + G * 587/1000 + B * 114/1000
so the blue shades are considered darker and become black. A pure blue pixel (0, 0, 255), for example, gets L = 255 * 114/1000 ≈ 29, well under the threshold of 50.
Another way would be to make a greyscale and a mask as above, but then choose the darker pixel at each location when comparing the original and the mask:
from PIL import Image, ImageChops
im = Image.open('start.png').convert('RGB')
grey = im.convert('L')
mask = grey.point(lambda p: 0 if p<50 else 255)
res = ImageChops.darker(im, mask.convert('RGB'))
That gives the same result as above.
Another way, pure PIL and probably closest to what you actually asked, would be to derive a luminosity value by averaging the channels:
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Calculate greyscale version by averaging R,G and B
grey = im.convert('L', matrix=(0.333, 0.333, 0.333, 0))
# Point process over pixels to make mask of darker ones
mask = grey.point(lambda p: 255 if p<50 else 0)
# Paste black (i.e. 0) into image where mask indicates it is dark
im.paste(0, mask=mask)
Another approach could be to split the image into its constituent RGB channels, evaluate a mathematical function over the channels and mask with the result:
from PIL import Image, ImageMath
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Split into RGB channels
(R, G, B) = im.split()
# Evaluate mathematical function over channels
dark = ImageMath.eval('(((R+G+B)/3) <= 50) * 255', R=R, G=G, B=B)
# Paste black (i.e. 0) into image where mask indicates it is dark
# (convert ImageMath's integer result to mode "L" so it is a valid paste mask)
im.paste(0, mask=dark.convert('L'))
I created a function that returns a nested list with True where a pixel's luminosity is below a given threshold, and False where it isn't. It works for both RGB and RGBA images, since only the first three channel values are averaged.
def get_avg_lum(pic, avg=50):
    # build a 2-D list of booleans, one entry per pixel
    li = [[False for y in range(pic.size[1])] for x in range(pic.size[0])]
    for x in range(pic.size[0]):
        for y in range(pic.size[1]):
            # average the first three channel values (works for RGB and RGBA)
            li[x][y] = sum(pic.getpixel((x, y))[:3]) / 3 < avg
    return li

a = get_avg_lum(im)
The pixels match in the list, so (0,10) on the image is [0][10] in the list.
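For example, a hypothetical snippet applying the returned list back to the image to black out the flagged pixels:
# black out every pixel the mask flagged as dark
for x in range(im.size[0]):
    for y in range(im.size[1]):
        if a[x][y]:
            im.putpixel((x, y), (0, 0, 0))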
Hopefully this helps. The function works on standard PIL Image objects.
I have an image of a human body showing skin. How can I change the color of the skin, assuming I have another skin color and a mask of the exposed skin in the body image?
Here is one way to do that in Python/OpenCV. I am not sure how robust it is.
Basically, we get the average color of the face, then get the difference (in each channel) between that and the desired color. We add that difference to the input image, and finally use the mask to blend the original and shifted images.
Input:
Facemask:
import cv2
import numpy as np
import skimage.exposure
# specify desired bgr color for new face and make into array
desired_color = (180, 128, 200)
desired_color = np.asarray(desired_color, dtype=np.float64)
# create swatch
swatch = np.full((200,200,3), desired_color, dtype=np.uint8)
# read image
img = cv2.imread("zelda1.jpg")
# read face mask as grayscale and threshold to binary
facemask = cv2.imread("zelda1_facemask.png", cv2.IMREAD_GRAYSCALE)
facemask = cv2.threshold(facemask, 128, 255, cv2.THRESH_BINARY)[1]
# get average bgr color of face
ave_color = cv2.mean(img, mask=facemask)[:3]
print(ave_color)
# compute difference colors and make into an image the same size as input
# (note: the cast to uint8 assumes the desired color is brighter than the
# average in every channel; negative differences would wrap around)
diff_color = desired_color - ave_color
diff_color = np.full_like(img, diff_color, dtype=np.uint8)
# shift input image color
# cv2.add clips automatically
new_img = cv2.add(img, diff_color)
# antialias mask, convert to float in range 0 to 1 and make 3-channels
facemask = cv2.GaussianBlur(facemask, (0,0), sigmaX=3, sigmaY=3, borderType = cv2.BORDER_DEFAULT)
facemask = skimage.exposure.rescale_intensity(facemask, in_range=(100,150), out_range=(0,1)).astype(np.float32)
facemask = cv2.merge([facemask,facemask,facemask])
# combine img and new_img using mask
result = (img * (1 - facemask) + new_img * facemask)
result = result.clip(0,255).astype(np.uint8)
# save result
cv2.imwrite('zelda1_swatch.png', swatch)
cv2.imwrite('zelda1_recolor.png', result)
cv2.imshow('swatch', swatch)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Desired color swatch:
Result:
import cv2
import numpy as np
import skimage.exposure
#usage
#put this script and the image face.jpg in the same directory /dir
#run these 2 commands inside bash
#cd /dir
#python change_skin_v1.py
#script_name= change_skin_v1.py
#you can change the 3 parameters: alpha, skincolor_low, skincolor_high
#path file
path_face="./face.jpg"
result_partial="./result_partial.png"
path_result_final="./result_final.png"
#blending parameter
alpha = 0.7
# Define lower and upper limits of what we call "skin color"
skincolor_low=np.array([0,10,60])
skincolor_high=np.array([180,150,255])
#specify desired bgr color (brown) for the new face
#this value is approximate
desired_color_bgr = (2, 70, 140)
# read face
img_main_face = cv2.imread(path_face)
# face.jpg has by default the BGR format, convert BGR to HSV
hsv=cv2.cvtColor(img_main_face,cv2.COLOR_BGR2HSV)
#create the HSV mask
mask=cv2.inRange(hsv,skincolor_low,skincolor_high)
# Change the image to brown where the mask found skin
img_main_face[mask>0]=desired_color_bgr
cv2.imwrite(result_partial,img_main_face)
#blending block start
#alpha range for blending is 0-1
# load images for blending
src1 = cv2.imread(result_partial)
src2 = cv2.imread(path_face)
if src1 is None:
    print("Error loading src1")
    exit(-1)
elif src2 is None:
    print("Error loading src2")
    exit(-1)
# actually blend_images
result_final = cv2.addWeighted(src1, alpha, src2, 1-alpha, 0.0)
cv2.imwrite(path_result_final, result_final)
#blending block end
I have an aerial image:
I was able to get a binary image of the riverbed (the river part):
After applying a distance transform and some segmentation techniques I was able to get a binary image of the mean riverline:
My question is: how to overlay the white pixels from the riverline so that they're on "top" of the original image?
Here's an example:
This is a very simple way to solve your problem, but it works.
import cv2

original = cv2.imread('original.png')  # original image
mask = cv2.imread('line.png')          # binary mask image

result = original.copy()
for i in range(original.shape[0]):
    for j in range(original.shape[1]):
        result[i, j] = [255, 255, 255] if mask[i, j][0] == 255 else result[i, j]

cv2.imwrite('result.png', result)  # saves modified image to result.png
Result
Let's assume your images are numpy arrays called img and mask. Let's also assume that img has shape (M, N, 3), while mask has shape (M, N). Finally, let's assume that img is of dtype np.uint8 while mask is of type np.bool_. If the last assumption isn't true, start with
mask = mask.astype(bool)
Now you can set your river channel to 255 directly:
img[mask, :] = 255
If img were a single grayscale image without a third dimension, as in your last example, you would just remove the : from the index expression above. In fact, you could write it to work for any number of dimensions with
img[mask, ...] = 255
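Here is a minimal self-contained sketch (with a made-up 4x4 image) showing the same expression working in both cases:
import numpy as np

mask = np.zeros((4, 4), bool)
mask[1:3, 1:3] = True
img_color = np.zeros((4, 4, 3), np.uint8)  # 3-channel image
img_gray = np.zeros((4, 4), np.uint8)      # single-channel image
img_color[mask, ...] = 255  # whitens all three channels of the masked pixels
img_gray[mask, ...] = 255   # the identical expression works on the 2-D image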
Say I have 2 white images (RGB, 800x600) that are 'dirty' at some unknown positions, and I want to create a final combined image that has all the dirty parts of both.
Just adding the images together reduces the 'dirtyness' of each blob, since I halve the pixel values and then add them (to stay in the 0->255 RGB range); this is amplified when you have more than 2 images.
What I want to do is create a mask for all relatively white pixels in the 3 channel image, I've seen that if all RGB values are within 10-15 of each other, a pixel is relatively white. How would I create this mask using numpy?
Pseudo code for what I want to do:
img = cv2.imread(img) #BGR image
mask = np.where( BGR within 10 of each other)
Then I can use the first image, and replace pixels on it where the second picture is not masked, keeping the 'dirtyness level' relatively dirty. (I know some dirtyness of the second image will replace that of the first, but that's okay)
Edit:
People asked for images, so I created some samples. The white won't always be as exactly white as in these samples, which is why I need to use a 'within 10 BGR' range.
Image 1
Image 2
Image 3 (combined, ignore the difference in yellow blob from image 2 to here, they should be the same)
What you asked for is a mask of the pixels in which the distance between the colors is under 10.
Here it is, translated to numpy.
import cv2
import numpy as np

img = cv2.imread(img)  # note: OpenCV loads BGR, but the test below is symmetric, so the channel naming doesn't matter
r = img[:, :, 0].astype(int)  # cast to int so the subtractions can't wrap around uint8
g = img[:, :, 1].astype(int)
b = img[:, :, 2].astype(int)
rg_close = np.abs(r - g) < 10
gb_close = np.abs(g - b) < 10
br_close = np.abs(b - r) < 10
all_close = np.logical_and(np.logical_and(rg_close, gb_close), br_close)
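To then combine two images the way you describe, a sketch (assuming img1 and img2 are the two images and white2 is the all_close mask computed on img2):
# copy the second image's non-white (dirty) pixels onto the first
combined = img1.copy()
combined[~white2] = img2[~white2]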
I do believe, however, that this is not what you REALLY want.
I think what you want is a mask that segments the background.
This is actually simpler, assuming the background is completely white:
img = cv2.imread(img)
# sum the channels as integers so the addition can't wrap around uint8
background_mask = img.astype(int).sum(axis=2) > 245 * 3
Please note this code requires some thresholding games, and only shows the concept.
I would suggest you convert to HSV colourspace and look for saturated (colourful) pixels like this:
import cv2
import numpy as np
# Load background and foreground images
bg = cv2.imread('A.jpg')
fg = cv2.imread('B.jpg')
# Convert to HSV colourspace and extract just the Saturation
Sat = cv2.cvtColor(fg, cv2.COLOR_BGR2HSV)[..., 1]
# Find best (Otsu) threshold to divide black from white, and apply it
_ , mask = cv2.threshold(Sat,0,1,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# At each pixel, choose foreground where mask is set and background elsewhere
res = np.where(mask[...,np.newaxis], fg, bg)
# Save the result
cv2.imwrite('result.png', res)
Note that you can modify this if it picks up too many or too few coloured pixels. If it picks up too few, you could dilate the mask; if it picks up too many, you could erode the mask. You could also blur the image a little before masking, which might not be a bad idea as it is a "nasty" JPEG with compression artefacts in it. You could also change the saturation test to make it more clinical and targeted if you only wanted to allow certain colours through, or a certain brightness, or a combination.
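For instance, a sketch of the dilate/erode adjustment with a small elliptical kernel:
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.dilate(mask, kernel)  # if too few coloured pixels were picked up
mask = cv2.erode(mask, kernel)   # if too many were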