I'm trying to make a texture using an image with 3 colors, and a Perlin noise grayscale image.
This is the original image:
This is the grayscale Perlin noise image:
What I need to do is apply the original image's brightness range to the grayscale image, so that the darkest and lightest values in the Perlin noise image are no longer 100% black (0) and 100% white (1), but are instead taken from the original image. Then, apply that new brightness mapping from the grayscale Perlin noise image back to the original image.
This is what I tried:
from PIL import Image
filename1 = 'original.png'  # placeholder: the 3-colour image
filename2 = 'noise.png'     # placeholder: the grayscale Perlin noise image
alpha = 0.5
im = Image.open(filename1).convert("RGBA")
new_img = Image.open(filename2).convert("RGBA")
new_img = Image.blend(im, new_img, alpha)
new_img.save("foo.png", "PNG")
And this is the output that I get:
This is wrong: imagine instead the dark orange, light orange, and bright colours following the same gradient as the grayscale image, BUT with no 100% black or 100% white.
I believe I need to:
1. Convert the original image to HSV (properly; I've tried a few functions from colorsys and matplotlib and they give me weird numbers).
2. Get the highest and lowest V values from the original image.
3. Convert the grayscale image to HSV.
4. Transform or normalize (I think that's what it's called) the grayscale HSV using the V values from the original HSV image.
5. Remap all the original V values with the new transformed/normalized grayscale V values (see the sketch below).
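Something like this rough, untested sketch is what I have in mind for those steps (filenames are placeholders):
from PIL import Image
import numpy as np

# Convert both images to HSV arrays (steps 1 and 3)
hsv1 = np.array(Image.open('original.png').convert('RGB').convert('HSV'), dtype=np.float32)
hsv2 = np.array(Image.open('noise.png').convert('RGB').convert('HSV'), dtype=np.float32)
v1, v2 = hsv1[:, :, 2], hsv2[:, :, 2]
# Normalize the noise's V channel into the [min, max] V range of the original (steps 2 and 4)
v2 = v1.min() + (v2 - v2.min()) / (v2.max() - v2.min()) * (v1.max() - v1.min())
# Remap the original's V values with the normalized ones (step 5)
hsv1[:, :, 2] = v2
Image.fromarray(hsv1.astype(np.uint8), mode='HSV').convert('RGB').save('out.png')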
🤕 Why is it not working?
The approach you are using will not work as expected because, instead of keeping the colour and saturation information from one image and taking the other image's lightness information (totally or partially), you are interpolating all the channels of both images at the same time, based on a constant alpha, as stated in the docs:
PIL.Image.blend(im1, im2, alpha)
Creates a new image by interpolating between two input images, using a constant alpha: out = image1 * (1.0 - alpha) + image2 * alpha
[...]
alpha – The interpolation alpha factor. If alpha is 0.0, a copy of the first image is returned. If alpha is 1.0, a copy of the second image is returned. There are no restrictions on the alpha value. If necessary, the result is clipped to fit into the allowed output range.
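To see the problem in miniature, blending a pure red pixel with a pure blue one interpolates every channel, producing a hue that belongs to neither image:
from PIL import Image
red = Image.new('RGB', (1, 1), (255, 0, 0))
blue = Image.new('RGB', (1, 1), (0, 0, 255))
# All channels are interpolated, so the result is purple:
print(Image.blend(red, blue, 0.5).getpixel((0, 0)))  # roughly (128, 0, 128)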
🔨 Basic working example
First, let's get a basic example working. I'm going to use cv2 instead of PIL, just because I'm more familiar with it and I already have it installed on my machine.
I will also use HSL (HLS in cv2) instead of HSV, as I think that will produce an output that is closer to what you might be looking for.
import cv2
filename1 = './f1.png'
filename2 = './f2.png'
# Load both images and convert them from BGR to HLS:
img1 = cv2.cvtColor(cv2.imread(filename1, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HLS)
img2 = cv2.cvtColor(cv2.imread(filename2, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HLS)
# Copy img1, the one with relevant color and saturation information:
texture = img1.copy()
# Replace its lightness information with the one from img2:
texture[:,:,1] = img2[:,:,1]
# Convert the image back from HLS to BGR and save it:
cv2.imwrite('./texture.png', cv2.cvtColor(texture, cv2.COLOR_HLS2BGR))
This is the final output:
🎛️ Adjust lightness
Ok, so we have a simple case working, but you might not want to replace img1's lightness with img2's completely. In that case, just replace this line:
texture[:,:,1] = img2[:,:,1]
With these two:
alpha = 0.25
texture[:,:,1] = alpha * img1[:,:,1] + (1.0 - alpha) * img2[:,:,1]
Now, you will retain 25% of the lightness from img1 and 75% from img2, and you can adjust alpha as needed.
For alpha = 0.25, the output will look like this:
🌈 HSL & HSV
Although HSL and HSV look quite similar, there are a few differences, mainly in how they represent pure white and light colours, that will make this script generate slightly different images when using one or the other.
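For instance, a fully saturated red sits at 50% lightness in HSL but at 100% value in HSV, which is why the lightest areas come out differently. A quick check (values approximate):
import cv2
import numpy as np
red = np.zeros((1, 1, 3), dtype=np.uint8)
red[..., 2] = 255  # BGR red
print(cv2.cvtColor(red, cv2.COLOR_BGR2HLS))  # ~[[[0 128 255]]] -> 50% lightness
print(cv2.cvtColor(red, cv2.COLOR_BGR2HSV))  # [[[0 255 255]]] -> 100% value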
We just need to change a couple of things to make it work with HSV:
import cv2
filename1 = './f1.png'
filename2 = './f2.png'
# Load both images and convert them from BGR to HSV:
img1 = cv2.cvtColor(cv2.imread(filename1, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HSV)
img2 = cv2.cvtColor(cv2.imread(filename2, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HSV)
# Copy img1, the one with relevant color and saturation information:
texture = img1.copy()
# Merge img1 and img2's value channel:
alpha = 0.25
texture[:,:,2] = alpha * img1[:,:,2] + (1.0 - alpha) * img2[:,:,2]
# Convert the image back from HSV to BGR and save it:
cv2.imwrite('./texture.png', cv2.cvtColor(texture, cv2.COLOR_HSV2BGR))
This is how the first example looks when using HSV:
And this is the second example (with alpha = 0.25):
You can see the most noticeable differences are in the lightest areas.
Related
I am having an issue where I'm using Pillow to recolor an image that has a lot of soft gradients, but it seems not to completely color the most translucent parts of these gradients, leaving the recolored image with a gradient that is not as smooth. Is there a way to fix this issue? Example images and current code are below.
Original Gradient: https://i.stack.imgur.com/VFi75.png
Recolored Gradient: https://i.stack.imgur.com/e5iNa.png
Here is the original transparent PNG of the image:
import random
import Owl_Attributes
from PIL import Image, ImageColor
# I create the image here and convert the color code to RGBA
RGB_im = image_base_accent3.convert("RGBA")
datas = RGB_im.getdata()
newData = []
for item in datas:
    if item[0] == 208 and item[1] == 231 and item[2] == 161:
        newData.append((255, 0, 0, item[3]))
    else:
        newData.append(item)
RGB_im.putdata(newData)
RGB_im.save('Owl_project_pictures/_final_RGB.png')
First, a couple of things to consider:
Inspect your images before you start work. Yours has an alpha channel that is pretty much pointless and irrelevant so I would discard that to save space and processing time.
Using for loops over Python lists of pixels is slow, inefficient, and error-prone. Try to use built-in functions based on C code, or vectorised functions like Numpy.
On to your image. There are a whole load of shades and gradations of tone in your image, and dealing with each one separately through if statements is going to be difficult. I would suggest you use HSV colourspace instead.
I think you want the basic result to be a very saturated red with the lightness dictated by the lightness of the original image.
So, I would make an image with:
Hue=0 (i.e. red), and
Saturation=255 (i.e. fully saturated), and
Value (i.e. brightness) of the original image.
In code that might look like this:
#!/usr/bin/env python3
# ImageMagick command-line "equivalent"
# magick -size 599x452 xc:black xc:white \( VFi75.png -colorspace gray +level 0,60% \) +combine HSL result.png
from PIL import Image
# Load image and create HSV version
im = Image.open('VFi75.png')
HSV = im.convert('HSV')
# Split into separate channels for processing, discarding Hue and Saturation
_, _, V = HSV.split()
# Synthesize Hue channel, same size as input image, filled with 0, to make Red
H = Image.new('L', (im.width, im.height), 0)
# Synthesize Saturation channel, same size as input image, filled with 255, to make fully saturated
S = Image.new('L', (im.width, im.height), 255)
# Recombine synthesized H, S and V (based on original image brightness) back into a recombined image
RGB = Image.merge('HSV', (H,S,V)).convert('RGB')
# Save processed result
RGB.save('result.png')
If you wanted to make it lime green, you would change the Hue angle like this:
# Synthesize Hue channel, same size as input image, filled with 120, to make Lime Green
H = Image.new('L', (im.width, im.height), 120)
If you wanted to make it less saturated, you would change the saturation like this:
# Synthesize Saturation channel, same size as input image, filled with 64, to make less saturated
S = Image.new('L', (im.width, im.height), 64)
The aim is to take a coloured image, and change any pixels within a certain luminosity range to black. For example, if luminosity is the average of a pixel's RGB values, any pixel with a value under 50 is changed to black.
I've attempted to begin by using PIL and converting to grayscale, but I'm having trouble finding a solution that can identify a pixel's luminosity value and use that info to manipulate a pixel map.
There are many ways to do this, but the simplest and probably fastest is with Numpy, which you should get accustomed to using with image processing in Python:
from PIL import Image
import numpy as np
# Load image and ensure RGB, not palette image
im = Image.open('start.png').convert('RGB')
# Make into Numpy array
na = np.array(im)
# Make all pixels of "na" where the mean of the R,G,B channels is less than 50 into black (0)
na[np.mean(na, axis=-1)<50] = 0
# Convert back to PIL Image to save or display
result = Image.fromarray(na)
result.show()
That turns this:
Into this:
Another slightly different way would be to convert the image to a more conventional greyscale, rather than averaging for the luminosity:
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Calculate greyscale version
grey = im.convert('L')
# Point process over pixels to make mask of darker ones
mask = grey.point(lambda p: 255 if p<50 else 0)
# Paste black (i.e. 0) into image where mask indicates it is dark
im.paste(0, mask=mask)
Notice that the blue channel is given considerably less significance in the ITU-R 601-2 luma transform that PIL uses (a weighting of 114 for Blue versus 299 for Red and 587 for Green) in the formula:
L = R * 299/1000 + G * 587/1000 + B * 114/1000
so the blue shades are considered darker and become black.
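A quick check with single-pixel images shows the effect; pure blue converts to a grey level well below the threshold of 50, while pure red stays above it:
from PIL import Image
print(Image.new('RGB', (1, 1), (0, 0, 255)).convert('L').getpixel((0, 0)))  # 29
print(Image.new('RGB', (1, 1), (255, 0, 0)).convert('L').getpixel((0, 0)))  # 76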
Another way would be to make a greyscale and a mask as above, but then choose the darker pixel at each location when comparing the original and the mask:
from PIL import Image, ImageChops
im = Image.open('start.png').convert('RGB')
grey = im.convert('L')
mask = grey.point(lambda p: 0 if p<50 else 255)
res = ImageChops.darker(im, mask.convert('RGB'))
That gives the same result as above.
Another way, pure PIL and probably closest to what you actually asked, would be to derive a luminosity value by averaging the channels:
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Calculate greyscale version by averaging R,G and B
grey = im.convert('L', matrix=(0.333, 0.333, 0.333, 0))
# Point process over pixels to make mask of darker ones
mask = grey.point(lambda p: 255 if p<50 else 0)
# Paste black (i.e. 0) into image where mask indicates it is dark
im.paste(0, mask=mask)
Another approach could be to split the image into its constituent RGB channels, evaluate a mathematical function over the channels and mask with the result:
from PIL import Image, ImageMath
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Split into RGB channels
(R, G, B) = im.split()
# Evaluate mathematical function over channels; ImageMath returns an 'I' (32-bit)
# image, so convert it to 'L' so it can be used as a paste mask
dark = ImageMath.eval('(((R+G+B)/3) <= 50) * 255', R=R, G=G, B=B).convert('L')
# Paste black (i.e. 0) into image where mask indicates it is dark
im.paste(0, mask=dark)
I created a function that returns a 2-D list with True where a pixel's luminosity is below a threshold parameter, and False elsewhere. It works for both RGB and RGBA images (any alpha channel is ignored when averaging).
def get_avg_lum(pic, avg=50, RGBA=False):
    # Build a 2-D list of booleans, one per pixel
    li = [[False for y in range(pic.size[1])] for x in range(pic.size[0])]
    for x in range(pic.size[0]):
        for y in range(pic.size[1]):
            # Average the first three channels (R, G, B); the slice also
            # skips the alpha channel when RGBA=True
            li[x][y] = sum(pic.getpixel((x, y))[:3]) / 3 < avg
    return li

a = get_avg_lum(im)
The pixels match in the list, so (0,10) on the image is [0][10] in the list.
Hopefully this helps. The function works on standard PIL image objects.
I'm experimenting with saturation adjustments in OpenCV. A standard approach to the problem is to convert input image from BGR(A) to HSV colour space and simply adjust the S channel like so:
import cv2
import numpy as np

image = cv2.imread('input.png')  # placeholder filename
# Convert from BGR to HSV
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# We want to increase saturation by 50
value = 50
# Grab saturation channel
saturation = hsv[..., 1]
# Increase saturation by a given value (cv2.add already saturates at 255)
saturation = cv2.add(saturation, value)
# Clip resulting values to fit within 0 - 255 range (np.clip returns a new
# array, so the result has to be assigned; it is redundant after cv2.add)
saturation = np.clip(saturation, 0, 255)
# Put back adjusted channel into the HSV image
hsv[..., 1] = saturation
# Convert back from HSV to BGR
result = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
Below is the input image I am working with and the result of the above operations. You can clearly see that something is terribly off with low-frequency areas as well as with highlights.
Perhaps there is another approach to solve the problem without producing such blockiness?
P.S.
The blockiness is not a result of JPEG compression, as the artefact blocks do not fit into standard JPEG's 8x8 coding units. Also, I've confirmed the problem persists when using lossless PNG as both the input and output format.
Before:
After:
I suggest another approach using a black-and-white version of the picture; it worked best for me:
Just blend the picture with its b/w version using coefficients that sum to 1. Alpha is the saturation parameter: alpha = 0 gives the b/w picture, and a higher alpha gives a more saturated picture. C++ code example:
addWeighted(image, alpha, black_and_white(image), 1 - alpha, 0.0, res);
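In Python, the same idea might look like this (a sketch; the filenames are placeholders):
import cv2
img = cv2.imread('input.png')
# Grey version, expanded back to 3 channels so the shapes match
grey = cv2.cvtColor(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), cv2.COLOR_GRAY2BGR)
alpha = 1.5  # 0 -> greyscale, 1 -> original, >1 -> boosted saturation
res = cv2.addWeighted(img, alpha, grey, 1 - alpha, 0.0)
cv2.imwrite('saturated.png', res)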
How can I determine the brightness value of a photo?
Here's my code, I can not understand how to determine it:
import cv2

def rgb2hsv(img_path):
    img = cv2.imread(img_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    return hsv
Any ideas?
Not sure what you mean by the "brightness value" of an image, but whatever you mean, it is stored in the Value channel (i.e. 3rd channel) of the Hue, Saturation and Value image you have already calculated.
So, if you want a single, mean brightness number for the whole image, you can use:
hsv[...,2].mean()
If you want a single, peak brightness number for the brightest spot in the image:
hsv[...,2].max()
And if you want a greyscale "map" of the brightness at each point of the image, just display or save the 3rd channel:
cv2.imwrite('brightness.png',hsv[...,2])
In HSV, 'value' is the brightness of the color and varies with color saturation. It ranges from 0 to 100%. When the value is 0, the color space is totally black. As the value increases, the color space brightens and shows various colors.
So use the OpenCV method:
cv2.cvtColor(src, code)
which converts an image from one color space to another. You may use:
code = cv2.COLOR_BGR2HSV
Then calculate the histogram of the third channel, V, which is the brightness.
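In Python that could look like this (a sketch; the filename is a placeholder):
import cv2
img = cv2.imread('photo.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# 256-bin histogram of the V (brightness) channel
hist = cv2.calcHist([hsv], [2], None, [256], [0, 256])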
Hopefully it helps you!
If you need the brightest pixel in the image, use the following code:
import numpy as np
import cv2
img = cv2.imread(img_path)
img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(img_hsv)
bright_pixel = np.amax(v)
print(bright_pixel)
# bright_pixel will give max illumination value in the image
In the program given below, I am adding an alpha channel to a 3-channel image to control its opacity. But no matter what value of alpha channel I give, there is no effect on the image! Could anyone explain why?
import numpy as np
import cv2
image = cv2.imread('image.jpg')
print(image)
b_channel, g_channel, r_channel = cv2.split(image)
a_channel = np.ones(b_channel.shape, dtype=b_channel.dtype) * 10
image = cv2.merge((b_channel, g_channel, r_channel, a_channel))
print(image)
cv2.imshow('img',image)
cv2.waitKey(0)
cv2.destroyAllWindows()
I can see in the terminal that the alpha channel is added, and its value changes as I change it in the program, but there is no effect on the opacity of the image itself!
I am new to OpenCV, so I might be missing something simple. Thanks for the help!
Alpha is a channel that is used to control the opacity of an image. An alpha channel typically doesn't do anything unless you perform an action on it. It doesn't make an image transparent on its own.
Alpha is usually used either to remove unimportant areas of an image or to combine one image with another. In the first case the image is usually simply multiplied by its alpha. This is sometimes referred to as premultiplying. In this case the dark areas of the alpha channel darken the RGB and the bright areas leave the RGB untouched.
R = R*A
G = G*A
B = B*A
Here is a version of your code that might do what you want (note: I converted to 32-bit floats because it's easier to use alpha channels when they range from 0 to 1):
import numpy as np
import cv2
i = cv2.imread('image.jpg')
img = np.array(i, dtype=np.float32)
img /= 255.0
cv2.imshow('img', img)
cv2.waitKey(0)
# Pre-multiplication: scale the RGB values by a constant 0.5 alpha
a_channel = np.ones(img.shape, dtype=np.float32) / 2.0
image = img * a_channel
cv2.imshow('img', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
The second case is used when trying to overlay one image over another. This is a compositing operation that is often referred to as an "over" merge or a "blend" merge. In this case there is a foreground image "A", a background image "B", and an alpha channel, which can be included in the RGB images or kept on its own. You can place A over B using:
output = (A * alpha) + (B * (1-alpha))
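As a sketch in numpy (assuming float images in the 0-1 range and a broadcastable alpha):
import numpy as np

def over(A, B, alpha):
    # "Over" merge: foreground A weighted by alpha, background B by (1 - alpha)
    return A * alpha + B * (1.0 - alpha)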
Actually, the answer is simple. OpenCV's imshow() function ignores the alpha channel.
If you want to see the effect of your alpha channel, save your image in PNG format (because that supports an alpha channel) and display it in a different viewer.
I also wrote a decorator/enhancement for imshow() here that helps visualise transparent images.