Get brightness value from HSV - Python

How can I determine the brightness value of a photo?
Here's my code; I can't understand how to determine it:
import cv2

def rgb2hsv(img_path):
    img = cv2.imread(img_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    return hsv
Any ideas?

Not sure what you mean by the "brightness value" of an image, but whatever you mean, it is stored in the Value channel (i.e. 3rd channel) of the Hue, Saturation and Value image you have already calculated.
So, if you want a single, mean brightness number for the whole image, you can use:
hsv[...,2].mean()
If you want a single, peak brightness number for the brightest spot in the image:
hsv[...,2].max()
And if you want a greyscale "map" of the brightness at each point of the image, just display or save the 3rd channel:
cv2.imwrite('brightness.png',hsv[...,2])

In HSV, "value" is the brightness of the color and varies with color saturation. It ranges from 0 to 100%. When the value is 0, the color space is totally black; as the value increases, the color space brightens and shows various colors.
So use the OpenCV function that converts an image from one color space to another. In the legacy C API that was:
cvCvtColor(const CvArr* src, CvArr* dst, int code)
with code = CV_BGR2HSV; in the modern Python API it is cv2.cvtColor(img, cv2.COLOR_BGR2HSV).
Then calculate the histogram of the third channel, V, which is the brightness.
Hopefully that helps!

If you need the brightest pixel in the image, use the following code:
import numpy as np
import cv2
img = cv2.imread(img_path)
img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(img_hsv)
bright_pixel = np.amax(v)
print(bright_pixel)
# bright_pixel will give max illumination value in the image

Related

Python Pillow(PIL) Not completely recoloring gradients?

I'm using Pillow to recolor an image that has a lot of soft gradients, but it seems not to completely recolor the most translucent parts of those gradients, so the recolored image has a gradient that is not as smooth. Is there a way to fix this issue? Example images and current code are below.
Original Gradient: https://i.stack.imgur.com/VFi75.png
Recolored Gradient: https://i.stack.imgur.com/e5iNa.png
Here is the Original transparent PNG of the image
import random
import Owl_Attributes
from PIL import Image, ImageColor

# I create the image here and convert the color code to RGBA
RGB_im = image_base_accent3.convert("RGBA")
datas = RGB_im.getdata()

newData = []
for item in datas:
    if item[0] == 208 and item[1] == 231 and item[2] == 161:
        newData.append((255, 0, 0, item[3]))
    else:
        newData.append(item)

RGB_im.putdata(newData)
RGB_im.save('Owl_project_pictures\_final_RGB.png')
First, a couple of things to consider:
Inspect your images before you start work. Yours has an alpha channel that is pretty much pointless and irrelevant, so I would discard it to save space and processing time.
Using for loops over Python lists of pixels is slow, inefficient, and error-prone. Try to use built-in functions backed by C code, or vectorised Numpy operations.
On to your image. There is a whole load of shades and gradations of tone in it, and dealing with each one separately through if statements is going to be difficult. I would suggest you use HSV colourspace instead.
I think you want the basic result to be a very saturated red with the lightness dictated by the lightness of the original image.
So, I would make an image with:
Hue=0 (see lower part of this diagram), and
Saturation=255 (i.e. fully saturated), and
Value (i.e. brightness) of the original image.
In code that might look like this:
#!/usr/bin/env python3
# ImageMagick command-line "equivalent"
# magick -size 599x452 xc:black xc:white \( VFi75.png -colorspace gray +level 0,60% \) +combine HSL result.png
from PIL import Image
# Load image and create HSV version
im = Image.open('VFi75.png')
HSV = im.convert('HSV')
# Split into separate channels for processing, discarding Hue and Saturation
_, _, V = HSV.split()
# Synthesize Hue channel, same size as input image, filled with 0, to make Red
H = Image.new('L', (im.width, im.height), 0)
# Synthesize Saturation channel, same size as input image, filled with 255, to make fully saturated
S = Image.new('L', (im.width, im.height), 255)
# Recombine synthesized H, S and V (based on original image brightness) back into a recombined image
RGB = Image.merge('HSV', (H,S,V)).convert('RGB')
# Save processed result
RGB.save('result.png')
If you wanted to make it lime green, you would change the Hue angle like this:
# Synthesize Hue channel, same size as input image, filled with 120, to make Lime Green
H = Image.new('L', (im.width, im.height), 120)
If you wanted to make it less saturated, you would change the saturation like this:
# Synthesize Saturation channel, same size as input image, filled with 64, to make less saturated
S = Image.new('L', (im.width, im.height), 64)

OpenCV saturation adjustment gives very bad results

I'm experimenting with saturation adjustments in OpenCV. A standard approach to the problem is to convert input image from BGR(A) to HSV colour space and simply adjust the S channel like so:
import cv2
import numpy as np

# 'image' is the BGR input, loaded earlier with cv2.imread()
# Convert from BGR to HSV
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# We want to increase saturation by 50
value = 50
# Grab the saturation channel
saturation = hsv[..., 1]
# Increase saturation by the given value; cv2.add saturates at 255,
# so no extra clipping is needed
saturation = cv2.add(saturation, value)
# Put the adjusted channel back into the HSV image
hsv[..., 1] = saturation
# Convert back from HSV to BGR, keeping the result this time
result = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
Below is the input image I am working with and the result of the above operations. You can clearly see that something is terribly off in the low-frequency areas as well as in the highlights.
Perhaps there is another approach to solving the problem without producing such blockiness?
P.S.
The blockiness is not a result of JPEG compression, as the artefact blocks do not fit into standard JPEG's 8x8 coding units. I've also confirmed the problem persists with lossless PNG as both the input and output format.
Before:
After:
I suggest another approach using a black-and-white version of the picture; it worked best for me:
Blend the picture with its black-and-white version using coefficients that sum to 1. Alpha is the saturation parameter: alpha = 0 gives the b/w picture, and a higher alpha gives a more saturated picture. C++ code example:
addWeighted(image, alpha, black_and_white(image), 1 - alpha, 0.0, res);

How to have a partial grayscale image using Python Pillow (PIL)?

Example:
1st image: the original image.
2nd, 3rd and 4th images: the outputs I want.
I know PIL has the method PIL.ImageOps.grayscale(image), which returns the 4th image, but it doesn't have parameters to produce the 2nd and 3rd ones (partial grayscale).
When you convert an image to greyscale, you are essentially desaturating it to remove saturated colours. So, in order to achieve your desired effect, you probably want to convert to HSV mode, reduce the saturation and convert back to RGB mode.
from PIL import Image
# Open input image
im = Image.open('potato.png')
# Convert to HSV mode and separate the channels
H, S, V = im.convert('HSV').split()
# Halve the saturation - you might consider 2/3 and 1/3 saturation
S = S.point(lambda p: p//2)
# Recombine channels
HSV = Image.merge('HSV', (H,S,V))
# Convert to RGB and save
result = HSV.convert('RGB')
result.save('result.png')
If you prefer to do your image processing in Numpy rather than PIL, you can achieve the same result as above with this code:
from PIL import Image
import numpy as np
# Open input image
im = Image.open('potato.png')
# Convert to HSV and go to Numpy
HSV = np.array(im.convert('HSV'))
# Halve the saturation with Numpy. Hue will be channel 0, Saturation is channel 1, Value is channel 2
HSV[..., 1] = HSV[..., 1] // 2
# Go back to "PIL Image", go back to RGB and save
Image.fromarray(HSV, mode="HSV").convert('RGB').save('result.png')
Of course, set the entire Saturation channel to zero for full greyscale.
from PIL import ImageEnhance
# value: float between 0.0 (grayscale) and 1.0 (original)
ImageEnhance.Color(image).enhance(value)
P.S.: Mark's solution works, but it seems to be increasing the exposure.

Get pixel location of binary image with intensity 255 in python opencv

I want to get the pixel coordinates of the blue dots in an image.
To get them, I first converted the image to grayscale and used the threshold function.
import numpy as np
import cv2
img = cv2.imread("dot.jpg")
img_g = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret1,th1 = cv2.threshold(img_g,127,255,cv2.THRESH_BINARY_INV)
What to do next if I want to get the pixel location with intensity 255? Please tell if there is some simpler method to do the same.
I don't think this is going to work as you expect.
Usually, in order to get stable tracking of a shape with a specific color, you do that in the RGB/HSV/HSL plane; you could start with HSV, which is more robust in terms of lighting.
1. Convert to HSV using cv2.cvtColor().
2. Use cv2.inRange(blue_lower, blue_upper) to filter out all unwanted colors. Now you have a good-looking binary image with only blue in it (assuming you have a static background; otherwise more filters should be added).
3. To detect the dots (which are usually more than one pixel each), you could try cv2.findContours.
4. You can get the x, y pixels of the contours using many methods (depending on the shape of what you want to detect), e.g. cv2.boundingRect().

How to apply brightness from one image onto another in Python

I'm trying to make a texture using an image with 3 colors, and a Perlin noise grayscale image.
This is the original image:
This is the grayscale Perlin noise image:
What I need to do is apply the original image's brightness to the grayscale image, such that darkest and lightest brightness in the Perlin noise image is no longer 100% black (0) and 100% white (1), but taken from the original image. Then, apply the new mapping of brightness from the grayscale Perlin noise image back to the original image.
This is what I tried:
from PIL import Image
alpha = 0.5
im = Image.open(filename1).convert("RGBA")
new_img = Image.open(filename2).convert("RGBA")
new_img = Image.blend(im, new_img, alpha)
new_img.save("foo.png","PNG")
And this is the output that I get:
Which is wrong, but imagine the dark and light orange and bright color having the same gradient as the grayscale image, BUT with no 100% black or 100% white.
I believe I need to:
1. Convert the original image to HSV (properly; I've tried a few functions from colorsys and matplotlib and they give me weird numbers).
2. Get the highest and lowest V values from the original image.
3. Convert the grayscale image to HSV.
4. Transform or normalize (I think that's what it's called) the grayscale HSV using the V values from the original HSV image.
5. Remap all the original V values to the new transformed/normalized grayscale V values.
🤕 Why is it not working?
The approach that you are using will not work as expected because instead of keeping color and saturation information from one image and taking the other image's lightness information (totally or partially), you are just interpolating all the channels from both images at the same time, based on a constant alpha, as stated on the docs:
PIL.Image.blend(im1, im2, alpha)
Creates a new image by interpolating between two input images, using a constant alpha: out = image1 * (1.0 - alpha) + image2 * alpha
[...]
alpha – The interpolation alpha factor. If alpha is 0.0, a copy of the first image is returned. If alpha is 1.0, a copy of the second image is returned. There are no restrictions on the alpha value. If necessary, the result is clipped to fit into the allowed output range.
🔨 Basic working example
First, let's get a basic example working. I'm going to use cv2 instead of PIL, just because I'm more familiar with it and I already have it installed on my machine.
I will also use HSL (HLS in cv2) instead of HSV, as I think that will produce an output that is closer to what you might be looking for.
import cv2
filename1 = './f1.png'
filename2 = './f2.png'
# Load both images and convert them from BGR to HLS:
img1 = cv2.cvtColor(cv2.imread(filename1, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HLS)
img2 = cv2.cvtColor(cv2.imread(filename2, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HLS)
# Copy img1, the one with relevant color and saturation information:
texture = img1.copy()
# Replace its lightness information with the one from img2:
texture[:,:,1] = img2[:,:,1]
# Convert the image back from HLS to BGR and save it:
cv2.imwrite('./texture.png', cv2.cvtColor(texture, cv2.COLOR_HLS2BGR))
This is the final output:
🎛️ Adjust lightness
Ok, so we have a simple case working, but you might not want to replace img1's lightness with img2's completely, so in that case just replace this line:
texture[:,:,1] = img2[:,:,1]
With these two:
alpha = 0.25
texture[:,:,1] = alpha * img1[:,:,1] + (1.0 - alpha) * img2[:,:,1]
Now, you will retain 25% lightness from img1 and 75% from img2, and you can adjust it as needed.
For alpha = 0.25, the output will look like this:
🌈 HSL & HSV
Although HSL and HSV look quite similar, there are a few differences, mainly regarding how they represent pure white and light colors, that would make this script generate slightly different images when using one or the other:
We just need to change a couple of things to make it work with HSV:
import cv2
filename1 = './f1.png'
filename2 = './f2.png'
# Load both images and convert them from BGR to HSV:
img1 = cv2.cvtColor(cv2.imread(filename1, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HSV)
img2 = cv2.cvtColor(cv2.imread(filename2, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HSV)
# Copy img1, the one with relevant color and saturation information:
texture = img1.copy()
# Merge img1 and img2's value channel:
alpha = 0.25
texture[:,:,2] = alpha * img1[:,:,2] + (1.0 - alpha) * img2[:,:,2]
# Convert the image back from HSV to BGR and save it:
cv2.imwrite('./texture.png', cv2.cvtColor(texture, cv2.COLOR_HSV2BGR))
This is what the first example looks like when using HSV:
And this is the second example (with alpha = 0.25):
You can see the most noticeable differences are in the lightest areas.
