How to control image contrast based on HSV/RGB values - python

I was wondering if it is possible to modify the contrast of an image by modifying its RGB, HSV (or similar) values.
I am currently doing the following to mess with luminance, saturation and hue (in python):
import numpy as np
from PIL import Image as img
import colorsys as cs

# Fix the colorsys rgb_to_hsv function:
# cs.rgb_to_hsv only accepts scalar values, so np.vectorize is used here
# to make it operate element-wise on n-dimensional arrays
rgb_to_hsv = np.vectorize(cs.rgb_to_hsv)
hsv_to_rgb = np.vectorize(cs.hsv_to_rgb)

def luminance_edit(a, h, s, new_v):
    # Edits V (luminance):
    # recomputes RGB based on the new luminance value
    r, g, b = hsv_to_rgb(h, s, new_v)
    # Merges the R, G, B, A values to form a new array
    arr = np.dstack((r, g, b, a))
    return arr
I have a separate function to handle converting to and from RGB and HSV. Here a is the alpha channel, h is the hue, s is the saturation, and new_v is the new V value (luminance).
Is it possible to edit contrast based on these values, or am I missing something?
Edit:
I have a separate function that imports images, extracts the RGBA values, and converts them into HSL/HSV. Let's call this function x.
In the code provided (function y), we take the hue (h), saturation (s), luminance (v) and the alpha channel (a) - the HSL values of some image, as provided by function x.
The code edits the V value, i.e. the luminance. It does not actually edit the contrast; it's just an example of what I'm aiming to achieve. Using the above data (HSL/HSV/RGB) or similar, I was wondering if it is possible to edit the contrast of an image.

I find it very hard to understand what you are trying to do in your question, so here is a "stab in the dark": I am assuming you want to increase the contrast of an image without changing its colours.
You are correct in going from RGB to HSL/HSV colourspace so that you can adjust luminance without affecting saturation and hue. So, I have basically taken the Luminance channel of a sombre image and normalised it so that the luminance now spans the entire brightness range from 0..255, and put it back into the image. I started with this image:
And ended up with this one:
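For reference, here is a minimal sketch of that normalisation using PIL, assuming an RGB input file (the filenames below are hypothetical):
import numpy as np
from PIL import Image

# Convert to HSV so luminance can be stretched independently of colour
im = Image.open('sombre.jpg').convert('HSV')
H, S, V = im.split()

# Normalise V so the darkest pixel maps to 0 and the brightest to 255
# (assumes the image is not a single flat tone, i.e. max > min)
v = np.asarray(V, dtype=np.float32)
v = (v - v.min()) * (255.0 / (v.max() - v.min()))
V = Image.fromarray(v.astype(np.uint8))

# Recombine the channels and convert back to RGB
Image.merge('HSV', (H, S, V)).convert('RGB').save('normalised.jpg')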

Related

How to edit pixels via PIL with a 1D array [0:255]

Using the following code, PIL easily returns a list of single pixel values from an image. I'm not sure what the term for it is, but instead of a 3D array (RGB), it simplifies each pixel into one of 256 values.
from PIL import Image
im = Image.open(image_path, 'r')
pixel_values = list(im.getdata())
The question is, how can I edit pixels on an image with this same method? I believe putpixel by default expects an RGB tuple, and if I only give one value it only ranges over shades of black.
im.putpixel((x, y), value)
im.show()
I would like to be able to substitute integers (0-255) in for value and have access to the wider spectrum of discrete colors.
Is this possible? Seems like it should already be a built in method.
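One possible approach (a sketch for illustration, not from the original thread) is PIL's palette mode 'P', where each pixel is a single integer 0-255 that indexes into an RGB palette you define yourself:
from PIL import Image

# Create a palette-mode image: each pixel stores one integer 0-255
im = Image.new('P', (16, 16))

# Build a 256-entry palette as a flat list of R,G,B triples;
# this red-to-blue ramp is just an arbitrary example
palette = []
for i in range(256):
    palette.extend([i, 0, 255 - i])
im.putpalette(palette)

# A single integer now selects a full colour rather than a shade of gray
im.putpixel((0, 0), 200)
im.convert('RGB').show()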

How to convert RGB to grayscale without using numpy, scipy, opencv or other image processing packages?

If the problem I am given is a nested tuple of RGB pixels, how do I convert that to grayscale and return a tuple with the grayscale pixel values? This should all be within one function.
Thanks
I honestly have nowhere to start since I am a beginner programmer, so I would appreciate any help.
Assuming the output entries are integers in the range [0, 255] and your initial tuple is named image:
from statistics import mean
gray_image = tuple(int(mean(pixel)) for pixel in image)
or, more beginner-friendly (and assuming the pixels are plain Python integers, not uint8):
gray_image = []  # create empty list
for pixel in image:
    R, G, B = pixel  # get rgb values
    gray_pixel = (R + G + B) // 3  # averaging to get gray
    gray_image.append(gray_pixel)
gray_image = tuple(gray_image)  # turn list to tuple
It's simple averaging, but if you need something more technical, please take a look at this answer.
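For instance, the usual "more technical" approach weights the channels by perceived brightness (the ITU-R BT.601 luma coefficients) instead of averaging them equally - a one-line sketch:
# Weighted (luma) grayscale using the ITU-R BT.601 coefficients
gray_image = tuple(int(0.299 * R + 0.587 * G + 0.114 * B) for R, G, B in image)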

Small change in colors during JPEG compression

It looks like the default library under Ubuntu changes the colors a bit during compression. I tried setting quality and subsampling but I see no improvement. Has anyone ever faced a similar issue?
# subsampling = 0, quality = 100
from PIL import Image

# CORRECT COLORS FROM NPARRAY
cv2.imshow("Object cam:{}".format(self.camera_id), self.out)
print(self.out.item(1, 1, 0))  # B
print(self.out.item(1, 1, 1))  # G
print(self.out.item(1, 1, 2))  # R

self.out = cv2.cvtColor(self.out, cv2.COLOR_BGR2RGB)
im = Image.fromarray(self.out)
r, g, b = im.getpixel((1, 1))
# just printing the pixel values - they match
print(r, g, b)

# WRONG COLORS
im.save(self.out_ramdisk_img, format='JPEG', subsampling=0, quality=100)
JPEG image should have the same colors as in imshow, but it's a bit more purple.
That is a natural result of JPEG compression. JPEG uses floating point arithmetic to calculate integer pixel values. This occurs in several stages of JPEG compression. Thus, small pixel value changes are expected.
When you have blanket changes in color, they are usually the result of input color values that are outside the gamut of the YCbCr color space. Such values get clamped.
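To see this for yourself, here is a small sketch (filenames hypothetical) that round-trips a single colour through JPEG at maximum quality and prints the drift. If exact colours matter, a lossless format such as PNG avoids the problem entirely:
import numpy as np
from PIL import Image

# A flat purple-ish RGB image
arr = np.full((64, 64, 3), (180, 50, 200), dtype=np.uint8)
Image.fromarray(arr).save('probe.jpg', format='JPEG', subsampling=0, quality=100)

# Read it back: small per-pixel differences are expected
back = np.asarray(Image.open('probe.jpg'))
print(arr[0, 0], '->', back[0, 0])

# Lossless alternative: colours survive exactly
Image.fromarray(arr).save('probe.png')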

Calculate color visibility from hue values respecting saturation and value in image numpy array

For a fun project I want to analyze a few images, especially which colors (hues) are more visible than others. As I want to take the "visibility" of the colors into account, just counting the hues of the pixels is not enough (e.g. perfect black would count as red, since its hue is 0°). I came up with a formula which is IMO good enough for my project.
Currently I do the following:
Read the image with opencv (results in BGR numpy array)
Translate the image to HSV
For each pixel, calculate the visibility of its hue (from saturation and value) and sum it in a dict of hues.
The formula is color_visibility = sqrt(saturation * value). So a full red RGB=255,0,0; HSV=0,1,1 would result in 1, while e.g. a light red RGB=255,128,128; HSV=0,0.5,1 would result in 0.70.
Here is the (full working) code I use:
import urllib.request
import cv2
import numpy as np

url = 'https://upload.wikimedia.org/wikipedia/commons/thumb/0/02/Leuchtturm_in_Westerheversand_crop.jpg/299px-Leuchtturm_in_Westerheversand_crop.jpg'
image = np.asarray(bytearray(urllib.request.urlopen(url).read()), dtype="uint8")
image = cv2.imdecode(image, cv2.IMREAD_COLOR)

d = {}
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
pixels = hsv.reshape((hsv.shape[0] * hsv.shape[1], 3))
for h, s, v in pixels:
    d[h] = d.get(h, 0.) + (s / 255. * v / 255.) ** 0.5
As you might guess, the code gets really slow when the image has more pixels.
My question is, how can I do the calculation of my formula without the dict and for-loop? Maybe directly with numpy?
The magic you are looking for is in np.bincount, as it translates the loopy version pretty straightforwardly, using the h values as the bins:
H, S, V = pixels.T
d_arr = np.bincount(H, ((S / 255.0) * (V / 255.0)) ** 0.5)
Note that the resulting array might contain elements with zero-valued counts.
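Continuing from the snippet above, one detail worth knowing: with 8-bit images OpenCV stores hue as 0-179, so passing minlength=180 guarantees one bin per possible hue. The result can then be ranked directly, e.g.:
H, S, V = pixels.T
d_arr = np.bincount(H, ((S / 255.0) * (V / 255.0)) ** 0.5, minlength=180)

# Print the ten most "visible" hues, best first
for hue in np.argsort(d_arr)[::-1][:10]:
    print(hue, d_arr[hue])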

rgb to yuv conversion and accessing Y, U and V channels

I have been looking into this conversion for a while. What are the ways of converting an RGB image to a YUV image and accessing the Y, U and V channels using Python on Linux (using opencv, skimage, etc.)?
Update:
I used opencv
img_yuv = cv2.cvtColor(image, cv2.COLOR_BGR2YUV)
y, u, v = cv2.split(img_yuv)
cv2.imshow('y', y)
cv2.imshow('u', u)
cv2.imshow('v', v)
cv2.waitKey(0)
and got this result, but the channels all seem gray. I couldn't get a result like the one shown on the Wikipedia page.
Am I doing something wrong?
NB: The YUV <-> RGB conversions in OpenCV versions prior to 3.2.0 are buggy! For one, in many cases the order of the U and V channels was swapped. As far as I can tell, 2.x is still broken as of the 2.4.13.2 release.
The reason they appear grayscale is that in splitting the 3-channel YUV image you created three 1-channel images. Since the data structures that contain the pixels do not store any information about what the values represent, imshow treats any 1-channel image as grayscale for display. Similarly, it would treat any 3-channel image as BGR.
What you see in the Wikipedia example is a false color rendering of the chrominance channels. In order to achieve this, you need to either apply a pre-defined colormap or use a custom look-up table (LUT). This will map the U and V values to appropriate BGR values which can then be displayed.
As it turns out, the colormaps used for the Wikipedia example are rather simple.
Colormap for U channel
Simple progression between green and blue:
colormap_u = np.array([[[i,255-i,0] for i in range(256)]],dtype=np.uint8)
Colormap for V channel
Simple progression between green and red:
colormap_v = np.array([[[0,255-i,i] for i in range(256)]],dtype=np.uint8)
Visualizing YUV Like the Example
Now, we can put it all together, to recreate the example:
import cv2
import numpy as np

def make_lut_u():
    return np.array([[[i, 255 - i, 0] for i in range(256)]], dtype=np.uint8)

def make_lut_v():
    return np.array([[[0, 255 - i, i] for i in range(256)]], dtype=np.uint8)

img = cv2.imread('shed.png')
img_yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
y, u, v = cv2.split(img_yuv)

lut_u, lut_v = make_lut_u(), make_lut_v()

# Convert back to BGR so we can apply the LUT and stack the images
y = cv2.cvtColor(y, cv2.COLOR_GRAY2BGR)
u = cv2.cvtColor(u, cv2.COLOR_GRAY2BGR)
v = cv2.cvtColor(v, cv2.COLOR_GRAY2BGR)

u_mapped = cv2.LUT(u, lut_u)
v_mapped = cv2.LUT(v, lut_v)

result = np.vstack([img, y, u_mapped, v_mapped])
cv2.imwrite('shed_combo.png', result)
Result:
Using the LUT values as described might be exactly how the Wikipedia article image was made, but the description implies the choice is arbitrary, perhaps used because it's simple. It isn't arbitrary: the results essentially match how RGB <-> YUV conversions work. If you are using OpenCV, the BGR2YUV and YUV2BGR methods use the conversion formulas found in the same Wikipedia YUV article. (My images generated using Java were slightly darker but otherwise the same.)
Addendum: I feel bad that I picked on Dan Mašek after he answered the question perfectly and astutely by showing us the lookup table trick. The author of the Wikipedia YUV article didn't do a bad job depicting the green-blue and green-red gradients shown in the article, but as Dan Mašek pointed out, it wasn't perfect. The color images for U and V do somewhat resemble what really happens, so I'd call them exaggerated-color rather than false-color. The Wikipedia article on YCrCb is similar, but subtly different.
// most of the Java program which should work in other languages with OpenCV:
// everything duplicated to do both the U and V at the same time
Mat src = new Mat();
Mat dstA = new Mat();
Mat dstB = new Mat();
src = Imgcodecs.imread("shed.jpg", Imgcodecs.IMREAD_COLOR);
List<Mat> channelsYUVa = new ArrayList<Mat>();
List<Mat> channelsYUVb = new ArrayList<Mat>();
Imgproc.cvtColor(src, dstA, Imgproc.COLOR_BGR2YUV); // convert bgr image to yuv
Imgproc.cvtColor(src, dstB, Imgproc.COLOR_BGR2YUV);
Core.split(dstA, channelsYUVa); // isolate the channels y u v
Core.split(dstB, channelsYUVb);
// zero the 2 channels we do not want to see isolating the 1 channel we want to see
channelsYUVa.set(0, Mat.zeros(channelsYUVa.get(0).rows(),channelsYUVa.get(0).cols(),channelsYUVa.get(0).type()));
channelsYUVa.set(1, Mat.zeros(channelsYUVa.get(0).rows(),channelsYUVa.get(0).cols(),channelsYUVa.get(0).type()));
channelsYUVb.set(0, Mat.zeros(channelsYUVb.get(0).rows(),channelsYUVb.get(0).cols(),channelsYUVb.get(0).type()));
channelsYUVb.set(2, Mat.zeros(channelsYUVb.get(0).rows(),channelsYUVb.get(0).cols(),channelsYUVb.get(0).type()));
Core.merge(channelsYUVa, dstA); // combine channels (two of which are zero)
Core.merge(channelsYUVb, dstB);
Imgproc.cvtColor(dstA, dstA, Imgproc.COLOR_YUV2BGR); // convert to bgr so it can be displayed
Imgproc.cvtColor(dstB, dstB, Imgproc.COLOR_YUV2BGR);
HighGui.imshow("V channel", dstA); // display the image
HighGui.imshow("U channel", dstB);
HighGui.waitKey(0);
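For readers following along in Python, the same zero-the-other-channels trick (a sketch, not part of the original answer) looks like this with OpenCV:
import cv2
import numpy as np

img = cv2.imread('shed.jpg')  # same input file as the Java snippet
yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)

# Keep only U: zero the Y and V channels, then convert back for display
u_only = yuv.copy()
u_only[:, :, 0] = 0
u_only[:, :, 2] = 0
u_vis = cv2.cvtColor(u_only, cv2.COLOR_YUV2BGR)

# Keep only V: zero the Y and U channels
v_only = yuv.copy()
v_only[:, :, 0] = 0
v_only[:, :, 1] = 0
v_vis = cv2.cvtColor(v_only, cv2.COLOR_YUV2BGR)

cv2.imshow('U channel', u_vis)
cv2.imshow('V channel', v_vis)
cv2.waitKey(0)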
