I want a way (or steps) to unify the brightness of 2 images, in other words make their brightness the same, but without simply assigning one image's brightness to the other. I know how to get the brightness of an image using PIL; the code is below:
from PIL import Image
imag = Image.open("test.png")
# Convert the image to RGB if it is a .gif, for example
imag = imag.convert('RGB')
# coordinates of the pixel
X, Y = 0, 0
# Get RGB
pixelRGB = imag.getpixel((X, Y))
R, G, B = pixelRGB
brightness = sum([R, G, B]) / 3  # 0 is dark (black) and 255 is bright (white)
print(brightness)
Does anyone have an idea of how to make 2 images have the same brightness? Thank you.
You can use the mean/standard deviation color transfer technique in Python/OpenCV as described at https://www.pyimagesearch.com/2014/06/30/super-fast-color-transfer-images/. To force it not to modify the color and only adjust the brightness/contrast, convert your images to HSV and process only the V channel using the method described in that reference. Then combine the new V with the old H and S channels and convert back to BGR.
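For illustration, a minimal sketch of that idea (the filenames, and the simple mean/standard-deviation matching applied to the V channel, are my assumptions rather than the exact code from the linked article):
import cv2
import numpy as np

src = cv2.cvtColor(cv2.imread('source.png'), cv2.COLOR_BGR2HSV).astype(np.float32)
dst = cv2.cvtColor(cv2.imread('target.png'), cv2.COLOR_BGR2HSV).astype(np.float32)

src_mean, src_std = src[..., 2].mean(), src[..., 2].std()
dst_mean, dst_std = dst[..., 2].mean(), dst[..., 2].std()

# Shift and scale the target's V channel to match the source's mean and spread,
# leaving H and S untouched so the colours are preserved
dst[..., 2] = (dst[..., 2] - dst_mean) * (src_std / dst_std) + src_mean
dst[..., 2] = np.clip(dst[..., 2], 0, 255)

result = cv2.cvtColor(dst.astype(np.uint8), cv2.COLOR_HSV2BGR)
cv2.imwrite('matched.png', result)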
Related
The aim is to take a coloured image, and change any pixels within a certain luminosity range to black. For example, if luminosity is the average of a pixel's RGB values, any pixel with a value under 50 is changed to black.
I've attempted to begin using PIL and converting to grayscale, but I'm having trouble finding a solution that can identify a pixel's luminosity value and use that information to manipulate a pixel map.
There are many ways to do this, but the simplest and probably fastest is with Numpy, which you should get accustomed to using with image processing in Python:
from PIL import Image
import numpy as np
# Load image and ensure RGB, not palette image
im = Image.open('start.png').convert('RGB')
# Make into Numpy array
na = np.array(im)
# Make all pixels of "na" where the mean of the R,G,B channels is less than 50 into black (0)
na[np.mean(na, axis=-1)<50] = 0
# Convert back to PIL Image to save or display
result = Image.fromarray(na)
result.show()
That turns this:
Into this:
Another slightly different way would be to convert the image to a more conventional greyscale, rather than averaging for the luminosity:
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Calculate greyscale version
grey = im.convert('L')
# Point process over pixels to make mask of darker ones
mask = grey.point(lambda p: 255 if p<50 else 0)
# Paste black (i.e. 0) into image where mask indicates it is dark
im.paste(0, mask=mask)
Notice that the blue channel is given considerably less significance in the ITU-R 601-2 luma transform that PIL uses (see the lower 114 weighting for Blue versus 299 for Red and 587 for Green) in the formula:
L = R * 299/1000 + G * 587/1000 + B * 114/1000
so the blue shades are considered darker and become black.
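For comparison, the same weighted luma can be computed explicitly on the NumPy array na from the first example (an illustrative snippet that reuses na from above):
# Weighted luma per the ITU-R 601-2 formula, then the same thresholding as before
luma = na[..., 0] * 0.299 + na[..., 1] * 0.587 + na[..., 2] * 0.114
na[luma < 50] = 0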
Another way would be to make a greyscale and a mask as above, but then choose the darker pixel at each location when comparing the original and the mask:
from PIL import Image, ImageChops
im = Image.open('start.png').convert('RGB')
grey = im.convert('L')
mask = grey.point(lambda p: 0 if p<50 else 255)
res = ImageChops.darker(im, mask.convert('RGB'))
That gives the same result as above.
Another way, pure PIL and probably closest to what you actually asked, would be to derive a luminosity value by averaging the channels:
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Calculate greyscale version by averaging R,G and B
grey = im.convert('L', matrix=(0.333, 0.333, 0.333, 0))
# Point process over pixels to make mask of darker ones
mask = grey.point(lambda p: 255 if p<50 else 0)
# Paste black (i.e. 0) into image where mask indicates it is dark
im.paste(0, mask=mask)
Another approach could be to split the image into its constituent RGB channels, evaluate a mathematical function over the channels and mask with the result:
from PIL import Image, ImageMath
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Split into RGB channels
(R, G, B) = im.split()
# Evaluate mathematical function over channels
# Evaluate mathematical function over channels to make a mask of the darker pixels
dark = ImageMath.eval('(((R+G+B)/3) <= 50) * 255', R=R, G=G, B=B)
# Paste black (i.e. 0) into image where mask indicates it is dark
# (convert the 32-bit result back to "L" so it is a valid paste mask)
im.paste(0, mask=dark.convert('L'))
I created a function that returns a list with True if the pixel has a luminosity of less than a given threshold, and False if it doesn't. It includes an RGB or RGBA option (True or False); in either case only the R, G and B channels are averaged.
def get_avg_lum(pic, avg=50, RGBA=False):
    # Average the first three channels (R, G, B); if the image is RGBA,
    # the alpha channel is ignored for the luminosity.
    num = 3
    li = [[False for y in range(0, pic.size[1])] for x in range(0, pic.size[0])]
    for x in range(0, pic.size[0]):
        for y in range(0, pic.size[1]):
            li[x][y] = sum(pic.getpixel((x, y))[:num]) / num < avg
    return li

a = get_avg_lum(im)
The pixels match in the list, so (0,10) on the image is [0][10] in the list.
Hopefully this helps. My code works on standard PIL Image objects.
Say I have 2 white images (800x600 RGB) that are 'dirty' at some unknown positions; I want to create a final combined image that has all the dirty parts of both images.
Just adding the images together reduces the 'dirtyness' of each blob, since I halve the pixel values and then add them (to stay in the 0-255 RGB range); this is amplified when you have more than 2 images.
What I want to do is create a mask for all relatively white pixels in the 3-channel image. I've seen that if all RGB values are within 10-15 of each other, a pixel is relatively white. How would I create this mask using numpy?
Pseudo code for what I want to do:
img = cv2.imread(img) #BGR image
mask = np.where( BGR within 10 of each other)
Then I can use the first image, and replace pixels on it where the second picture is not masked, keeping the 'dirtyness level' relatively dirty. (I know some dirtyness of the second image will replace that of the first, but that's okay)
Edit:
People asked for images, so I created some sample images. The white would not always be as exactly white as in these samples, which is why I need to use a 'within 10 BGR' range.
Image 1
Image 2
Image 3 (combined, ignore the difference in yellow blob from image 2 to here, they should be the same)
What you asked for is to find the pixels in which the distance between the channels is under 10.
Here it is, translated to numpy.
import cv2
import numpy as np

img = cv2.imread(img)  # OpenCV loads the channels in BGR order
# Cast to a signed type so the differences below don't wrap around in uint8
b = img[:, :, 0].astype(int)
g = img[:, :, 1].astype(int)
r = img[:, :, 2].astype(int)
rg_close = np.abs(r - g) < 10
gb_close = np.abs(g - b) < 10
br_close = np.abs(b - r) < 10
all_close = np.logical_and(np.logical_and(rg_close, gb_close), br_close)
I do believe, however, that this is not what you REALLY want.
I think what you want is a mask that segments the background.
This is actually simpler, assuming the background is completely white:
img = cv2.imread(img)
# Sum in a wider dtype so adding the three uint8 channels doesn't overflow
background_mask = img.astype(int).sum(axis=-1) > 245 * 3
Please note this code requires some threshold tuning and only shows the concept.
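As a rough sketch of the combining step you describe, keeping the first image and copying the second image's pixels wherever they are not near-white background (the file names are placeholders):
import cv2
import numpy as np

img1 = cv2.imread('image1.png')
img2 = cv2.imread('image2.png')
# Near-white test on img2, summing in a wider dtype to avoid uint8 overflow
background2 = img2.astype(int).sum(axis=-1) > 245 * 3
# Keep img1 where img2 is background, otherwise take img2's 'dirty' pixel
combined = np.where(background2[..., None], img1, img2)
cv2.imwrite('combined.png', combined)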
I would suggest you convert to HSV colourspace and look for saturated (colourful) pixels like this:
import cv2
import numpy as np
# Load background and foreground images
bg = cv2.imread('A.jpg')
fg = cv2.imread('B.jpg')
# Convert to HSV colourspace and extract just the Saturation
Sat = cv2.cvtColor(fg, cv2.COLOR_BGR2HSV)[..., 1]
# Find best (Otsu) threshold to divide black from white, and apply it
_ , mask = cv2.threshold(Sat,0,1,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# At each pixel, choose foreground where mask is set and background elsewhere
res = np.where(mask[...,np.newaxis], fg, bg)
# Save the result
cv2.imwrite('result.png', res)
Note that you can modify this if it picks up too many or too few coloured pixels. If it picks up too few, you could dilate the mask, and if it picks up too many, you could erode the mask. You could also blur the image a little before masking, which might not be a bad idea as it is a "nasty" JPEG with compression artefacts in it. You could also change the saturation test and make it more clinical and targeted if you only wanted to allow certain colours through, or a certain brightness, or a combination.
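Continuing from the code above, a rough sketch of those adjustments (the kernel and blur sizes are illustrative, not tuned values):
import cv2
import numpy as np

kernel = np.ones((3, 3), np.uint8)
mask_grown = cv2.dilate(mask, kernel, iterations=1)   # picks up more pixels
mask_shrunk = cv2.erode(mask, kernel, iterations=1)   # picks up fewer pixels

# Blur the foreground before extracting saturation to suppress JPEG artefacts
fg_smooth = cv2.GaussianBlur(fg, (5, 5), 0)
Sat_smooth = cv2.cvtColor(fg_smooth, cv2.COLOR_BGR2HSV)[..., 1]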
It looks like the default library under Ubuntu changes colors a bit during compression. I tried to set quality and subsampling but I see no improvement; has anyone faced a similar issue?
subsampling = 0 , quality = 100
#CORRECT COLORS FROM NPARRAY
cv2.imshow("Object cam:{}".format(self.camera_id), self.out)
print(self.out.item(1,1,0)) # B
print(self.out.item(1,1,1)) # G
print(self.out.item(1,1,2)) # R
self.out=cv2.cvtColor(self.out, cv2.COLOR_BGR2RGB)
from PIL import Image
im = Image.fromarray(self.out)
r, g, b = im.getpixel((1, 1))
## just printing pixel and they are matching
print(r, g, b)
## WRONG COLORS
im.save(self.out_ramdisk_img,format='JPEG', subsampling=0, quality=100)
The JPEG image should have the same colors as in imshow, but it's a bit more purple.
That is a natural result of JPEG compression. JPEG uses floating point arithmetic to calculate integer pixel values. This occurs in several stages of JPEG compression. Thus, small pixel value changes are expected.
When you have blanket changes in color, they are usually the result of input color values that are outside the gamut of the YCbCr color space. Such values get clamped.
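A small, illustrative round-trip check (not part of the original answer) shows the effect: a lossless PNG reproduces the array exactly, while a JPEG saved at quality=100 already changes pixel values slightly:
import numpy as np
from PIL import Image

# Random RGB data written losslessly (PNG) and lossily (JPEG)
arr = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
im = Image.fromarray(arr)
im.save('check.png')
im.save('check.jpg', quality=100, subsampling=0)

png_back = np.array(Image.open('check.png'))
jpg_back = np.array(Image.open('check.jpg'))
print('PNG max diff :', int(np.abs(png_back.astype(int) - arr).max()))   # 0
print('JPEG max diff:', int(np.abs(jpg_back.astype(int) - arr).max()))   # > 0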
I'm trying to implement a binary image filter (to get a monochrome binary image) using Python & PyQt5, and to retrieve the new pixel colors I use the following method:
def _new_pixel_colors(self, x, y):
    color = QColor(self.pixmap.pixel(x, y))
    result = qRgb(0, 0, 0) if all(c < 127 for c in color.getRgb()[:3]) else qRgb(255, 255, 255)
    return result
Could it be a correct implementation of a binary filter for an RGB image? I mean, is that a sufficient condition to check whether the pixel is brighter or darker than (127,127,127) gray?
And please, do not provide any solutions with opencv, pillow, etc. I'm only asking about the algorithm itself.
I would at least compare against intensity i=R+G+B ...
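For instance, a minimal pure-Python sketch of that comparison (the helper name and threshold argument are mine, for illustration):
def binarize_pixel(r, g, b, threshold=127):
    # Black if the summed intensity is below 3*threshold, otherwise white
    return (0, 0, 0) if (r + g + b) < 3 * threshold else (255, 255, 255)

# A saturated red pixel: the per-channel test in the question calls it white
# (not every channel is below 127), but its total intensity 250+10+10 = 270
# is well below 381, so the intensity test maps it to black.
print(binarize_pixel(250, 10, 10))    # (0, 0, 0)
print(binarize_pixel(200, 180, 160))  # (255, 255, 255)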
For ROI-like masks you can use any thresholding technique (adaptive thresholding is the best), but if your resulting image is not a ROI mask and should resemble the visual features of the original image, then the best conversion I know of is dithering.
The idea behind BW dithering is to convert gray scales into BW patterns while preserving the shading. The result is often noisy but preserves much more visual detail. Here is a simple naive C++ dithering example (sorry, not a Python coder):
picture pic0, pic1;
// pic0 - source img
// pic1 - output img
int x, y, i;
color c;
// resize output to source image size, clear with black
pic1 = pic0; pic1.clear(0);
// dithering
i = 0;
for (y = 0; y < pic0.ys; y++)
    for (x = 0; x < pic0.xs; x++)
    {
        // get source pixel color (AARRGGBB)
        c = pic0.p[y][x];
        // add to leftovers
        i += WORD(c.db[picture::_r]); // _r,_g,_b are just constants 0,1,2
        i += WORD(c.db[picture::_g]);
        i += WORD(c.db[picture::_b]);
        // threshold: white intensity is 255+255+255 = 765
        if (i >= 384) { i -= 765; c.dd = 0x00FFFFFF; } else c.dd = 0;
        // copy to destination image
        pic1.p[y][x] = c;
    }
So it's the same as in the link above, but using just black and white. i is the accumulated intensity to be placed on the image, xs,ys is the resolution, and c.db[] is the color channel access.
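For Python readers, here is a rough NumPy/PIL translation of the same naive loop (my own approximation of the C++ above; the file names are placeholders):
import numpy as np
from PIL import Image

# Naive BW dithering: accumulate intensity and emit a white pixel whenever
# the accumulator reaches half of the 765 (255*3) maximum
img = np.array(Image.open('input.png').convert('RGB'), dtype=np.int64)
out = np.zeros_like(img)

acc = 0  # accumulated leftover intensity ("i" in the C++ version)
height, width, _ = img.shape
for y in range(height):
    for x in range(width):
        acc += int(img[y, x].sum())   # add R+G+B of the source pixel
        if acc >= 384:
            acc -= 765
            out[y, x] = 255           # white pixel
        # otherwise the output pixel stays black

Image.fromarray(out.astype(np.uint8)).save('dithered.png')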
If I apply this to a colored image like this:
The result looks like this:
As you can see, all the details were preserved, but a noisy pattern emerges ... For printing purposes the resolution of the image was sometimes multiplied to enhance the quality. If you replace the naive 2 nested for loops with a better pattern (like 16x16 squares, etc.), then the noise will be kept near its source, limiting artifacts. There are also approaches that use pseudo-random patterns (putting the leftover i near its source pixel at a random location) that are even better ...
But for BW dithering even the naive approach is enough, as the artifacts are just one pixel in size. For colored dithering the artifacts could create unwanted horizontal line patterns several pixels in size (depending on the palette mismatch; the worse the palette, the bigger the artifacts...).
PS: just for comparison with the threshold outputs in the other answers, this is the same image dithered:
Image thresholding is the class of algorithms you're looking for - a binary threshold would set pixels to 0 or 1, yes.
Depending on the desired output, consider converting your image first to other color spaces, in particular HSL, with the luminance channel. Using (127, 127, 127) as a threshold does not uniformly take brightness into account because each channel of RGB is the saturation of R, G, or B; consider this image:
from PIL import Image
import colorsys

def threshold_pixel(r, g, b):
    h, l, s = colorsys.rgb_to_hls(r / 255., g / 255., b / 255.)
    return 1 if l > .36 else 0
    # return 1 if r > 127 and g > 127 and b > 127 else 0

def hlsify(img):
    pixels = img.load()
    width, height = img.size
    # Create a new blank monochrome image.
    output_img = Image.new('1', (width, height), 0)
    output_pixels = output_img.load()
    for i in range(width):
        for j in range(height):
            output_pixels[i, j] = threshold_pixel(*pixels[i, j])
    return output_img

binarified_img = hlsify(Image.open('./sample_img.jpg'))
binarified_img.show()
binarified_img.save('./out.jpg')
There is lots of discussion on other StackExchange sites on this topic, e.g.
Binarize image data
How do you binarize a colored image?
how can I get good binary image using Otsu method for this image?
I'm trying to make a texture using an image with 3 colors, and a Perlin noise grayscale image.
This is the original image:
This is the grayscale Perlin noise image:
What I need to do is apply the original image's brightness to the grayscale image, such that darkest and lightest brightness in the Perlin noise image is no longer 100% black (0) and 100% white (1), but taken from the original image. Then, apply the new mapping of brightness from the grayscale Perlin noise image back to the original image.
This is what I tried:
from PIL import Image
alpha = 0.5
im = Image.open(filename1).convert("RGBA")
new_img = Image.open(filename2).convert("RGBA")
new_img = Image.blend(im, new_img, alpha)
new_img.save("foo.png","PNG")
And this is the output that I get:
Which is wrong; instead, imagine the dark orange, light orange, and bright colors having the same gradient as the grayscale image, but with no 100% black or 100% white.
I believe I need to:
Convert the original image to HSV (properly; I've tried a few functions from colorsys and matplotlib and they give me weird numbers).
Get highest and lowest V value from the original image.
Convert grayscale image to HSV.
Transform or normalize (I think that's what it's called) the grayscale HSV using the V values from the original HSV image.
Remap all the original V values with the new transformed/normalized grayscale V values.
🤕 Why is it not working?
The approach that you are using will not work as expected because instead of keeping color and saturation information from one image and taking the other image's lightness information (totally or partially), you are just interpolating all the channels from both images at the same time, based on a constant alpha, as stated on the docs:
PIL.Image.blend(im1, im2, alpha)
Creates a new image by interpolating between two input images, using a constant alpha: out = image1 * (1.0 - alpha) + image2 * alpha
[...]
alpha – The interpolation alpha factor. If alpha is 0.0, a copy of the first image is returned. If alpha is 1.0, a copy of the second image is returned. There are no restrictions on the alpha value. If necessary, the result is clipped to fit into the allowed output range.
🔨 Basic working example
First, let's get a basic example working. I'm going to use cv2 instead of PIL, just because I'm more familiar with it and I already have it installed on my machine.
I will also use HSL (HLS in cv2) instead of HSV, as I think that will produce an output that is closer to what you might be looking for.
import cv2
filename1 = './f1.png'
filename2 = './f2.png'
# Load both images and convert them from BGR to HLS:
img1 = cv2.cvtColor(cv2.imread(filename1, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HLS)
img2 = cv2.cvtColor(cv2.imread(filename2, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HLS)
# Copy img1, the one with relevant color and saturation information:
texture = img1.copy()
# Replace its lightness information with the one from img2:
texture[:,:,1] = img2[:,:,1]
# Convert the image back from HLS to BGR and save it:
cv2.imwrite('./texture.png', cv2.cvtColor(texture, cv2.COLOR_HLS2BGR))
This is the final output:
🎛️ Adjust lightness
Ok, so we have a simple case working, but you might not want to replace img1's lightness with img2's completely, so in that case just replace this line:
texture[:,:,1] = img2[:,:,1]
With these two:
alpha = 0.25
texture[:,:,1] = alpha * img1[:,:,1] + (1.0 - alpha) * img2[:,:,1]
Now, you will retain 25% lightness from img1 and 75% from img2, and you can adjust it as needed.
For alpha = 0.25, the output will look like this:
🌈 HSL & HSV
Although HSL and HSV look quite similar, there are a few differences, mainly regarding how they represent pure white and light colors, that would make this script generate slightly different images when using one or the other:
We just need to change a couple of things to make it work with HSV:
import cv2
filename1 = './f1.png'
filename2 = './f2.png'
# Load both images and convert them from BGR to HSV:
img1 = cv2.cvtColor(cv2.imread(filename1, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HSV)
img2 = cv2.cvtColor(cv2.imread(filename2, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HSV)
# Copy img1, the one with relevant color and saturation information:
texture = img1.copy()
# Merge img1 and img2's value channel:
alpha = 0.25
texture[:,:,2] = alpha * img1[:,:,2] + (1.0 - alpha) * img2[:,:,2]
# Convert the image back from HSV to BGR and save it:
cv2.imwrite('./texture.png', cv2.cvtColor(texture, cv2.COLOR_HSV2BGR))
This is what the first example looks like when using HSV:
And this is the second example (with alpha = 0.25):
You can see the most noticeable differences are in the lightest areas.