Example:
1st image: the original image.
2nd, 3rd and 4th images: the outputs I want.
I know PIL has the method PIL.ImageOps.grayscale(image) that returns the 4th image, but it doesn't have parameters to produce the 2nd and 3rd ones (partial grayscale).
When you convert an image to greyscale, you are essentially desaturating it to remove saturated colours. So, in order to achieve your desired effect, you probably want to convert to HSV mode, reduce the saturation and convert back to RGB mode.
from PIL import Image
# Open input image
im = Image.open('potato.png')
# Convert to HSV mode and separate the channels
H, S, V = im.convert('HSV').split()
# Halve the saturation - you might consider 2/3 and 1/3 saturation
S = S.point(lambda p: p//2)
# Recombine channels
HSV = Image.merge('HSV', (H,S,V))
# Convert to RGB and save
result = HSV.convert('RGB')
result.save('result.png')
If you prefer to do your image processing in Numpy rather than PIL, you can achieve the same result as above with this code:
from PIL import Image
import numpy as np
# Open input image
im = Image.open('potato.png')
# Convert to HSV and go to Numpy
HSV = np.array(im.convert('HSV'))
# Halve the saturation with Numpy. Hue will be channel 0, Saturation is channel 1, Value is channel 2
HSV[..., 1] = HSV[..., 1] // 2
# Go back to "PIL Image", go back to RGB and save
Image.fromarray(HSV, mode="HSV").convert('RGB').save('result.png')
Of course, set the entire Saturation channel to zero for full greyscale.
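For instance, in the Numpy version above, full greyscale is just:
HSV[..., 1] = 0  # zero saturation everywhere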
Alternatively, you can use PIL's ImageEnhance module:
from PIL import ImageEnhance
# value: 0.0 gives grayscale, 1.0 gives the original image;
# values in between give partial desaturation
ImageEnhance.Color(image).enhance(value)
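For example, a short usage sketch (assuming the same 'potato.png' input as above; the output file names are made up):
from PIL import Image, ImageEnhance
im = Image.open('potato.png')
# Roughly the question's 2nd and 3rd images: 2/3 and 1/3 saturation
ImageEnhance.Color(im).enhance(2/3).save('two_thirds.png')
ImageEnhance.Color(im).enhance(1/3).save('one_third.png')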
P.S.: Mark's solution works, but it seems to be increasing the exposure.
How can I determine the brightness value of a photo?
Here's my code; I can't work out how to determine it:
import cv2

def rgb2hsv(img_path):
    img = cv2.imread(img_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    return hsv
Any ideas?
Not sure what you mean by the "brightness value" of an image, but whatever you mean, it is stored in the Value channel (i.e. 3rd channel) of the Hue, Saturation and Value image you have already calculated.
So, if you want a single, mean brightness number for the whole image, you can use:
hsv[...,2].mean()
If you want a single, peak brightness number for the brightest spot in the image:
hsv[...,2].max()
And if you want a greyscale "map" of the brightness at each point of the image, just display or save the 3rd channel:
cv2.imwrite('brightness.png',hsv[...,2])
In HSV, 'value' is the brightness of the color and varies with color saturation. It ranges from 0 to 100%. When the value is 0, the color space is totally black. As the value increases, the colors brighten and show more variation.
So use OpenCV's color conversion method:
cv2.cvtColor(src, code)
which converts an image from one color space to another. You may use:
code = cv2.COLOR_BGR2HSV
Then calculate histogram of third channel V, which is the brightness.
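A minimal sketch of that histogram calculation (assuming an 8-bit input file, here hypothetically named 'photo.jpg'):
import cv2
img = cv2.imread('photo.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Histogram of channel 2 (V): 256 bins over the 0..255 range
hist = cv2.calcHist([hsv], [2], None, [256], [0, 256])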
Hopefully that helps!
If you need the brightest pixel in the image, use the following code:
import numpy as np
import cv2
img = cv2.imread(img_path)
img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(img_hsv)
bright_pixel = np.amax(v)
print(bright_pixel)
# bright_pixel will give max illumination value in the image
Observe the following image:
Observe the following Python code:
import cv2
img = cv2.imread("rainbow.png", cv2.IMREAD_COLOR)
img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) # convert it to hsv
img = cv2.cvtColor(img, cv2.COLOR_HSV2BGR) # convert back to BGR
cv2.imwrite("out.png", img)
Here's the output image:
In case you can't see it: there's a clear loss of visual fidelity in the image here. For comparison's sake, here's the original next to the output image, zoomed in around the yellows:
What's going on here? Is there any way to prevent these blocky artifacts from appearing? I need to convert to the HSL color space to rotate the hue, but I can't do that if I'm going to get these kinds of artifacts.
As a note, the output image does not have the artifacts when I don't do the two conversions; the conversions themselves are indeed the cause.
Back at a computer now - try like this:
#!/usr/bin/env python3
import numpy as np
import cv2
img = cv2.imread("rainbow.png", cv2.IMREAD_COLOR)
img = img.astype(np.float32)/255 # go to 32-bit float on 0..1
img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) # convert it to hsv
img = cv2.cvtColor(img, cv2.COLOR_HSV2BGR) # convert back to BGR
cv2.imwrite("output.png", (img*255).astype(np.uint8))
I think the problem is that with the unsigned 8-bit representation, the Hue gets "squished" from a range of 0..360 down to 0..180 (i.e. stored in 2-degree increments) so that it fits the 8-bit range of 0..255, and that causes steps between nearby values. A solution is to move to 32-bit floats and scale to the range 0..1.
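Here's a small sketch demonstrating that precision difference (the pixel value is just an arbitrary example):
import numpy as np
import cv2
# One orange-ish pixel in 8-bit BGR: hue is stored in 2-degree steps
px = np.uint8([[[10, 200, 250]]])
back8 = cv2.cvtColor(cv2.cvtColor(px, cv2.COLOR_BGR2HSV), cv2.COLOR_HSV2BGR)
# Same pixel as 32-bit float on 0..1: hue keeps its full precision
pxf = px.astype(np.float32) / 255
backf = cv2.cvtColor(cv2.cvtColor(pxf, cv2.COLOR_BGR2HSV), cv2.COLOR_HSV2BGR)
print(px, back8)    # may differ slightly due to hue quantisation
print(pxf, backf)   # matches to within float rounding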
I have code that looks like this
from skimage import io as sio
from skimage import transform

IMG_HEIGHT = IMG_WIDTH = 128
test_image = sio.imread('/home/username/pat/file.png')
test_image = transform.resize(test_image, (IMG_HEIGHT, IMG_WIDTH), mode='constant', preserve_range=True)
print(test_image.shape)  # prints (128, 128)
print(test_image.max(), test_image.min())  # prints 65535.0 0.0
sio.imshow(test_image)
More importantly, I need to make this image 3-channel so I can feed it into a neural network that expects such input. Any idea how to do that?
I want to transform a 1-channel image into a 3-channel image that looks reasonable when I plot it, makes sense, etc. How?
I tried padding with 0s, I tried copying the same values 3 times for the 3 channels, but then when I try to display the image, it looks like gibberish. So how can I transform the image into 3 channels, even if it becomes something like, bluescale instead of greyscale, but still be able to visualize it in a meaningful way?
Edit:
if I try
test_image = skimage.color.gray2rgb(test_image)
I get an all-white image with some black dots.
I get the same all-white image with rare small black dots if I try
convert Test1_PC_1.tif -colorspace sRGB -type truecolor Test1_PC_1_new.tif
Before the attempted transform with gray2rgb:
print(type(test_image[0,0]))
<class 'numpy.uint16'>
After:
print(type(test_image[0,0,0]))
<class 'numpy.float64'>
You need to convert the array from 2D to 3D, where the third dimension is the color.
You can use the gray2rgb function provided by skimage:
test_image = skimage.color.gray2rgb(test_image)
Alternatively, you can write your own conversion -- which gives you some flexibility to tweak the pixel values:
# basic conversion from gray to RGB encoding
test_image = np.array([[[s,s,s] for s in r] for r in test_image],dtype="u1")
# conversion from gray to RGB encoding -- putting the image in the green channel
test_image = np.array([[[0,s,0] for s in r] for r in test_image],dtype="u1")
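As a side note, a vectorised Numpy sketch of the same grey-to-RGB replication, which is much faster on large images:
import numpy as np
# Stack the single grey channel three times along a new last axis
test_image = np.stack([test_image] * 3, axis=-1)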
I notice from your max() value that you're using 16-bit sample values (which is uncommon). You'll want a different dtype, maybe "u2" (uint16) or "i4" (int32). Also, you may need to play some games to make the image display with the correct polarity (it may appear with black/white reversed).
One way to get there is to just invert all of the pixel values:
test_image = 65535-test_image ## invert 16-bit pixels
Or you could look into the norm parameter to imshow, which appears to have an inverse function.
Your conversion from gray-value to RGB by replicating the gray-value three times such that R==G==B is correct.
The strange displayed result is likely caused by assumptions made during display. You will need to scale your data before display to fix it.
Usually, a uint8 image has values 0-255, which are mapped to min-max scale of display. Uint16 has values 0-65535, with 65535 mapped to max. Floating-point images are very often assumed to be in the range 0-1, with 1 mapped to max. Any larger value will then also be mapped to max. This is why you see so much white in your output image.
If you divide each output sample by the maximum value in your image you’ll be able to display it properly.
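For example, a one-line sketch using skimage's viewer as in the question:
from skimage import io as sio
sio.imshow(test_image / test_image.max())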
Well, by default imshow uses a kind of heatmap to display the image intensities. To display a grayscale image, just specify the colormap as below:
plt.imshow(image, cmap="gray")
Now, I think you can get a single channel of an image by doing:
image[:,:,i] where i is in {0, 1, 2}
To extract an image for a specific channel:
red_image = image.copy()
red_image[:,:,1] = 0
red_image[:,:,2] = 0
Edit:
Do you definitely have to use skimage? What about the python-opencv module?
Have you tried the following example?
import cv2
color_img = cv2.cvtColor(gray_img, cv2.COLOR_GRAY2RGB)
I'm trying to make a texture using an image with 3 colors, and a Perlin noise grayscale image.
This is the original image:
This is the grayscale Perlin noise image:
What I need to do is apply the original image's brightness to the grayscale image, such that darkest and lightest brightness in the Perlin noise image is no longer 100% black (0) and 100% white (1), but taken from the original image. Then, apply the new mapping of brightness from the grayscale Perlin noise image back to the original image.
This is what I tried:
from PIL import Image
alpha = 0.5
im = Image.open(filename1).convert("RGBA")
new_img = Image.open(filename2).convert("RGBA")
new_img = Image.blend(im, new_img, alpha)
new_img.save("foo.png","PNG")
And this is the output that I get:
This is wrong; imagine instead the dark and light orange and the bright color having the same gradient as the grayscale image, BUT with no 100% black or 100% white.
I believe I need to:
1. Convert the original image to HSV (properly; I've tried a few functions from colorsys and matplotlib and they give me weird numbers).
2. Get the highest and lowest V values from the original image.
3. Convert the grayscale image to HSV.
4. Transform or normalize (I think that's what it's called) the grayscale HSV using the V values from the original HSV image.
5. Remap all the original V values with the new transformed/normalized grayscale V values.
🤕 Why is it not working?
The approach that you are using will not work as expected because, instead of keeping the color and saturation information from one image and taking the other image's lightness information (totally or partially), you are interpolating all the channels of both images at the same time, based on a constant alpha, as stated in the docs:
PIL.Image.blend(im1, im2, alpha)
Creates a new image by interpolating between two input images, using a constant alpha: out = image1 * (1.0 - alpha) + image2 * alpha
[...]
alpha – The interpolation alpha factor. If alpha is 0.0, a copy of the first image is returned. If alpha is 1.0, a copy of the second image is returned. There are no restrictions on the alpha value. If necessary, the result is clipped to fit into the allowed output range.
🔨 Basic working example
First, let's get a basic example working. I'm going to use cv2 instead of PIL, just because I'm more familiar with it and I already have it installed on my machine.
I will also use HSL (HLS in cv2) instead of HSV, as I think that will produce an output that is closer to what you might be looking for.
import cv2
filename1 = './f1.png'
filename2 = './f2.png'
# Load both images and convert them from BGR to HLS:
img1 = cv2.cvtColor(cv2.imread(filename1, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HLS)
img2 = cv2.cvtColor(cv2.imread(filename2, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HLS)
# Copy img1, the one with relevant color and saturation information:
texture = img1.copy()
# Replace its lightness information with the one from img2:
texture[:,:,1] = img2[:,:,1]
# Convert the image back from HLS to BGR and save it:
cv2.imwrite('./texture.png', cv2.cvtColor(texture, cv2.COLOR_HLS2BGR))
This is the final output:
🎛️ Adjust lightness
Ok, so we have a simple case working, but you might not want to replace img1's lightness with img2's completely, so in that case just replace this line:
texture[:,:,1] = img2[:,:,1]
With these two:
alpha = 0.25
texture[:,:,1] = alpha * img1[:,:,1] + (1.0 - alpha) * img2[:,:,1]
Now, you will retain 25% lightness from img1 and 75% from img2, and you can adjust it as needed.
For alpha = 0.25, the output will look like this:
🌈 HSL & HSV
Although HSL and HSV look quite similar, there are a few differences, mainly regarding how they represent pure white and light colors, that would make this script generate slightly different images when using one or the other:
We just need to change a couple of things to make it work with HSV:
import cv2
filename1 = './f1.png'
filename2 = './f2.png'
# Load both images and convert them from BGR to HSV:
img1 = cv2.cvtColor(cv2.imread(filename1, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HSV)
img2 = cv2.cvtColor(cv2.imread(filename2, cv2.IMREAD_COLOR), cv2.COLOR_BGR2HSV)
# Copy img1, the one with relevant color and saturation information:
texture = img1.copy()
# Merge img1 and img2's value channel:
alpha = 0.25
texture[:,:,2] = alpha * img1[:,:,2] + (1.0 - alpha) * img2[:,:,2]
# Convert the image back from HSV to BGR and save it:
cv2.imwrite('./texture.png', cv2.cvtColor(texture, cv2.COLOR_HSV2BGR))
This is how the first example looks when using HSV:
And this is the second example (with alpha = 0.25):
You can see the most noticeable differences are in the lightest areas.
In the program given below I am adding an alpha channel to a 3-channel image to control its opacity. But no matter what value of alpha channel I give, there is no effect on the image! Could anyone explain why?
import numpy as np
import cv2
image = cv2.imread('image.jpg')
print(image)
b_channel,g_channel,r_channel = cv2.split(image)
a_channel = np.ones(b_channel.shape, dtype=b_channel.dtype)*10
image = cv2.merge((b_channel,g_channel,r_channel,a_channel))
print(image)
cv2.imshow('img',image)
cv2.waitKey(0)
cv2.destroyAllWindows()
I can see in the terminal that the alpha channel is added, and its value changes as I change it in the program, but there is no effect on the opacity of the image itself!
I am new to OpenCV so I might be missing something simple. Thanks for help!
Alpha is a channel that is used to control the opacity of an image. An alpha channel typically doesn't do anything unless you perform an action on it. It doesn't make an image transparent on its own.
Alpha is usually used to either remove unimportant areas of an image or to combine one image with another image. In the first case the image is usually simply multiplied by its alpha. This is sometimes referred to as premultiplying. In this case the dark areas of the alpha channel darken the RGB and the bright areas leave the RGB untouched.
R = R*A
G = G*A
B = B*A
Here is a version of your code that might do what you want (note: I converted to 32-bit floats because it's easier to use alpha channels when they range from 0 to 1):
import numpy as np
import cv2
i = cv2.imread('image.jpg')
img = np.array(i, dtype=np.float32)
img /= 255.0
cv2.imshow('img',img)
cv2.waitKey(0)
#pre-multiplication
a_channel = np.ones(img.shape, dtype=np.float32)/2.0
image = img*a_channel
cv2.imshow('img',image)
cv2.waitKey(0)
cv2.destroyAllWindows()
The second case is used when trying to overlay an image over another image. This is a compositing operation that is often referred to as an "over" merge or a "blend" merge. In this case there is a foreground image "A" and a background image "B" and an alpha channel which could be included in the RGB images or on its own. In this case you can place A over B using:
output = (A * alpha) + (B * (1-alpha))
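A minimal Numpy sketch of that "over" merge (the file names are hypothetical; a scalar alpha is used for simplicity, but it could equally be a per-pixel mask on 0..1):
import numpy as np
import cv2
# Hypothetical inputs; both images assumed to be the same size
A = cv2.imread('foreground.jpg').astype(np.float32) / 255
B = cv2.imread('background.jpg').astype(np.float32) / 255
alpha = 0.5
out = A * alpha + B * (1 - alpha)
cv2.imwrite('composite.png', (out * 255).astype(np.uint8))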
Actually, the answer is simple. OpenCV's imshow() function ignores the alpha channel.
If you want to see the effect of your alpha channel, save your image in PNG format (because that supports alpha channel) and display in a different viewer.
I also wrote a decorator/enhancement for imshow() here that helps visualise transparent images.