I want to select the green channel of an image and perform an intensity conversion. I have already selected the green channel of the image; now I would like to know how to do the intensity conversion. I am currently working in Python.
By selecting the green channel, you're technically already doing an intensity conversion. The result is a grayscale image that denotes how much green contribution there is at each pixel of the image.
However, @MarkSetchell is correct that the canonical approach to convert from colour images to intensity is a weighted combination of the colour channels. Some people average all of them, others weight the green channel more heavily because we perceive that colour more clearly, but the SMPTE Rec. 601 standard is amongst the most popular: Y' = 0.299 R' + 0.587 G' + 0.114 B'.
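For instance, here is a minimal sketch of applying those Rec. 601 weights manually with NumPy; the filename is a placeholder, and note that OpenCV loads images in BGR order:
import numpy as np
import cv2
im = cv2.imread('image.png')                     # hypothetical filename; OpenCV loads BGR
b, g, r = im[:, :, 0], im[:, :, 1], im[:, :, 2]  # split channels by slicing
# Rec. 601 luma: Y' = 0.299 R' + 0.587 G' + 0.114 B'
luma = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)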
Take a look at these informative links for more details on the conversion:
https://en.wikipedia.org/wiki/Luma_(video)
http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale/
https://en.wikipedia.org/wiki/Grayscale
However, since you are using OpenCV, you can simply call cv2.cvtColor with the correct flag to convert an image from colour to grayscale:
import numpy as np
import cv2
im = cv2.imread('...') # Place filename here
im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
Alternatively, you can pass 0 (i.e. cv2.IMREAD_GRAYSCALE) as the second argument to cv2.imread to automatically convert the image to grayscale without needing to call cv2.cvtColor:
im = cv2.imread('...', 0)
You need to be more precise. The "green channel" probably means you have green luma, a correlate of green intensity. They are related via a "transfer function", e.g. as defined as part of sRGB:
https://en.wikipedia.org/wiki/SRGB
This will allow you to flip between luminous intensity of green and luma of green.
Equally likely, you are interested in luminance (CIE Y) or luma. Google for "Gamma FAQ" if that is the case.
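For illustration, a minimal sketch of inverting the sRGB transfer function to go from encoded values to linear intensity; the constants are those from the sRGB specification:
import numpy as np
def srgb_to_linear(c):
    # Invert the sRGB transfer function: encoded value in [0, 1] -> linear intensity
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
# e.g. for the green channel of an 8-bit image:
# g_linear = srgb_to_linear(im[:, :, 1] / 255.0)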
I want to get the pixel coordinates of the blue dots in an image. To get them, I first converted the image to grayscale and used the threshold function:
import numpy as np
import cv2
img = cv2.imread("dot.jpg")
img_g = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret1,th1 = cv2.threshold(img_g,127,255,cv2.THRESH_BINARY_INV)
What should I do next to get the pixel locations with intensity 255? Please tell me if there is a simpler method to do the same.
I don't think this is going to work as you would expect.
Usually, in order to get stable tracking of a shape with a specific color, you do that in the RGB/HSV/HSL planes; you could start with HSV, which is more robust in terms of lighting.
1. Convert to HSV using cv2.cvtColor().
2. Use cv2.inRange(blue_lower, blue_upper) to filter out all unwanted colors. Now you have a good-looking binary image with only the blue color in it (assuming you have a static background; otherwise, more filters should be added).
3. If you want to detect dots (which are usually more than one pixel), you could try cv2.findContours().
4. You can get the x, y pixels of the contours using many methods (depending on the shape of what you want to detect), such as cv2.boundingRect(); a sketch of the whole pipeline follows below.
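A rough sketch of that pipeline; the HSV bounds are guesses you will need to tune, and the snippet assumes OpenCV 4.x, where cv2.findContours returns two values:
import cv2
import numpy as np
img = cv2.imread('dot.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Rough blue range in OpenCV's HSV (H runs 0-179); these bounds are placeholders
blue_lower = np.array([100, 100, 50])
blue_upper = np.array([130, 255, 255])
mask = cv2.inRange(hsv, blue_lower, blue_upper)
# OpenCV 4.x returns (contours, hierarchy)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    print(x + w // 2, y + h // 2)  # approximate center of each dot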
I am trying to run a linear filter on an image with RGB colors. One way I found to do that is to split the image into its color layers, filter each, and then merge them, i.e.:
b, g, r = cv2.split(img)
b = cv2.Sobel(b, cv2.CV_64F, 1, 0)
g = cv2.Sobel(g, cv2.CV_64F, 1, 0)
r = cv2.Sobel(r, cv2.CV_64F, 1, 0)
img = cv2.merge((b, g, r))
I want to find out how cv2.merge((b,g,r)) works and how the final image will be constructed.
cv2.merge takes single-channel images and combines them to make a multi-channel image. You've run the Sobel edge detection algorithm on each channel on its own, and merging stacks those results into a final output image. The combined result may not make sense visually at first, but what you would be displaying are the edge detection results of all three planes in a single image.
Ideally, hues of red will tell you the strength of the edge detection in the red channel, hues of green the strength of the detection in the green channel, and hues of blue the strength of the detection in the blue channel.
Sometimes this is a good debugging tool, as it lets you see all of the edge information for each channel in a single image. However, it will most likely be very hard to interpret for highly complicated images with lots of texture and activity.
What is more usually done in practice is either to run a colour edge detection algorithm directly, or to convert the image to grayscale and do the detection on that image instead.
As an example of the former, one can decompose the RGB image into HSV and use the colour information in this space to do a better edge detection. See this answer by Micka: OpenCV Edge/Border detection based on color.
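As a quick sketch of the grayscale route mentioned above (the filename is hypothetical):
import cv2
img = cv2.imread('image.png')  # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Horizontal Sobel on the single grayscale plane
edges = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)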
This is my understanding. In OpenCV, the function split() takes a packed image input (a multi-channel array) and splits it into several separate single-channel arrays.
Within an image, each pixel position holds its own small array of channel values (e.g. r, g and b), hence the term multi-channel. This setup allows any type of image, such as BGR, RGB, or HSV, to be split using the same function.
As an example (pretend these are separate examples, so no variables are being overwritten):
b,g,r = cv2.split(bgrImage)
r,g,b = cv2.split(rgbImage)
h,s,v = cv2.split(hsvImage)
Take the b, g, r arrays, for example. Each is a single-channel array containing one channel of the split image.
This means the image is being split out into three separate arrays:
rgbImage[0, 0] = [234, 28, 19]
r[0, 0] = 234
g[0, 0] = 28
b[0, 0] = 19
rgbImage[0, 41] = [119, 240, 45]
r[0, 41] = 119
g[0, 41] = 240
b[0, 41] = 45
Merge does the reverse by taking several single channel arrays and merging them together:
newRGBImage = cv2.merge((r,g,b))
The order in which the separated channels are passed in becomes important with this function.
Pseudo-code:
cv2.merge((r,g,b)) != cv2.merge((b,g,r))
As an aside: cv2.split() is an expensive function, and the use of NumPy indexing is much more efficient.
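A minimal sketch of the difference (the filename is hypothetical): cv2.split copies each channel into a new array, while NumPy slicing only creates views into the existing data:
import cv2
img = cv2.imread('image.png')  # hypothetical filename; BGR order
# cv2.split copies each channel into a new array
b, g, r = cv2.split(img)
# NumPy slicing returns views into the same data, with no copy
b_view = img[:, :, 0]
g_view = img[:, :, 1]
r_view = img[:, :, 2]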
For more information, check out the OpenCV Python tutorials.
I'm trying to get the red channel color space of an image; currently, the way I'm doing it, I get a grayscale image:
img = img[:,:,2]
But I want an image like this:
Above, the top image is the red channel color space image, and the bottom one is the original image. What exactly is being done to achieve this image?
I've also tried
img[:,:,0] = 0
img[:,:,1] = 0
But the result obtained is not as desired. Here's an article on red channel color space: https://en.wikipedia.org/wiki/RG_color_space
Actually, your expected output image is not the red channel color space of the original one. It's sort of a colormap that has been applied to the input image. The good news is that OpenCV comes with multiple built-in colormaps. The bad news is that your expected output can't be generated by OpenCV's built-in colormaps. But don't give up: you can map the colors using a custom lookup table with the cv2.LUT() function.
For a better demonstration, here are some examples with your image:
img = cv2.imread('origin.png')
im_color = cv2.applyColorMap(img, cv2.COLORMAP_HSV)
cv2.imshow('mapped_image', im_color)
# cv2.imwrite('result.png', im_color)
cv2.waitKey(0)
Here are all of OpenCV's built-in colormaps:
print([sub for sub in dir(cv2) if sub.startswith('COLORMAP_')])
['COLORMAP_AUTUMN', 'COLORMAP_BONE', 'COLORMAP_COOL', 'COLORMAP_HOT', 'COLORMAP_HSV', 'COLORMAP_JET', 'COLORMAP_OCEAN', 'COLORMAP_PINK', 'COLORMAP_RAINBOW', 'COLORMAP_SPRING', 'COLORMAP_SUMMER', 'COLORMAP_WINTER']
An example of mapping the colors using a custom lookup table with cv2.LUT(); here the table inverts each intensity, as one concrete choice of mapping:
table = np.array([255 - i for i in np.arange(256)]).astype("uint8")
inverted = cv2.LUT(image, table)
Your second suggestion throws away the blue and green channels and gives you a "red channel image". If you want an RG (red-green) color space image, throw away only the blue channel:
img[:,:,0] = 0
But the example image you posted doesn't illustrate that, as the resulting image has information left in all three channels. My guess is that it was produced with a "colormap", where different colors represent different values of red in the original image. Such a mapping can look any way you like, so it's not easy to reconstruct it from your example images.
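If you want to experiment with that guess, here is a minimal sketch that false-colors just the red channel with one of OpenCV's built-in colormaps; the filename and the choice of COLORMAP_JET are assumptions, not the mapping actually used for the posted image:
import cv2
img = cv2.imread('input.png')  # hypothetical filename
red = img[:, :, 2]             # red channel (OpenCV stores BGR)
# False-color the red intensities; COLORMAP_JET is just one guess at the mapping
vis = cv2.applyColorMap(red, cv2.COLORMAP_JET)
cv2.imwrite('red_colormap.png', vis)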
I am analyzing an image to find brown objects in it. I am thresholding the image and taking the darkest parts as brown cells. However, depending on the quality of the image, objects sometimes cannot be identified. Is there any solution for that in OpenCV Python, such as pre-processing the grayscale image and defining what brown means for that particular image?
The code that I am using to find brown dots is as follows:
import cv2
import pymorph
from scipy import ndimage

def countBrownDots(imageFile):
    im = cv2.imread(imageFile)
    # changing color space
    gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    gray = increaseBrighntness(gray)  # user-defined helper, not shown here
    l1, thresh = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY_INV)
    thresh = ndimage.gaussian_filter(thresh, 16)
    l2, thresh = cv2.threshold(thresh, 70, 255, cv2.THRESH_BINARY)
    thresh = ndimage.gaussian_filter(thresh, 16)
    cv2.imshow("thresh22", thresh)
    rmax = pymorph.regmax(thresh)
    nim = pymorph.overlay(thresh, rmax)
    seeds, nr_nuclei = ndimage.label(rmax)
    cv2.imshow("original", im)
    cv2.imshow("browns", nim)
Here is an input image example:
Have a look at the image in HSV color space; here are the 3 planes stacked side by side:
Although people have suggested segmenting on the basis of hue, there is actually more discriminative information in the saturation and value planes. For this particular image you would probably get a better result with the gray scale (i.e. value plane) than with the hue. However that is no reason to discard the color information.
As proof of concept (using Gimp) for color segmentation, I just randomly picked a brown spot and changed all colors with a color distance of less than 60 from that spot to green to get this:
If you play with the parameters a bit you will probably get what you want. Then write the code.
I tried pre-processing mean shift filtering to posterize the image, but that didn't really help.
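For reference, here is a minimal sketch of that color-distance experiment in Python rather than Gimp; the filename and the seed color are made-up placeholders, and the distance threshold of 60 is the one from the answer above:
import cv2
import numpy as np
img = cv2.imread('cells.png')                     # hypothetical filename
seed = np.array([60, 80, 120], dtype=np.float32)  # a hand-picked brown pixel (BGR); placeholder value
# Euclidean color distance from the seed color, mirroring the Gimp experiment
dist = np.linalg.norm(img.astype(np.float32) - seed, axis=2)
mask = dist < 60
result = img.copy()
result[mask] = (0, 255, 0)  # paint matching pixels green
cv2.imwrite('segmented.png', result)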
I am doing a project in Python on hand gesture recognition. Using the Lab color space should help improve the recognition accuracy, because skin color mainly comprises a ratio of red and yellow, and in the Lαβ color space the α component represents a pixel's position between red and green while the β component represents its position between yellow and blue, making it less vulnerable to noise.
But the problem is that when I tried to convert the Lab image into binary using the threshold function provided in OpenCV, it returned some errors, because the input of the threshold function should be a grayscale image. Does anybody know how to solve this problem?
lab = cv2.cvtColor(img,cv2.COLOR_BGR2LAB)
blur = cv2.GaussianBlur(gray,(5,5),0)
ret,thresh1 = cv2.threshold(blur,70,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
The error returned is "Assertion Failed".
Does anybody know how to threshold a Lab image?
The simplest thresholding methods replace each pixel in an image with a black pixel if its intensity is less than some fixed constant T, or with a white pixel if its intensity is greater than that constant. Hence, thresholding is normally done on grayscale images.
In OpenCV, the first argument of cv2.threshold is the source image, which should be a grayscale image. The second argument is the threshold value, which is used to classify the pixel values.
But Wikipedia mentions that we can threshold color images by designating a separate threshold for each of the RGB components of the image and then combining them with an AND operation; a sketch of that idea follows after the documentation quote below.
The OpenCV threshold documentation describes the expected input as an "input array (single-channel, 8-bit or 32-bit floating point)".
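A minimal sketch of that per-channel AND approach; the filename and the threshold of 127 are arbitrary placeholders:
import cv2
img = cv2.imread('input.png')  # hypothetical filename
b, g, r = cv2.split(img)
# Threshold each channel separately (127 is an arbitrary example value)
_, tb = cv2.threshold(b, 127, 255, cv2.THRESH_BINARY)
_, tg = cv2.threshold(g, 127, 255, cv2.THRESH_BINARY)
_, tr = cv2.threshold(r, 127, 255, cv2.THRESH_BINARY)
# Combine the per-channel masks with a pixel-wise AND
combined = cv2.bitwise_and(tb, cv2.bitwise_and(tg, tr))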
You can't threshold a color image. And where did gray come from? You never use the Lab-converted image.
The input image should be a single-channel 8-bit or 32-bit float, like M4rtini said. However, RGB, Lab, and HSV images are all built up from three 8-bit channels. If you split the channels
L, a, b = cv2.split(lab)
the result will be three single-channel images. These you can feed into the function:
ret,thresh_L = cv2.threshold(L,70,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
ret,thresh_a = cv2.threshold(a,70,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
ret,thresh_b = cv2.threshold(b,70,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
Basically, you can input any 2D NumPy array into the threshold function as long as it's 8-bit or 32-bit float. For 8-bit images, OpenCV scales the Lab color space into the 0-255 range.
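Putting this together for the question's use case, here is a minimal sketch that thresholds the a (red-green) channel with Otsu; the filename is a placeholder, and note that with THRESH_OTSU the threshold value passed in is ignored and computed automatically:
import cv2
img = cv2.imread('hand.png')  # hypothetical filename
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
L, a, b = cv2.split(lab)
# Blur and Otsu-threshold the a (red-green) channel, which the question singles out for skin
blur = cv2.GaussianBlur(a, (5, 5), 0)
ret, thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)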