How to do color balance with OpenCV. I have an image with a white paper in it; how can I balance the whole image's color based on that white paper? Is there an existing Python implementation of this?
I've tried this method:
Get the mean RGB value of the "white" background. Then compute the ratios 255/r, 255/g, 255/b using the r, g, b values from that "white" color. Then multiply the red, green and blue channels by those ratios.
But there is a problem I ran into:
I have to imwrite the balanced image and read it back before imshow displays the right result. If I imshow it directly (after converting the datatype to np.uint8), the displayed image looks strange and wrong.
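A minimal sketch of the channel-scaling approach, assuming the white paper covers a known rectangle (the filename and coordinates below are hypothetical). The strange result from a direct imshow is most likely because the multiplied values exceed 255, and casting them straight to np.uint8 wraps them around; clipping before the cast avoids that.

import cv2
import numpy as np

img = cv2.imread('photo.jpg')                        # hypothetical filename
y0, y1, x0, x1 = 100, 200, 100, 200                  # hypothetical rectangle covering the white paper
patch = img[y0:y1, x0:x1]

# Mean B, G, R of the "white" patch and the per-channel ratios 255/mean
b, g, r = patch.reshape(-1, 3).mean(axis=0)
scale = np.array([255.0 / b, 255.0 / g, 255.0 / r])

# Scale the channels in float, then clip to 0-255 BEFORE casting to uint8
balanced = np.clip(img.astype(np.float64) * scale, 0, 255).astype(np.uint8)

cv2.imshow('balanced', balanced)
cv2.waitKey(0)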
Related
I'm trying to replace different colors in an image using OpenCV.
Image is as below
I'm trying to replace the border color and the main object color (a shade of yellow) with some other random colors, say orange and red. First I tried to change the border color with the code below:
import cv2 as cv
import numpy as np

image = cv.imread(r'image.png')
hsv = cv.cvtColor(image, cv.COLOR_BGR2HSV)
# HSV range intended to capture the bright border pixels
yellow_lo = np.array([0, 0, 240])
yellow_hi = np.array([179, 255, 255])
mask = cv.inRange(hsv, yellow_lo, yellow_hi)
I get the mask image shown below:
As you can see, there is a gap between the lines of the border color. When I replace the color using this mask, I can still see the original color in the image, as shown below; the line is not continuously red.
image[mask>0]=(0,0,255)
This is happening because the pixel intensity of the border varies and is not constant, as shown in the zoomed image below.
How can I solve this and replace the color of the entire border? I tried to erode and dilate the mask image to close the broken line, but it didn't fix the issue. Any help or suggestion would be highly appreciated.
To replace the border you need a prominent mask, one that strongly represents the border.
Analyzing the third channel in HSV color space (value) gives the following:
Applying an appropriate threshold on this channel gives a well defined mask, which can later be used to replace the color:
# Threshold the value channel to get a solid mask of the bright border
mask = cv2.threshold(hsv[:, :, 2], 150, 255, cv2.THRESH_BINARY)[1]
image[mask == 255] = (0, 0, 255)
I'm using the following code to color-screen a photo. We are trying to locate the orange circle within the image. Is there a way to eliminate some of the background noise shown in the second photo? Tweaking the color range helps a little, but it's never enough to fully eliminate the background noise. I've also considered trying to locate circle shapes within the image, but I'm unsure how to do that. Any help would be amazing!
import cv2
import numpy as np

# frame is the BGR image/frame obtained elsewhere (e.g. from a video capture)
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
Lower_bound = np.array([0, 80, 165])   # COLORS NEEDED
Upper_bound = np.array([75, 255, 255])
mask = cv2.inRange(hsv, Lower_bound, Upper_bound)
Option 1 (HSV color space):
If you want to continue using HSV color space robustly, you must check out this post. There you can control the variations across the three channels using trackbars.
Option 2 (LAB color space):
Here I will be using the LAB color space, where dominant colors can be segmented pretty easily. LAB stores an image across three channels: one brightness channel and two color channels:
L-channel: amount of lightness in the image
A-channel: amount of red/green in the image
B-channel: amount of blue/yellow in the image
Since orange is a close neighbor of color red, using the A-channel can help segment it.
Code:
import cv2

img = cv2.imread('image_path')

# Convert to LAB color space
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
cv2.imshow('A-channel', lab[:, :, 1])
The above image is the A-channel, where the object of interest is pretty visible.
# Apply threshold
th = cv2.threshold(lab[:,:,1],150,255,cv2.THRESH_BINARY)[1]
Why does this work?
Looking at the LAB color plot, red sits at one end of the A-axis (a+) while green sits at the opposite end of the same axis (a-). This means that higher values in this channel represent colors close to red, while lower values represent colors close to green.
The same can be done along the B-channel when trying to segment a yellow/blue color in the image.
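For instance, a yellow region could be isolated with a similar threshold on the B-channel (index 2); the 150 cutoff below is only an illustrative value, not taken from the original post:

# Illustrative threshold on the B-channel; yellow sits at the high end of the B-axis
th_b = cv2.threshold(lab[:, :, 2], 150, 255, cv2.THRESH_BINARY)[1]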
Post-processing:
From here onwards for every frame:
Identify the largest contour in the binary threshold image th.
Mask it over the frame using cv2.bitwise_and() (a sketch follows below).
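A rough sketch of these two steps, assuming th is the binary mask from above, frame is the current BGR frame, and the OpenCV 4 return signature of findContours:

import cv2
import numpy as np

contours, _ = cv2.findContours(th, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    # Keep only the largest contour by area
    largest = max(contours, key=cv2.contourArea)
    contour_mask = np.zeros(th.shape, dtype=np.uint8)
    cv2.drawContours(contour_mask, [largest], -1, 255, thickness=cv2.FILLED)
    # Keep only the pixels of the frame that fall inside that contour
    result = cv2.bitwise_and(frame, frame, mask=contour_mask)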
Note: the LAB color space can help segment dominant colors easily, but you need to test the threshold values before using this approach for other colors.
I have a piece of code that finds the most dominant colors in an image and returns them as RGB values. I tried to sort them so I could create a gradient image, but they don't sort properly.
from PIL import Image
import colorsys

img = Image.open(r'C:\Users\Dora\Projects\Python\Album-Gradient\{}'.format(filename))
palette = dominant_colors(img)  # getting dominant RGB values
palette.sort(key=lambda rgb: colorsys.rgb_to_hsv(*rgb))
print(palette)  # printing sorted RGB values as a list
#then I convert them into a gradient image
Here's the gradient image I get.
As you can see, there's dark yellow before black. The colors don't sort uniformly and there is noise in the gradient.
How can I sort the RGB values so it goes from black to the colors of the rainbow to white, or white to the colors of the rainbow to black?
(Something like black=>grey=>dark colors=>light colors=>white)
EDIT1: Here's the link to the full code: GitHub Repo
Also, the gradient always consists of 5 colors if that helps.
Sorting them according to their "lightness" value after converting them to HSL solved the problem.
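A minimal sketch of that sort, assuming palette holds (r, g, b) tuples in the 0-255 range. colorsys expects values in 0-1, and rgb_to_hls returns (hue, lightness, saturation), so index 1 is the lightness used as the sort key:

import colorsys

# Sort dark colors first and light colors last by their HSL lightness
palette.sort(key=lambda rgb: colorsys.rgb_to_hls(*(c / 255 for c in rgb))[1])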
I am trying to create a segmentation mask for each red color boundary. The input image has 4 polygons with red color boundaries. I want to mark each polygon segment with a different color, as shown in the output. Please help me with this.
Input Image
Output Image
Take the red channel only and consider it as a single-channel grayscale/binary image.
Invert it: the areas become white and the borders become black.
Run connected components labeling on the white areas; connectedComponents is the API.
To paint a picture where each area has a color, use numpy operations (mask indexing, assignment) to construct that picture from the labels map returned by connectedComponents.
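A rough sketch of those steps, assuming the red borders are bright in the red channel and the rest of the image is dark; the filename and the 127 threshold are hypothetical:

import cv2
import numpy as np

img = cv2.imread('polygons.png')   # hypothetical filename
red = img[:, :, 2]                 # red channel (BGR order)

# Threshold and invert: areas become white, red borders become black
_, border = cv2.threshold(red, 127, 255, cv2.THRESH_BINARY)
areas = cv2.bitwise_not(border)

# Label each white area
num_labels, labels = cv2.connectedComponents(areas)

# Paint every label (0 is the border/background) with its own color via numpy indexing
colors = np.random.randint(0, 255, size=(num_labels, 3), dtype=np.uint8)
out = np.zeros_like(img)
for lbl in range(1, num_labels):
    out[labels == lbl] = colors[lbl]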
Why can't you simply scan through the image and do the following:
execute flood fill with a unique (non-red) color every time you find a black pixel
replace red border with the same color if needed
repeat
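A rough sketch of that idea, assuming the polygon interiors are near-black in a BGR image; the filename, the fill colors and the darkness test are all hypothetical:

import cv2
import numpy as np

img = cv2.imread('polygons.png')   # hypothetical filename
h, w = img.shape[:2]

fill_colors = [(0, 255, 0), (255, 0, 0), (0, 255, 255), (255, 0, 255)]  # arbitrary non-red colors
filled = 0

for y in range(h):
    for x in range(w):
        # Treat a pixel as "black" if all three channels are very low
        if (img[y, x] < 10).all():
            cv2.floodFill(img, None, (x, y), fill_colors[filled % len(fill_colors)])
            filled += 1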
I am doing a project in Python for hand gesture recognition. Using the LAB color space should help improve recognition accuracy because, as we know, skin color mainly comprises a ratio of red and yellow, and in the Lαβ color space the α component represents a pixel's position between red and green while the β component represents its position between yellow and blue, which makes it less vulnerable to noise.
But the problem is that when I tried to convert the LAB image to binary using the threshold function provided in OpenCV, it returned an error, because the input of the threshold function should be a grayscale image. Does anybody know how to solve this problem?
lab = cv2.cvtColor(img,cv2.COLOR_BGR2LAB)
blur = cv2.GaussianBlur(gray,(5,5),0)
ret,thresh1 = cv2.threshold(blur,70,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
The error returned is Assertion Failed.
Anybody know how to threshold an LAB image?
The simplest thresholding methods replace each pixel in an image with a black pixel if the image intensity is less than some fixed constant T, or a white pixel if the image intensity is greater than that constant. Hence, for thresholding it is recommended to use grayscale images.
In OpenCV, the first argument of cv2.threshold is the source image, which should be a grayscale image, and the second argument is the threshold value used to classify the pixel values.
But in Wikipedia, there is a reference that we can threshold color images by designating a separate threshold for each of the RGB components of the image and then combine them with an AND operation.
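A hedged sketch of that per-channel idea; the filename and the 127 threshold are placeholders, not taken from the question:

import cv2

img = cv2.imread('image.png')   # placeholder filename
b, g, r = cv2.split(img)

# Threshold each channel separately...
_, th_b = cv2.threshold(b, 127, 255, cv2.THRESH_BINARY)
_, th_g = cv2.threshold(g, 127, 255, cv2.THRESH_BINARY)
_, th_r = cv2.threshold(r, 127, 255, cv2.THRESH_BINARY)

# ...then combine the three binary masks with an AND operation
combined = cv2.bitwise_and(th_b, cv2.bitwise_and(th_g, th_r))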
OpenCV threshold documentation:
input array (single-channel, 8-bit or 32-bit floating point).
You can't threshold a color image. And where did grey come from? You never use the lab converted image.
The input image should be a single-channel 8-bit or 32-bit float, like M4rtini said. However, RGB, Lab and HSV images are all built up from three 8-bit channels. If you split the channels
L, a, b = cv2.split(lab)
the result will be 3 single channel images. These you can input into the function
ret,thresh_L = cv2.threshold(L,70,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
ret,thresh_a = cv2.threshold(a,70,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
ret,thresh_b = cv2.threshold(b,70,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
Basically, you can input any 2D numpy array into the threshold function as long as it is 8-bit or 32-bit float. OpenCV scales the Lab color space to the 0-255 range.