I want to write a program that classifies people's occupations by the color of their clothes in a video.
For example, if a person is wearing a white coat, he is a doctor; if he is wearing blue, he is a police officer.
I have tried cropping the image to its upper half for easier color recognition:
image = image[:image.shape[0] // 2]  # keep the upper half (integer division avoids a float index error in Python 3)
Then I convert the image to HSV, smooth it, and analyze it by color range:
import cv2
import numpy as np

# smooth while preserving edges
dst = cv2.bilateralFilter(image, 9, 75, 75)
# convert the image from BGR to HSV
hsv = cv2.cvtColor(dst, cv2.COLOR_BGR2HSV)
# count the pixels that fall inside the target color range
# (lower_bound / upper_bound: HSV bounds for the color of interest)
mask = cv2.inRange(hsv, lower_bound, upper_bound)
ratio = cv2.countNonZero(mask) / float(mask.size)
However, I found that the ratio is very small. I think this is because of background noise.
How can I eliminate the background noise? Is there a more suitable method to classify the occupation?
Related
I'm using the following code to color-screen a photo. We are trying to locate the orange circle within the image. Is there a way to eliminate some of the background noise shown in the second photo? Tweaking the color range may help, but it is never enough to fully eliminate the background noise. I've also considered trying to locate circle shapes within the image, but I am unsure how to do that. Any help would be amazing!
import cv2
import numpy as np

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # frame: the current image / video frame
# target color range for the orange circle
lower_bound = np.array([0, 80, 165])
upper_bound = np.array([75, 255, 255])
mask = cv2.inRange(hsv, lower_bound, upper_bound)
Option 1 (HSV color space):
If you want to continue using the HSV color space robustly, check out this post. There you can control the variations across the three channels using trackbars.
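For illustration, here is a minimal sketch of that trackbar idea (the window and trackbar names are placeholders, and the starting values simply reuse the bounds from the question):

import cv2
import numpy as np

def nothing(_):
    pass

frame = cv2.imread('image_path')
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
cv2.namedWindow('mask')
for name, start, maxval in [('H lo', 0, 179), ('S lo', 80, 255), ('V lo', 165, 255),
                            ('H hi', 75, 179), ('S hi', 255, 255), ('V hi', 255, 255)]:
    cv2.createTrackbar(name, 'mask', start, maxval, nothing)
while True:
    lo = np.array([cv2.getTrackbarPos(n, 'mask') for n in ('H lo', 'S lo', 'V lo')])
    hi = np.array([cv2.getTrackbarPos(n, 'mask') for n in ('H hi', 'S hi', 'V hi')])
    cv2.imshow('mask', cv2.inRange(hsv, lo, hi))
    if cv2.waitKey(30) & 0xFF == 27:  # press Esc to quit
        break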
Option 2 (LAB color space):
Here I will be using the LAB color space, where dominant colors can be segmented fairly easily. LAB stores an image across three channels (one lightness channel and two color channels):
L-channel: amount of lightness in the image
A-channel: amount of red/green in the image
B-channel: amount of blue/yellow in the image
Since orange is a close neighbor of red, the A-channel can help segment it.
Code:
import cv2

img = cv2.imread('image_path')
# convert to LAB color space
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
cv2.imshow('A-channel', lab[:, :, 1])
The above image is the A-channel, where the object of interest is pretty visible.
# Apply threshold
th = cv2.threshold(lab[:, :, 1], 150, 255, cv2.THRESH_BINARY)[1]
Why does this work?
Looking at the LAB color plot, red sits at one end of the A-axis (a+) while green sits at the opposite end of the same axis (a-). This means higher values in this channel represent colors close to red, while lower values represent colors close to green.
The same can be done along the B-channel when trying to segment yellow or blue in the image.
Post-processing:
From here onwards, for every frame:
Identify the largest contour in the binary threshold image th.
Mask it over the frame using cv2.bitwise_and(), as sketched below.
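A minimal sketch of those two steps, assuming frame is the current BGR frame and th is the threshold image from above (note that cv2.findContours returns two values in OpenCV 4.x, three in 3.x):

import numpy as np

contours, _ = cv2.findContours(th, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    contour_mask = np.zeros(th.shape, dtype=np.uint8)
    cv2.drawContours(contour_mask, [largest], -1, 255, -1)  # fill the largest contour
    result = cv2.bitwise_and(frame, frame, mask=contour_mask)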
Note: the LAB color space can help segment dominant colors easily, but test it before using it for other colors.
I am trying to use OpenCV to take an RGB image and identify a boundary, or draw a line, where the sidewalk meets the grass. Typical methods like Canny edge detection followed by Hough lines are not very helpful here, since they are easily influenced by other potential lines in the environment.
Let's say I have the RGB image below (RGB sidewalk image). In the image there is a clear boundary where the sidewalk meets the grass. This boundary becomes even more prominent when you convert to HSV space and blur the image, as shown in the HSV sidewalk image. I believe color segmentation is the best bet; I am just not sure how to approach it. Using the following code:
hsv_img = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
green_low = np.array([25, 0, 50])
green_high = np.array([75, 255, 255])
curr_mask = cv2.inRange(hsv_img, green_low, green_high)
I was able to generate a mask that almost gets me to where I want as shown in this figure grass mask. I just need to use this mask to draw my line without getting mixed up with the other greens detected in the picture.
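One possible way to sift out the other greens (an assumption on my part, not from the original post) is to keep only the largest connected green region in curr_mask before extracting the boundary:

import cv2
import numpy as np

# keep only the largest green region so smaller green patches are ignored
contours, _ = cv2.findContours(curr_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    clean_mask = np.zeros_like(curr_mask)
    cv2.drawContours(clean_mask, [largest], -1, 255, -1)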
I have the following problem:
I want to extract only the color of a blue pen from scanned images that also contain grayscale and black printed areas on a white page background.
I'm okay with disregarding any kind of grayscale (not colored) pixel values and only keeping the blue parts; there won't be any dominant color other than blue in the images.
It sounds like a simple task, but the problem is that, through the scanning process, the entire image contains colored pixels, including blue ones, even in the grayscale or black parts. So I'm not sure how to isolate those parts and keep only the blue ones. Here is a close-up to show what I mean:
Here is what an image would look like for reference:
I would like the output to be a new image, containing only the parts drawn / written in blue pen, in this case the drawing of the hedgehog / eye.
So I've tried to isolate an HSV range for blue-ish colors in the image using this code:
import cv2 as cv
import numpy as np

img = cv.imread("./data/scan_611a720bcd70bafe7beb502d.jpg")
img_hsv = cv.cvtColor(img, cv.COLOR_BGR2HSV)
# accepted color range for blue pen
lower_blue = np.array([90, 35, 140])
upper_blue = np.array([150, 255, 255])
# preparing the mask to overlay
mask = cv.inRange(img_hsv, lower_blue, upper_blue)
inverted_mask = cv.bitwise_not(mask)
mask_blur = cv.GaussianBlur(inverted_mask, (5, 5), 0)
ret, mask_thresh = cv.threshold(mask_blur, 0, 255, cv.THRESH_BINARY + cv.THRESH_OTSU)
# the black region in the mask has the value 0, so masking the
# original image with it removes all non-blue regions
result = cv.bitwise_and(img, img, mask=mask)
cv.imshow("Result", mask_thresh)
k = cv.waitKey(0)
However, the result is this:
Many parts of the picture that are drawn in black, such as the cloud, are not removed since, as mentioned, they contain blue / colored pixels due to the scanning process.
Is there any method that would allow for a clean isolation of those blue parts of the image even with those artifacts present?
The solution would need to work for any image like this; the one given is just an example. As mentioned, the only color present would be the blue pen, apart from the grey/black areas.
Maybe try the opposite: search for the black parts first, then grow that black mask with some morphology and remove everything it covers before searching for the blue. The "main" color in the cloud is still black, so you can play around with this.
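A rough sketch of that idea (the gray threshold and kernel size are assumptions to tune; the blue range is reused from the question):

import cv2 as cv
import numpy as np

img = cv.imread("./data/scan_611a720bcd70bafe7beb502d.jpg")
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
black_mask = cv.inRange(gray, 0, 100)        # dark, black-ish pixels
kernel = np.ones((7, 7), np.uint8)
black_mask = cv.dilate(black_mask, kernel)   # grow over the color fringes
hsv = cv.cvtColor(img, cv.COLOR_BGR2HSV)
blue_mask = cv.inRange(hsv, np.array([90, 35, 140]), np.array([150, 255, 255]))
blue_mask[black_mask > 0] = 0                # drop blue fringes near black ink
result = cv.bitwise_and(img, img, mask=blue_mask)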
You should realign the color planes of your scan. Then you're at least rid of those color fringes. I'd recommend scanning a sheet of graph paper to calibrate.
This is done using OpenCV's findTransformECC.
Complete examples can be found here:
https://docs.opencv.org/master/dd/d93/samples_2cpp_2image_alignment_8cpp-example.html
https://learnopencv.com/image-alignment-ecc-in-opencv-c-python/
And here's specific code to align the color planes of the picture given in the question:
https://gist.github.com/crackwitz/b8867b46f320eae17f4b2684416c79ea
(all it does is split the color planes, call findTransformECC and warpPerspective, and merge the color planes)
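For reference, a minimal sketch of that approach, aligning the B and R planes to the G plane (assumes OpenCV 4.x; the homography motion model, ECC criteria, and filename are illustrative, see the linked gist for the actual code):

import cv2 as cv
import numpy as np

img = cv.imread("scan.jpg")
b, g, r = cv.split(img)
criteria = (cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 50, 1e-6)

def align_to(reference, moving):
    # estimate a homography mapping `moving` onto `reference`, then warp it
    warp = np.eye(3, 3, dtype=np.float32)
    _, warp = cv.findTransformECC(reference, moving, warp,
                                  cv.MOTION_HOMOGRAPHY, criteria, None, 5)
    h, w = reference.shape
    return cv.warpPerspective(moving, warp, (w, h),
                              flags=cv.INTER_LINEAR | cv.WARP_INVERSE_MAP)

aligned = cv.merge([align_to(g, b), g, align_to(g, r)])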
My goal is to draw the text bounding boxes for the following image. Since the two regions are colored differently, this should be easy: I just need to select the pixels that match certain color values to filter out the other text region and run a convex hull detection.
However, when I zoom into the image, I notice that the text regions have a zig-zag effect on the edges, so I'm not able to easily find the two color values (for the blue and green) from the image above.
Is there a way to remove the zig-zag effect to make sure each phrase is colored consistently? Or is there a way to determine the dominant color for each text region?
The anti-aliasing causes the color to become lighter (or darker if against a black background), so you can think of the color as being affected by light. In that case, we can use a light-invariant color space to extract the colors.
So first convert to HSV, since it is a light-invariant color space. Since the background can be either black or white, we will filter both out (if the background is always white and the text can be black, you would need to change the filtering to allow for that).
I took saturation less than 80, as that encompasses white, black, and gray, which are the only colors with low saturation. (Your image is not perfectly white; it's 238 instead of 255, maybe due to JPEG compression.)
Since we have found all the black, white, and gray, the rest of the image holds our main colors, so I took the inverse mask of the filter. Then, to make the colors uniform and unaffected by light, I set the saturation and value of those pixels to 255; that way the only difference between the colors is the hue. I also set the background pixels to 0 to make it easier to find contours, but that's not necessary.
After this you can use whatever method you want to separate the groups of colors. I just did a quick histogram of the hue values and got 3 peaks, but 2 were close together, so they can be bundled as 1. You could use peak finding to locate the peaks. There might be better methods of finding the color groups, but this is what I thought of quickly.
import cv2
import numpy as np

img = cv2.imread('text.png')      # placeholder path for the input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = hsv[:, :, 1] < 80          # low saturation: white, gray & black
hsv[mask] = 0                     # set bg pixels to 0
hsv[~mask, 1:] = 255              # max out fg saturation and value for uniformity
colors = hsv[~mask]
z = np.bincount(colors[:, 0])     # histogram of the hue values
print(z)
bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
cv2.imshow('bgr', bgr)
cv2.waitKey(0)
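As a possible follow-up to the peak-finding suggestion, a sketch using scipy.signal.find_peaks on the hue histogram z from above (the height and distance values are assumptions to tune):

from scipy.signal import find_peaks

peaks, _ = find_peaks(z, height=z.max() * 0.05, distance=5)
print("hue peaks:", peaks)  # roughly one peak per dominant text color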
I am analyzing an image to find brown objects in it. I am thresholding the image and taking the darkest parts as brown cells. However, depending on the quality of the image, objects sometimes cannot be identified. Is there any solution for that in OpenCV Python, such as pre-processing the grayscale image and defining what brown means for that particular image?
The code that I am using to find brown dots is as follows:
import cv2
import pymorph
from scipy import ndimage

def countBrownDots(imageFile):
    im = cv2.imread(imageFile)
    # changing color space
    gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    gray = increaseBrighntness(gray)  # user-defined helper (not shown)
    l1, thresh = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY_INV)
    thresh = ndimage.gaussian_filter(thresh, 16)
    l2, thresh = cv2.threshold(thresh, 70, 255, cv2.THRESH_BINARY)
    thresh = ndimage.gaussian_filter(thresh, 16)
    cv2.imshow("thresh22", thresh)
    rmax = pymorph.regmax(thresh)
    nim = pymorph.overlay(thresh, rmax)
    seeds, nr_nuclei = ndimage.label(rmax)
    cv2.imshow("original", im)
    cv2.imshow("browns", nim)
Here is an input image example:
Have a look at the image in HSV color space; here are the 3 planes stacked side by side:
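(For reference, a quick way to produce such a side-by-side view; the filename is a placeholder:)

import cv2
import numpy as np

im = cv2.imread("cells.jpg")
h, s, v = cv2.split(cv2.cvtColor(im, cv2.COLOR_BGR2HSV))
cv2.imshow("H | S | V", np.hstack([h, s, v]))
cv2.waitKey(0)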
Although people have suggested segmenting on the basis of hue, there is actually more discriminative information in the saturation and value planes. For this particular image you would probably get a better result with the grayscale (i.e. value) plane than with the hue. However, that is no reason to discard the color information.
As proof of concept (using Gimp) for color segmentation, I just randomly picked a brown spot and changed all colors with a color distance of less than 60 from that spot to green to get this:
If you play with the parameters a bit you will probably get what you want. Then write the code.
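Here is a small Python sketch of the same color-distance idea (the filename, the seed coordinates, and the threshold of 60 are assumptions to tune):

import cv2
import numpy as np

im = cv2.imread("cells.jpg").astype(np.float32)
seed = im[250, 300].copy()                  # a hand-picked brown pixel
dist = np.linalg.norm(im - seed, axis=2)    # per-pixel color distance to the seed
brown_mask = (dist < 60).astype(np.uint8) * 255
cv2.imshow("brown mask", brown_mask)
cv2.waitKey(0)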
I tried pre-processing with mean shift filtering to posterize the image, but that didn't really help.