HSV space for multicolor object - python

I would like to detect this gate below, ideally the entire gate. I have played around for hours with a trackbar script, but I am just not finding the right color space. I found other threads that just track yellow, and not even that is working. This is my code:
import cv2
import numpy as np

def track():
    cap = cv2.VideoCapture('../files/sub/gate_mission1.mp4')
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # current HSV range I have been trying (roughly yellow)
        lower = np.array([20, 93, 0])
        upper = np.array([45, 255, 255])
        mask = cv2.inRange(hsv, lower, upper)
        contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        cv2.drawContours(frame, contours, -1, (0, 255, 0), 2)
        cv2.imshow('img', frame)
        if cv2.waitKey(1) & 0xFF == 27:
            break
Maybe there is a way to just remove all the blue/greenish tones instead, I'm not sure? What are my options here?

This seems to work for me by thresholding in LAB colorspace in Python/OpenCV. According to Wikipedia at https://en.wikipedia.org/wiki/CIELAB_color_space "The a* axis is relative to the green–red opponent colors, with negative values toward green and positive values toward red." So we ought to get reasonably good separation for your green and reddish colors.
Input:
import cv2
import numpy as np
# load images
img = cv2.imread('gate.jpg')
# convert to LAB
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
# set LAB range that isolates the gate colors
lower = (130,105,100)
upper = (170,170,160)
# threshold on that range
result = cv2.inRange(lab, lower, upper)
# save output
cv2.imwrite('gate_thresh.jpg', result)
# display results
cv2.imshow('thresh',result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Threshold Image
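If you then want to localize the gate itself in the original image, a possible follow-up (my addition, not part of the answer above) is to take the largest contour of the threshold mask and draw its bounding box; 'gate_box.jpg' is just a hypothetical output name:
# Hypothetical follow-up: box the largest blob in the LAB threshold mask
contours, _ = cv2.findContours(result, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    gate = max(contours, key=cv2.contourArea)  # assume the gate is the largest blob
    x, y, w, h = cv2.boundingRect(gate)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite('gate_box.jpg', img)  # hypothetical output filename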

Related

python Cv2-Draw Contours and fill black is ignoring perimeter

I am trying to remove objects from the image using the following method; however, as can be seen from the outcome image, there is a residual fine colored line around each object. Although I have dilated my image, the lines are still present.
Is there a way to remove those lines?
import cv2
import numpy as np
img = cv2.imread('TRY.jpg')
image_edges = cv2.GaussianBlur(img,(3,3),1) #Helps in defining the edges
image_edges=cv2.Canny(image_edges,100,200)
image_edges=cv2.dilate(image_edges,(3,3),iterations=3)
contours_draw, hierarchy = cv2.findContours(image_edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
mask = np.zeros(img.shape, np.uint8)
mask.fill(255)
for c in contours_draw:
    cv2.drawContours(mask, [c], -1, (0, 0, 0), -1)
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
res = img.copy()
res[mask == 0] = 255
cv2.imshow('img', res)
cv2.waitKey(0)
Original Image
Outcome Image
The outlines are still there because the threshold is too high.
Try this:
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(imgray, 20, 255, 0)
contours_draw, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
The values you pass to cv2.threshold determine how inclusive or exclusive findContours will be: use a low threshold value so the faint residual outlines are included in the detected regions (and therefore removed), or a higher value to be more selective.

Detect position of a round object in video opencv

I have a video with a moving red ball in the middle, and I want the program to return the position of that object throughout the video. I've read many examples online, but I am not sure what the best way to do it in OpenCV is.
import cv2
import numpy as np

coords = []
# Open the video
cap = cv2.VideoCapture('video-4.mp4')
# Initialize frame counter
while(1):
    # Take each frame
    _, frame = cap.read()
    # Convert BGR to HSV
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # define range of blue color in HSV
    lower_red = np.array([100, 150, 0])
    upper_red = np.array([140, 255, 255])
    imgThreshHigh = cv2.inRange(hsv, lower_red, upper_red)
    thresh = imgThreshHigh.copy()
    contours, _ = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:
        area = cv2.contourArea(cnt)
        best_cnt = cnt
    M = cv2.moments(best_cnt)
    cx, cy = int(M['m10']/M['m00']), int(M['m01']/M['m00'])
    coord = cx, cy
    print(coord)
    cv2.imshow('frame', frame)
    cv2.imshow('Object', thresh)
    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break
cv2.destroyAllWindows()
I am not sure why, but it only processes a few frames of the videos, not all.
It is not clear to me how the best_cnt contour is selected. Right now it is just selecting the last one. I think it was intended to filter by size to ensure the small red circle was selected, but it is not doing that. Maybe the dropped frames happen because the last contour in the list is not the red circle.
I would try to filter by size and keep only contours smaller than a given maximum area.
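A minimal sketch of that size filter, using the contours variable from your loop; the 50 and 2000 bounds are placeholder values you would tune for your video, and the m00 guard avoids the division by zero your code can hit on degenerate contours:
# Sketch: keep only contours in a plausible size range, then take the largest of those
candidates = [c for c in contours if 50 < cv2.contourArea(c) < 2000]  # tune these bounds
if candidates:
    best_cnt = max(candidates, key=cv2.contourArea)
    M = cv2.moments(best_cnt)
    if M['m00'] != 0:  # guard against zero-area contours
        cx, cy = int(M['m10'] / M['m00']), int(M['m01'] / M['m00'])
        print((cx, cy))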
Also, it looks like the red color filter range is off. I could only detect the ball changing the red filter values to:
lower_red = np.array([170, 0, 190])
upper_red = np.array([179, 255, 255])
To do this, I calculated the min and max values in the small area around the red circle:
import matplotlib.pyplot as plt

plt.imshow(hsv[255:275, 170:190, :])  # visually check the sampled patch around the ball
np.min(hsv[255:275, 170:190, :], axis=(0, 1))  # per-channel minimum over the patch
np.max(hsv[255:275, 170:190, :], axis=(0, 1))  # per-channel maximum over the patch
I then chose the lower and upper bounds to give some wiggle room for the red to vary across frames. This will need fine-tuning over all the frames in your video to make sure no shade of red falls outside those limits.

How to blur red color in image with python opencv so that its not clearly visible?

I want to blur the red color in an image ("1.png" is attached) so that it's not clearly visible. I tried the code below, where I can change the red color to black, but how can I blur it instead? Please help.
import cv2
import numpy as np
frame = cv2.imread("1.png")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# define range of red color in HSV
lower = np.array([0,50,50])
upper = np.array([10,255,255])
# define range of blue color in HSV
# lower = np.array([38, 86, 0])
# upper = np.array([121, 255, 255])
# define range of pink color in HSV
# http://www.workwithcolor.com/pink-color-hue-range-01.htm
# lower = np.array([158, 127, 0])
# upper = np.array([179, 255, 255])
# Threshold the HSV image to get only red colors
mask = cv2.inRange(hsv, lower, upper)
color_only = cv2.bitwise_and(frame, frame, mask = mask)
# convert mask to 3-channel image to perform subtract
mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
res = cv2.subtract(frame, mask) #negative values become 0 -> black
cv2.imshow("frame", frame)
# cv2.imshow("mask", mask)
# cv2.imshow("color_only", color_only)
cv2.imshow("res", res)
cv2.waitKey()
cv2.destroyAllWindows()
1.png
You need to blur the segmented image and then use alpha blending to composite the blurred ROI with the background image. This code takes you through all the steps:
Read image and segment the color of interest:
import cv2
import numpy as np
frame = cv2.imread("/home/stephen/Desktop/1.png")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# define range of red color in HSV
lower = np.array([0,50,50])
upper = np.array([10,255,255])
mask = cv2.inRange(hsv, lower, upper)
color_only = cv2.bitwise_and(frame, frame, mask = mask)
### THE BACKGROUND MUST BE MADE WHITE, NOT BLACK ###
color_only[np.where((color_only==[0,0,0]).all(axis=2))] = [255,255,255]
cv2.imshow("color_only", color_only)
Next, blur the segmented image. Note that I am using a 13x13 kernel to blur the image:
blur = cv2.blur(color_only, (13,13))
cv2.imshow('blur', blur)
Next, blur the mask of the segmented image. We are going to use this to combine the images later. It's easy to combine images using a bitwise function, but that approach will not work here because the blurred image no longer occupies the same space as the segmented image:
maskForAlphaBlending = blur
## BLACK OUT THE WHITE BACKGROUND OF THE ALPHA MASK
maskForAlphaBlending[np.where((maskForAlphaBlending==[255,255,255]).all(axis=2))] = [0,0,0]
maskForAlphaBlending = cv2.cvtColor(maskForAlphaBlending, cv2.COLOR_BGR2GRAY)
cv2.imshow('maskForAlphaBlending', maskForAlphaBlending)
Finally alpha blending can be used to composite the blurred segmented image and the green and white background image:
foreground = blur
background = frame
alpha = cv2.cvtColor(maskForAlphaBlending, cv2.COLOR_GRAY2BGR)
foreground = foreground.astype(float)
background = background.astype(float)
alpha = alpha.astype(float)/255  # normalize the 0-255 mask to the 0-1 range for blending
foreground = cv2.multiply(alpha, foreground)
background = cv2.multiply(1.0 - alpha, background)
outImage = cv2.add(foreground, background)
cv2.imshow("outImg", outImage/255)
Note how the red lines are blurred, but the green and white border is not:

opencv python3.6 OCR

I do not know OpenCV well, but I would like help with area extraction. If the white area is reduced with an HSV mask, the text gets broken and can no longer be recognised. Where there is a white area, it is converted with THRESH_BINARY_INV, as shown above. I set out to solve this problem, but I do not know if I am interpreting it properly, and I would appreciate your help. My code is attached below.
import cv2
import numpy as np
def tracking():
    frame = cv2.imread('test4.png')
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 0, 0])
    upper = np.array([255, 255, 240])
    mask = cv2.inRange(hsv, lower, upper)
    res2 = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow('asdasd2', res2)
    _, edge2 = cv2.threshold(res2, 100, 255, cv2.THRESH_BINARY_INV)
    cv2.imshow('asdasd2', edge2)
    cv2.imshow('original', frame)
    cv2.imshow('finish', res2)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

tracking()
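For what it's worth, a minimal alternative sketch (my addition, not an accepted answer): threshold the grayscale image directly with THRESH_BINARY_INV plus Otsu, under the assumption that the text in test4.png is darker than its background, so the strokes come out as a clean white-on-black mask for OCR:
import cv2

frame = cv2.imread('test4.png')
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Otsu picks the threshold automatically; BINARY_INV makes dark text white on black
_, text_mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
cv2.imshow('text_mask', text_mask)
cv2.waitKey(0)
cv2.destroyAllWindows()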

Detecting Red and Blue Squares (beanbags) using Python and CV2

I am new to cv2 and I am looking for some high-level guidance for an application. I am working on a program that can detect red and blue beanbags that are in the frame of the camera. I played around with the example code offered by OpenCV to detect blue colors and modified it slightly to detect red as well.
import cv2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

cap = cv2.VideoCapture(0)
while(1):
    # Take each frame
    _, frame = cap.read()
    # Convert BGR to HSV
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # define range of blue color in HSV
    lower_blue = np.array([110, 50, 50])
    upper_blue = np.array([130, 255, 255])
    lower_red = np.array([-20, 100, 100])
    upper_red = np.array([13, 255, 255])
    # Threshold the HSV image to get only blue colors
    mask = cv2.inRange(hsv, lower_red, upper_red)
    # Bitwise-AND mask and original image
    res = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow('frame', frame)
    cv2.imshow('mask', mask)
    cv2.imshow('res', res)
    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break
cv2.destroyAllWindows()
This is (slightly modified) code copied and pasted from the OpenCV site. I am looking for the correct way to analyze the res numpy array, of dimensions (460, 640, 3), in order to:
detect that I have red/blue object(s) on my screen, and
do something with this information, such as print("1 red and 2 blue squares detected").
Image link:
Input, res and mask image of blue beanbag
You will need to obtain two masks, one for red and the other for blue as:
mask_red = cv2.inRange(hsv, lower_red, upper_red)
mask_blue = cv2.inRange(hsv, lower_blue, upper_blue)
Now let's define a function which checks whether the area in a given mask is above a threshold, so we can decide whether a bean bag is present; for that purpose we will use cv2.findContours.
def is_object_present(mask, threshold):
    # OpenCV 3 returns (image, contours, hierarchy); with OpenCV 4, drop the first value
    im, contours, hierarchy = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return False
    # Find the largest contour
    largest_contour = max(contours, key=lambda x: cv2.contourArea(x))
    if cv2.contourArea(largest_contour) > threshold:
        return True
    return False
Now call this method on both masks to get individual values for whether the red or blue bean bag is present:
# Adjust this manually as per your needs
bean_bag_area_threshold = 5000
is_red_bean_bag_present = is_object_present(mask_red, bean_bag_area_threshold)
is_blue_bean_bag_present = is_object_present(mask_blue, bean_bag_area_threshold)
if is_red_bean_bag_present and is_blue_bean_bag_present:
    print("Both bean bags are present.")
