Detect position of a round object in a video with OpenCV - Python

I have a video with a moving red ball in the middle, and I want the program to return the position of that object throughout the video. I've read many examples online, but I am not sure what the best way to do it in OpenCV is.
import cv2
import numpy as np

coords = []

# Open the video
cap = cv2.VideoCapture('video-4.mp4')
# Initialize frame counter
while(1):
    # Take each frame
    _, frame = cap.read()
    # Convert BGR to HSV
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # define range of blue color in HSV
    lower_red = np.array([100, 150, 0])
    upper_red = np.array([140, 255, 255])
    imgThreshHigh = cv2.inRange(hsv, lower_red, upper_red)
    thresh = imgThreshHigh.copy()

    contours, _ = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:
        area = cv2.contourArea(cnt)
        best_cnt = cnt

    M = cv2.moments(best_cnt)
    cx, cy = int(M['m10']/M['m00']), int(M['m01']/M['m00'])
    coord = cx, cy
    print(coord)

    cv2.imshow('frame', frame)
    cv2.imshow('Object', thresh)

    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break

cv2.destroyAllWindows()
I am not sure why, but it only processes a few frames of the video, not all of them.

It is not clear to me how the best_cnt contour is selected. Right now the loop just keeps the last contour in the list. I think the intent was to filter by size so that the small red circle gets selected, but the code is not doing that. Maybe the dropped frames happen because the last contour in the list is not the red circle.
I would try to filter by size and keep only contours smaller than a given maximum area.
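A minimal sketch of that selection, to replace the for loop and moment computation above (max_area is a made-up value you would tune to the ball's size; the m00 check avoids a division by zero on degenerate contours):
max_area = 500   # guessed upper bound on the ball's area, tune for your video
best_cnt = None
for cnt in contours:
    area = cv2.contourArea(cnt)
    if 0 < area < max_area and (best_cnt is None or area > cv2.contourArea(best_cnt)):
        best_cnt = cnt

if best_cnt is not None:
    M = cv2.moments(best_cnt)
    if M['m00'] != 0:
        cx, cy = int(M['m10'] / M['m00']), int(M['m01'] / M['m00'])
        coords.append((cx, cy))
        print((cx, cy))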
Also, it looks like the red color filter range is off. I could only detect the ball by changing the red filter values to:
lower_red = np.array([170, 0, 190])
upper_red = np.array([179, 255, 255])
To do this, I calculated the per-channel min and max values over a small patch around the red circle:
# visual check requires matplotlib (import matplotlib.pyplot as plt)
plt.imshow(hsv[255:275, 170:190, :])
np.min(hsv[255:275, 170:190, :], axis=(0, 1))
np.max(hsv[255:275, 170:190, :], axis=(0, 1))
Then I chose min and max values that leave some wiggle room for the red to vary across frames. This will need fine tuning over all the frames of your video to make sure no shade of red falls outside those limits.

Check whether all pixels in a cropped image have a specific color (OpenCV/Python)

import cv2
import numpy as np

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    tag = frame[235:245, 315:325]
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower_red = np.array([20, 20, 50])
    upper_red = np.array([255, 255, 130])
    for i in range(235, 245):
        for j in range(315, 325):
            if cv2.inRange(tag[i][j], lower_red, upper_red):
                break
    cv2.imshow('image', frame)
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
I want to check the middle 100 pixels of my 480x640 camera frame to see if they all fall within a certain color range, and if they do, end the program. But I can't find a way to compare the values of the middle 100 pixels with the values that I want.
Problems with your approach:
1. Improper variable handling: you crop the image into tag before converting to the HSV color space, so the crop you test is still in BGR.
2. Wrong usage of cv2.inRange(): the function returns a binary image with values of either 0 or 255:
0 -> the pixel does not fall within the range
255 -> the pixel falls within the range
Rule of thumb: avoid looping over pixels with for loops.
Solution:
Since cv2.inRange() returns a binary image, just take the mean of the pixel values within the cropped region. If the mean is 255 (all the pixels are white, i.e. within the color range) -> break!
You can alter your code using the following snippet:
ret, frame = cap.read()
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
lower_red = np.array([20, 20, 50])
upper_red = np.array([255, 255, 130])
tag = hsv[235:245, 315:325]
mask = cv2.inRange(tag, lower_red, upper_red)
if np.mean(mask) == 255:
    print("All the pixels are within the color range")
    break
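If I am not mistaken, counting the white pixels is an equivalent check:
# equivalent check: every pixel of the 10x10 mask is 255
if cv2.countNonZero(mask) == mask.size:
    break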

HSV space for multicolor object

I would like to detect this gate below, ideally the entire gate. I have played around for hours with a trackbar script, but I am just not finding the right color space. I found other threads that just track yellow, and not even that works. This is my code:
def track():
    cap = cv2.VideoCapture('../files/sub/gate_mission1.mp4')
    while True:
        _, frame = cap.read()
        cv2.imshow('img', frame)
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        lower = np.array([20, 93, 0])
        upper = np.array([45, 255, 255])
        mask = cv2.inRange(hsv, lower, upper)
        contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        cv2.drawContours(frame, contours, -1, (0, 255, 0), 2)
Maybe there is a way to just remove all the blue/greenish tones too, I'm not sure. What are my options here?
This seems to work for me by thresholding in LAB colorspace in Python/OpenCV. According to Wikipedia at https://en.wikipedia.org/wiki/CIELAB_color_space "The a* axis is relative to the green–red opponent colors, with negative values toward green and positive values toward red." So we ought to get reasonably good separation for your green and reddish colors.
Input:
import cv2
import numpy as np
# load images
img = cv2.imread('gate.jpg')
# convert to LAB
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
# set black ranges
lower = (130,105,100)
upper = (170,170,160)
# threshold on black
result = cv2.inRange(lab, lower, upper)
# save output
cv2.imwrite('gate_thresh.jpg', result)
# display results
cv2.imshow('thresh',result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Threshold Image
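For the video itself, assuming the same LAB bounds hold from frame to frame (lighting drift may require retuning them), a rough sketch of dropping this threshold into your capture loop:
import cv2

cap = cv2.VideoCapture('../files/sub/gate_mission1.mp4')
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # same LAB threshold as above, applied per frame
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    mask = cv2.inRange(lab, (130, 105, 100), (170, 170, 160))
    contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(frame, contours, -1, (0, 255, 0), 2)
    cv2.imshow('img', frame)
    if cv2.waitKey(1) == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()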

Change Single Color in a Video with OpenCV Python

I have a video of a brick-breaking game. Some bricks are red, and I have to change the red color into black. I am trying to find the locations of the red pixels in the numpy array and assign black to those pixels. The code below does turn the red color into black, but the process is so slow that a 12-second video took more than 5 minutes. Is there a faster way to do this?
import numpy as np
import cv2

vid = "input.mp4"
cap = cv2.VideoCapture(vid)

while(True):
    ret, frame = cap.read()
    if ret:
        for i in zip(*np.where(frame == [0,0,255])):
            frame[i[0], i[1], 0] = 0
            frame[i[0], i[1], 1] = 0
            frame[i[0], i[1], 2] = 0
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

cv2.destroyAllWindows()
Try this out, read comments in the code for more information.
import cv2
import numpy as np
img = cv2.imread("1.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# define range of red color in HSV
lower_red = np.array([0,50,50])
upper_red = np.array([10,255,255])
# Threshold the HSV image to get only red colors
mask = cv2.inRange(hsv, lower_red, upper_red)
red_only = cv2.bitwise_and(img,img, mask= mask)
#convert mask to 3-channel image to perform subtract
mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
res = cv2.subtract(img,mask) #negative values become 0 -> black
cv2.imshow("img",img)
cv2.imshow("mask",mask)
cv2.imshow("red_only",red_only)
cv2.imshow("res",res)
cv2.waitKey()
cv2.destroyAllWindows()
P.S. This method takes almost no time; I've tested it on my machine and it takes about 3 ms for a 1280x720 image.
This works only when the replacement is a single gray value (you can't specify a per-channel color for the replacement):
color_find = [0,0,255]
indexes=np.where(frame == color_find)
frame[indexes]=0 # 0-255 creates [0,0,0] - [255,255,255]
A more general approach is the following one.
Here np.all compares along the color axis (axis=2), so you can replace with any color:
red = [0,0,255]
black = [0,0,0]
indexes=np.where(np.all(frame == red,axis = 2))
frame[indexes] = black
you can replace your for loop with this:
# [b,g,r]
color_in = [0, 0, 255] # color you want to filter
color_out = [0, 0, 0] # color you want to set
for i in range(3):
frame[frame[:, :, i] == color_in[i]] = color_out[i]
You can use this for a video frame with 3 color channels. Also, you can play around with the comparison operator (replace == with >, <, etc.) for more control.
Use it like this to filter by color ranges:
frame[frame[:, :, i] < color_in[i]] = color_out[i]
Using Ha Bom's code and some parts of my code, the problem has been solved. However, it still takes a little time: processing a 12-second video takes around 20-25 seconds. The main purpose was to convert the red pixels into orange.
The code is provided below -
import cv2
import numpy as np

cap = cv2.VideoCapture("input.avi")

while(True):
    ret, frame = cap.read()
    if ret:
        # hsv is better for recognizing color, so convert the BGR frame to HSV
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # in hsv, red lies in two regions; create a mask for each and combine them.
        # masking the red color gives a grayscale output where red is white
        # and everything else is black
        mask1 = cv2.inRange(hsv, (0,50,20), (5,255,255))
        mask2 = cv2.inRange(hsv, (175,50,20), (180,255,255))
        mask = cv2.bitwise_or(mask1, mask2)
        # get the indexes of the white areas and make them orange in the main frame
        for i in zip(*np.where(mask == 255)):
            frame[i[0], i[1], 0] = 0
            frame[i[0], i[1], 1] = 165
            frame[i[0], i[1], 2] = 255
        # play the new video
        cv2.imshow("res", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

cv2.destroyAllWindows()
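As a possible further speed-up (not tested on your exact video, but this is standard NumPy boolean indexing): since mask is already a binary image, the per-pixel loop can be replaced by a single vectorized assignment:
# paint every masked (red) pixel orange in one step instead of looping
frame[mask == 255] = (0, 165, 255)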

How to get screen coordinates from color detection resultant area obtained from openCV with python?

I am trying to use a pen with a green cap tip to move the mouse cursor using a webcam, but how can I get the coordinates of the cap on screen so that I can pass them as input to the pynput library's move function?
Thanks in advance.
# Python program for detection of a
# specific color (green here) using OpenCV with Python
import cv2
import numpy as np
import time

cap = cv2.VideoCapture(0)

while (1):
    # Captures the live stream frame-by-frame
    _, frame = cap.read()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    lower_red = np.array([75, 50, 50])
    upper_red = np.array([100, 255, 255])

    mask = cv2.inRange(hsv, lower_red, upper_red)
    res = cv2.bitwise_and(frame, frame, mask=mask)

    cv2.imshow('frame', frame)
    cv2.imshow('mask', mask)
    cv2.imshow('res', res)

    # code to output coordinates of green pen tip goes here

    k = cv2.waitKey(5) & 0xff
    if k == 27:
        break

cv2.destroyAllWindows()
cap.release()
You have all the things you need; just a couple more steps are required:
First, find the non-zero points in your mask, which should represent the tip.
points = cv2.findNonZero(mask)
Then you can average them to get a single point that represents the tip.
avg = np.mean(points, axis=0)
Now you can normalize this into a 0-1 value that can later be used at any resolution... or you can normalize it directly to the resolution of the screen...
# assuming the resolutions of the image and screen are the following
resImage = [640, 480]
resScreen = [1920, 1080]
# points are in x,y coordinates
pointInScreen = ((resScreen[0] / resImage[0]) * avg[0], (resScreen[1] / resImage[1]) * avg[1] )
A few things to consider: first, remember that the OpenCV coordinate system has its origin at the top-left corner, with x pointing right and y pointing down.
---->
|
| image
|
v
Depending on where and how you use these point coordinates, you may need to flip the axis....
Second, in OpenCV points are given in (x, y) order, while the resolutions I wrote manually above are given as width and height... you may have to adapt the code to cope with this if needed :)
If you have any questions just leave a comment
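Putting the pieces together, here is a rough sketch of what could go in place of the commented placeholder inside your loop (the resolutions are the example values above, and the None check guards frames where no green is visible):
points = cv2.findNonZero(mask)
if points is not None:
    # points has shape (N, 1, 2); flatten the mean to a plain [x, y]
    avg = points.mean(axis=0).flatten()
    resImage = [640, 480]
    resScreen = [1920, 1080]
    pointInScreen = ((resScreen[0] / resImage[0]) * avg[0],
                     (resScreen[1] / resImage[1]) * avg[1])
    print(pointInScreen)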

RGB color detection in Python Language

I am using a Raspberry Pi 3 and the Pi camera. I am writing an image processing program that should detect yellow colour, and right now I am testing it, but nothing happens in the frame. Is my color detection wrong?
Here is my code:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)

while(1):
    # Take each frame
    _, frame = cap.read()
    # Convert BGR to HSV
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # define range of yellow color in HSV
    lower_yellow = np.array([204,204,0])
    upper_yellow = np.array([255,255,254])
    # Threshold the HSV image to get only yellow colors
    mask = cv2.inRange(hsv, lower_yellow, upper_yellow)
    # Bitwise-AND mask and original image
    res = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow('frame', frame)
    cv2.imshow('mask', mask)
    cv2.imshow('res', res)
    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break

cv2.destroyAllWindows()
The problem is that you're filtering an HSV image with what look to be RGB values. If you're looking for yellow, then you want a narrow hue range, with the saturation and value ranges wider.
Yellow is roughly 60 degrees of hue, which means a value of around 30 in OpenCV (the hue value is halved so it can fit in the 0-255 range). Similarly, your saturation and value shouldn't cover the full range of 0-255, otherwise things that are close to black or white will also match.
Try something like
lower_yellow = np.array([20, 30, 30])
upper_yellow = np.array([40, 200, 300])
This will hopefully get you close, but you still may have to play around with the numbers to get what you want.
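Concretely (remembering that OpenCV stores hue as 0-179 for 8-bit images), the only lines that would change in your loop are something like the following; I cap V at 255 since 8-bit pixel values cannot exceed it:
lower_yellow = np.array([20, 30, 30])
upper_yellow = np.array([40, 200, 255])   # V capped at 255, the maximum for 8-bit pixels
mask = cv2.inRange(hsv, lower_yellow, upper_yellow)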
