OpenCV check if object passed line - python

I am busy working on some very simple vehicle detection software using Python and OpenCV. I want to take a screen capture the moment an object hits a line I have drawn.
Searching on Google turned up nothing except some very big C++ projects. Since I am very unskilled with C++, I thought I would try asking here.
My code:
import cv2

face_cascade = cv2.CascadeClassifier('cars.xml')
vc = cv2.VideoCapture('dataset/traffic3.mp4')

if vc.isOpened():
    rval, frame = vc.read()
else:
    rval = False

while rval:
    rval, frame = vc.read()
    cv2.line(frame, (430, 830), (430, 100), (0, 255, 0), 3)
    cv2.line(frame, (700, 700), (700, 100), (0, 0, 255), 3)
    cv2.imshow("Result", frame)
    cv2.waitKey(1)

vc.release()
So how do I take a screen capture the moment a vehicle passes one of the two lines?
Can somebody help me?
Thanks.

OpenCV's cascade classifier will return a collection of Rect objects that correspond to bounding boxes around each car it detected in the image. If you don't know how to use the classifier, look at this tutorial in C++ to get an idea of how it works; translating it to Python shouldn't be too hard.
Once you have these bounding boxes, you only need to test whether they intersect one of your lines to detect vehicles passing over a line.
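As an illustration, here is a minimal sketch of that idea built on the code from the question (the detectMultiScale parameters are guesses, and since both lines in the question are vertical, the intersection test reduces to checking whether the line's x-coordinate falls inside a bounding box):
import cv2

car_cascade = cv2.CascadeClassifier('cars.xml')
vc = cv2.VideoCapture('dataset/traffic3.mp4')

LINE_X = 430  # x-coordinate of the green vertical line from the question
frame_id = 0
rval = vc.isOpened()

while rval:
    rval, frame = vc.read()
    if not rval:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cars = car_cascade.detectMultiScale(gray, 1.1, 3)  # parameters are guesses
    for (x, y, w, h) in cars:
        # A vertical line at LINE_X is crossed while it lies inside the box.
        if x <= LINE_X <= x + w:
            cv2.imwrite('capture_%06d.png' % frame_id, frame)
            break
    cv2.line(frame, (LINE_X, 830), (LINE_X, 100), (0, 255, 0), 3)
    cv2.imshow("Result", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
    frame_id += 1

vc.release()
Note that this saves a capture for every frame in which a box overlaps the line, so a real system would want to de-duplicate per vehicle (e.g. by tracking boxes between frames).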

Related

Why doesn't OpenCV's cv2.moveWindow always move to the same XY-position?

The question Why is opencv's moveWindow command inconsistent? was asked 3 years and 1 month ago, relating to OpenCV (version 4.1.0) in Python (version 3.7.3). It still remains unanswered. The statement
OpenCV's GUI functionality is not great, and is mostly available for debugging purposes. I would not rely on it. [...] – alkasm
provided as a comment there doesn't answer the question about a possible reason for such behavior. But knowing the reason is a necessary step toward a fix or toward debugging the OpenCV code.
Today I am using OpenCV 4.5.5 in Python 3.9 and experience the same problem when positioning windows displaying the following image:
along with two other images demonstrating contour finding with OpenCV, using the following code so that you can reproduce the issue:
import time
import cv2 as cv
cv.namedWindow("Tetris Blocks") # flags=cv.CV_WINDOW_AUTOSIZE # to image size
time.sleep(2.0) # required to place the window correctly
cv.moveWindow(winname="Tetris Blocks", x= 1150, y= 10)
cv.namedWindow("Tetris Blocks Gray") # flags=cv.CV_WINDOW_AUTOSIZE # to image size
time.sleep(2.0) # required to place the window correctly
cv.moveWindow(winname="Tetris Blocks Gray", x= 1150, y= 380)
cv.namedWindow("Tetris Blocks Contours") # flags=cv.CV_WINDOW_AUTOSIZE # to image size
time.sleep(2.0) # required to place the window correctly
cv.moveWindow(winname="Tetris Blocks Contours", x= 1150, y= 730)
cv_img = cv.imread("tetris_blocks.png")
cv.imshow("Tetris Blocks", cv_img)
# cv.waitKey(0)
cv_img_gray = cv.cvtColor(cv_img, cv.COLOR_BGR2GRAY)
cv.imshow("Tetris Blocks Gray", cv_img_gray)
# cv.waitKey(0)
thresh = cv.threshold(cv_img_gray, 225, 255, cv.THRESH_BINARY_INV)[1]
(cnts, _) = cv.findContours(thresh.copy(), cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
cv.drawContours(cv_img, cnts, -1, (0, 0, 0), 10)
cv.imshow("Tetris Blocks Contours", cv_img)
cv.waitKey(0)
This places the windows in one column, not in one row as in the old unanswered question mentioned above.
The pictures below show the results of running the above code multiple times in the same way, obtaining different results. Often the x-position is the same:
while the y-position is not:
but the displacement also occurs in both directions:
Looking at the provided code you can see an attempt to mitigate the problem: without 'sleeping' between positioning the windows I have experienced much larger differences in the positions. Inserting time.sleep() reduces the problem, but does not reliably solve it.
I suppose there is an easy-to-find-and-fix bug in the OpenCV code behind this behavior, and I wonder how the problem has persisted on a timescale of years.
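For what it's worth, one workaround worth trying (an assumption on my part, not a confirmed fix) is to pump the HighGUI event loop with cv.waitKey() instead of sleeping, since cv.waitKey() is the call that actually processes pending window events:
import cv2 as cv

cv.namedWindow("Tetris Blocks")
cv.waitKey(200)  # process pending GUI events so the window really exists
cv.moveWindow(winname="Tetris Blocks", x=1150, y=10)
cv.waitKey(200)  # let the move request be processed before creating the next window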

Face detection: no face detected

I am trying to do face detection, but it does not detect any face.
This is the function I have created for face detection:
def faceDetection(test_img):
    gray_img = cv2.cvtColor(test_img, cv2.COLOR_BGR2GRAY)
    # Haar classifier
    face_haar_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    faces = face_haar_cascade.detectMultiScale(gray_img, scaleFactor=1.32, minNeighbors=5)
    return faces, gray_img
It is used like this:
test_img = cv2.imread('pic.png')
faces_detected, gray_img = fr.faceDetection(test_img)
print("faces_detected:", faces_detected)

for (x, y, w, h) in faces_detected:
    cv2.rectangle(test_img, (x, y), (x + w, y + h), (255, 0, 0), thickness=5)

resized_img = cv2.resize(test_img, (500, 500))
cv2.imshow("face", resized_img)
cv2.waitKey(0)
cv2.destroyAllWindows()  # note the parentheses; without them the function is never called
But when I run this script it does not show any face detected; it simply gives this output:
faces_detected: ()
and no box around the image.
Try using a different Haar cascade, for example haarcascade_frontalface_alt.xml:
face_haar_cascade = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml')
Change the scale factor you use for the cascade. If that doesn't work, you can also reduce the number of neighbors, to maybe 2:
faces = face_haar_cascade.detectMultiScale(gray_img, scaleFactor=1.1, minNeighbors=5)
Check the number of faces you found:
print('Faces found: ', len(faces))
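One more thing worth checking, since it is a common cause of an empty result (though only an assumption about this particular setup): if the XML path is wrong, cv2.CascadeClassifier silently loads nothing and detectMultiScale returns (). A quick sanity check:
face_haar_cascade = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml')
if face_haar_cascade.empty():
    # Nothing was loaded - the path is wrong or the file is corrupt
    raise IOError('Could not load the cascade file; check the path')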

How to detect moving objects in varying light (illumination) conditions due to clouds - OpenCV

I have been trying to detect moving vehicles, but due to varying light conditions caused by clouds (not the shadows of clouds, just the changing illumination) the background subtraction fails.
I have uploaded my input video here --> Youtube (30secs)
Here is what I got using the various background subtraction methods available in OpenCV:
import numpy as np
import cv2

cap = cv2.VideoCapture('traffic_finalns.mp4')

#fgbgKNN = cv2.createBackgroundSubtractorKNN()
fgbgMOG = cv2.bgsegm.createBackgroundSubtractorMOG(100, 5, 0.7, 0)
#fgbgGMG = cv2.bgsegm.createBackgroundSubtractorGMG()
#fgbgMOG2 = cv2.createBackgroundSubtractorMOG2()
#fgbgCNT = cv2.bgsegm.createBackgroundSubtractorCNT(15, True, 15*60, True)

while(1):
    ret, frame = cap.read()
    # fgmaskKNN = fgbgKNN.apply(frame)
    fgmaskMOG = fgbgMOG.apply(frame)
    # fgmaskGMG = fgbgGMG.apply(frame)
    # fgmaskMOG2 = fgbgMOG2.apply(frame)
    # fgmaskCNT = fgbgCNT.apply(frame)

    # cv2.imshow('frame', frame)
    # cv2.imshow('fgmaskKNN', fgmaskKNN)
    cv2.imshow('fgmaskMOG', fgmaskMOG)
    # cv2.imshow('fgmaskGMG', fgmaskGMG)
    # cv2.imshow('fgmaskMOG2', fgmaskMOG2)
    # cv2.imshow('fgmaskCNT', fgmaskCNT)

    k = cv2.waitKey(20) & 0xff
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()
(Images below -> frame number 977)
BackgroundSubtractorMOG: by varying the history input parameter some of the illumination could be reduced, but not all of it, as the duration of the illumination changes is variable.
BackgroundSubtractorMOG2:
BackgroundSubtractorGMG:
BackgroundSubtractorKNN:
BackgroundSubtractorCNT:
1] Improving the results of OpenCV background subtraction
For varying light conditions it is important to normalize your pixel values between 0 and 1. In your code I do not see that happening.
Background subtraction will not work on a single image (in your code you are reading an image).
If you are applying background subtraction to a sequence of frames, the first frame of the background subtraction result is of no use.
You might want to adjust the arguments you pass to cv2.bgsegm.createBackgroundSubtractorMOG() to get the best results. Play around with the threshold and see what results you get.
You can also apply a Gaussian filter to the individual frames to reduce noise and get better results: cv2.GaussianBlur().
You can try cv2.equalizeHist() on individual frames to improve their contrast (see the sketch just below).
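Here is a minimal sketch combining those last two tips with the subtractor from the question (cv2.equalizeHist() expects a single-channel image, so the frame is converted to grayscale first; the blur kernel size is just a starting point):
import cv2

cap = cv2.VideoCapture('traffic_finalns.mp4')
fgbgMOG = cv2.bgsegm.createBackgroundSubtractorMOG()

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)             # improve per-frame contrast
    gray = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress pixel noise
    fgmaskMOG = fgbgMOG.apply(gray)
    cv2.imshow('fgmaskMOG', fgmaskMOG)
    if cv2.waitKey(20) & 0xff == 27:
        break

cap.release()
cv2.destroyAllWindows()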
Anyway, you say that you are trying to detect moving objects. Nowadays there are many modern methods that use deep learning for object detection.
2] Use the TensorFlow object detection API
It does object detection in real time and also gives you the bounding-box coordinates of the detected objects.
Here are results of the TensorFlow object detection API:
3] How about trying OpenCV optical flow?
4] Simple subtraction
Your environment is static, so take a frame of your environment and store it in a variable, say environment_frame.
Now read every frame from your video and simply subtract it from your environment frame: results = environment_frame - current_frame.
If np.sum(results) is greater than a threshold value, then we say there is an object.
But that only tells us there is a moving object somewhere - where is it? The moving object is where the differing pixels cluster together, which you can easily find with some clustering algorithm.
Do not forget to normalize your pixel values between 0 and 1.
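A minimal sketch of this idea, under stated assumptions: the first frame shows the empty environment, an absolute difference is used so negative values don't wrap around, and the threshold is a made-up placeholder you would tune for your scene:
import cv2
import numpy as np

cap = cv2.VideoCapture('traffic_finalns.mp4')

# Assumption: the first frame shows the static, empty environment
ret, environment_frame = cap.read()
environment = environment_frame.astype(np.float32) / 255.0  # normalize to [0, 1]

THRESHOLD = 500.0  # placeholder; tune for your scene

while True:
    ret, frame = cap.read()
    if not ret:
        break
    current = frame.astype(np.float32) / 255.0
    results = np.abs(environment - current)  # per-pixel difference
    if np.sum(results) > THRESHOLD:
        print('moving object in this frame')

cap.release()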
----------------------------UPDATED----------------------------------------
If you want to find helmets in real time, then your best bet is deep learning.
You can use a deep-learning technique like YOLO, which newer versions of OpenCV support ... but I do not think OpenCV has a Python binding for YOLO.
The other real-time technique can be RCNN, which the TensorFlow object detection API already has ... I have mentioned it above.
If you want to use traditional computer vision methods, then you can try HOG and SVM on helmet data and then a sliding-window technique to find the helmets in your frames (this won't be real time).

How to make Python perform a command after OpenCV detects a face?

I am building a robot. Pretty much all my code has been copied from other people's projects and tutorials.
I'm using a Raspberry Pi camera to detect faces, and I have an electric Nerf gun that I want to fire ONLY after OpenCV detects a face. Right now, my code fires the Nerf gun whether or not a face is detected. Can you tell me what I have done wrong? I think the problem is in the if len(faces) > 0 area. Here is all my code.
program 1, called cbcnn.py
import RPi.GPIO as gpio
import time

def init():
    gpio.setmode(gpio.BOARD)
    gpio.setup(22, gpio.OUT)

def fire(tf):
    init()
    gpio.output(22, True)
    time.sleep(tf)
    gpio.cleanup()

print 'fire'
fire(3)
program 2, called cbfd2.py
import io
import picamera
import cv2
import numpy
from cbcnn import fire

# Create a memory stream so photos don't need to be saved to a file
stream = io.BytesIO()

# Get the picture (low resolution, so it should be quite fast)
# Here you can also specify other parameters (e.g. rotate the image)
with picamera.PiCamera() as camera:
    camera.resolution = (320, 240)
    camera.vflip = True
    camera.capture(stream, format='jpeg')

# Convert the picture into a numpy array
buff = numpy.fromstring(stream.getvalue(), dtype=numpy.uint8)

# Now create an OpenCV image
image = cv2.imdecode(buff, 1)

# Load a cascade file for detecting faces
face_cascade = cv2.CascadeClassifier('/home/pi/cbot/faces.xml /haarcascade_frontalface_default.xml')

# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Look for faces in the image using the loaded cascade file
faces = face_cascade.detectMultiScale(gray, 1.1, 5)
print "Found "+str(len(faces))+" face(s)"

if len(faces) > 0:
    ("fire")

# Draw a rectangle around every found face
for (x,y,w,h) in faces:
    cv2.rectangle(image, (x,y), (x+w,y+h), (255,0,0), 2)

# Save the result image
cv2.imwrite('result.jpg', image)
It fires because you have a fire(3) right after you defined the fire(tf) function. And the line below only creates a tuple with one string value, ("fire"); it doesn't call the fire function:
if len(faces) > 0:
    ("fire")
If you want to fire only when faces are detected, move that fire(3) under this IF and remove it from the top.
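That is, the detection branch should actually call the function:
if len(faces) > 0:
    fire(3)  # actually calls fire; ("fire") was just a string in parentheses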
BTW, you're importing something here, from cbcnn import fire, with the same name as your function. This will overwrite the function name, and if you put a fire(3) below your import line it probably throws an error. Either change your fire(tf) function to fire_rocket(tf) and change fire(3) to fire_rocket(3) under the IF,
or add this to your import line (which you actually aren't even using!) and you can keep your fire function name as is:
from cbcnn import fire as Fire
Edit after the question was changed:
Fix the IF I mentioned above and put fire(some number) in there.
The reason it fires is that when you import something from another program, the whole script is run. Because fire(3) is in there, it automatically calls the function when you import it.
To avoid this you have to either:
remove the other module-level parts from cbcnn.py: the print and the fire(3),
or
put those parts in this IF statement so they only run when you actually run cbcnn.py yourself, and not when it is imported:
if __name__ == '__main__':
    print 'fire'
    fire(3)
I already answered you in another post, but where you put:
cv2.rectangle(image, (x,y), (x+w,y+h), (255,0,0), 2)
is where you can put whatever you want: if you reach the inside of the for loop, it means a face has been detected. As you can see in the code you posted, in that case they are drawing a rectangle, but you can do anything there; x, y, w and h give you the coordinates and size of the detected face.
import io
import picamera
import cv2
import numpy
import RPi.GPIO as gpio
import time

def init():
    gpio.setmode(gpio.BOARD)
    gpio.setup(22, gpio.OUT)

def fire(tf):
    init()
    gpio.output(22, True)
    time.sleep(tf)
    gpio.cleanup()

# Create a memory stream so photos don't need to be saved to a file
stream = io.BytesIO()

# Get the picture (low resolution, so it should be quite fast)
# Here you can also specify other parameters (e.g. rotate the image)
with picamera.PiCamera() as camera:
    camera.resolution = (320, 240)
    camera.vflip = True
    camera.capture(stream, format='jpeg')

# Convert the picture into a numpy array
buff = numpy.fromstring(stream.getvalue(), dtype=numpy.uint8)

# Now create an OpenCV image
image = cv2.imdecode(buff, 1)

# Load a cascade file for detecting faces
face_cascade = cv2.CascadeClassifier('/home/pi/cbot/faces.xml/haarcascade_frontalface_default.xml')

# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Look for faces in the image using the loaded cascade file
faces = face_cascade.detectMultiScale(gray, 1.1, 5)
print "Found "+str(len(faces))+" face(s)"

# Draw a rectangle around every found face and fire only then
for (x,y,w,h) in faces:
    cv2.rectangle(image, (x,y), (x+w,y+h), (255,0,0), 2)
    fire(3)

# Save the result image
cv2.imwrite('result.jpg', image)

Why are webcam images taken with Python so dark?

I've shown various ways to take images with a webcam in Python (see How can I take camera images with Python?). You can see that the images taken with Python are considerably darker than images taken with JavaScript. What is wrong?
Image example
The image on the left was taken with http://martin-thoma.com/html5/webcam/, the one on the right with the following Python code. Both were taken under the same (controlled) lighting conditions (it was dark outside and I only had some electric lights on) and with the same webcam.
Code example
import cv2
camera_port = 0
camera = cv2.VideoCapture(camera_port)
return_value, image = camera.read()
cv2.imwrite("opencv.png", image)
del(camera) # so that others can use the camera as soon as possible
Question
Why is the image taken with Python considerably darker than the one taken with JavaScript, and how do I fix it?
(I want comparable image quality; simply making the image brighter will probably not fix it.)
Note on the "how do I fix it": it does not need to be OpenCV. If you know a way to take webcam images with Python using another package (or no package at all), that is also ok.
I faced the same problem. I tried this and it works:
import cv2

camera_port = 0
ramp_frames = 30
camera = cv2.VideoCapture(camera_port)

def get_image():
    retval, im = camera.read()
    return im

# Discard some initial frames so the camera can adjust to the light
for i in xrange(ramp_frames):
    temp = camera.read()

camera_capture = get_image()
filename = "image.jpg"
cv2.imwrite(filename, camera_capture)
del(camera)
I think it's about the camera adjusting to the light, as the former and latter images show.
I think that you have to wait for the camera to be ready.
This code works for me:
from SimpleCV import Camera
import time
cam = Camera()
time.sleep(3)
img = cam.getImage()
img.save("simplecv.png")
I took the idea from this answer, and this is the most convincing explanation I found:
The first few frames are dark on some devices because it's the first frame after initializing the camera, and it may be required to pull a few frames so that the camera has time to adjust brightness automatically.
reference
So IMHO, to be sure about the quality of the image, regardless of the programming language, it is necessary at camera startup to wait a few seconds and/or discard a few frames before taking an image.
Tidying up Keerthana's answer, my code looks like this:
import cv2
import time

def main():
    capture = capture_write()

def capture_write(filename="image.jpeg", port=0, ramp_frames=30, x=1280, y=720):
    camera = cv2.VideoCapture(port)
    # Set resolution
    camera.set(3, x)
    camera.set(4, y)
    # Adjust camera to the lighting by discarding the ramp frames
    for i in range(ramp_frames):
        temp = camera.read()
    retval, im = camera.read()
    cv2.imwrite(filename, im)
    del(camera)
    return True

if __name__ == '__main__':
    main()
