I am working on one of my college projects, i.e. object (car) detection in OpenCV Python. I am using OpenCV 3 and Python 3.4. I have code for it, but when I run the code the output is not displayed. It shows that the code is error free, but I am still unable to get the output. I am new to image processing, so it would be a great help if someone could sort out my problem. The code is given below:
import cv2
import numpy as np
import argparse
ap = argparse.ArgumentParser()
ap.add_agrument("-v","--video",
help = "path to the (optional) video file")
args = vars(ap.parse_agrs())
camera = cv2.VideoCapture(agrs["video"])
car_cascade = cv2.CascadeClassifier("cars.xml")
while true:
    ret,frames = camera.read(),cv2.rectangle()
    gray = cv2.cvtColor(frames, cv2.COLOR_BGR2GRAY)
    cars = car_cascade.detectionMultiScale(gray, 1.1,1)
    for (x,y,w,h) in cars:
        cv2.rectangular(frames,(x,y),(x+w,y+h), (0,0,255),2)
    cv2.imshow ('video',frames)
    cv2.waitkey(0)
I just removed the argparse command and edited the code a little bit, and it is working quite well. To see the output click here: https://www.youtube.com/watch?v=phG9inHoAKg
And the code files are uploaded to my GitHub account: https://github.com/Juzer2012/Car-detection
You write: "It shows that the code is error free" ...
It isn't (and that more than once), as for example here:
ap.add_agrument(...
where it should be
ap.add_argument(...
Just check again for more such syntax errors. Happy coding :) .
Here is the code example you requested, which uses argparse for image processing. It works with both Python 2.x and Python 3.x, showing a video stream for processing in a window opened for this purpose. If you can see the video stream output, just mark this as a valid answer to your question. Thanks in advance (y). Happy coding :) .
import cv2

def showVideoStream_fromWebCam(argsVideo, webCamID=0, showVideoStream=True):
    if argsVideo is not None:  # use the -v/--video value as webcam ID when given
        webCamID = int(argsVideo)
    cv2_VideoCaptureObj_webCam = cv2.VideoCapture(webCamID)
    while True:
        retVal, imshowImgObj = cv2_VideoCaptureObj_webCam.read()
        if showVideoStream:
            imshowImgObj = cv2.flip(imshowImgObj, 1)
            cv2.imshow('webCamVideoStream', imshowImgObj)
        #:if
        if cv2.waitKey(1) == 27:
            break  # [Esc] to quit
        #:if
    #:while
    cv2.destroyAllWindows()
#:def

import argparse
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="webCamID (= 0)")
args = vars(ap.parse_args())
showVideoStream_fromWebCam(args["video"])
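For example, if the script is saved as showVideoStream.py (the filename here is just for illustration), it can be started with "python showVideoStream.py -v 0", where the optional -v/--video value selects the webcam ID.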
Let's make the code even a bit better by running the video at approximately its original speed (25 frames/second), taking out what is not necessary, and drawing all the rectangles first before showing the frame:
import cv2

camera = cv2.VideoCapture("video.avi")
car_cascade = cv2.CascadeClassifier('cars.xml')

# Get frames per second from video file. Syntax depends on OpenCV version:
(major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.')
if int(major_ver) < 3:
    fps = camera.get(cv2.cv.CV_CAP_PROP_FPS)
else:
    fps = camera.get(cv2.CAP_PROP_FPS)
#:if

intTimeToNextFrame = int(1000.0/fps) - 12  # '-12' is an estimate of the time needed for processing

while True:
    (grabbed, frame) = camera.read()
    grayvideo = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cars = car_cascade.detectMultiScale(grayvideo, 1.1, 1)
    for (x, y, w, h) in cars:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 255), 1)
    cv2.imshow("video", frame)
    if cv2.waitKey(intTimeToNextFrame) == ord('q'):
        break

camera.release()
cv2.destroyAllWindows()
Description
I have a file `FaceDetection.py` which detects the face from webcam footage using `opencv` and then writes frames into an `output` object, converting them into a video. If a face is present in the frame, then a function `sendImg()` is called, which in turn calls the function `faceRec()` from another imported file. At last this method prints the face name if the face is known, or prints "unknown".
Problem I am facing
The face recognition process is quite expensive, and that's why I thought that running it from a different file and in a thread would create a background process, so the output.write() function wouldn't be interrupted while doing face recognition. But as the sendImg() function gets called, the output.write() function gets interrupted, resulting in skipped frames in the saved video.
Resolution I am seeking
I want to do the face recognition without interrupting the output.write() function, so the resulting video will be smooth and no frames will be skipped.
Here is FaceDetection.py:
import threading
from datetime import datetime
import cv2
from FaceRecognition import FaceRecognition

face_cascade = cv2.CascadeClassifier('face_cascade.xml')
fr = FaceRecognition()
img = ""
cap = cv2.VideoCapture(0)

path = 'C:/Users/-- Kanbhaa --/Documents/Recorded/'
vid_name = str("recording_"+datetime.now().strftime("%b-%d-%Y_%H:%M").replace(":","_")+".avi")
vid_cod = cv2.VideoWriter_fourcc(*'MPEG')
output = cv2.VideoWriter( str(path+vid_name) , vid_cod, 20.0 ,(640,480))

def sendImage(imgI):
    fr.faceRec(imgI)

while True:
    t,img = cap.read()
    output.write(img)
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray,2,4)
    for _ in faces:
        t1 = threading.Thread(target=sendImage,args=(img,))
        t1.start()
    cv2.imshow('img',img)
    if cv2.waitKey(1) & 0XFF == ord('x'):
        break

cap.release()
output.release()
Here is FaceRecognition.py:
from simple_facerec import SimpleFacerec

class FaceRecognition:
    def __init__(self):
        global sfr
        # Encode faces from a folder
        sfr = SimpleFacerec()
        sfr.load_encoding_images("C:/Users/UserName/Documents/FaceRecognitionSystem/images/")

    def faceRec(self,frame):
        global sfr
        # Detect Faces
        face_locations, face_names = sfr.detect_known_faces(frame)
        for face_loc, name in zip(face_locations, face_names):
            if name != "Unknown":
                print("known Person detected :) => " + name)
            else:
                print("Unknown Person detected :( => Frame captured...")
The SimpleFacerec file imported in the above code is from YouTube, so I didn't code it. It can be found below.
SimpleFacerec.py
You wrote it wrong. It should be
t1 = threading.Thread(target=sendImage, args=(img,))
Try changing it, but don't use the underscore:
for index in faces:
    t1 = threading.Thread(target=sendImage, args=(index,))
    t1.start()
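For what it's worth, one way to keep output.write() from being interrupted is to run a single background worker instead of starting a new thread for every detected face in every frame. A minimal sketch of that idea (fr is the FaceRecognition instance from FaceDetection.py; the worker name, the queue size of 1, and the frame-dropping policy are assumptions, not taken from the original code):

import threading, queue

work_q = queue.Queue(maxsize=1)  # hold at most one pending frame

def recognitionWorker():
    while True:
        img = work_q.get()  # blocks until a frame arrives
        if img is None:  # sentinel value: shut the worker down
            break
        fr.faceRec(img)  # the expensive call runs off the main loop

threading.Thread(target=recognitionWorker, daemon=True).start()

# inside the capture loop, instead of one thread per face:
#     if len(faces) > 0 and not work_q.full():
#         work_q.put(img)  # skip recognition if the worker is still busy

This way the main loop never blocks on recognition, and output.write() keeps receiving every frame.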
I want the code to take 4 pics with a 1 second interval using cv2, then add them together using the hconcat function. When I try to run this code I get 'numpy.ndarray' object has no attribute 'release'. Can someone help?
import cv2, random

num = random.randint(0,2000)
cam = cv2.VideoCapture(0)
cv2.namedWindow("Mac")

def concat_tile(im_list_2d):
    return cv2.vconcat([cv2.hconcat(im_list_h) for im_list_h in im_list_2d])

x = []
for i in range(4):
    ret,frame = cam.read()
    x.append(frame)
    frame.release()
    cv2.destroyAllWindows()

im_v = concat_tile([[x[0],x[1]],
                    [x[2],x[3]]])
img_name = "opencv_frame_{}.png".format(num)
cv2.imwrite(img_name,im_v)
It should not be frame.release(); rather, try cam.release().
Also, release() and cv2.destroyAllWindows() should be written at the end of your code, not within the for loop:
for i in range(4):
    ret,frame = cam.read()
    x.append(frame)

cam.release()
cv2.destroyAllWindows()
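Putting that together with the 1 second interval from the question (the time.sleep(1) call is an addition here - the original code never paused between captures), a corrected sketch could look like this:

import cv2, random, time

num = random.randint(0, 2000)
cam = cv2.VideoCapture(0)

def concat_tile(im_list_2d):
    return cv2.vconcat([cv2.hconcat(im_list_h) for im_list_h in im_list_2d])

x = []
for i in range(4):
    ret, frame = cam.read()  # frame is a numpy array; it has no release() method
    x.append(frame)
    time.sleep(1)  # one second between the 4 pictures

cam.release()  # release the capture device once, at the end

im_v = concat_tile([[x[0], x[1]],
                    [x[2], x[3]]])
cv2.imwrite("opencv_frame_{}.png".format(num), im_v)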
I am creating a Raspberry Pi timelapse camera, encoding video with the CV2 VideoWriter.
Each image captured with picamera is added to the VideoWriter, and once the intended number of images has been taken, the VideoWriter closes.
However, while this works for a few thousand images, it stops at some limit with a file size of 366MB, which is now frustrating me, and I ask you, the internet and horde of coders, to tell me why I am bad at coding and how to fix it - you must be tempted by this..
Here is my offering of garbage for you to laugh pitifully at:
import os, cv2
from picamera import PiCamera
from picamera.array import PiRGBArray
from datetime import datetime
from time import sleep

now = datetime.now()
x = now.strftime("%Y")+"-"+now.strftime("%m")+"-"+now.strftime("%d")+"-"+now.strftime("%H")+"-"+now.strftime("%M")  # string of date and time at start
print(x)

def main():
    imagenum = 10000  # how many images
    period = 1  # seconds between images
    os.chdir("/home/pi/t_lapse")
    os.mkdir(x)
    os.chdir(x)
    filename = x + ".avi"

    camera = PiCamera()
    camera.resolution = (1920,1088)
    camera.vflip = True
    camera.hflip = True
    camera.color_effects = (128,128)  # makes a black and white image for IR camera
    sleep(0.1)

    out = cv2.VideoWriter(filename, cv2.cv.CV_FOURCC(*'XVID'), 30, (1920,1088))

    for c in range(imagenum):
        with PiRGBArray(camera, size=(1920,1088)) as output:
            camera.capture(output, 'bgr')
            imagec = output.array
            out.write(imagec)
            output.truncate(0)  # trying to get more than 300mb files..
        sleep(period-0.5)

    camera.close()
    out.release()

if __name__ == '__main__':
    main()
This example is part of the whole code I've written (https://github.com/gchennell/RPi-PiLapse), which has an OLED display, buttons, and selection of how many images to take, as I have this all in an enclosure. The number of images seems to be limited to about 3000-4000, and then it just gives up and goes home... I tried adding the output.truncate(0).
I have also recreated this in Python 3, before you cry "BUT CV2.CV2.VIDEOWRITER!!!!", and that hasn't changed a thing - I'm missing something here...
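As a side note, cv2.cv.CV_FOURCC only exists in the OpenCV 2.x bindings; under OpenCV 3.x the call is cv2.VideoWriter_fourcc. A version-independent writer setup, sketched along the lines of the version check used earlier in this thread, would be:

import cv2

# pick the FOURCC helper that matches the installed OpenCV version
(major_ver, minor_ver, subminor_ver) = cv2.__version__.split('.')
if int(major_ver) < 3:
    fourcc = cv2.cv.CV_FOURCC(*'XVID')
else:
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter("timelapse.avi", fourcc, 30, (1920, 1088))

This does not explain the 366MB limit by itself, but it rules out API mix-ups between the Python 2 and Python 3 runs.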
Currently I'm trying to write code with a GUI which will allow toggling image processing on/off. Ideally the code will allow turning on/off the window view, real-time image processing (pretty basic), and controlling an external board.
The problem I'm having revolves around the cv2.imshow() function. A few months back I made a push to increase processing rates by switching from picamera to cv2, where I can perform more complex computations like background subtraction without having to call Python all the time. Using the bcm2835-v4l2 package, I was able to pull images directly from the picamera using cv2.
Fast forward 6 months, and while trying to update the code I find that the function cv2.imshow() does not display correctly anymore. I thought it might be a problem with bcm2835-v4l2, but tests using matplotlib show that the connection is fine. It appears to have everything to do with cv2.imshow(), or so I guess.
I am actually creating a separate thread using the threading module for image capture, and I am wondering if this could be the culprit. I don't think so, though, as typing in the commands
import cv2
camera = cv2.VideoCapture(0)
grabbed,frame = camera.read()
cv2.imshow(frame)
produces the same black screen.
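(As a side note, cv2.imshow() requires a window name as its first argument, and its window is only repainted while cv2.waitKey() is running, so a standalone test of this kind would need to look like the sketch below.)

import cv2

camera = cv2.VideoCapture(0)
grabbed, frame = camera.read()
cv2.imshow('frame', frame)  # a window name is required
cv2.waitKey(0)  # without waitKey the window content is never drawn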
Below is the code I am using (on the RPi3) and some images showing the error and what is expected.
For reference, here are the details of my system:
Raspberry Pi 3
Raspbian Stretch
Python 3.5.1
OpenCV 3.4.1
Code
import cv2
from threading import Thread
import time
import numpy as np
from tkinter import Button, Label, mainloop, Tk, RIGHT

class GPIOControllersystem:
    def __init__(self, OutPinOne=22, OutPinTwo=27, Objsize=30, src=0):
        self.Objectsize = Objsize
        # Build GUI controller
        self.TK = Tk()  # Place TK GUI class into self
        # Variables
        self.STSP = 0
        self.ShutdownVar = 0
        self.Abut = []
        self.Bbut = []
        self.Cbut = []
        self.Dbut = []
        # setup pi camera for acquisition
        self.resolution = (640,480)
        self.framerate = 60
        # Video capture parameters
        (w,h) = self.resolution
        self.bytesPerFrame = w * h
        self.Camera = cv2.VideoCapture(src)
        self.fgbg = cv2.createBackgroundSubtractorMOG2()

    def Testpins(self):
        while True:
            grabbed, frame = self.Camera.read()
            frame = self.fgbg.apply(frame)
            if self.ShutdownVar == 1:
                break
            if self.STSP == 1:
                pic1, pic2 = map(np.copy, (frame, frame))
                pic1[pic1 > 126] = 255
                pic2[pic2 < 250] = 0
                frame = pic1
            elif self.STSP == 1:
                time.sleep(1)
            cv2.imshow("Window", frame)
        cv2.destroyAllWindows()

    def MProcessing(self):
        Thread(target=self.Testpins, args=()).start()
        return self

    def BuildGUI(self):
        self.Abut = Button(self.TK, text="Start/Stop System", command=self.CallbackSTSP)
        self.Bbut = Button(self.TK, text="Change Pump Speed", command=self.CallbackShutdown)
        self.Cbut = Button(self.TK, text="Shutdown System", command=self.callbackPumpSpeed)
        self.Dbut = Button(self.TK, text="Start System", command=self.MProcessing)
        self.Abut.pack(padx=5, pady=10, side=RIGHT)
        self.Bbut.pack(padx=5, pady=10, side=RIGHT)
        self.Cbut.pack(padx=5, pady=10, side=RIGHT)
        self.Dbut.pack(padx=5, pady=10, side=RIGHT)
        Label(self.TK, text="Controller").pack(padx=5, pady=10, side=RIGHT)
        mainloop()

    def CallbackSTSP(self):
        if self.STSP == 1:
            self.STSP = 0
            print("stop")
        elif self.STSP == 0:
            self.STSP = 1
            print("start")

    def CallbackShutdown(self):
        self.ShutdownVar = 1

    def callbackPumpSpeed(self):
        pass

if __name__ == "__main__":
    GPIOControllersystem().BuildGUI()
Using matplotlib.pyplot.imshow(), I can see that the connection between the Raspberry Pi camera and OpenCV is working through the bcm2835-v4l2 connection.
However, when using cv2.imshow(), the window results in a black box; nothing is displayed.
Update: while testing, I found out that when I perform the following task
import cv2
import matplotlib.pyplot

camera = cv2.VideoCapture(0)
grab, frame = camera.read()
matplotlib.pyplot.imshow(frame)
grab, frame = camera.read()
matplotlib.pyplot.imshow(frame)
the update issue was solved and is not related to the main problem. This was a buffering issue, and it appears to have no correlation to cv2.imshow().
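If anyone hits the same buffering behaviour: a common workaround, sketched here with an arbitrary count of five discarded frames, is to flush the capture buffer before using a frame:

import cv2

camera = cv2.VideoCapture(0)
for _ in range(5):
    camera.grab()  # grab and discard buffered, stale frames
grabbed, frame = camera.read()  # this read now returns a current frame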
On a Raspberry Pi you should work with
from picamera import PiCamera
Check out PyImageSearch for that.
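A minimal capture loop in that style (a sketch along the lines of the PyImageSearch tutorials; the resolution and the 'q' key binding are arbitrary choices):

from picamera import PiCamera
from picamera.array import PiRGBArray
import cv2

camera = PiCamera()
camera.resolution = (640, 480)
rawCapture = PiRGBArray(camera, size=(640, 480))

for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    image = frame.array  # numpy array in BGR order, as cv2 expects
    cv2.imshow("Frame", image)
    rawCapture.truncate(0)  # clear the stream for the next frame
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()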
Aim: detect motion and save only the motion periods in files named with their starting time.
Now I have met an issue: how to save the video to files named with the video starting time.
What I tested:
I tested my program part by part. It seems that each part works well except the saving part.
Running status: no error. But in the saving folder there is no video. If I use a static saving path instead, the video is saved successfully, but it will be overwritten by the next video. My code is below:
import cv2
import numpy as np
import time

cap = cv2.VideoCapture( 0 )
bgst = cv2.createBackgroundSubtractorMOG2()
fourcc = cv2.VideoWriter_fourcc(*'DIVX')
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
n = "start_time"

while True:
    ret, frame = cap.read()
    dst = bgst.apply(frame)
    dst = np.array(dst, np.int8)
    if np.count_nonzero(dst) > 3000:  # use this value to adjust the "sensitivity"
        print('something is moving %s' % (time.ctime()))
        path = r'E:\OpenCV\Motion_Detection\%s.avi' % n
        out = cv2.VideoWriter( path, fourcc, 50, size )
        out.write(frame)
        key = cv2.waitKey(3)
        if key == 32:
            break
    else:
        out.release()
        n = time.ctime()
        print("No motion Detected %s" % n)
What I meant is:
import cv2
import numpy as np
import time

cap = cv2.VideoCapture( 0 )
bgst = cv2.createBackgroundSubtractorMOG2()
fourcc = cv2.VideoWriter_fourcc(*'DIVX')
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
path = r'E:\OpenCV\Motion_Detection\%s.avi' % (time.ctime())
out = cv2.VideoWriter( path, fourcc, 16, size )

while True:
    ret, frame = cap.read()
    dst = bgst.apply(frame)
    dst = np.array(dst, np.int8)
    for i in range(number of frames in the video):  # pseudocode
        if np.count_nonzero(dst) < 3000:  # use this value to adjust the "sensitivity"
            print("No Motion Detected")
            out.release()
        else:
            print('something is moving %s' % (time.ctime()))
            # label each frame you want to output here
            out.write(frame(i))  # pseudocode
    key = cv2.waitKey(1)
    if key == 32:
        break

cap.release()
cv2.destroyAllWindows()
If you look at the code, there is a for loop within which the saving is done.
I do not know the exact syntax for the loop over frames, but I hope you get the gist of it. You have to find the number of frames present in the video and set that as the range of the for loop.
Each frame gets saved uniquely (see the else condition). As I said, I do not know the syntax; please take this as the procedure and follow it.
Cheers!
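To make the procedure above concrete, here is a minimal runnable sketch of the same idea: open a new VideoWriter when motion starts, write frames while it lasts, and release it when motion stops. The 3000-pixel threshold is taken from the question; the 20 fps value and the timestamp format are assumptions. Note that time.ctime() produces ':' characters, which are not valid in Windows filenames - that may be why no file appeared with the dynamic path.

import cv2
import numpy as np
import time

cap = cv2.VideoCapture(0)
bgst = cv2.createBackgroundSubtractorMOG2()
fourcc = cv2.VideoWriter_fourcc(*'DIVX')
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))

out = None  # a writer exists only while motion is present
while True:
    ret, frame = cap.read()
    if not ret:
        break
    dst = bgst.apply(frame)
    if np.count_nonzero(dst) > 3000:  # motion detected
        if out is None:  # motion just started: open a new file
            n = time.strftime("%Y-%m-%d_%H-%M-%S")  # filename-safe timestamp
            out = cv2.VideoWriter(r'E:\OpenCV\Motion_Detection\%s.avi' % n, fourcc, 20, size)
        out.write(frame)
    elif out is not None:  # motion just stopped: close the file
        out.release()
        out = None
    if cv2.waitKey(1) == 32:  # [Space] to quit
        break

if out is not None:
    out.release()
cap.release()
cv2.destroyAllWindows()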