Python OpenCV: open webcam faster - python

I have a product that needs to open multiple cameras (about 20), with each camera capturing a single image. However, initiating each camera takes about 3-4 seconds, and doing this in sequence adds up to a long total time.
So the question is: is there a way to open a USB camera faster, and is there a way to do it concurrently?
I would appreciate any advice.
Thanks to everyone! My code is attached below.
import cv2

def take_picture(camera_id):
    cap = cv2.VideoCapture(camera_id, cv2.CAP_DSHOW)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
    res = None
    name = None
    while cap.isOpened():
        ret0, frame0 = cap.read()
        # guard against a failed read: frame0 is None when ret0 is False
        if ret0 and frame0 is not None and frame0.any():
            real_location = 'station' + str(pos)  # pos and cursor are defined elsewhere in the full program
            name = real_location + '-' + 'collection' + str(cursor) + '.png'
            res = frame0
            break
    cap.release()
    return [name, res]
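Since most of the 3-4 second initialisation is spent waiting on the driver rather than on the CPU, the per-camera work can usually be overlapped with threads. Below is a minimal sketch (my own, not the asker's code): `capture_all` is a hypothetical helper that runs any per-camera function, such as `take_picture` above, across a thread pool and returns the results in camera-id order.

```python
import concurrent.futures

def capture_all(camera_ids, grab_one, max_workers=None):
    """Call grab_one(camera_id) for every id concurrently.

    Results come back in the same order as camera_ids. One thread per
    camera is reasonable here because VideoCapture setup is I/O-bound,
    so the threads spend most of their time waiting on the driver.
    """
    camera_ids = list(camera_ids)
    workers = max_workers or len(camera_ids)
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(grab_one, camera_ids))

# usage sketch: results = capture_all(range(20), take_picture)
```

With 20 cameras this turns roughly 20 x 3-4 s of sequential setup into a single 3-4 s wall-clock wait, subject to USB bandwidth and driver limits, which may force a smaller `max_workers`.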

Related

When saving multiple videos with OpenCV, the previously generated videos become corrupted. Is this solvable?

I have written a program that uses the web-cam to monitor a machine, and when a movement is detected, I want to save the ‘movement’-frames into a time stamped video.
However, while the program is running, the latest video is viewable, but the previous versions are corrupted.
Ex:
Files:
Movement_11_25_01.avi (corrupted)
Movement_11_28_22.avi (corrupted)
Movement_11_33_21.avi (this is the latest video created, and it is viewable.)
And then another movement is detected;
Files:
Movement_11_25_01.avi (corrupted)
Movement_11_28_22.avi (corrupted)
Movement_11_33_21.avi (corrupted)
Movement_11_35_41.avi (this is the latest video created, and it is viewable.)
This is the code that runs in the end of every motion detected:
out = cv2.VideoWriter('Results\\' + self.name, self.fourcc, self.fps,
                      (self.frame_width, self.frame_height))
print('Len Frames before saving video: ', len(self.frames))
for frame in self.frames:
    out.write(frame)
out.release()
self.cap.release()
print('Video saved.')
cv2.destroyAllWindows()
self.frames = list()
# Open up a new instance of VideoCapture
self.cap = cv2.VideoCapture(file_path, cv2.CAP_DSHOW)
I suspect that the corruption is related to how the video file is stored while the program is running, but I cannot figure out how to save a video, close the connection to that file, and then create a new file.
Any help is appreciated!
I have tried to close the cap = cv2.VideoCapture() and the out = cv2.VideoWriter('Results\\'+self.name, self.fourcc, self.fps, (self.frame_width, self.frame_height)) by running cap.release() and out.release(), but with no success. Only when the program is terminated is an uncorrupted file generated, and it is the most recent file.
Edit 2022-11-07:
Here is an MRE that I run on Windows 10, with Python 3.10.6, together with the modules numpy (1.23.2) and opencv-python (4.6.0.66).
I have noticed that in this MRE, the first file is saved in a workable format. However, the second and third files are corrupted. The main issue is the same, but reversed.
import cv2
import numpy as np
import time
from datetime import datetime

file_path = 0
cap = cv2.VideoCapture(file_path, cv2.CAP_DSHOW)  # create a capture instance
startTime = time.time()  # start times
date_time = datetime.fromtimestamp(startTime)
frames = list()
image = list()
frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = 30

for _ in range(3):
    # Get some images
    for _ in range(60):
        _, frame = cap.read()
        frames.append(frame)
    # Initiate a video file
    fourcc = cv2.VideoWriter_fourcc('X', 'V', 'I', 'D')
    date_time = datetime.fromtimestamp(time.time())
    name = "v2Motiondetect_output" + str(date_time.strftime("%d-%m-%Y__ %H_%M_%S")) + ".avi"
    out = cv2.VideoWriter(name, fourcc, fps, (frame_width, frame_height))
    print('Len Frames before saving video: ', len(frames))
    # Write out all frames
    for frame in frames:
        out.write(frame)
    print('Video Capture before ending program: ', out)
    print('Frames before ending program: ', id(frames))
    out.release()
    cap.release()
    print('Video Capture after ending program: ', out)
    print('Frames after ending program: ', id(frames))
    frames = list()
    print('frames after it is overwritten with an empty list object:', id(frames))
    cv2.destroyAllWindows()
    time.sleep(2)  # Needed to ensure different timestamps

How to do concurrent executions in Python?

Description
I have a file `FaceDetection.py` which detects faces in webcam footage using `opencv` and then writes frames into an `output` object, converting them into a video. If a face is present in the frame, a function `sendImg()` is called, which in turn calls the function `faceRec()` from another imported file. Finally, this method prints the face name if the face is known, or prints "unknown".
Problem I am facing
The face recognition process is quite expensive, which is why I thought that running it from a different file on a thread would create a background process, and the output.write() function would not be interrupted while doing face recognition. But as soon as sendImg() gets called, output.write() gets interrupted, resulting in skipped frames in the saved video.
Resolution I am seeking
I want to do the face recognition without interrupting the output.write() function, so that the resulting video is smooth and no frames are skipped.
Here is the FaceDetection.py
import threading
from datetime import datetime
import cv2
from FaceRecognition import FaceRecognition

face_cascade = cv2.CascadeClassifier('face_cascade.xml')
fr = FaceRecognition()
img = ""
cap = cv2.VideoCapture(0)
path = 'C:/Users/-- Kanbhaa --/Documents/Recorded/'
vid_name = str("recording_" + datetime.now().strftime("%b-%d-%Y_%H:%M").replace(":", "_") + ".avi")
vid_cod = cv2.VideoWriter_fourcc(*'MPEG')
output = cv2.VideoWriter(str(path + vid_name), vid_cod, 20.0, (640, 480))

def sendImage(imgI):
    fr.faceRec(imgI)

while True:
    t, img = cap.read()
    output.write(img)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 2, 4)
    for _ in faces:
        t1 = threading.Thread(target=sendImage, args=(img,))
        t1.start()
    cv2.imshow('img', img)
    if cv2.waitKey(1) & 0xFF == ord('x'):
        break

cap.release()
output.release()
Here is the FaceRecognition.py
from simple_facerec import SimpleFacerec

class FaceRecognition:
    def __init__(self):
        global sfr
        # Encode faces from a folder
        sfr = SimpleFacerec()
        sfr.load_encoding_images("C:/Users/UserName/Documents/FaceRecognitionSystem/images/")

    def faceRec(self, frame):
        global sfr
        # Detect Faces
        face_locations, face_names = sfr.detect_known_faces(frame)
        for face_loc, name in zip(face_locations, face_names):
            if name != "Unknown":
                print("known Person detected :) => " + name)
            else:
                print("Unknown Person detected :( => Frame captured...")
The file SimpleFacerec imported in the above code is from YouTube, so I didn't write it. It can be found here:
SimpleFacerec.py
You wrote it wrong. It should be
t1 = threading.Thread(target=sendImage, args=(img,))
Try changing it, but don't use an underscore:
for index in faces:
    t1 = threading.Thread(target=sendImage, args=(index,))
    t1.start()
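An alternative worth sketching (my own suggestion, not from the answers): instead of spawning a new thread per detection, hand frames to a single long-lived worker through a queue, so the capture/write loop never blocks on recognition. The `handler` argument is a stand-in for something like `fr.faceRec`.

```python
import queue
import threading

def start_background_worker(handler, maxsize=8):
    """Start one long-lived worker thread that drains a queue.

    The main loop calls q.put_nowait(frame) and keeps writing video;
    handler(frame) runs on the worker thread. A bounded queue drops
    work (via queue.Full) instead of piling up frames when the
    recognition step is slower than the camera.
    """
    q = queue.Queue(maxsize=maxsize)

    def worker():
        while True:
            item = q.get()
            if item is None:  # sentinel: shut down
                break
            handler(item)
            q.task_done()

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return q, t
```

In the main loop you would wrap `q.put_nowait(img)` in a `try/except queue.Full`, and on exit send `q.put(None)` then `t.join()`. Note that threads only help here if the heavy work releases the GIL; native OpenCV and face-recognition calls mostly do, but if not, a process pool is the next step.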

How to restrict a camera to capturing or not capturing images in Python with OpenCV?

I have a camera facing an LED sensor, and I want to restrict it to capturing images only when an object is in sight, and otherwise not capturing. I will designate a point or region whose colour pixels change as an object arrives; the program will wait for 3 seconds and then capture. Thereafter it should not capture the same object again if it remains in the designated region. Here is my code.
import cv2
import numpy as np
import os
import time  # needed for time.sleep below

os.chdir('C:/Users/Man/Desktop')
previous_flag = 0
current_flag = 0
a = 0
video = cv2.VideoCapture(0)

while True:
    ret, frame = video.read()
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    cv2.imshow('Real Time', rgb)
    cv2.waitKey(1)
    if rgb[400, 200, 0] > 200:  # selecting a point that will change in terms of pixel values
        current_flag = 0
        previous_flag = current_flag
        print('No object Detected')
    else:  # object arrived
        current_flag = 1
        change_in_flags = current_flag - previous_flag
        if change_in_flags == 1:
            time.sleep(3)
            cv2.imwrite('./trial/cam' + str(a) + '.jpg', rgb)
            a += 1
        previous_flag = current_flag

video.release()
cv2.destroyAllWindows()
When I implement the above program, for the first case (if) it prints several lines of "No object Detected".
How can I reduce those sequentially printed "No object Detected" lines, so that the program prints just one line without repetition?
I also tried to add a while loop to keep the current status true after a += 1, like:
while True:
    previous_flag = current_flag
    continue
It worked, but the system became very slow. Is there any way to avoid such a while loop so that the system becomes faster? Any advice on that?
import cv2
import numpy as np
import os
import time  # needed for time.sleep below

os.chdir('C:/Users/Man/Desktop')
state1 = 0
a = 0
video = cv2.VideoCapture(0)

while True:
    ret, frame = video.read()
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    cv2.imshow('Real Time', rgb)
    cv2.waitKey(1)
    if rgb[400, 200, 0] > 200:  # selecting a point that will change in terms of pixel values
        if state1 == 0:
            print('No object Detected')
            state1 = 1
    else:  # object arrived
        time.sleep(1)
        if state1 == 1:  # colon was missing here in the original
            print('Object is detected')
            cv2.imwrite('./trial/cam' + str(a) + '.jpg', rgb)
            a += 1
            state1 = 0

video.release()
cv2.destroyAllWindows()
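The one-shot behaviour both snippets are reaching for is an edge trigger: act only on the transition between "absent" and "present", never on the steady state. Here is a small sketch of that idea as a hypothetical helper (not from the question), which also removes the need for any busy-wait loop:

```python
class EdgeTrigger:
    """Report rising and falling edges of a boolean signal.

    update() returns 'appeared', 'disappeared', or None, so the caller
    prints or saves exactly once per transition instead of every frame.
    """
    def __init__(self):
        self.prev = False

    def update(self, present):
        edge = None
        if present and not self.prev:
            edge = 'appeared'
        elif not present and self.prev:
            edge = 'disappeared'
        self.prev = present
        return edge
```

In the camera loop this would look like `present = rgb[400, 200, 0] <= 200`, then capture only when `update(present)` returns `'appeared'` and print "No object Detected" once when it returns `'disappeared'`.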

Can't write frames to a video with multiprocessing + cv2

I have code which breaks a video into frames, edits the images, and puts them back into a video, but I realized that it's really slow... So I looked into multiprocessing to speed up the code, and it works! As far as I can see it processes the images much faster, but the problem is that when I add those frames to a new video, it doesn't work: the video remains empty!
Here is my code:
# Imports
import cv2, sys, time
import numpy as np
from scipy.ndimage import rotate
from PIL import Image, ImageDraw, ImageFont, ImageOps
import concurrent.futures

def function(fullimg):
    img = np.array(Image.fromarray(fullimg).crop((1700, 930, 1920 - 60, 1080 - 80)))
    inpaintRadius = 10
    inpaintMethod = cv2.INPAINT_TELEA
    textMask = cv2.imread('permanentmask.jpg', 0)
    final_result = cv2.inpaint(img.copy(), textMask, inpaintRadius, inpaintMethod)
    text = Image.fromarray(np.array([np.array(i) for i in final_result]).astype(np.uint8)).convert('RGBA')
    im = np.array([[tuple(x) for x in i] for i in np.zeros((70, 160, 4))])
    im[1:-1, 1:-1] = (170, 13, 5, 40)
    im[0, :] = (0, 0, 0, 128)
    im[1:-1, [0, -1]] = (0, 0, 0, 128)
    im[-1, :] = (0, 0, 0, 128)
    im = Image.fromarray(im.astype(np.uint8))
    draw = ImageDraw.Draw(im)
    font = ImageFont.truetype('arialbd.ttf', 57)
    draw.text((5, 5), "TEXT", (255, 255, 255, 128), font=font)
    text.paste(im, mask=im)
    text = np.array(text)
    fullimg = Image.fromarray(fullimg)
    fullimg.paste(Image.fromarray(text), (1700, 930, 1920 - 60, 1080 - 80))
    fullimg = cv2.cvtColor(np.array(fullimg), cv2.COLOR_BGR2RGB)
    return fullimg

cap = cv2.VideoCapture('before2.mp4')
_fourcc = cv2.VideoWriter_fourcc(*'MPEG')
out = cv2.VideoWriter('after.mp4', _fourcc, 29.97, (1280, 720))
frames = []
lst = []

while cap.isOpened():
    ret, fullimg = cap.read()
    if not ret:
        break
    frames.append(fullimg)
    if len(frames) >= 8:
        if __name__ == '__main__':
            with concurrent.futures.ProcessPoolExecutor() as executor:
                results = executor.map(function, frames)
                for i in results:
                    print(type(i))
                    out.write(i)
            frames.clear()

cap.release()
out.release()
cv2.destroyAllWindows()  # destroy all opened windows
My code inpaints a watermark and adds another watermark using PIL.
If I don't use multiprocessing the code works. But if I do use multiprocessing, it gives an empty video.
I am not that familiar with OpenCV, but there seem to be a few things that should be corrected in your code. First, you appear to be running under Windows, because you have if __name__ == '__main__': guarding the code that creates new processes (by the way, when you tag a question with multiprocessing, you should also tag it with the platform being used). On Windows, any code at global scope will be executed by every process created to implement your pool. That means you should move if __name__ == '__main__': up, as follows:
if __name__ == '__main__':
    cap = cv2.VideoCapture('before2.mp4')
    _fourcc = cv2.VideoWriter_fourcc(*'MPEG')
    out = cv2.VideoWriter('after.mp4', _fourcc, 29.97, (1280, 720))
    frames = []
    lst = []
    while cap.isOpened():
        ret, fullimg = cap.read()
        if not ret:
            break
        frames.append(fullimg)
        if len(frames) >= 8:
            with concurrent.futures.ProcessPoolExecutor() as executor:
                results = executor.map(function, frames)
                for i in results:
                    print(type(i))
                    out.write(i)
            frames.clear()
    cap.release()
    out.release()
    cv2.destroyAllWindows()  # destroy all opened windows
If you do not do this, it seems to me that every sub-process in the pool will first attempt, in parallel, to create an empty video (the function worker and out.write will never be called by these processes), and only then will the main process be able to invoke the function worker using map. This doesn't quite explain why the main process doesn't succeed after all of these wasteful attempts. But...
You also have:
while cap.isOpened():
The documentation states that isOpened() returns True if the previous VideoCapture constructor succeeded. So if this returns True once, why wouldn't it return True the next time it is tested, leaving you looping indefinitely? Shouldn't the while be changed to an if? And doesn't this suggest that isOpened() is perhaps returning False, or else you would be looping indefinitely? Or what if len(frames) < 8? Then, too, you would end up with an empty output file.
My suggestion would be to make the above changes and try again.
Update
I took a closer look at the code, and it appears to loop reading the input (before2.mp4) one frame at a time; when it has accumulated 8 or more frames, it creates a pool, processes the accumulated frames, and writes them out to the output (after.mp4). But that means that if there are, for example, 8 more frames, it will create a brand new processing pool (very wasteful and expensive) and then write out the 8 additional processed frames. And if there were only 7 additional frames, they would never get processed and written out. I would suggest the following code (untested, of course):
def main():
    import os

    cap = cv2.VideoCapture('before2.mp4')
    if not cap.isOpened():
        return
    _fourcc = cv2.VideoWriter_fourcc(*'MPEG')
    out = cv2.VideoWriter('after.mp4', _fourcc, 29.97, (1280, 720))
    FRAMES_AT_A_TIME = 8
    pool_size = min(FRAMES_AT_A_TIME, os.cpu_count())
    with concurrent.futures.ProcessPoolExecutor(max_workers=pool_size) as executor:
        more_frames = True
        while more_frames:
            frames = []
            for _ in range(FRAMES_AT_A_TIME):
                ret, fullimg = cap.read()
                if not ret:
                    more_frames = False
                    break
                frames.append(fullimg)
            if not frames:
                break  # no frames
            results = executor.map(function, frames)
            for i in results:
                print(type(i))
                out.write(i)
    cap.release()
    out.release()
    cv2.destroyAllWindows()  # destroy all opened windows

if __name__ == '__main__':
    main()
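The tail-frame problem described above (a leftover batch smaller than 8 never being processed) can also be factored into a tiny generator. This is my own sketch of the idea, not part of the answer; `read_frames` in the usage comment is a hypothetical wrapper around `cap.read()`.

```python
def chunked(iterable, size):
    """Yield successive lists of at most `size` items.

    Unlike the 'if len(frames) >= 8' pattern, the final partial
    chunk is always yielded, so no trailing frames are dropped.
    """
    chunk = []
    for item in iterable:
        chunk.append(item)
        if len(chunk) == size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk

# usage sketch:
#     for batch in chunked(read_frames(cap), 8):
#         for result in executor.map(function, batch):
#             out.write(result)
```

Because `executor.map` preserves input order, writing results batch by batch keeps the output frames in the same order as the input video.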

OpenCV - Save video segments based on a certain condition

Aim: detect motion and save only the motion periods to files named with the starting time.
I am now stuck on how to save the video to files named with the video's starting time.
What I tested:
I tested my program part by part. Each part seems to work well except the saving part.
Running status: no error. But in the saving folder there is no video. If I use a static saving path instead, the video is saved successfully, but it gets overwritten by the next video. My code is below:
import cv2
import numpy as np
import time

cap = cv2.VideoCapture(0)
bgst = cv2.createBackgroundSubtractorMOG2()
fourcc = cv2.VideoWriter_fourcc(*'DIVX')
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
n = "start_time"

while True:
    ret, frame = cap.read()
    dst = bgst.apply(frame)
    dst = np.array(dst, np.int8)
    if np.count_nonzero(dst) > 3000:  # use this value to adjust the "Sensitivity"
        print('something is moving %s' % (time.ctime()))
        path = r'E:\OpenCV\Motion_Detection\%s.avi' % n
        out = cv2.VideoWriter(path, fourcc, 50, size)
        out.write(frame)
        key = cv2.waitKey(3)
        if key == 32:
            break
    else:
        out.release()
        n = time.ctime()
        print("No motion Detected %s" % n)
What I meant is:
import cv2
import numpy as np
import time

cap = cv2.VideoCapture(0)
bgst = cv2.createBackgroundSubtractorMOG2()
fourcc = cv2.VideoWriter_fourcc(*'DIVX')
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
path = r'E:\OpenCV\Motion_Detection\%s.avi' % (time.ctime())
out = cv2.VideoWriter(path, fourcc, 16, size)

while True:
    ret, frame = cap.read()
    dst = bgst.apply(frame)
    dst = np.array(dst, np.int8)
    for i in range(number of frames in the video):  # pseudocode
        if np.count_nonzero(dst) < 3000:  # use this value to adjust the "Sensitivity"
            print("No Motion Detected")
            out.release()
        else:
            print('something is moving %s' % (time.ctime()))
            # label each frame you want to output here
            out.write(frame(i))  # pseudocode
    key = cv2.waitKey(1)
    if key == 32:
        break

cap.release()
cv2.destroyAllWindows()
As you can see in the code, there is a for loop within which the saving is done.
I do not know the exact syntax for looping over frames, but I hope you get the gist of it. You have to find the number of frames in the video and set that as the range of the for loop.
Each frame gets saved uniquely (see the else condition). As I said, I do not know the syntax; please adapt and follow this procedure.
Cheers!
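Neither snippet manages the writer's lifetime cleanly: a timestamped writer should be opened when motion starts and released when motion ends. Here is a small state-machine sketch of that idea (a hypothetical helper of my own, not from the thread); `make_writer` would wrap cv2.VideoWriter with a time-stamped path.

```python
class SegmentRecorder:
    """Open one writer per motion segment and close it when motion stops.

    make_writer() is called lazily at the start of each segment, so a
    time-stamped filename is generated at that moment, not at startup.
    """
    def __init__(self, make_writer):
        self.make_writer = make_writer
        self.writer = None

    def update(self, motion, frame):
        if motion:
            if self.writer is None:       # rising edge: start a new file
                self.writer = self.make_writer()
            self.writer.write(frame)
        elif self.writer is not None:     # falling edge: finalise the file
            self.writer.release()
            self.writer = None

    def close(self):
        self.update(False, None)
```

In the motion loop this becomes `rec.update(np.count_nonzero(dst) > 3000, frame)` per frame, with `rec.close()` before `cap.release()`, so each motion period ends up in its own properly finalised file.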
