Can't write frames to a video with multiprocessing + cv2 - python

I have code that breaks a video into frames, edits each image, and puts them back into a video, but I realized it's really slow... So I looked into multiprocessing to speed up the code, and it works! As far as I can see it processes the images much faster, but the problem is that when I write those frames to a new video it doesn't work: the video remains empty!
Here is my code:
# Imports
import cv2, sys, time
import numpy as np
from scipy.ndimage import rotate
from PIL import Image, ImageDraw, ImageFont, ImageOps
import concurrent.futures

def function(fullimg):
    img = np.array(Image.fromarray(fullimg).crop((1700, 930, 1920-60, 1080-80)))
    inpaintRadius = 10
    inpaintMethod = cv2.INPAINT_TELEA
    textMask = cv2.imread('permanentmask.jpg', 0)
    final_result = cv2.inpaint(img.copy(), textMask, inpaintRadius, inpaintMethod)
    text = Image.fromarray(np.array([np.array(i) for i in final_result]).astype(np.uint8)).convert('RGBA')
    im = np.array([[tuple(x) for x in i] for i in np.zeros((70, 160, 4))])
    im[1:-1, 1:-1] = (170, 13, 5, 40)
    im[0, :] = (0, 0, 0, 128)
    im[1:-1, [0, -1]] = (0, 0, 0, 128)
    im[-1, :] = (0, 0, 0, 128)
    im = Image.fromarray(im.astype(np.uint8))
    draw = ImageDraw.Draw(im)
    font = ImageFont.truetype('arialbd.ttf', 57)
    draw.text((5, 5), "TEXT", (255, 255, 255, 128), font=font)
    text.paste(im, mask=im)
    text = np.array(text)
    fullimg = Image.fromarray(fullimg)
    fullimg.paste(Image.fromarray(text), (1700, 930, 1920-60, 1080-80))
    fullimg = cv2.cvtColor(np.array(fullimg), cv2.COLOR_BGR2RGB)
    return fullimg

cap = cv2.VideoCapture('before2.mp4')
_fourcc = cv2.VideoWriter_fourcc(*'MPEG')
out = cv2.VideoWriter('after.mp4', _fourcc, 29.97, (1280, 720))

frames = []
lst = []

while cap.isOpened():
    ret, fullimg = cap.read()
    if not ret:
        break
    frames.append(fullimg)
    if len(frames) >= 8:
        if __name__ == '__main__':
            with concurrent.futures.ProcessPoolExecutor() as executor:
                results = executor.map(function, frames)
                for i in results:
                    print(type(i))
                    out.write(i)
        frames.clear()

cap.release()
out.release()
cv2.destroyAllWindows() # destroy all opened windows
My code inpaints a watermark and adds another watermark using PIL.
If I don't use multiprocessing, the code works. But if I do use multiprocessing, it produces an empty video.

I am not that familiar with OpenCV, but there seem to be a few things that should be corrected in your code. First, if you are running under Windows, as you appear to be because you have if __name__ == '__main__': guarding the code that creates new processes (by the way, when you tag a question with multiprocessing, you should also tag it with the platform being used), then any code at global scope will be executed by every process created to implement your pool. That means you should move if __name__ == '__main__': as follows:
if __name__ == '__main__':
    cap = cv2.VideoCapture('before2.mp4')
    _fourcc = cv2.VideoWriter_fourcc(*'MPEG')
    out = cv2.VideoWriter('after.mp4', _fourcc, 29.97, (1280, 720))

    frames = []
    lst = []

    while cap.isOpened():
        ret, fullimg = cap.read()
        if not ret:
            break
        frames.append(fullimg)
        if len(frames) >= 8:
            with concurrent.futures.ProcessPoolExecutor() as executor:
                results = executor.map(function, frames)
                for i in results:
                    print(type(i))
                    out.write(i)
            frames.clear()

    cap.release()
    out.release()
    cv2.destroyAllWindows() # destroy all opened windows
If you do not do this, it seems to me that every sub-process in the pool will first attempt, in parallel, to create an empty video (the worker function function and out.write will never be called by these processes), and only then will the main process be able to invoke the worker using map. This doesn't quite explain why the main process doesn't succeed after all of these wasteful attempts. But...
You also have:
while cap.isOpened():
The documentation states that isOpened() returns True if the previous VideoCapture constructor succeeded. Then if this returns True once, why wouldn't it return True the next time it is tested, leaving you looping indefinitely? Shouldn't the while be changed to an if? And doesn't this suggest that isOpened() is perhaps returning False, or else you would be looping indefinitely? Or what if len(frames) < 8? It seems you would then also end up with an empty output file.
My suggestion would be to make the above changes and try again.
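As an aside, here is a minimal, self-contained sketch (mine, not from the question) that makes the guard requirement visible. Under the spawn start method used on Windows, every pool worker re-imports the main module, so the unguarded print below runs once in the parent and once per worker, while the guarded block runs only in the parent:

# spawn_guard_demo.py -- illustrative only
import os
import concurrent.futures

# This line executes in the parent AND in every worker process,
# because each spawned worker re-imports this module.
print("module imported in process", os.getpid())

def square(x):
    return x * x

if __name__ == '__main__':
    # Only the parent process reaches this block.
    with concurrent.futures.ProcessPoolExecutor(max_workers=2) as executor:
        print(list(executor.map(square, range(4))))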
Update
I took a closer look at the code and it appears that it loops reading the input (before2.mp4) one frame at a time, and when it has accumulated 8 frames or more it creates a pool, processes the frames it has accumulated, and writes them out to the output (after.mp4). That means that if there are, for example, 8 more frames, it will create a brand new processing pool (very wasteful and expensive) and then write out the 8 additional processed frames. But if there were only 7 additional frames, they would never get processed and written out. I would suggest the following code (untested, of course):
def main():
    import os

    cap = cv2.VideoCapture('before2.mp4')
    if not cap.isOpened():
        return
    _fourcc = cv2.VideoWriter_fourcc(*'MPEG')
    out = cv2.VideoWriter('after.mp4', _fourcc, 29.97, (1280, 720))

    FRAMES_AT_A_TIME = 8
    pool_size = min(FRAMES_AT_A_TIME, os.cpu_count())
    with concurrent.futures.ProcessPoolExecutor(max_workers=pool_size) as executor:
        more_frames = True
        while more_frames:
            frames = []
            for _ in range(FRAMES_AT_A_TIME):
                ret, fullimg = cap.read()
                if not ret:
                    more_frames = False
                    break
                frames.append(fullimg)
            if not frames:
                break # no frames
            results = executor.map(function, frames)
            for i in results:
                print(type(i))
                out.write(i)

    cap.release()
    out.release()
    cv2.destroyAllWindows() # destroy all opened windows

if __name__ == '__main__':
    main()

Related

Live dehazed video displayed with opencv, freezes, goes not responding

I want to implement this library for video dehazing.
I only have a CPU, but I expect the result to be good without a GPU, because the video output of DCP, or any other dehazing algorithm, works well.
So I developed this code:
import cv2
import torch
import numpy as np
import torch.nn as nn
import math

class dehaze_net(nn.Module):
    def __init__(self):
        super(dehaze_net, self).__init__()
        self.relu = nn.ReLU(inplace=True)
        self.e_conv1 = nn.Conv2d(3, 3, 1, 1, 0, bias=True)
        self.e_conv2 = nn.Conv2d(3, 3, 3, 1, 1, bias=True)
        self.e_conv3 = nn.Conv2d(6, 3, 5, 1, 2, bias=True)
        self.e_conv4 = nn.Conv2d(6, 3, 7, 1, 3, bias=True)
        self.e_conv5 = nn.Conv2d(12, 3, 3, 1, 1, bias=True)

    def forward(self, x):
        source = []
        source.append(x)
        x1 = self.relu(self.e_conv1(x))
        x2 = self.relu(self.e_conv2(x1))
        concat1 = torch.cat((x1, x2), 1)
        x3 = self.relu(self.e_conv3(concat1))
        concat2 = torch.cat((x2, x3), 1)
        x4 = self.relu(self.e_conv4(concat2))
        concat3 = torch.cat((x1, x2, x3, x4), 1)
        x5 = self.relu(self.e_conv5(concat3))
        clean_image = self.relu((x5 * x) - x5 + 1)
        return clean_image

model = dehaze_net()
model.load_state_dict(torch.load('snapshots/dehazer.pth', map_location=torch.device('cpu')))
device = torch.device('cpu')
model.to(device)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    if ret:
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frame = torch.from_numpy(frame.transpose((2, 0, 1))).float().unsqueeze(0) / 255.0
        frame = frame.to(device)
        with torch.no_grad():
            dehazed_frame = model(frame).squeeze().cpu().numpy()
        dehazed_frame = (dehazed_frame * 255).clip(0, 255).transpose((1, 2, 0)).astype(np.uint8)
        dehazed_frame = cv2.cvtColor(dehazed_frame, cv2.COLOR_RGB2BGR)
        cv2.imshow('Dehazed Frame', dehazed_frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

cap.release()
cv2.destroyAllWindows()
This is a single-file script that only needs snapshots/dehazer.pth, downloaded from the original source (MayankSingal/PyTorch-Image-Dehazing).
I downloaded it and executed the code.
For the time being, let me show a paper to the camera.
The problem:
The window that shows the video freezes until it gets a new frame, i.e. Frame1 ---> FREEZE ---> Frame2 ... For example:
for 1 second the window looks good
for 5 seconds the window goes not responding / hangs / freezes...
the window that shows the video displays the frames with a long delay, that is, it takes about 5 seconds per frame
I was expecting smooth live output (it's fine even if the frames per second is 1 or 2), but I am not OK with that "Not responding" window; I feel the code I/the author have put together has some flaw. If I use any other code, like DCP, there is no problem. So what is the part that causes the not responding, and how do I solve it?
GUIs need to run their event processing regularly. If that doesn't happen often enough, the GUI becomes noticeably unresponsive. Most operating systems notice that for you and alert you about the program becoming unresponsive.
GUIs are event-based. Any intensive computations must be performed outside of the event loop, i.e. in a thread.
That is not the case in your program because you perform (compute-intensive) inference in the same loop that calls waitKey(), which is the function in OpenCV that performs GUI event processing.
Here is a brief sketch that shows how to use threads:
import cv2 as cv
import threading
import queue

def worker_function(stop_event, result_queue):
    cap = cv.VideoCapture(0)  # camera index 0, as in the question
    assert cap.isOpened()
    while not stop_event.is_set():
        (success, frame) = cap.read()
        if not success:
            break
        ... # do your inference here
        result_queue.put(result_frame)
    cap.release()

if __name__ == "__main__":
    stop_event = threading.Event()
    result_queue = queue.Queue(maxsize=1)

    worker_thread = threading.Thread(
        target=worker_function, args=(stop_event, result_queue))
    worker_thread.start()

    cv.namedWindow("window", cv.WINDOW_NORMAL)
    while True:
        # handle new result, if any
        try:
            result_frame = result_queue.get_nowait()
            cv.imshow("window", result_frame)
            result_queue.task_done()
        except queue.Empty:
            pass
        # GUI event processing
        key = cv.waitKey(10)
        if key in (13, 27): # Enter, Escape
            break

    stop_event.set()
    worker_thread.join()
I didn't test this but the idea is sound.
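For completeness, here is a sketch (untested, and assuming the dehaze_net class and the snapshots/dehazer.pth file from the question are available in the same script) of what the "do your inference here" placeholder could look like once the worker is filled in with the question's preprocessing, model call, and postprocessing:

import cv2 as cv
import numpy as np
import queue
import torch

def worker_function(stop_event, result_queue):
    # Load the model once, inside the worker thread.
    device = torch.device('cpu')
    model = dehaze_net()  # class defined in the question
    model.load_state_dict(torch.load('snapshots/dehazer.pth', map_location=device))
    model.to(device)
    model.eval()

    cap = cv.VideoCapture(0)
    assert cap.isOpened()
    while not stop_event.is_set():
        success, frame = cap.read()
        if not success:
            break
        # Preprocessing: BGR -> RGB, HWC -> CHW, [0, 255] -> [0, 1].
        rgb = cv.cvtColor(frame, cv.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb.transpose((2, 0, 1))).float().unsqueeze(0) / 255.0
        with torch.no_grad():
            dehazed = model(tensor.to(device)).squeeze().cpu().numpy()
        result_frame = (dehazed * 255).clip(0, 255).transpose((1, 2, 0)).astype(np.uint8)
        result_frame = cv.cvtColor(result_frame, cv.COLOR_RGB2BGR)
        # Never block the worker: if the GUI has not consumed the previous
        # result yet, skip this one instead of waiting.
        try:
            result_queue.put_nowait(result_frame)
        except queue.Full:
            pass
    cap.release()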

Mac photo booth on cv2 python, why does my code result in an error?

I want the code to take 4 pictures at a 1-second interval using cv2, then add them together using the hconcat function. When I try to run this code I get 'numpy.ndarray' object has no attribute 'release'. Can someone help?
import cv2, random

num = random.randint(0, 2000)
cam = cv2.VideoCapture(0)
cv2.namedWindow("Mac")

def concat_tile(im_list_2d):
    return cv2.vconcat([cv2.hconcat(im_list_h) for im_list_h in im_list_2d])

x = []
for i in range(4):
    ret, frame = cam.read()
    x.append(frame)
    frame.release()
    cv2.destroyAllWindows()

im_v = concat_tile([[x[0], x[1]],
                    [x[2], x[3]]])
img_name = "opencv_frame_{}.png".format(num)
cv2.imwrite(img_name, im_v)
It should not be frame.release(); rather, try cam.release().
Also, release() and cv2.destroyAllWindows() should be written at the end of your code, not within the for loop:
for i in range(4):
    ret, frame = cam.read()
    x.append(frame)

cam.release()
cv2.destroyAllWindows()
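A fuller sketch of the corrected program (untested) could look like the following. The 2x2 tiling and output file name follow the question; the time.sleep(1) call is my assumption for the one-second interval, since the original code had no delay at all:

import cv2
import random
import time

def concat_tile(im_list_2d):
    return cv2.vconcat([cv2.hconcat(im_list_h) for im_list_h in im_list_2d])

num = random.randint(0, 2000)
cam = cv2.VideoCapture(0)

x = []
for i in range(4):
    ret, frame = cam.read()
    if not ret:
        raise RuntimeError("Failed to read frame from camera")
    x.append(frame)
    time.sleep(1)  # roughly one second between shots

cam.release()            # release the capture device, not the frame array
cv2.destroyAllWindows()

im_v = concat_tile([[x[0], x[1]],
                    [x[2], x[3]]])
cv2.imwrite("opencv_frame_{}.png".format(num), im_v)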

Read webcam in python with multiprocessing

I have a simple program for reading a webcam, but the reading is very slow, so I lowered the quality of the images read from the webcam. The reading was still slow, so I tried multiprocessing, and I'm testing a simple program to find out whether my multiprocessing program runs correctly or not. But I don't know why the variable "cap" cannot be read, and I don't know how to solve it.
this is my program :
import cv2
import numpy as np
import multiprocessing

def get():
    global cap
    cap = cv2.VideoCapture(0)
    return cap

def video(cap):
    _, frame = cap.read()
    frame = cv2.flip(frame, 1)
    return frame

if __name__ == "__main__":
    p1 = multiprocessing.Process(target=get)
    p1.start()
    p1.join()

    while True:
        frame = video(cap)
        cv2.imshow("frame", frame)
        key = cv2.waitKey(1)
        if key == 27: # Key 'S'
            break

    cv2.waitKey(0)
    cv2.destroyAllWindows()
Actually, cap has never been declared. Try to insert this line after your import statements:
cap = None
This will take care of the missing cap. Of course this will then lead to other problems in your code, but it is a starting point.
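To be explicit about those other problems: a VideoCapture object created inside a child process lives only in that process, so the parent will never see the cap that get() assigns. A minimal sketch (untested, mine rather than from the answer) of one way to actually move frames between processes is to keep the capture in a worker process and send the frames back through a multiprocessing.Queue:

import cv2
import multiprocessing

def capture(frame_queue):
    cap = cv2.VideoCapture(0)
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        frame = cv2.flip(frame, 1)
        frame_queue.put(frame)   # blocks if the consumer falls behind
    cap.release()

if __name__ == "__main__":
    frame_queue = multiprocessing.Queue(maxsize=2)
    p1 = multiprocessing.Process(target=capture, args=(frame_queue,), daemon=True)
    p1.start()

    while True:
        frame = frame_queue.get()      # next frame from the capture process
        cv2.imshow("frame", frame)
        if cv2.waitKey(1) == 27:       # Esc
            break

    cv2.destroyAllWindows()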
Good luck
Andreas

Python two functions threading

I want to thread two functions: the first function streams a video and passes frames to the second function, and the second function reads the frames with Optical Character Recognition and converts them to text. The question is how to pass frames from the first threaded function to the second threaded function.
What I have done already: the first function saves video frames to a local file, 'frame.jpg', while the second function reads from 'frame.jpg' at the same time. Is it possible to define the video frames as a global variable and pass them to the reading function?
import cv2
import pytesseract
from PIL import Image
from multiprocessing import Process

def video_streaming(): # Video streaming function, First Function
    vc = cv2.VideoCapture(0)
    cv2.namedWindow("preview")
    if vc.isOpened():
        rval, frame = vc.read()
    else:
        rval = False
    while rval:
        rval, frame = vc.read()
        cv2.imwrite('frame.jpg', frame)
        key = cv2.waitKey(20)
        if key == 27: # exit on ESC
            break
    cv2.destroyWindow("preview")

def reading(): # Reading from frame.jpg function, Second Function
    while True:
        frame = cv2.imread('frame.jpg')
        read = Image.fromarray(frame)
        read = pytesseract.image_to_string(read)
        if len(read) > 80:
            break

if __name__ == '__main__':
    video_stream = Process(target=video_streaming)
    video_stream.start()
    frame_read = Process(target=reading)
    frame_read.start()
    video_stream.join()
    frame_read.join()
Hope this answer can still be of some use.
I use multiprocessing.Pipe() to pass video frames from one process to another, with cv2.VideoCapture() capturing the frames and each image being written to the Pipe.
import multiprocessing

if __name__ == '__main__':
    multiprocessing.set_start_method('spawn')

    video_outfrompipe, video_intopipe = multiprocessing.Pipe()

    vs = multiprocessing.Process(target=VideoSource, args=(video_intopipe,))
    vs.start()

    vc = multiprocessing.Process(target=VideoConsumer, args=(video_outfrompipe,))
    vc.start()

    vs.join()
    vc.join()
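A minimal sketch (untested, mine rather than from the answer) of what the two worker functions could look like, assuming the names VideoSource and VideoConsumer used above and the OCR step from the question: the source pushes frames into the pipe, and the consumer pulls them out and runs pytesseract on each one.

import cv2
import pytesseract
from PIL import Image

def VideoSource(video_intopipe):
    vc = cv2.VideoCapture(0)
    while vc.isOpened():
        rval, frame = vc.read()
        if not rval:
            break
        video_intopipe.send(frame)   # numpy arrays are pickled through the pipe
    vc.release()
    video_intopipe.send(None)        # signal end of stream

def VideoConsumer(video_outfrompipe):
    while True:
        frame = video_outfrompipe.recv()
        if frame is None:            # end of stream
            break
        text = pytesseract.image_to_string(Image.fromarray(frame))
        if len(text) > 80:
            break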

OpenCV - Save video segments based on certain condition

Aim: detect motion and save only the motion periods in files named with the starting time.
Now I have run into the issue of how to save the video to files named with the video's starting time.
What I tested:
I tested my program part by part. It seems that each part works well except the saving part.
Running status: no error. But in the saving folder, there is no video. If I use a static saving path instead, the video is saved successfully, but it gets overwritten by the next video. My code is below:
import cv2
import numpy as np
import time

cap = cv2.VideoCapture(0)
bgst = cv2.createBackgroundSubtractorMOG2()
fourcc = cv2.VideoWriter_fourcc(*'DIVX')
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
n = "start_time"

while True:
    ret, frame = cap.read()
    dst = bgst.apply(frame)
    dst = np.array(dst, np.int8)

    if np.count_nonzero(dst) > 3000: # use this value to adjust the "Sensitivity"
        print('something is moving %s' % (time.ctime()))
        path = r'E:\OpenCV\Motion_Detection\%s.avi' % n
        out = cv2.VideoWriter(path, fourcc, 50, size)
        out.write(frame)
        key = cv2.waitKey(3)
        if key == 32:
            break
    else:
        out.release()
        n = time.ctime()
        print("No motion Detected %s" % n)
What I meant is:
import cv2
import numpy as np
import time

cap = cv2.VideoCapture(0)
bgst = cv2.createBackgroundSubtractorMOG2()
fourcc = cv2.VideoWriter_fourcc(*'DIVX')
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
path = r'E:\OpenCV\Motion_Detection\%s.avi' % (time.ctime())
out = cv2.VideoWriter(path, fourcc, 16, size)

while True:
    ret, frame = cap.read()
    dst = bgst.apply(frame)
    dst = np.array(dst, np.int8)

    for i in range(number of frames in the video):
        if np.count_nonzero(dst) < 3000: # use this value to adjust the "Sensitivity"
            print("No Motion Detected")
            out.release()
        else:
            print('something is moving %s' % (time.ctime()))
            # label each frame you want to output here
            out.write(frame(i))

    key = cv2.waitKey(1)
    if key == 32:
        break

cap.release()
cv2.destroyAllWindows()
If you look at the code, there is a for loop within which the saving is done.
I do not know the exact syntax for looping over frames, but I hope you get the gist of it: you have to find the number of frames in the video and use that as the range of the for loop.
Each frame then gets saved uniquely (see the else condition). As I said, I do not know the exact syntax; please refer to and follow this procedure. A runnable sketch of the same idea is below.
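Here is a runnable sketch (untested, mine rather than from the answer) of that idea. It reuses the camera, background subtractor, threshold, and Windows path from the question: a new VideoWriter is opened when motion starts, frames are written while motion lasts, and the writer is released when motion stops. The segment name is built with time.strftime() rather than time.ctime(), because ctime() contains ':' characters that are not valid in Windows file names.

import cv2
import numpy as np
import time

cap = cv2.VideoCapture(0)
bgst = cv2.createBackgroundSubtractorMOG2()
fourcc = cv2.VideoWriter_fourcc(*'DIVX')
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))

out = None                                    # only open a writer while motion lasts
while True:
    ret, frame = cap.read()
    if not ret:
        break
    dst = bgst.apply(frame)
    moving = np.count_nonzero(dst) > 3000     # "sensitivity" threshold from the question

    if moving:
        if out is None:                       # motion just started: open a new segment
            start = time.strftime('%Y-%m-%d %H-%M-%S')
            path = r'E:\OpenCV\Motion_Detection\%s.avi' % start
            out = cv2.VideoWriter(path, fourcc, 16, size)
            print('something is moving %s' % start)
        out.write(frame)
    elif out is not None:                     # motion just stopped: close the segment
        out.release()
        out = None
        print('No motion detected %s' % time.ctime())

    cv2.imshow('frame', frame)                # a window is needed for waitKey to see keys
    if cv2.waitKey(3) == 32:                  # space bar ends the program
        break

if out is not None:
    out.release()
cap.release()
cv2.destroyAllWindows()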
Cheers!
