Live dehazed video displayed with OpenCV freezes, goes "Not Responding" - Python

I want to use this library for video dehazing.
I only have a CPU, but I expect the result will be good without a GPU, because video output from DCP, or any other dehazing algorithm, works fine.
So I developed this code:
import cv2
import torch
import numpy as np
import torch.nn as nn
import math

class dehaze_net(nn.Module):
    def __init__(self):
        super(dehaze_net, self).__init__()
        self.relu = nn.ReLU(inplace=True)
        self.e_conv1 = nn.Conv2d(3, 3, 1, 1, 0, bias=True)
        self.e_conv2 = nn.Conv2d(3, 3, 3, 1, 1, bias=True)
        self.e_conv3 = nn.Conv2d(6, 3, 5, 1, 2, bias=True)
        self.e_conv4 = nn.Conv2d(6, 3, 7, 1, 3, bias=True)
        self.e_conv5 = nn.Conv2d(12, 3, 3, 1, 1, bias=True)

    def forward(self, x):
        source = []
        source.append(x)
        x1 = self.relu(self.e_conv1(x))
        x2 = self.relu(self.e_conv2(x1))
        concat1 = torch.cat((x1, x2), 1)
        x3 = self.relu(self.e_conv3(concat1))
        concat2 = torch.cat((x2, x3), 1)
        x4 = self.relu(self.e_conv4(concat2))
        concat3 = torch.cat((x1, x2, x3, x4), 1)
        x5 = self.relu(self.e_conv5(concat3))
        clean_image = self.relu((x5 * x) - x5 + 1)
        return clean_image

model = dehaze_net()
model.load_state_dict(torch.load('snapshots/dehazer.pth', map_location=torch.device('cpu')))
device = torch.device('cpu')
model.to(device)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    if ret:
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frame = torch.from_numpy(frame.transpose((2, 0, 1))).float().unsqueeze(0) / 255.0
        frame = frame.to(device)
        with torch.no_grad():
            dehazed_frame = model(frame).squeeze().cpu().numpy()
        dehazed_frame = (dehazed_frame * 255).clip(0, 255).transpose((1, 2, 0)).astype(np.uint8)
        dehazed_frame = cv2.cvtColor(dehazed_frame, cv2.COLOR_RGB2BGR)
        cv2.imshow('Dehazed Frame', dehazed_frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break
cap.release()
cv2.destroyAllWindows()
This is single-file code that needs only snapshots/dehazer.pth, downloaded from the original source (MayankSingal/PyTorch-Image-Dehazing).
I downloaded it and executed the code.
For the time being, I am showing a sheet of paper to the camera.
The problem:
The window that shows the video freezes until it gets a new frame, i.e. Frame1 ---> FREEZE ---> Frame2 ---> ... For example:
for 1 second the window looks good,
for 5 seconds the window goes not responding/hangs/freezes,
the window shows the frames with a long delay, i.e. it takes about 5 seconds per frame.
I was expecting smooth live output (it's fine even if the frame rate is only 1 or 2 FPS), but I am not OK with that "Not Responding" window. I feel the code I/the author wrote has some flaw. If I use any other code, like DCP, there is no problem. So what part causes the "Not Responding", and how do I solve it?

GUIs need to run their event processing regularly. If that doesn't happen often enough, the GUI becomes noticeably unresponsive. Most operating systems notice that for you and alert you about the program becoming unresponsive.
GUIs are event-based. Any intensive computations must be performed outside of the event loop, i.e. in a thread.
That is not the case in your program because you perform (compute-intensive) inference in the same loop that calls waitKey(), which is the function in OpenCV that performs GUI event processing.
Here is a brief sketch that shows how to use threads:
import cv2 as cv
import threading
import queue

def worker_function(stop_event, result_queue):
    cap = cv.VideoCapture(0)
    assert cap.isOpened()
    while not stop_event.is_set():
        (success, frame) = cap.read()
        if not success:
            break
        ...  # do your inference here, producing result_frame
        result_queue.put(result_frame)
    cap.release()

if __name__ == "__main__":
    stop_event = threading.Event()
    result_queue = queue.Queue(maxsize=1)
    worker_thread = threading.Thread(
        target=worker_function, args=(stop_event, result_queue))
    worker_thread.start()
    cv.namedWindow("window", cv.WINDOW_NORMAL)
    while True:
        # handle new result, if any
        try:
            result_frame = result_queue.get_nowait()
            cv.imshow("window", result_frame)
            result_queue.task_done()
        except queue.Empty:
            pass
        # GUI event processing
        key = cv.waitKey(10)
        if key in (13, 27):  # Enter, Escape
            break
    stop_event.set()
    worker_thread.join()
I didn't test this but the idea is sound.
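To make the placeholder concrete, here is one way the inference step could look, reusing the model and the pre/post-processing from the question (my own untested sketch; it assumes model, torch and numpy as np from the question's code are in scope):

def infer(model, frame):
    # BGR uint8 frame -> normalized RGB tensor of shape (1, 3, H, W)
    rgb = cv.cvtColor(frame, cv.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb.transpose((2, 0, 1))).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        out = model(tensor).squeeze(0).cpu().numpy()
    # normalized RGB output -> BGR uint8 image for imshow()
    out = (out * 255).clip(0, 255).transpose((1, 2, 0)).astype(np.uint8)
    return cv.cvtColor(out, cv.COLOR_RGB2BGR)

Inside worker_function, the ... placeholder then becomes result_frame = infer(model, frame).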

Related

How to make Frame Rate constant of Allied Vision Camera using Vimba SDK?

I am using an Allied Vision Camera Manta G-201C for a project. The requirement is a constant 30 FPS (frames per second), but I am getting a higher rate of 33-34 FPS, and it is not constant.
This is the code I am using:
#! /usr/bin/python3.7

from datetime import datetime
from functools import partial
import queue
import time

from vimba import *
import cv2

def setup_camera(cam):
    cam.set_pixel_format(PixelFormat.BayerRG8)
    cam.ExposureTimeAbs.set(10000)
    cam.BalanceWhiteAuto.set('Off')
    cam.Gain.set(0)
    cam.AcquisitionMode.set('Continuous')
    cam.GainAuto.set('Off')
    # NB: Following adjusted for my Manta G-033C
    cam.Height.set(492)
    cam.Width.set(656)

# Called periodically as frames are received by Vimba's capture thread
# NB: This is invoked in a different thread than the rest of the code!
def frame_handler(frame_queue, cam, frame):
    img = frame.as_numpy_ndarray()
    img_rgb = cv2.cvtColor(img, cv2.COLOR_BAYER_RG2RGB)
    try:
        # Try to put the frame in the queue...
        frame_queue.put_nowait(img_rgb)
    except queue.Full:
        # If that fails (queue is full), just drop the frame
        # NB: You may want to handle this better...
        print('Dropped Frame')
    cam.queue_frame(frame)

def do_something(img, count):
    filename = 'data/IMG_' + str(count) + '.jpg'
    cv2.putText(img, str(datetime.now()), (20, 40)
                , cv2.FONT_HERSHEY_PLAIN, 2, (255, 255, 255)
                , 2, cv2.LINE_AA)
    cv2.imwrite(filename, img)

def run_processing(cam):
    try:
        # Create a queue to use for communication between Vimba's capture thread
        # and the main thread, limit capacity to 10 entries
        frame_queue = queue.Queue(maxsize=10)
        # Start asynchronous capture, using frame_handler
        # Bind the first parameter of frame_handler to our frame_queue
        cam.start_streaming(handler=partial(frame_handler, frame_queue)
                            , buffer_count=10)
        start = time.time()
        frame_count = 0
        while True:
            if frame_queue.qsize() > 0:
                # If there's something in the queue, try to fetch it and process
                try:
                    frame = frame_queue.get_nowait()
                    frame_count += 1
                    cv2.imshow('Live feed', frame)
                    do_something(frame, frame_count)
                except queue.Empty:
                    pass
            key = cv2.waitKey(1)
            if (key == ord('q')) or (frame_count >= 100):
                cv2.destroyAllWindows()
                break
        fps = int((frame_count + 1) / (time.time() - start))
        print('FPS:', fps)
    finally:
        # Stop the asynchronous capture
        cam.stop_streaming()

##profile
def main():
    with Vimba.get_instance() as vimba:
        with vimba.get_all_cameras()[0] as cam:
            setup_camera(cam)
            run_processing(cam)

if __name__ == "__main__":
    main()
I want a constant 30 FPS for image capture. I don't know how to solve this. Any ideas are appreciated!
You can set a static framerate with this feature:
AcquisitionFrameRateAbs
If TriggerSelector = FrameStart and either TriggerMode = Off or TriggerSource = FixedRate, this feature specifies the frame rate. Depending on the exposure duration, the camera may not achieve the frame rate set here.
More information about the features would be in the Feature Reference on the Manta Documentation Download site.
With Vimba Python you use:
feature = cam.get_feature_by_name("AcquisitionFrameRateAbs")
feature.set(30) #specifies 30FPS
# set the other features TriggerSelector and TriggerMode
feature = cam.get_feature_by_name("TriggerSelector")
feature.set("FrameStart")
feature = cam.get_feature_by_name("TriggerMode")
feature.set("Off")

MediaPipe pose estimator with multiprocessing hangs on its process function

I am currently trying to implement the MediaPipe pose estimator as an independent event-based process with Python's multiprocessing library, but it hangs on MediaPipe's Pose.process() function.
I feed in frames with another process (readFrames). Whenever a frame is captured, it is written into a shared namespace object, and an event tells the MediaPipe process (MediaPipeRunner) to start working on the current image:
def readFrames(ns, event):
    # initialize the video capture object
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ret, frame = cap.read()
        if ret:
            ns.frame = frame
            event.set()
            cv2.imshow('Orijinal Frame', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                cap.release()
                cv2.destroyAllWindows()
                return -1
        else:
            return

class MediaPipeRunner(mproc.Process):
    def __init__(self, name, nsFrame, nsMediaPipe, eventWait, eventPublish):
        super(MediaPipeRunner, self).__init__()
        # Specify a name for the instance
        self.name = name
        # Input and output namespaces
        self.nsFrame = nsFrame
        self.nsMediaPipe = nsMediaPipe
        # Waiter and publisher events
        self.eventWait = eventWait
        self.eventPublish = eventPublish
        # Create a pose estimator from MediaPipe
        mp_pose = mp.solutions.pose
        # Specify pose estimator parameters (static)
        static_image_mode = True
        model_complexity = 1
        enable_segmentation = True  # DONT CHANGE
        min_detection_confidence = 0.5
        # Create a pose estimator here
        self.pose = mp_pose.Pose(
            static_image_mode=static_image_mode,
            model_complexity=model_complexity,
            enable_segmentation=enable_segmentation,
            min_detection_confidence=min_detection_confidence,
            smooth_landmarks=False,
        )

    def run(self):
        while True:
            eventFrame.wait()
            # This part is where it gets stuck:
            results = self.pose.process(cv2.cvtColor(self.nsFrame.frame, cv2.COLOR_BGR2RGB))
            if not results.pose_landmarks:
                continue
            self.nsMediaPipe.segmentation = results.segmentation_mask
            eventMP.set()
This is how I bind the processes, namespaces and events:
if __name__=="__main__":
mgr = mproc.Manager()
nsFrame = mgr.Namespace()
nsMP = mgr.Namespace()
eventFrame = mproc.Event()
eventMP = mproc.Event()
camCap = mproc.Process(name='camCap', target=readFrames, args=(nsFrame, eventFrame, ))
camCap.daemon=True
mpCap = MediaPipeRunner('mpCap', nsFrame, nsMP, eventFrame, eventMP, )
mpCap.daemon=True
camCap.start()
mpCap.start()
camCap.join()
mpCap.join()
Am I taking a wrong step with the processes, or is MediaPipe not getting along with Python's multiprocessing library?
Any help will be appreciated, thanks in advance :)
P.S.: I installed MediaPipe via pip, and version 0.8.9.1 is present.
I have found the problem: the process function behaves correctly when a with statement is used in Python (I don't know why):
with mp_pose.Pose(
    static_image_mode=static_image_mode,
    model_complexity=model_complexity,
    enable_segmentation=enable_segmentation,
    min_detection_confidence=min_detection_confidence,
    smooth_landmarks=False,
) as pose:
Now this part works inside the with block:
    results = pose.process(cv2.cvtColor(self.nsFrame.frame, cv2.COLOR_BGR2RGB))
I hope it might be helpful for you.
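Building on that self-answer: my guess (an assumption, not something the MediaPipe docs state) is that the Pose object created in __init__ belongs to the parent process and does not survive the transfer into the child process, whereas a Pose constructed inside run() (which executes in the child) does. A sketch of run() reworked along those lines, also using the stored eventWait/eventPublish instead of the globals:

import cv2
import mediapipe as mp
import multiprocessing as mproc

class MediaPipeRunner(mproc.Process):
    # __init__ stores the name, namespaces and events as before,
    # but no longer creates the Pose object there.

    def run(self):
        mp_pose = mp.solutions.pose
        # Construct the pose estimator in the child process, inside `with`:
        with mp_pose.Pose(
            static_image_mode=True,
            model_complexity=1,
            enable_segmentation=True,
            min_detection_confidence=0.5,
            smooth_landmarks=False,
        ) as pose:
            while True:
                self.eventWait.wait()
                results = pose.process(
                    cv2.cvtColor(self.nsFrame.frame, cv2.COLOR_BGR2RGB))
                if not results.pose_landmarks:
                    continue
                self.nsMediaPipe.segmentation = results.segmentation_mask
                self.eventPublish.set()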

Can't write frames to a video with multiprocessing + cv2

I have code that breaks a video down into frames, edits the images, and puts them back into a video, but I am realizing that it's really slow... So I looked into multiprocessing to speed up the code, and it works! As far as I can see, it processes the images much faster, but the problem is that when I add those frames to a new video, it doesn't work; the video remains empty!
Here is my code:
# Imports
import cv2, sys, time
import numpy as np
from scipy.ndimage import rotate
from PIL import Image, ImageDraw, ImageFont, ImageOps
import concurrent.futures

def function(fullimg):
    img = np.array(Image.fromarray(fullimg).crop((1700, 930, 1920-60, 1080-80)))
    inpaintRadius = 10
    inpaintMethod = cv2.INPAINT_TELEA
    textMask = cv2.imread('permanentmask.jpg', 0)
    final_result = cv2.inpaint(img.copy(), textMask, inpaintRadius, inpaintMethod)
    text = Image.fromarray(np.array([np.array(i) for i in final_result]).astype(np.uint8)).convert('RGBA')
    im = np.array([[tuple(x) for x in i] for i in np.zeros((70, 160, 4))])
    im[1:-1, 1:-1] = (170, 13, 5, 40)
    im[0, :] = (0, 0, 0, 128)
    im[1:-1, [0, -1]] = (0, 0, 0, 128)
    im[-1, :] = (0, 0, 0, 128)
    im = Image.fromarray(im.astype(np.uint8))
    draw = ImageDraw.Draw(im)
    font = ImageFont.truetype('arialbd.ttf', 57)
    draw.text((5, 5), "TEXT", (255, 255, 255, 128), font=font)
    text.paste(im, mask=im)
    text = np.array(text)
    fullimg = Image.fromarray(fullimg)
    fullimg.paste(Image.fromarray(text), (1700, 930, 1920-60, 1080-80))
    fullimg = cv2.cvtColor(np.array(fullimg), cv2.COLOR_BGR2RGB)
    return fullimg

cap = cv2.VideoCapture('before2.mp4')
_fourcc = cv2.VideoWriter_fourcc(*'MPEG')
out = cv2.VideoWriter('after.mp4', _fourcc, 29.97, (1280, 720))
frames = []
lst = []

while cap.isOpened():
    ret, fullimg = cap.read()
    if not ret:
        break
    frames.append(fullimg)
    if len(frames) >= 8:
        if __name__ == '__main__':
            with concurrent.futures.ProcessPoolExecutor() as executor:
                results = executor.map(function, frames)
            for i in results:
                print(type(i))
                out.write(i)
            frames.clear()

cap.release()
out.release()
cv2.destroyAllWindows()  # destroy all opened windows
My code inpaints a watermark and adds another watermark using PIL.
If I don't use multiprocessing the code works. But if I do use multiprocessing, it gives an empty video.
I am not that familiar with OpenCV, but there seem to be a few things that should be corrected in your code. First, if you are running under Windows, as you appear to be because you have if __name__ == '__main__': guarding the code that creates new processes (by the way, when you tag a question with multiprocessing, you should also tag the question with the platform being used), then any code at global scope will be executed by every process created to implement your pool. That means you should move if __name__ == '__main__': as follows:
if __name__ == '__main__':
    cap = cv2.VideoCapture('before2.mp4')
    _fourcc = cv2.VideoWriter_fourcc(*'MPEG')
    out = cv2.VideoWriter('after.mp4', _fourcc, 29.97, (1280, 720))
    frames = []
    lst = []
    while cap.isOpened():
        ret, fullimg = cap.read()
        if not ret:
            break
        frames.append(fullimg)
        if len(frames) >= 8:
            with concurrent.futures.ProcessPoolExecutor() as executor:
                results = executor.map(function, frames)
            for i in results:
                print(type(i))
                out.write(i)
            frames.clear()
    cap.release()
    out.release()
    cv2.destroyAllWindows()  # destroy all opened windows
If you do not do this, it seems to me that every sub-process in the pool will first attempt, in parallel, to create an empty video (the function worker function and out.write will never be called by these processes), and only then will the main process be able to invoke the function worker function using map. This doesn't quite explain why the main process doesn't succeed after all of these wasteful attempts. But...
You also have:
while cap.isOpened():
The documentation states that isOpened() returns True if the previous VideoCapture constructor succeeded. Then if this returns True once, why wouldn't it return True the next time it is tested, so that you end up looping indefinitely? Shouldn't the while be changed to an if? And doesn't this suggest that isOpened() is perhaps returning False, or else you would be looping indefinitely? Or what if len(frames) < 8? It seems then you would also end up with an empty output file.
My suggestion would be to make the above changes and try again.
Update
I took a closer look at the code, and it appears that it loops reading the input (before2.mp4) one frame at a time, and when it has accumulated 8 or more frames it creates a pool, processes the frames it has accumulated, and writes them out to the output (after.mp4). But that means that if there are, for example, 8 more frames, it will create a brand new processing pool (very wasteful and expensive) and then write out the 8 additional processed frames. But if there were only 7 additional frames, they would never get processed and written out. I would suggest the following code (untested, of course):
def main():
    import os

    cap = cv2.VideoCapture('before2.mp4')
    if not cap.isOpened():
        return
    _fourcc = cv2.VideoWriter_fourcc(*'MPEG')
    out = cv2.VideoWriter('after.mp4', _fourcc, 29.97, (1280, 720))
    FRAMES_AT_A_TIME = 8
    pool_size = min(FRAMES_AT_A_TIME, os.cpu_count())
    with concurrent.futures.ProcessPoolExecutor(max_workers=pool_size) as executor:
        more_frames = True
        while more_frames:
            frames = []
            for _ in range(FRAMES_AT_A_TIME):
                ret, fullimg = cap.read()
                if not ret:
                    more_frames = False
                    break
                frames.append(fullimg)
            if not frames:
                break  # no frames
            results = executor.map(function, frames)
            for i in results:
                print(type(i))
                out.write(i)
    cap.release()
    out.release()
    cv2.destroyAllWindows()  # destroy all opened windows

if __name__ == '__main__':
    main()

PyAV: how to display multiple video streams to the screen at the same time

I'm just learning to work with video frames and am new to the Python language. I need to display multiple video streams on the screen at the same time using PyAV.
The code below works fine for one camera. Please help me display multiple cameras on the screen. What should I add or fix in this code?
dicOption = {'buffer_size': '1024000', 'rtsp_transport': 'tcp', 'stimeout': '20000000', 'max_delay': '200000'}
video = av.open("rtsp://viewer:vieweradmin#192.16.5.69:80/1", 'r', format=None, options=dicOption, metadata_errors='nostrict')
try:
    for packet in video.demux():
        for frame in packet.decode():
            if packet.stream.type == 'video':
                print(packet)
                print(frame)
                img = frame.to_ndarray(format='bgr24')
                # time.sleep(1)
                cv2.imshow("Video", img)
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break
except KeyboardInterrupt:
    pass
cv2.destroyAllWindows()
Playing multiple streams with PyAV is possible but not trivial. The main challenge is decoding multiple streams simultaneously, which in a single-threaded program can take longer than the frame rate of the videos would require. Unfortunately threads won't be of help here (Python allows only one thread to be active at any given time), so the solution is to build a multi-process architecture.
I created the code below for a side project, it implements a simple multi-stream video player using PyAV and OpenCV. It creates a separate background process to decode each stream, using queues to send the frames to the main process. Because the queues have limited size, there is no risk of decoders outpacing the main process — if a frame is not retrieved by the time the next one is ready, its process will block until the main process catches up.
All streams are assumed to run at the same frame rate.
import av
import cv2
import numpy as np
import logging
from argparse import ArgumentParser
from math import ceil
from multiprocessing import Process, Queue
from time import time

def parseArguments():
    r'''Parse command-line arguments.
    '''
    parser = ArgumentParser(description='Video player that can reproduce multiple files simultaneously')
    parser.add_argument('paths', nargs='+', help='Paths to the video files to be played')
    parser.add_argument('--resolution', type=int, nargs=2, default=[1920, 1080], help='Resolution of the combined video')
    parser.add_argument('--fps', type=int, default=15, help='Frame rate used when playing video contents')
    return parser.parse_args()

def decode(path, width, height, queue):
    r'''Decode a video and return its frames through a process queue.

    Frames are resized to `(width, height)` before returning.
    '''
    container = av.open(path)
    for frame in container.decode(video=0):
        # TODO: Keep image ratio when resizing.
        image = frame.to_rgb(width=width, height=height).to_ndarray()
        queue.put(image)
    queue.put(None)

class GridViewer(object):
    r'''Interface for displaying video frames in a grid pattern.
    '''
    def __init__(self, args):
        r'''Create a new grid viewer.
        '''
        size = float(len(args.paths))
        self.cols = ceil(size ** 0.5)
        self.rows = ceil(size / self.cols)

        (width, height) = args.resolution
        self.shape = (height, width, 3)

        self.cell_width = width // self.cols
        self.cell_height = height // self.rows

        cv2.namedWindow('Video', cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO | cv2.WINDOW_GUI_EXPANDED)
        cv2.resizeWindow('Video', width, height)

    def update(self, queues):
        r'''Query the frame queues and update the viewer.

        Return whether all decoders are still active.
        '''
        grid = np.zeros(self.shape, dtype=np.uint8)
        for (k, queue) in enumerate(queues):
            image = queue.get()
            if image is None:
                return False

            (i, j) = (k // self.cols, k % self.cols)
            (m, n) = image.shape[:2]

            a = i * self.cell_height
            b = a + m
            c = j * self.cell_width
            d = c + n

            grid[a:b, c:d] = image

        grid = cv2.cvtColor(grid, cv2.COLOR_RGB2BGR)
        cv2.imshow('Video', grid)
        cv2.waitKey(1)

        return True

def play(args):
    r'''Play multiple video files in a grid interface.
    '''
    grid = GridViewer(args)

    queues = []
    processes = []
    for path in args.paths:
        queues.append(Queue(1))
        processes.append(Process(target=decode, args=(path, grid.cell_width, grid.cell_height, queues[-1]), daemon=True))
        processes[-1].start()

    period = 1.0 / args.fps
    t_start = time()
    t_frame = 0
    while grid.update(queues):
        # Spin-lock the thread as necessary to maintain the frame rate.
        while t_frame > time() - t_start:
            pass
        t_frame += period

    # Terminate any lingering processes, just in case.
    for process in processes:
        process.terminate()

def main():
    logging.disable(logging.WARNING)
    play(parseArguments())

if __name__ == '__main__':
    main()
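For reference, assuming the script above is saved as grid_player.py (the file name is my own choice), it can be invoked like:

python grid_player.py first.mp4 second.mp4 third.mp4 --resolution 1280 720 --fps 15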

Multithreaded cv2.imshow() in Python does not work

I have two cameras (using OpenNI; I have two streams per camera, handled by the same instance of the driver API) and would like to have two threads, each capturing data from its camera independently. That is, for one instance of the driver API, say cam_handler, I have two streams per camera, depth and rgb, e.g. cam_handler.RGB1_stream and cam_handler.DEPTH1_stream.
Here is the code for the same:
import threading
import cv2
import numpy as np

def capture_and_save(cam_handle, cam_id, dir_to_write, log_writer, rgb_stream,
                     depth_stream, io):
    t = threading.currentThread()
    shot_idx = 0
    rgb_window = 'RGB' + str(cam_id)
    depth_window = 'DEPTH' + str(cam_id)
    while getattr(t, "do_run", True):
        if rgb_stream is not None:
            rgb_array = cam_handle.get_rgb(rgb_stream)
            rgb_array_disp = cv2.cvtColor(rgb_array, cv2.COLOR_BGR2RGB)
            cv2.imshow(rgb_window, rgb_array_disp)
            cam_handle.save_frame('rgb', rgb_array, shot_idx, dir_to_write + str(cam_id + 1))
            io.write_log(log_writer[cam_id], shot_idx, None)
        if depth_stream is not None:
            depth_array = cam_handle.get_depth(depth_stream)
            depth_array_disp = ((depth_array / 10000.) * 255).astype(np.uint8)
            cv2.imshow(depth_window, np.uint8(depth_array_disp))
            cam_handle.save_frame('depth', depth_array, shot_idx, dir_to_write + str(cam_id + 1))
        shot_idx = shot_idx + 1
        key = cv2.waitKey(1)
        if key == 27:  # exit on ESC
            break
    print("Stopping camera %d thread..." % (cam_id + 1))
    return
def main():
    # Setup camera threads
    cam_threads = []
    dir_to_write = "some/save/path"
    for cam in range(cam_count):
        cam = (cam + 1) % cam_count
        cv2.namedWindow('RGB' + str(cam))
        cv2.namedWindow('DEPTH' + str(cam))
        one_thread = threading.Thread(target=capture_and_save,
                                      name="CamThread" + str(cam + 1),
                                      args=(cam_cap, cam, dir_to_write,
                                            log_writer,
                                            rgb_stream[cam], depth_stream[cam], io,))
        cam_threads.append(one_thread)
        one_thread.daemon = True
        one_thread.start()
    try:
        while True:
            pass
            # cv2.waitKey(1)
    except KeyboardInterrupt:
        # Stop everything
        for each_thread in cam_threads:
            each_thread.do_run = False
            each_thread.join(1)
        cam_cap.stop_rgb(rgb_stream)
        cam_cap.stop_depth(depth_stream)
        # Stop and quit
        openni2.unload()
        cv2.destroyAllWindows()

if __name__ == '__main__':
    main()
So, my issue is that if I remove the cv2.imshow() lines from the code, everything runs as expected and I get both camera outputs saved to file. However, with the cv2.imshow() lines, there are only "blank" windows being created and the threads seem to be "stuck", with no output at all.
I have tried several suggestions, including moving the namedWindow creation to the main thread as well as into the capture_and_save thread. I have also tried moving around the waitKey() because it was said that OpenCV only allows waitKey() in the main thread. There was no difference, however.
I solved the issue by using mutables, passing a dictionary cam_disp = {} to the thread and reading the value in the main thread. cv2.imshow() works best when kept in the main thread, so this worked perfectly. I am not sure if this is the "right" way to do this, so all suggestions are welcome.
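For illustration, a minimal sketch of that mutable-dictionary pattern (the names and the single-camera setup are mine, not from the original code): the capture thread only writes into the dict, and the main thread owns imshow() and waitKey():

import threading
import cv2

# Worker thread writes the latest frame into a shared dict;
# the main thread is the only one calling imshow()/waitKey().
def capture_worker(cam_disp, key, stop_event):
    cap = cv2.VideoCapture(0)
    while not stop_event.is_set():
        ret, frame = cap.read()
        if ret:
            cam_disp[key] = frame  # replacing a dict value is atomic in CPython
    cap.release()

if __name__ == '__main__':
    cam_disp = {}
    stop_event = threading.Event()
    t = threading.Thread(target=capture_worker, args=(cam_disp, 'RGB1', stop_event))
    t.start()
    while True:
        if 'RGB1' in cam_disp:
            cv2.imshow('RGB1', cam_disp['RGB1'])
        if cv2.waitKey(1) == 27:  # exit on ESC
            break
    stop_event.set()
    t.join()
    cv2.destroyAllWindows()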
Try moving cv2.namedWindow('RGB' + str(cam)) inside your thread target capture_and_save
The cv2.imshow function is not thread safe.
Just move cv2.namedWindow to the threading.Thread that calls cv2.imshow.
import cv2
import threading
def run():
cap = cv2.VideoCapture('test.mp4')
cv2.namedWindow("preview", cv2.WINDOW_NORMAL)
while True:
ret, frame = cap.read()
if frame is None:
print("Video is over")
break
cv2.imshow('preview', frame)
cv2.waitKey(1)
cap.release()
cv2.destroyAllWindows()
if __name__ == "__main__":
thread = threading.Thread(target=run)
thread.start()
thread.join()
print("Bye :)")
I think (without knowing for sure :) but it's the best explanation I have):
When you show the image in the view, you pass it from the camera thread to the GUI thread. When doing so, you can get out of sync. Sometimes it works, but I think you will/might end up with the same problem at some point (I did). It is more noticeable when running more than one thread. What you can do is use thread-safe memory (a queue) or a thread lock.
If you put the thread lock around the camera save in the thread, and around the imshow read in main, you will be safe. There is also the GIL to keep in mind, which you should read more about. The thread lock is easiest in your case, but it all depends on how much you need to control the flow of data and the priority of reading the camera versus showing on screen in "real time". The answer from @DarkSidds is correct too. But if you have more than one camera and those are writing from the thread, you will crash.
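As a sketch of the thread-lock variant described above (illustrative names, a single shared frame slot): the capture thread takes the lock to write, and the main thread takes it to read before showing:

import threading
import cv2

frame_lock = threading.Lock()
latest = {'frame': None}

def camera_thread(stop_event):
    cap = cv2.VideoCapture(0)
    while not stop_event.is_set():
        ret, frame = cap.read()
        if ret:
            with frame_lock:  # writer takes the lock to store the frame...
                latest['frame'] = frame
    cap.release()

if __name__ == '__main__':
    stop_event = threading.Event()
    t = threading.Thread(target=camera_thread, args=(stop_event,))
    t.start()
    while True:
        with frame_lock:  # ...and the GUI thread takes it to read the frame
            frame = latest['frame']
        if frame is not None:
            cv2.imshow('preview', frame)
        if cv2.waitKey(1) == 27:  # exit on ESC
            break
    stop_event.set()
    t.join()
    cv2.destroyAllWindows()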
