OpenCV and custom multiprocessing - python

I'm trying to build webcam software that takes the webcam input, modifies the image, and outputs the result to a virtual webcam. I've got it running pretty well, but now I want to add a system tray icon to control it. Since the tray icon is a GUI, I need my camera loop to run on another thread.
So my (cut down) class looks like this:
import pyvirtualcam
import cv2
import effects
import numpy as np

class VirtualCam:
    def __init__(self):
        self.stopped = False
        self.paused = False
        [...]

    def run(self):
        with pyvirtualcam.Camera(width=1280, height=720, fps=self.FPS, fmt=pyvirtualcam.PixelFormat.BGR) as cam:
            while not self.stopped:
                if not self.paused:  # only copy image if NOT paused!
                    success, img = self.cap.read()
                else:
                    img = self.EMPTY_IMAGE
                cam.send(effects.zoom(img, self.zoom))
                cam.sleep_until_next_frame()
So this should be straightforward.
Now, in my GUI code, which is based on this code, I added menu entries, of which one starts the camera process and nothing more, while the others can pause, stop it, etc. (again, this is cut down):
if __name__ == '__main__':
    import itertools
    import glob
    import virtcam
    from multiprocessing import Process

    cam_thread = None
    vcam = virtcam.VirtualCam()

    # controlling vcam
    def pause(sysTrayIcon):
        vcam.pause()

    def cont(sysTrayIcon):
        vcam.cont()

    def start(sysTrayIcon):
        global cam_thread
        # start threading for camera capture here
        cam_thread = Process(target=vcam.run)
        cam_thread.start()

    def stop(sysTrayIcon):
        global cam_thread
        vcam.stop()
        cam_thread.join()

    menu_options = (
        ('Start', next(icons), start),
        [...]
    )
    SysTrayIcon(next(icons), hover_text, menu_options, on_quit=bye, default_menu_index=1)
Okay, so this should work, shouldn't it? But when I click "Start" in the tray menu, I get an error:
Python WNDPROC handler failed
Traceback (most recent call last):
File "I:/Entwicklung/webcamZoomer/tray_gui.py", line 207, in command
self.execute_menu_option(id)
File "I:/Entwicklung/webcamZoomer/tray_gui.py", line 214, in execute_menu_option
menu_action(self)
File "I:/Entwicklung/webcamZoomer/tray_gui.py", line 256, in start
cam_thread.start()
File "H:\Programme\Python3\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "H:\Programme\Python3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "H:\Programme\Python3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "H:\Programme\Python3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "H:\Programme\Python3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle cv2.VideoCapture objects
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "H:\Programme\Python3\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "H:\Programme\Python3\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
Of course I used the search function, but I could only find things I don't really understand: it seems like OpenCV already uses multiprocessing. But why does that interfere with my code? Furthermore, I don't do any manual pickling; I only need the webcam input.
So - can someone help me out on this one? Thank you!
edit: By the way, I'm on Windows 10, and this software only needs to run on Windows systems.
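For reference: on Windows, multiprocessing can only spawn, so the child receives a pickled copy of the Process target, and a bound method like vcam.run drags the whole VirtualCam instance, including its unpicklable cv2.VideoCapture, into that pickle. Below is a minimal sketch of one way around this, assuming the names from the question (vcam, sysTrayIcon): use a threading.Thread, which shares the process and pickles nothing.

# Sketch only (names from the question, not the asker's final code):
# a thread shares memory with the tray icon, so cv2.VideoCapture is
# never pickled and the start/stop handlers keep working unchanged.
from threading import Thread

cam_thread = None

def start(sysTrayIcon):
    global cam_thread
    cam_thread = Thread(target=vcam.run, daemon=True)  # daemon: dies with the app
    cam_thread.start()

def stop(sysTrayIcon):
    vcam.stop()        # makes run() fall out of its while-loop
    cam_thread.join()

If a separate process were really needed, the capture would have to be opened inside run(), i.e. in the child, rather than in __init__, so that no VideoCapture object ever crosses the process boundary.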

Related

Error trying to run Pytorch on multiple GPUs

I'm trying to create a script with what I thought was a fairly simple Producer/Consumer queue. I'm using this on a system with two A4000 GPUs. Below is the relevant code.
import torch
from torch.multiprocessing import Process, set_start_method, Queue

def main():
    input_data_queue = Queue(25)
    send_data_queue = Queue(5)
    for i in range(torch.cuda.device_count()):
        Process_Data(input_data_queue, send_data_queue, i)
    ....

class Process_Data:
    def __init__(self, in_q, out_q, gpu_id):
        self.in_queue = in_q
        self.out_queue = out_q
        self.gpu_id = gpu_id
        self.model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt').to(torch.device(self.gpu_id))
        self.model.eval()
    ....

if __name__ == "__main__":
    set_start_method('spawn')
    main()
I always get the error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/usr/lib/python3.8/multiprocessing/spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
File "/usr/local/lib/python3.8/dist-packages/torch/multiprocessing/reductions.py", line 111, in rebuild_cuda_tensor
storage = storage_cls._new_shared_cuda(
File "/usr/local/lib/python3.8/dist-packages/torch/storage.py", line 630, in _new_shared_cuda
return eval(cls.__module__)._UntypedStorage._new_shared_cuda(*args, **kwargs)
RuntimeError: CUDA error: invalid device ordinal
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
For calling the device, I've tried:
First creating a model with torch.device(0), then another class with torch.device(1)
torch.device("cuda:0") then torch.device("cuda:1")
torch.device("cuda") then torch.device("cuda")
torch.device("cuda", 0), then torch.device("cuda",1)
All variations I can find documented give the same error.
How can I get two models, running on two GPUs, sharing a work queue?
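One hedged sketch of a likely fix (not verified on this setup): the "invalid device ordinal" surfaces while the child unpickles CUDA tensors, because the model is built in the parent and then shipped to the spawned process. Passing only the gpu_id and the queues, and loading the model inside each worker, avoids sending any CUDA tensor across the process boundary:

import torch
from torch.multiprocessing import Process, Queue, set_start_method

def worker(in_q, out_q, gpu_id):
    # built *inside* the child: no CUDA tensor is ever pickled to it
    device = torch.device('cuda', gpu_id)
    model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt').to(device)
    model.eval()
    while True:
        item = in_q.get()
        if item is None:              # sentinel: shut this worker down
            break
        with torch.no_grad():
            out_q.put(model(item))    # assumes item is already model-ready input

if __name__ == '__main__':
    set_start_method('spawn')
    input_data_queue, send_data_queue = Queue(25), Queue(5)
    workers = [Process(target=worker, args=(input_data_queue, send_data_queue, i))
               for i in range(torch.cuda.device_count())]
    for w in workers:
        w.start()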

How can you use easyocr with multiprocessing?

I'm trying to read text in images with easyocr in Python, and I want to run it separately so it doesn't hold back other parts of the code. But when I call the function inside a multiprocessing loop, I get a NotImplementedError. Here is an example of the code:
import multiprocessing as mp
import easyocr
import cv2

def ocr_test(q, reader):
    while not q.empty():
        q.get()
        img = cv2.imread('unknown.png')
        result = reader.readtext(img)

if __name__ == '__main__':
    q = mp.Queue()
    reader = easyocr.Reader(['en'])
    p = mp.Process(target=ocr_test, args=(q, reader))
    p.start()
    q.put('start')
    p.join()
and this is the error I get.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Program Files\Python310\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Program Files\Python310\lib\multiprocessing\spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
File "C:\Python\venv\lib\site-packages\torch\multiprocessing\reductions.py", line 90, in rebuild_tensor
t = torch._utils._rebuild_tensor(storage, storage_offset, size, stride)
File "C:\Python\venv\lib\site-packages\torch\_utils.py", line 134, in _rebuild_tensor
t = torch.tensor([], dtype=storage.dtype, device=storage._untyped().device)
NotImplementedError: Could not run 'aten::empty.memory_format' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty.memory_format' is only available for these backends: [CPU, Meta, MkldnnCPU, SparseCPU, SparseCsrCPU, BackendSelect, Python, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, AutocastCPU, Autocast, Batched, VmapMode, Functionalize].
Is there a way to solve this problem?
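A sketch of one plausible workaround (untested here): the traceback shows the failure while the child unpickles the Reader's quantized torch tensors, so constructing the Reader inside the worker process, and passing only plain data through the queue, sidesteps the pickling entirely. A None sentinel replaces the racy q.empty() check:

import multiprocessing as mp
import cv2
import easyocr

def ocr_test(q):
    reader = easyocr.Reader(['en'])   # constructed in the worker process itself
    while True:
        item = q.get()                # blocks; no racy q.empty() check
        if item is None:              # sentinel tells the worker to stop
            break
        img = cv2.imread('unknown.png')
        print(reader.readtext(img))

if __name__ == '__main__':
    q = mp.Queue()
    p = mp.Process(target=ocr_test, args=(q,))
    p.start()
    q.put('start')
    q.put(None)                       # let the worker exit so join() returns
    p.join()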

Python: Error occurs on Spyder when modifying pandas DataFrame with multi-threading

I have a large dataframe with a column "image"; the entries in "image" are the file names (with extension "jpg" or "jpeg") of a large number of files. Some files exist with the right extension, but others don't, so I have to check whether each "image" entry is correct. That takes 30 seconds single-threaded, so I decided to parallelize it.
I wrote the code below in Python (3.6.5) to check this. It runs well when I execute it from the command line, but an error occurs when I run it in Spyder (3.2.8). What can I do to avoid this?
Here is my code:
# -*- coding: utf-8 -*-
import multiprocessing
import numpy as np
import os
import pandas as pd
from multiprocessing import Pool

# some large-scale DataFrame, the size is about (600, 15)
waferDf = pd.DataFrame({"image": ["aaa.jpg", "bbb.jpeg", "ccc.jpg", "ddd.jpeg", "eee.jpg", "fff.jpg", "ggg.jpeg", "hhh.jpg"]})
waferDf["imagePath"] = np.nan

# to parallelize the whole process
def parallelize(func, df, uploadedDirPath):
    partitionCount = multiprocessing.cpu_count()
    partitions = np.array_split(df, partitionCount)
    paras = [(part, uploadedDirPath) for part in partitions]
    pool = Pool(partitionCount)
    df = pd.concat(pool.starmap(func, paras))
    pool.close()
    pool.join()
    return df

# check whether files exist
def checkImagePath(partialDf, uploadedDirPath):
    for index in partialDf.index.values:
        print(index)
        if os.path.exists(os.path.join(uploadedDirPath, partialDf.loc[index, ["image"]][0].replace(".jpeg\n", ".jpeg"))):
            partialDf.loc[index, ["imagePath"]][0] = os.path.join(uploadedDirPath, partialDf.loc[index, ["image"]][0].replace(".jpeg\n", ".jpeg"))
        elif os.path.exists(os.path.join(uploadedDirPath, partialDf.loc[index, ["image"]][0].replace(".jpeg\n", ".jpg"))):
            partialDf.loc[index, ["imagePath"]][0] = os.path.join(uploadedDirPath, partialDf.loc[index, ["image"]][0].replace(".jpeg\n", ".jpg"))
    print(partialDf)
    return partialDf

if __name__ == '__main__':
    waferDf = parallelize(checkImagePath, waferDf, "/eap/uploadedFiles/")
    print(waferDf)
and here is the error:
runfile('C:/Users/00048564/Desktop/Multi-Threading.py', wdir='C:/Users/00048564/Desktop')
Traceback (most recent call last):
File "<ipython-input-24-732edc0ea3ea>", line 1, in <module>
runfile('C:/Users/00048564/Desktop/Multi-Threading.py', wdir='C:/Users/00048564/Desktop')
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile
execfile(filename, namespace)
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/00048564/Desktop/Multi-Threading.py", line 35, in <module>
waferDf = parallelize(checkImagePath, waferDf, "/eap/uploadedFiles/")
File "C:/Users/00048564/Desktop/Multi-Threading.py", line 17, in parallelize
pool = Pool(partitionCount)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 119, in Pool
context=self.get_context())
File "C:\ProgramData\Anaconda3\lib\multiprocessing\pool.py", line 174, in __init__
self._repopulate_pool()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\pool.py", line 239, in _repopulate_pool
w.start()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 172, in get_preparation_data
main_mod_name = getattr(main_module.__spec__, "name", None)
AttributeError: module '__main__' has no attribute '__spec__'
In most cases, when you run a Python script from the command line with python YourFile.py, the script is executed as the main program, so it can set up the required modules such as multiprocessing and the others shown in your error trace.
However, your Spyder configuration could be different, and its way of running the script as the main program is not working.
Were you able to successfully run any script from Spyder that has
if __name__ == '__main__':
Read the accepted answer on this thread https://stackoverflow.com/a/419185/9968677
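A small sketch of a commonly suggested workaround (an assumption, not verified for this Spyder version): Spyder's IPython console replaces __main__, so spawn.get_preparation_data() finds no __spec__ attribute there. Defining one at module level, or running the file in an external system terminal instead, is usually enough:

# Workaround sketch: give the spawn machinery the attribute it getattr()s.
# getattr(None, "name", None) then simply returns None instead of raising.
__spec__ = None

if __name__ == '__main__':
    waferDf = parallelize(checkImagePath, waferDf, "/eap/uploadedFiles/")
    print(waferDf)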

Python 3.x multiprocessing tkinter mainloop

How do you use multiprocessing on root.mainloop? I am using Python 3.6. I need to run lines of code after it, some requiring the object.
I do not want to create a second object, like some of the other answers for my question suggest.
Here is a little code snippet (sett being a JSON object):
from multiprocessing import Process
from tkinter import Tk

def check():
    try:
        sett['setup']
    except KeyError:
        sett['troubleshoot_file'] = None
        check()
    else:
        if sett['setup'] is True:
            return
        elif type(sett['setup']) is not bool:
            raise TypeError('sett[\'setup\'] is not a type of boolean (\'bool\')')

root = Tk()
root['bg'] = 'blue'
mainloop = Process(target=root.mainloop)
mainloop.start()
mainloop.join()
check()
However, I get this traceback:
Traceback (most recent call last):
File "(directory)/main.py", line 41, in <module>
check()
File "(directory)/main.py", line 39, in check
mainloop.start()
File "C:\Program Files (x86)\Python36-32\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Program Files (x86)\Python36-32\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Program Files (x86)\Python36-32\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Program Files (x86)\Python36-32\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "C:\Program Files (x86)\Python36-32\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _tkinter.tkapp objects
I have tried running:
from queue import Queue
from tkinter import Tk
from multiprocessing import Process
p=Process(target=q.get())
The interpreter then completely crashes.
You cannot use any tkinter objects across multiple processes or threads. If you need to share data between the gui and other processes you will need to set up a queue, and poll the queue from the GUI.
The reason for this is that tkinter is a wrapper around a tcl interpreter that knows nothing about python threads or processes.
You will find a link on how to do this at:
docs.python.org/3.6/library/queue.html
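A minimal sketch of the pattern described above (names are illustrative, not from the question): the worker process never touches tkinter, and the GUI polls the queue on its own event loop with after():

from multiprocessing import Process, Queue
from queue import Empty
from tkinter import Tk, Label

def worker(q):
    q.put('hello from the worker')      # only plain, picklable data crosses

def poll(root, q, label):
    try:
        label['text'] = q.get_nowait()  # non-blocking check of the queue
    except Empty:
        pass                            # nothing yet; check again later
    root.after(100, poll, root, q, label)   # re-poll every 100 ms

if __name__ == '__main__':
    q = Queue()
    Process(target=worker, args=(q,), daemon=True).start()
    root = Tk()
    label = Label(root, text='waiting...')
    label.pack()
    poll(root, q, label)
    root.mainloop()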

SimpleCV and multithreading

I'm trying to use SimpleCV for image capture in Python (Windows). Capture is performed inside a function, which I want to run inside a thread. This is my code:
# -*- encoding: utf-8 -*-
import threading
import time
from SimpleCV import Camera

def run(filename):
    # Initialize the camera
    cam = Camera(0, {"width": 640, "height": 480})
    while 1:
        img = cam.getImage()
        img.save(filename, quality=50, optimize=True, progressive=True)
        time.sleep(3)

filename = "C:/SimpleCV/image.jpeg"
t = threading.Thread(target=run, args=(filename,))
t.start()
while(1):
    time.sleep(1)
If I call the run() function directly (with no threads), everything is OK. However, when using a thread (as in the above code), Windows shows a dialog asking for the capture source and the program crashes. What's the problem?
Error codes:
OpenCV Error: Bad argument (Array should be CvMat or IplImage) in unknown function, file C:\slave\WinInstallerMegaPack\src\opencv\modules\core\src\array.cpp, line 1238
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Python27\lib\threading.py", line 551, in __bootstrap_inner
self.run()
File "C:\Python27\lib\threading.py", line 504, in run
self.__target(*self.__args, **self.__kwargs)
File "C:/SimpleCV/test.py", line 12, in run
img = cam.getImage()
File "C:\Python27\lib\site-packages\SimpleCV\Camera.py", line 586, in getImage
newimg = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_8U, 3)
error: Array should be CvMat or IplImage
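One workaround often suggested for this kind of crash (untested here, an assumption): create the Camera on the main thread, where the Windows capture driver gets initialized, and hand the already-constructed object to the worker thread:

# -*- encoding: utf-8 -*-
import threading
import time
from SimpleCV import Camera

def run(cam, filename):
    while True:
        img = cam.getImage()
        img.save(filename, quality=50, optimize=True, progressive=True)
        time.sleep(3)

filename = "C:/SimpleCV/image.jpeg"
cam = Camera(0, {"width": 640, "height": 480})   # initialized on the main thread
t = threading.Thread(target=run, args=(cam, filename))
t.start()
while True:
    time.sleep(1)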
