SimpleCV and multithreading in Python

I'm trying to use SimpleCV for image capture in Python (Windows). Capture is performed inside a function, which I want to run inside a thread. This is my code:
# -*- encoding: utf-8 -*-
import threading
import time
from SimpleCV import Camera

def run(filename):
    # Initialize the camera
    cam = Camera(0, {"width": 640, "height": 480})
    while 1:
        img = cam.getImage()
        img.save(filename, quality=50, optimize=True, progressive=True)
        time.sleep(3)

filename = "C:/SimpleCV/image.jpeg"
t = threading.Thread(target=run, args=(filename,))
t.start()

while 1:
    time.sleep(1)
If I call the run() function directly (with no threads), everything works fine. However, when using a thread (as in the code above), Windows shows a dialog asking for the capture source and the program crashes. What's the problem?
Error output:
OpenCV Error: Bad argument (Array should be CvMat or IplImage) in unknown function, file C:\slave\WinInstallerMegaPack\src\opencv\modules\core\src\array.cpp, line 1238
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Python27\lib\threading.py", line 551, in __bootstrap_inner
self.run()
File "C:\Python27\lib\threading.py", line 504, in run
self.__target(*self.__args, **self.__kwargs)
File "C:/SimpleCV/test.py", line 12, in run
img = cam.getImage()
File "C:\Python27\lib\site-packages\SimpleCV\Camera.py", line 586, in getImage
newimg = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_8U, 3)
error: Array should be CvMat or IplImage
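One workaround that is sometimes suggested for capture backends that misbehave when opened from a worker thread is to create the Camera in the main thread and only run the capture loop in the thread. A minimal sketch of that idea (untested, not a confirmed fix):

import threading
import time
from SimpleCV import Camera

def run(cam, filename):
    # The camera object is created in the main thread and only used here
    while 1:
        img = cam.getImage()
        img.save(filename, quality=50, optimize=True, progressive=True)
        time.sleep(3)

# Initialize the camera in the main thread so the capture-source dialog
# (if any) is triggered there rather than in the worker
cam = Camera(0, {"width": 640, "height": 480})
t = threading.Thread(target=run, args=(cam, "C:/SimpleCV/image.jpeg"))
t.start()

while 1:
    time.sleep(1)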

Related

How can you use easyocr with multiprocessing?

I tried to read text on images with easyocr in Python, and I want to run it separately so it doesn't hold back other parts of the code. But when I call the function inside a multiprocessing loop, I get a NotImplementedError. Here is an example of the code.
import multiprocessing as mp
import easyocr
import cv2

def ocr_test(q, reader):
    while not q.empty():
        q.get()
        img = cv2.imread('unknown.png')
        result = reader.readtext(img)

if __name__ == '__main__':
    q = mp.Queue()
    reader = easyocr.Reader(['en'])
    p = mp.Process(target=ocr_test, args=(q, reader))
    p.start()
    q.put('start')
    p.join()
And this is the error I get:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Program Files\Python310\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Program Files\Python310\lib\multiprocessing\spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
File "C:\Python\venv\lib\site-packages\torch\multiprocessing\reductions.py", line 90, in rebuild_tensor
t = torch._utils._rebuild_tensor(storage, storage_offset, size, stride)
File "C:\Python\venv\lib\site-packages\torch\_utils.py", line 134, in _rebuild_tensor
t = torch.tensor([], dtype=storage.dtype, device=storage._untyped().device)
NotImplementedError: Could not run 'aten::empty.memory_format' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty.memory_format' is only available for these backends: [CPU, Meta, MkldnnCPU, SparseCPU, SparseCsrCPU, BackendSelect, Python, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, AutocastCPU, Autocast, Batched, VmapMode, Functionalize].
Is there a way to solve this problem?
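The traceback shows the failure happening while the child process unpickles the easyocr.Reader: rebuilding its quantized torch tensors on the 'QuantizedCPU' backend is what raises the NotImplementedError. A commonly suggested way around this is to construct the Reader inside the worker process so it never has to be pickled. A minimal sketch under that assumption, reusing the file names from the question:

import multiprocessing as mp

import cv2
import easyocr

def ocr_test(q):
    # Build the Reader in the child process; only the queue crosses the
    # process boundary, so no torch tensors need to be pickled
    reader = easyocr.Reader(['en'])
    while not q.empty():
        q.get()
        img = cv2.imread('unknown.png')
        result = reader.readtext(img)
        print(result)

if __name__ == '__main__':
    q = mp.Queue()
    q.put('start')  # enqueue before starting so the worker finds work waiting
    p = mp.Process(target=ocr_test, args=(q,))
    p.start()
    p.join()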

OpenCV and custom multiprocessing

I'm trying to build webcam software that takes the webcam input, changes the image, and then outputs it to a virtual webcam. I got it running pretty well, but now I want to add a system tray icon to control it. Since the tray icon is a GUI, my capture code needs to run on another thread.
So my (cut down) class looks like this:
import pyvirtualcam
import cv2
import effects
import numpy as np

class VirtualCam:
    def __init__(self):
        self.stopped = False
        self.paused = False
        [...]

    def run(self):
        with pyvirtualcam.Camera(width=1280, height=720, fps=self.FPS, fmt=pyvirtualcam.PixelFormat.BGR) as cam:
            while not self.stopped:
                if not self.paused:  # only copy image if NOT paused!
                    success, img = self.cap.read()
                else:
                    img = self.EMPTY_IMAGE
                cam.send(effects.zoom(img, self.zoom))
                cam.sleep_until_next_frame()
So this should be straightforward.
Now in my GUI thread, which is based on this code, I added menu entries, one of which just starts the capture and nothing more (while the others can pause, stop, etc.). Again, this is cut down:
if __name__ == '__main__':
    import itertools
    import glob
    import virtcam
    from multiprocessing import Process

    cam_thread = None
    vcam = virtcam.VirtualCam()

    # controlling vcam
    def pause(sysTrayIcon):
        vcam.pause()

    def cont(sysTrayIcon):
        vcam.cont()

    def start(sysTrayIcon):
        global cam_thread
        # start threading for camera capture here
        cam_thread = Process(target=vcam.run)
        cam_thread.start()

    def stop(sysTrayIcon):
        global cam_thread
        vcam.stop()
        cam_thread.join()

    menu_options = (
        ('Start', next(icons), start),
        [...]
    )
    SysTrayIcon(next(icons), hover_text, menu_options, on_quit=bye, default_menu_index=1)
Okay, so this should work, shouldn't it? But when I click on "Start" in the tray menu, I get an error:
Python WNDPROC handler failed
Traceback (most recent call last):
File "I:/Entwicklung/webcamZoomer/tray_gui.py", line 207, in command
self.execute_menu_option(id)
File "I:/Entwicklung/webcamZoomer/tray_gui.py", line 214, in execute_menu_option
menu_action(self)
File "I:/Entwicklung/webcamZoomer/tray_gui.py", line 256, in start
cam_thread.start()
File "H:\Programme\Python3\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "H:\Programme\Python3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "H:\Programme\Python3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "H:\Programme\Python3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "H:\Programme\Python3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle cv2.VideoCapture objects
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "H:\Programme\Python3\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "H:\Programme\Python3\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
Of course I used the search function, but I could only find things I don't really understand: it seems like OpenCV already uses multiprocessing, but why does it interfere with my code? Furthermore, I don't do any manual pickling; I actually only need the webcam input.
So - can someone help me out on this one? Thank you!
Edit: By the way, I'm on Windows 10, and this software only needs to run on Windows systems.
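The TypeError comes from multiprocessing trying to pickle the whole VirtualCam instance, including its cv2.VideoCapture, so it can be sent to the spawned child process; the EOFError in the child is just the follow-on failure. Two approaches are commonly suggested: use a threading.Thread instead of a Process (nothing has to be pickled), or create everything that cannot be pickled inside the child process. A minimal sketch of the second option, with hypothetical names and parameters:

from multiprocessing import Process

import cv2
import pyvirtualcam

def run_camera():
    # Both the VideoCapture and the virtual camera are created inside the
    # child process, so nothing unpicklable has to cross the process boundary
    cap = cv2.VideoCapture(0)
    with pyvirtualcam.Camera(width=1280, height=720, fps=30,
                             fmt=pyvirtualcam.PixelFormat.BGR) as cam:
        while True:
            success, img = cap.read()
            if not success:
                break
            img = cv2.resize(img, (1280, 720))  # match the virtual camera size
            cam.send(img)
            cam.sleep_until_next_frame()

if __name__ == '__main__':
    p = Process(target=run_camera)
    p.start()

Pause/stop flags would then have to be shared through multiprocessing.Event or a Queue rather than plain attributes, because the child gets its own copy of the object; for a tray icon that only controls a capture loop, a threading.Thread is usually the simpler fit.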

How can I save a filtered pyshark FileCapture to a new pcap file?

I have a program that can scan a pcap file using pyshark.FileCapture and then print the filtered packets.
I want to save those packets to a new pcap file.
Code:
import pyshark
import os
import sys
from scapy.all import *

def save_to_pcap(cap, filename):
    new_cap = PcapWriter(filename, append=True)
    for packet in cap:
        new_cap.write(packet.get_raw_packet())

def load_pcap(filter_str, path):
    cap = pyshark.FileCapture(path, display_filter=filter_str)
    return cap

def main():
    cap = load_pcap('http', 'file.pcap')
    cap
    save_to_pcap(cap, 'results.pcap')

main()
I tried using scapy, but the save_to_pcap() function does not work and this exception pops up:
Traceback (most recent call last):
File "SharkAn.py", line 116, in <module>
main()
File "SharkAn.py", line 108, in main
save_to_pcap(cap, filename)
File "SharkAn.py", line 81, in save_to_pcap
pcap = rdpcap(cap)
File "C:\Users\Gal\AppData\Local\Programs\Python\Python37\lib\site-packages\scapy\utils.py", line 860, in rdpcap
with PcapReader(filename) as fdesc:
File "C:\Users\Gal\AppData\Local\Programs\Python\Python37\lib\site-packages\scapy\utils.py", line 883, in __call__
filename, fdesc, magic = cls.open(filename)
File "C:\Users\Gal\AppData\Local\Programs\Python\Python37\lib\site-packages\scapy\utils.py", line 914, in open
magic = fdesc.read(4)
AttributeError: 'FileCapture' object has no attribute 'read'
Just did exactly what you want:
cap = pyshark.FileCapture('path.pcap', display_filter=filter_str, output_file='path_to_save.pcap')
cap.load_packets()
This will save the filtered packets to 'path_to_save.pcap'. Note that this method loads the capture into memory. Scapy is not needed.
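Folded into the structure of your script, a minimal sketch of the same approach (file names assumed) looks like this:

import pyshark

def filter_and_save(filter_str, path, out_path):
    # output_file tells pyshark/tshark to write the packets that pass the
    # display filter to a new capture file
    cap = pyshark.FileCapture(path, display_filter=filter_str,
                              output_file=out_path)
    cap.load_packets()  # iterate the capture so the output file gets written
    cap.close()

filter_and_save('http', 'file.pcap', 'results.pcap')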

Python: OverflowError: Python int too large to convert to C long when saving plot as image

I'm programming in Python on a Raspberry Pi 1. I get this error when I try to save a plot using plt.savefig, even if the plot contains only a single value. I have Tkinter running in the main process, and a separate thread does some calculations by calling functions in a different .py file; plt.savefig is in one of those functions. plt.savefig works fine when I call the second .py file directly, so I guess this has something to do with my threading? My knowledge is kind of limited, I would really appreciate some help :(
Edit:
import threading
import time
import matplotlib.pyplot as plt

def saveplot():
    plt.plot(3)
    plt.savefig("plot.jpg")
    time.sleep(10)

threads = []
t = threading.Thread(target=saveplot)
threads.append(t)
t.start()
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "Slett.py", line 6, in saveplot
plt.savefig("plot.jpg")
File "/usr/lib/python2.7/dist-packages/matplotlib/pyplot.py", line 578, in savefig
draw() # need this if 'transparent=True' to reset colors
File "/usr/lib/python2.7/dist-packages/matplotlib/pyplot.py", line 571, in draw
get_current_fig_manager().canvas.draw()
File "/usr/lib/python2.7/dist-packages/matplotlib/backends/backend_tkagg.py", line 350, in draw
tkagg.blit(self._tkphoto, self.renderer._renderer, colormode=2)
File "/usr/lib/python2.7/dist-packages/matplotlib/backends/tkagg.py", line 21, in blit
_tkagg.tkinit(tk.interpaddr(), 1)
OverflowError: Python int too large to convert to C long
Please look at this thread: https://github.com/matplotlib/matplotlib/issues/7680. This is a bug in matplotlib; see also https://github.com/matplotlib/matplotlib/pull/7634.
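As a workaround until a fixed matplotlib version is available, one option (not from the linked issue, just a general suggestion) is to avoid the Tk backend entirely when saving figures from a worker thread, since the failing call is the Tk blit. A minimal sketch using the non-interactive Agg backend:

import threading
import time

import matplotlib
matplotlib.use('Agg')  # select a non-GUI backend before importing pyplot
import matplotlib.pyplot as plt

def saveplot():
    # Agg renders directly to the image file, so no Tk canvas is touched
    plt.plot([3])
    plt.savefig("plot.jpg")
    time.sleep(10)

t = threading.Thread(target=saveplot)
t.start()
t.join()

If the main process also needs an interactive Tkinter GUI, keeping all matplotlib work in a single thread (or handing the data back to the main thread for plotting) is the safer pattern, since pyplot itself is not thread-safe.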

py2exe on PIL ImageStat.Stat throws Exception: argument 2 must be ImagingCore, not ImagingCore

I'm trying to create a .exe from a Python program using py2exe, but when I run the .exe I get a log file with:
Exception in thread Thread-1:
Traceback (most recent call last):
File "threading.pyc", line 532, in __bootstrap_inner
File "threading.pyc", line 484, in run
File "webcam.py", line 66, in loop
File "ImageStat.pyc", line 50, in __init__
File "PIL\Image.pyc", line 990, in histogram
TypeError: argument 2 must be ImagingCore, not ImagingCore
Here's some code:
# webcam.py
cam = VideoCapture.Device()

def getImage():
    return cam.getImage()

...

camshot = grayscale(getImage())
lightCoords = []
level = camshot.getextrema()[1] - leniency
for p in camshot.getdata():
    if p >= level:
        lightCoords.append(255)
    else:
        lightCoords.append(0)
maskIm = new("L", res)
maskIm.putdata(lightCoords)

...

# lines 64-66 (line 66 is the one in the traceback):
colorcamshot = getImage()
camshot = grayscale(colorcamshot)
brightness = ImageStat.Stat(camshot, maskIm).sum[0] / divVal
Try importing PIL in your main thread before starting any worker threads. It looks like the same class has been imported twice, and type comparisons are acting wacky as a result.
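A minimal sketch of that suggestion (module and function names are assumptions based on the traceback, not confirmed from the question):

# main script packaged with py2exe
# Import PIL in the main thread first, so the frozen executable binds a single
# copy of the C extension before any worker thread imports it again.
from PIL import Image, ImageStat

import threading
import webcam  # the module whose loop() appears in the traceback (assumed name)

t = threading.Thread(target=webcam.loop)
t.start()
t.join()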
