Can't run process from another python .py file - python

I am creating a class whose __init__ runs one of its methods in a multiprocessing.Process. When I instantiate the class from the same .py file, everything works fine. But when I instantiate it from another .py file, it fails and throws an error... Why?
MyDummyClass.py:
import multiprocessing

class MyDummyClass:
    def __init__(self):
        print("I am in '__init__()'")
        if __name__ == 'MyDummyClass':
            p = multiprocessing.Process(target=self.foo())
            p.start()

    def foo(self):
        print("Hello from foo()")
example.py:
from MyDummyClass import MyDummyClass
dummy = MyDummyClass()
When I run example.py, I am getting this error:
I am in '__init__()'
Hello from foo()
I am in '__init__()'
Hello from foo()
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Python\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Python\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Python\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Python\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\Python\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Python\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Python\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "E:\Dropbox\Docs\Business\_NZTrackAlerts\Website\Current dev\NZtracker\cgi-bin\example1.py", line 2, in <module>
dummy = MyDummyClass()
File "...path to my file...\MyDummyClass.py", line 13, in __init__
p.start()
File "C:\Python\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Python\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Python\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Python\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Python\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "C:\Python\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
I can't understand, even from other posts, how to fix this.
Thank you very much for your help!

This has to do with the way Python starts child processes on Windows. There is no fork there, so multiprocessing uses the spawn start method: the child process re-imports your main script in order to rebuild its state. Because example.py calls dummy = MyDummyClass() at module level, that re-import runs it again, and the child tries to start yet another process before its bootstrapping phase has finished, which is exactly what the RuntimeError says. If you instead have example.py like this:
from MyDummyClass import MyDummyClass
if __name__ == "__main__":
    dummy = MyDummyClass()
This will produce your desired output:
C:\Python Scripts>python example.py
I am in '__init__()'
Hello from foo()
The if __name__ == "__main__" block tells Python to execute its body only when the script is run directly (i.e. python example.py), not when it is re-imported by a spawned child, so the child never tries to start another process. Note also that target=self.foo() calls foo immediately in the parent and passes its return value (None) to Process; you almost certainly want target=self.foo.
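For completeness, here is a minimal single-file sketch (my own, not from the question) with both fixes applied: the method itself is passed as the target, and the instantiation sits under the main guard so a spawned child re-importing the file does not create yet another process.

```python
import multiprocessing

class MyDummyClass:
    def __init__(self):
        print("I am in '__init__()'")
        # Pass the method itself, not its result: target=self.foo,
        # not target=self.foo() (which would run foo in the parent
        # and hand multiprocessing its return value, None).
        self.p = multiprocessing.Process(target=self.foo)
        self.p.start()

    def foo(self):
        print("Hello from foo()")

if __name__ == '__main__':
    # Guarded, so a spawned child re-importing this file
    # skips this block and no recursion occurs.
    dummy = MyDummyClass()
    dummy.p.join()
```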

Related

For some reason this python multiprocessing code is not working

So, I am just starting out with Python multiprocessing. I tried this example but couldn't quite get it to work:
import multiprocessing

def function():
    time.sleep(1)
    print("slept once")

p1 = multiprocessing.Process(target=function)
p2 = multiprocessing.Process(target=function)
p1.start()
p2.start()
it should output this:
(sleeping 1 second)
slept once
slept once
but instead it gave me an error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\krist\PycharmProjects\chat_app\client_1.py", line 11, in <module>
p1.start()
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
(the same traceback is then printed a second time, for the second child process)
I am using Windows 11, Python 3.6.
Hope someone can help!
Your code works fine if you add in the expected idiom:
import time
import multiprocessing

def function():
    time.sleep(1)
    print("slept once")

if __name__ == '__main__':
    p1 = multiprocessing.Process(target=function)
    p2 = multiprocessing.Process(target=function)
    p1.start()
    p2.start()
You can't start a new process this way unless you follow the multiprocessing library's programming guidelines[1]. The guidelines specifically require that start() only be called inside an if __name__ == '__main__': block. Why? Because under the spawn start method the child re-imports your script, so it is only safe to spawn new processes from code that runs when the file is executed directly, not when it is imported.
So, this should do the trick:
import multiprocessing
import time

def function():
    time.sleep(1)
    print("slept once")

p1 = multiprocessing.Process(target=function)
p2 = multiprocessing.Process(target=function)

if __name__ == '__main__':
    p1.start()
    p2.start()
[0] multiprocessing documentation: https://docs.python.org/3/library/multiprocessing.html
[1] multiprocessing programming guidelines: https://docs.python.org/3/library/multiprocessing.html#multiprocessing-programming

Why does multiprocessing work differently on Ubuntu vs macOS?

I have reduced my code to this simple example.
from multiprocessing import Process
def abc():
    print("HI")
    print(a)

a = 4
p = Process(target=abc)
p.start()
It works perfectly fine in ubuntu (Python 3.8.5) and provides the output:
HI
4
However, it fails in Spyder (Python 3.9.5) with "AttributeError: Can't get attribute 'abc' on <module '__main__' (built-in)>" and on the macOS command line (Python 3.8.10; I tried other versions as well and they failed too) with a RuntimeError.
Spyder error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "multiprocessing/spawn.pyc", line 116, in spawn_main
File "multiprocessing/spawn.pyc", line 126, in _main
AttributeError: Can't get attribute 'abc' on <module '__main__' (built-in)>
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "multiprocessing/spawn.pyc", line 116, in spawn_main
File "multiprocessing/spawn.pyc", line 126, in _main
AttributeError: Can't get attribute 'abc' on <module '__main__' (built-in)>
MacOS-BigSur error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/spawn.py", line 125, in _main
prepare(preparation_data)
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/spawn.py", line 236, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "/Users/asavas/opt/anaconda3/lib/python3.8/runpy.py", line 265, in run_path
return _run_module_code(code, init_globals, run_name,
File "/Users/asavas/opt/anaconda3/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/Users/asavas/opt/anaconda3/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/asavas/delete.py", line 8, in <module>
p.start()
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/spawn.py", line 154, in get_preparation_data
_check_not_importing_main()
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/spawn.py", line 134, in _check_not_importing_main
raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
I am trying to understand why this happens and how I can resolve it.
Thanks!
Since Python 3.8, multiprocessing on macOS uses the spawn rather than the fork method of creating new processes by default. This means that when a new process is created, a new, empty address space is created, a new Python interpreter is launched, and the source file is re-executed from the top. Consequently, any code that is at global scope and not inside an if __name__ == '__main__': block will get executed again. That is why any code that creates processes must be in such a block; otherwise you get into a recursive, process-creating loop that generates the errors you see. You simply need:
from multiprocessing import Process
def abc():
    print("HI")
    print(a)

# this will get re-executed in the new subprocess:
a = 4

if __name__ == '__main__':
    p = Process(target=abc)
    p.start()
    p.join()  # explicitly wait for the process to complete
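One way to see the re-execution for yourself (a small illustrative script of my own, not from the question) is to print __name__ at module level and ask a spawned child which module name it saw: the parent runs the file as '__main__', while the child's re-import runs it as '__mp_main__'.

```python
import multiprocessing as mp

# This line runs once in the parent ('__main__') and once per spawned
# child ('__mp_main__'), because spawn re-imports the main file.
print("module executed with __name__ =", __name__)

def report(q):
    # In the child, the re-imported main module is named '__mp_main__'.
    q.put(__name__)

if __name__ == '__main__':
    ctx = mp.get_context('spawn')  # force spawn even where fork is the default
    q = ctx.Queue()
    p = ctx.Process(target=report, args=(q,))
    p.start()
    print("child saw __name__ =", q.get())
    p.join()
```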

multiprocessing causes infinite loop of errors

Using Python 3.8 on macOS Big Sur.
The lines
import multiprocessing as mp
pool = mp.Pool(2)
cause an infinite loop of errors; a snippet of it is collected below.
Also, mp.freeze_support() does not seem to help.
Traceback (most recent call last):
File "<string>", line 1, in <module>
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/biscuit/.conda/envs/advanced-projects/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
File "/Users/biscuit/.conda/envs/advanced-projects/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/Users/biscuit/.conda/envs/advanced-projects/lib/python3.8/multiprocessing/spawn.py", line 125, in _main
exitcode = _main(fd, parent_sentinel)
File "/Users/biscuit/.conda/envs/advanced-projects/lib/python3.8/multiprocessing/spawn.py", line 125, in _main
prepare(preparation_data)
File "/Users/biscuit/.conda/envs/advanced-projects/lib/python3.8/multiprocessing/spawn.py", line 236, in prepare
prepare(preparation_data)
File "/Users/biscuit/.conda/envs/advanced-projects/lib/python3.8/multiprocessing/spawn.py", line 236, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/Users/biscuit/.conda/envs/advanced-projects/lib/python3.8/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
_fixup_main_from_path(data['init_main_from_path'])
File "/Users/biscuit/.conda/envs/advanced-projects/lib/python3.8/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "/Users/biscuit/.conda/envs/advanced-projects/lib/python3.8/runpy.py", line 264, in run_path
main_content = runpy.run_path(main_path,
...
Any ideas why?
Edit:
None of the solutions worked at the time, but it seems to be working today. I did install some other packages that forced updates to dependencies, so maybe that changed my multiprocessing version or some other package version and fixed it?
Either way, it's working now; sorry this won't be much help to others /:
Specifically when the start method is "spawn" (now the default on macOS): you must only ever create child processes (and creating a pool creates child processes) inside the if __name__ == "__main__": block, because the children import the __main__ file. When that import happens the file gets executed (just like any import), and you will recursively keep creating more and more child processes unless something, like the if __name__ ... block, limits process creation to the parent.
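As a minimal illustration (the names here are my own, not from the question), the same pool works once its creation is moved under the guard:

```python
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == '__main__':
    # Pool creation starts child processes, so it must happen only in
    # the parent; under spawn, the children re-import this file with
    # __name__ == '__mp_main__' and skip this block entirely.
    with mp.Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))  # [1, 4, 9]
```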

Python inputs library - 'NoneType' object has no attribute 'terminate'

I am trying to use the inputs library to get user input from mice, gamepads, and keyboards.
I tried the following code which is supposed to read events from all devices:
import inputs
while True:
    for device in inputs.devices:
        for event in device.read():
            print(event)
There is a problem when I run the code - I get the following error: AttributeError: 'NoneType' object has no attribute 'terminate'
I have also tried to read a single event:
import inputs
while True:
    for device in inputs.devices:
        event = device.read()
        print(event)
This gives me the same error.
I am using Python 3.6 and inputs==0.4 from pip.
Does anyone know how to fix this error?
Full traceback:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\python36\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\python36\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\python36\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\python36\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\python36\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\python36\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\python36\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\David\Documents\GitHub\Bubbles\testing.py", line 5, in <module>
event = device.read()
File "C:\python36\lib\site-packages\inputs.py", line 2313, in read
return next(iter(self))
File "C:\python36\lib\site-packages\inputs.py", line 2273, in __iter__
event = self._do_iter()
File "C:\python36\lib\site-packages\inputs.py", line 2292, in _do_iter
data = self._get_data(read_size)
File "C:\python36\lib\site-packages\inputs.py", line 2365, in _get_data
return self._pipe.recv_bytes()
File "C:\python36\lib\site-packages\inputs.py", line 2330, in _pipe
self._listener.start()
File "C:\python36\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\python36\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\python36\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\python36\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\python36\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "C:\python36\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
Exception ignored in: <bound method InputDevice.__del__ of inputs.Keyboard("/dev/input/by-id/usb-A_Nice_Keyboard-event-kbd")>
Traceback (most recent call last):
File "C:\python36\lib\site-packages\inputs.py", line 2337, in __del__
File "C:\python36\lib\multiprocessing\process.py", line 116, in terminate
AttributeError: 'NoneType' object has no attribute 'terminate'
Define your loop inside a function:
import inputs

def func():
    while True:
        for device in inputs.devices:
            event = device.read()
            print(event)
and then call it from inside the main guard:
if __name__ == '__main__':
    func()
Judging from your traceback, device.read() makes inputs start a background listener process the first time it is called, so the usual Windows multiprocessing rule applies: the code that starts that process must only run when the script is executed directly, not when the spawned child re-imports it.

ThreadPool is OK, Pool raises RuntimeError

I'm new to Python and I'm trying to implement some kind of timeout for when a process hangs. The following code works fine:
from multiprocessing.pool import ThreadPool

pool = ThreadPool(processes=1)
file_list = []
while not file_list:
    async_result = pool.apply_async(list_retriever,)
    try:
        file_list = async_result.get(15)
    except:
        print('We\'ve got timeout!\n')
I've found that ThreadPool is not documented properly, so I decided to switch to Pool instead. The following code raises a RuntimeError:
from multiprocessing.pool import Pool

pool = Pool(processes=1)
file_list = []
while not file_list:
    async_result = pool.apply_async(list_retriever,)
    try:
        file_list = async_result.get(15)
    except:
        print('We\'ve got timeout!\n')
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Bodnya\AppData\Local\Programs\Python\Python35\lib\multiprocessing\spawn.py", line 106, in spawn_main
exitcode = _main(fd)
File "C:\Users\Bodnya\AppData\Local\Programs\Python\Python35\lib\multiprocessing\spawn.py", line 115, in _main
prepare(preparation_data)
File "C:\Users\Bodnya\AppData\Local\Programs\Python\Python35\lib\multiprocessing\spawn.py", line 226, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\Bodnya\AppData\Local\Programs\Python\Python35\lib\multiprocessing\spawn.py", line 278, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\Users\Bodnya\AppData\Local\Programs\Python\Python35\lib\runpy.py", line 254, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\Bodnya\AppData\Local\Programs\Python\Python35\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\Bodnya\AppData\Local\Programs\Python\Python35\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Bodnya\PycharmProjects\zuf-test-branch\zuf-test\auto-deploy.py", line 69, in <module>
pool = Pool(processes=1)
File "C:\Users\Bodnya\AppData\Local\Programs\Python\Python35\lib\multiprocessing\pool.py", line 168, in __init__
self._repopulate_pool()
File "C:\Users\Bodnya\AppData\Local\Programs\Python\Python35\lib\multiprocessing\pool.py", line 233, in _repopulate_pool
w.start()
File "C:\Users\Bodnya\AppData\Local\Programs\Python\Python35\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\Bodnya\AppData\Local\Programs\Python\Python35\lib\multiprocessing\context.py", line 313, in _Popen
return Popen(process_obj)
File "C:\Users\Bodnya\AppData\Local\Programs\Python\Python35\lib\multiprocessing\popen_spawn_win32.py", line 34, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\Bodnya\AppData\Local\Programs\Python\Python35\lib\multiprocessing\spawn.py", line 144, in get_preparation_data
_check_not_importing_main()
File "C:\Users\Bodnya\AppData\Local\Programs\Python\Python35\lib\multiprocessing\spawn.py", line 137, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
And this error message keeps printing in an endless loop. Can you please tell me what I'm doing wrong?
UPDATE
Apparently I did not protect the main code to avoid creating subprocesses recursively. But why did it work in the first example?
Here's a working solution:
def function():
    from multiprocessing.pool import ThreadPool
    pool = ThreadPool(processes=3)
    file_list = []
    for i in range(3):
        async_result = pool.apply_async(list_retriever,)
        try:
            file_list = async_result.get(15)
        except:
            print('We\'ve got timeout!\n')
            continue

if __name__ == '__main__':
    function()
However, this still doesn't explain why the first version worked in the first place.
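A plausible explanation for why the ThreadPool version never hit the error: ThreadPool workers are threads inside the same process, so no child process is ever spawned and the main module is never re-imported. A small sketch of my own (not from the question) that makes the difference visible:

```python
import os
import threading
from multiprocessing.pool import ThreadPool

def where():
    # Report which process the "worker" runs in, and whether it is
    # the main thread of that process.
    return os.getpid(), threading.current_thread() is threading.main_thread()

if __name__ == '__main__':
    pool = ThreadPool(processes=1)
    pid, in_main_thread = pool.apply(where)
    print(pid == os.getpid())   # True: same process as the parent
    print(in_main_thread)       # False: but a different thread
    pool.close()
    pool.join()
```

Because no new process is created, ThreadPool never runs the spawn bootstrapping check, which is why the unguarded first example ran without complaint.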
