Why does multiprocessing work differently on Ubuntu vs macOS? - Python

I have reduced my code to this simple example:
from multiprocessing import Process

def abc():
    print("HI")
    print(a)

a = 4
p = Process(target=abc)
p.start()
It works perfectly fine in ubuntu (Python 3.8.5) and provides the output:
HI
4
However, it fails in Spyder (Python 3.9.5) with "AttributeError: Can't get attribute 'abc' on <module '__main__' (built-in)>" and on the macOS command line (Python 3.8.10; I tried other versions as well with the same result) with a "RuntimeError".
Spyder error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "multiprocessing/spawn.pyc", line 116, in spawn_main
File "multiprocessing/spawn.pyc", line 126, in _main
AttributeError: Can't get attribute 'abc' on <module '__main__' (built-in)>
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "multiprocessing/spawn.pyc", line 116, in spawn_main
File "multiprocessing/spawn.pyc", line 126, in _main
AttributeError: Can't get attribute 'abc' on <module '__main__' (built-in)>
macOS Big Sur error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/spawn.py", line 125, in _main
prepare(preparation_data)
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/spawn.py", line 236, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "/Users/asavas/opt/anaconda3/lib/python3.8/runpy.py", line 265, in run_path
return _run_module_code(code, init_globals, run_name,
File "/Users/asavas/opt/anaconda3/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/Users/asavas/opt/anaconda3/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/asavas/delete.py", line 8, in <module>
p.start()
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/spawn.py", line 154, in get_preparation_data
_check_not_importing_main()
File "/Users/asavas/opt/anaconda3/lib/python3.8/multiprocessing/spawn.py", line 134, in _check_not_importing_main
raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
I am trying to understand why this happens and how I can resolve it.
Thanks

On macOS, recent Python versions (3.8 and later) default to the spawn rather than the fork method of creating new processes. This means that when a new process is created, a new, empty address space is created, a new Python interpreter is launched, and the source file is re-executed from the top. Consequently, any code that is at global scope and not inside a block that begins with if __name__ == '__main__': gets executed again. That is why any code that creates processes must live in such a block; otherwise you end up in a recursive, process-creating loop that generates the errors you see. You simply need:
from multiprocessing import Process

def abc():
    print("HI")
    print(a)

# this will get re-executed in the new subprocess:
a = 4

if __name__ == '__main__':
    p = Process(target=abc)
    p.start()
    p.join()  # explicitly wait for the process to complete
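If you want to check or explicitly control which start method is in use (for example, to make a script behave the same way on Ubuntu and macOS), the multiprocessing module also provides set_start_method() and get_start_method(). A minimal sketch, assuming you are happy to force "spawn" everywhere:

import multiprocessing as mp

def abc():
    print("HI")

if __name__ == '__main__':
    # set_start_method() must be called at most once, before creating any
    # Process or Pool. "spawn" is the default on macOS/Windows, "fork" on Linux.
    mp.set_start_method("spawn")
    print(mp.get_start_method())  # confirm which method is active

    p = mp.Process(target=abc)
    p.start()
    p.join()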

Related

For some reason this python multiprocessing code is not working

So, I am just starting out with Python multiprocessing. I tried this example but couldn't quite get it to work:
import multiprocessing

def function():
    time.sleep(1)
    print("slept once")

p1 = multiprocessing.Process(target=function)
p2 = multiprocessing.Process(target=function)
p1.start()
p2.start()
It should output this:
(sleeping 1 second)
slept once
slept once
but instead it gave me an error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\krist\PycharmProjects\chat_app\client_1.py", line 11, in <module>
p1.start()
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\krist\PycharmProjects\chat_app\client_1.py", line 11, in <module>
p1.start()
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "C:\Users\krist\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
I am using Windows 11, Python 3.6.
Hope someone can help!
Your code works fine if you add in the expected idiom:
import time
import multiprocessing

def function():
    time.sleep(1)
    print("slept once")

if __name__ == '__main__':
    p1 = multiprocessing.Process(target=function)
    p2 = multiprocessing.Process(target=function)
    p1.start()
    p2.start()
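A small optional follow-up, reusing function and the multiprocessing import from the snippet above: if you want the parent to wait until both children have printed before it continues, you can join them.

if __name__ == '__main__':
    processes = [multiprocessing.Process(target=function) for _ in range(2)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()  # block until each child has slept and printed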
You can't start a new process unless you follow the multiprocessing library's programming guidelines[1].
One of those guidelines is that the start() method should only be called from within the if __name__ == '__main__': conditional.
Why? Because it is only safe to spawn new processes while the file is being run directly, not while it is being imported.
So, this should do the trick:
import multiprocessing
import time

def function():
    time.sleep(1)
    print("slept once")

p1 = multiprocessing.Process(target=function)
p2 = multiprocessing.Process(target=function)

if __name__ == '__main__':
    p1.start()
    p2.start()
[0] multiprocessing documentation: https://docs.python.org/3/library/multiprocessing.html
[1] Programming guidelines: https://docs.python.org/3/library/multiprocessing.html#multiprocessing-programming

multiprocessing causes infinite loop of errors

Using Python 3.8 on macOS Big Sur.
The lines
import multiprocessing as mp
pool = mp.Pool(2)
cause an infinite loop of errors; a snippet of it is collected below.
Also, mp.freeze_support() does not seem to help.
Traceback (most recent call last):
File "<string>", line 1, in <module>
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/biscuit/.conda/envs/advanced-projects/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
File "/Users/biscuit/.conda/envs/advanced-projects/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/Users/biscuit/.conda/envs/advanced-projects/lib/python3.8/multiprocessing/spawn.py", line 125, in _main
exitcode = _main(fd, parent_sentinel)
File "/Users/biscuit/.conda/envs/advanced-projects/lib/python3.8/multiprocessing/spawn.py", line 125, in _main
prepare(preparation_data)
File "/Users/biscuit/.conda/envs/advanced-projects/lib/python3.8/multiprocessing/spawn.py", line 236, in prepare
prepare(preparation_data)
File "/Users/biscuit/.conda/envs/advanced-projects/lib/python3.8/multiprocessing/spawn.py", line 236, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/Users/biscuit/.conda/envs/advanced-projects/lib/python3.8/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
_fixup_main_from_path(data['init_main_from_path'])
File "/Users/biscuit/.conda/envs/advanced-projects/lib/python3.8/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "/Users/biscuit/.conda/envs/advanced-projects/lib/python3.8/runpy.py", line 264, in run_path
main_content = runpy.run_path(main_path,
...
Any ideas why?
Edit:
None of the solutions worked at the time, but it seems to be working today? I installed some other packages that required dependency updates, so maybe that changed my multiprocessing version or some other package version and fixed it?
Either way it's working now; sorry this won't be much help to others /:
Specifically when the start method is "spawn" (the default on macOS now):
You must only ever create child processes (creating a pool creates child processes) inside the if __name__ == "__main__": block, because the children import the __main__ file. When that import happens the file gets executed (just like any import), and you will recursively keep creating more and more child processes unless something (like the if __name__ ... block) limits process creation to the parent process only.
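As a minimal sketch of what that looks like in practice (the worker function and the input range here are just placeholders for illustration):

import multiprocessing as mp

def square(x):
    return x * x

if __name__ == "__main__":
    # The pool (and its child processes) is only created when the file is run
    # directly; the spawned children re-import this file but skip this block.
    with mp.Pool(2) as pool:
        print(pool.map(square, range(10)))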

problems with working compile and multiprocessing at the same time

I work in VSCode, and when I run this file:
from multiprocessing import Process

def mp_setup_and_run(processes_num, *args):
    processes = {}
    for i in range(processes_num):
        processes[i] = Process(
            target=function_example,
            args=args,
            daemon=True,)
        processes[i].start()
    for i in range(processes_num):
        processes[i].join()

def function_example(*data):
    print(data)

if __name__ == "__main__":
    compiled = compile("z**2 + c", "<string>", "eval")
    mp_setup_and_run(3, compiled)
I get these exceptions:
PS C:\Python\projects\mondebrot_painter> cd 'c:\Python\projects\mondebrot_painter'; & 'C:\Program Files\Python38\python.exe' 'c:\Users\ASUS\.vscode\extensions\ms-python.python-2020.5.80290\pythonFiles\lib\python\debugpy\no_wheels\debugpy\launcher' '51560' '--' 'c:\Python\projects\mondebrot_painter\test.py'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Program Files\Python38\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Program Files\Python38\lib\multiprocessing\spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
Traceback (most recent call last):
File "C:\Program Files\Python38\lib\runpy.py", line 193, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Program Files\Python38\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "c:\Users\ASUS\.vscode\extensions\ms-python.python-2020.5.80290\pythonFiles\lib\python\debugpy\no_wheels\debugpy\__main__.py", line 45, in <module>
cli.main()
File "c:\Users\ASUS\.vscode\extensions\ms-python.python-2020.5.80290\pythonFiles\lib\python\debugpy\no_wheels\debugpy/..\debugpy\server\cli.py", line 430, in main
run()
File "c:\Users\ASUS\.vscode\extensions\ms-python.python-2020.5.80290\pythonFiles\lib\python\debugpy\no_wheels\debugpy/..\debugpy\server\cli.py", line 267, in run_file
runpy.run_path(options.target, run_name=compat.force_str("__main__"))
File "C:\Program Files\Python38\lib\runpy.py", line 263, in run_path
return _run_module_code(code, init_globals, run_name,
File "C:\Program Files\Python38\lib\runpy.py", line 96, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "C:\Program Files\Python38\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "c:\Python\projects\mondebrot_painter\test.py", line 45, in <module>
result = mp_setup_and_run(3, compiled)
File "c:\Python\projects\mondebrot_painter\test.py", line 19, in mp_setup_and_run
processes[i].start()
File "C:\Program Files\Python38\lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
File "C:\Program Files\Python38\lib\multiprocessing\context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Program Files\Python38\lib\multiprocessing\context.py", line 326, in _Popen
return Popen(process_obj)
File "C:\Program Files\Python38\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
reduction.dump(process_obj, to_child)
File "C:\Program Files\Python38\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'code' object
and the debugger redirects me to the <string> file:
LOAD_CONST(0), LOAD_CONST(None), IMPORT_NAME(sys), STORE_NAME(sys), LOAD_NAME(sys.path), LOAD_METHOD(insert), LOAD_CONST(0), LOAD_CONST('c:\\Users\\ASUS\\.vscode\\extensions\\ms-python.python-2020.5.80290\\pythonFiles\\lib\\python\\debugpy\\no_wheels\\debugpy\\_vendored\\pydevd'), CALL_METHOD{2}, POP_TOP, LOAD_CONST(0), LOAD_CONST(None), IMPORT_NAME(pydevd), STORE_NAME(pydevd), LOAD_CONST('http_json'), LOAD_NAME(pydevd.PydevdCustomization), STORE_ATTR(DEFAULT_PROTOCOL), LOAD_NAME(pydevd.settrace), LOAD_CONST('127.0.0.1'), LOAD_CONST(51592), LOAD_CONST(False), LOAD_CONST(False), LOAD_CONST(True), LOAD_CONST(None), LOAD_CONST('92e8bb604eeece436b2401def85a7ab95455e6c26fd9d660cb8175e691d71bd0'), LOAD_CONST('127.0.0.1'), LOAD_CONST('92e8bb604eeece436b2401def85a7ab95455e6c26fd9d660cb8175e691d71bd0'), LOAD_CONST(True), LOAD_CONST(True), LOAD_CONST(51592), LOAD_CONST(9040), LOAD_CONST(False), LOAD_CONST(('client', 'client-access-token', 'json-dap-http', 'multiprocess', 'port', 'ppid', 'server')), BUILD_CONST_KEY_MAP{7}, LOAD_CONST(('host', 'port', 'suspend', 'trace_only_current_thread', 'patch_multiprocessing', 'access_token', 'client_access_token', '__setup_holder__')), CALL_FUNCTION_KW{8}, POP_TOP, LOAD_CONST(0), LOAD_CONST(('spawn_main',)), IMPORT_NAME(multiprocessing.spawn), IMPORT_FROM(spawn_main), STORE_NAME(spawn_main), POP_TOP, LOAD_NAME(spawn_main), LOAD_CONST(9040), LOAD_CONST(892), LOAD_CONST(('parent_pid', 'pipe_handle')), CALL_FUNCTION_KW{2}, POP_TOP, return None
if I run the program from the console, I get this message:
C:\Python\projects\mondebrot_painter>python set_generator.py
Traceback (most recent call last):
File "set_generator.py", line 121, in <module>
set_ = mp_setup_and_run(senter, length, quality, processes_num, max_iter, compiled, mode)
File "set_generator.py", line 86, in mp_setup_and_run
processes[i].start()
File "C:\Program Files\Python38\lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
File "C:\Program Files\Python38\lib\multiprocessing\context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Program Files\Python38\lib\multiprocessing\context.py", line 326, in _Popen
return Popen(process_obj)
File "C:\Program Files\Python38\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
reduction.dump(process_obj, to_child)
File "C:\Program Files\Python38\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'code' object
C:\Python\projects\mondebrot_painter>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Program Files\Python38\lib\multiprocessing\spawn.py", line 107, in spawn_main
new_handle = reduction.duplicate(pipe_handle,
File "C:\Program Files\Python38\lib\multiprocessing\reduction.py", line 79, in duplicate
return _winapi.DuplicateHandle(
PermissionError: [WinError 5] Access Denied
I am somewhat lost and don’t understand what is happening and why I can't pass compiled.
If you simplify away the multiprocessing code and just use this from the console, you'll see the TypeError you are getting:
$ python
...
>>> compiled = compile("z**2 + c", "<string>", "eval")
>>> import pickle
>>> pickle.dumps(compiled)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can't pickle code objects
This makes sense because the documentation tells us that pickle can handle:
None, True, and False
integers, floating point numbers, complex numbers
strings, bytes, bytearrays
tuples, lists, sets, and dictionaries containing only picklable objects
functions defined at the top level of a module (using def, not lambda)
built-in functions defined at the top level of a module
classes that are defined at the top level of a module
instances of such classes whose __dict__ or the result of calling __getstate__() is picklable (see section Pickling Class Instances for details).
and compiled is not one of these.[1]
What's not said here, but is crucial to know, is that the multiprocessing module must be able to use the pickle code to serialize objects, so as to send them from one Python process to another. Since your compiled expression is not serializable, it cannot be sent from one Python process to another.
The trick is to serialize the expression, not the compiled expression. That is, instead of:
    mp_setup_and_run(3, compiled)
use:
    mp_setup_and_run(3, "z**2 + c")
Then have mp_setup_and_run pass the expression string through to the worker function, and have each worker make its own call to compile. You'll do three separate compiles, one in each of the three processes run with the multiprocessing module, but that's OK.
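A minimal sketch of that approach, following the structure of the question's code (the values plugged in for z and c are purely illustrative):

from multiprocessing import Process

def function_example(expression):
    # Each worker receives a plain string (which pickles fine) and does its
    # own compile.
    compiled = compile(expression, "<string>", "eval")
    print(eval(compiled, {"z": 2, "c": 1}))  # illustrative values for z and c

def mp_setup_and_run(processes_num, *args):
    processes = [Process(target=function_example, args=args, daemon=True)
                 for _ in range(processes_num)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()

if __name__ == "__main__":
    # Pass the expression string, not a compiled code object.
    mp_setup_and_run(3, "z**2 + c")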
[1] Of course, the documentation also says:
Attempts to pickle unpicklable objects will raise the PicklingError exception
whereas you and I both got a TypeError instead. But this is still the underlying reason for the error.

Python inputs library - 'NoneType' object has no attribute 'terminate'

I am trying to use the inputs library to get user input from mice, gamepads, and keyboards.
I tried the following code which is supposed to read events from all devices:
import inputs

while True:
    for device in inputs.devices:
        for event in device.read():
            print(event)
There is a problem when I run the code - I get the following error: AttributeError: 'NoneType' object has no attribute 'terminate'
I have also tried to read a single event:
import inputs

while True:
    for device in inputs.devices:
        event = device.read()
        print(event)
This gives me the same error.
I am using Python 3.6 and inputs==0.4 from pip.
Does anyone know how to fix this error?
Full traceback:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\python36\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\python36\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\python36\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\python36\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\python36\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\python36\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\python36\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\David\Documents\GitHub\Bubbles\testing.py", line 5, in <module>
event = device.read()
File "C:\python36\lib\site-packages\inputs.py", line 2313, in read
return next(iter(self))
File "C:\python36\lib\site-packages\inputs.py", line 2273, in __iter__
event = self._do_iter()
File "C:\python36\lib\site-packages\inputs.py", line 2292, in _do_iter
data = self._get_data(read_size)
File "C:\python36\lib\site-packages\inputs.py", line 2365, in _get_data
return self._pipe.recv_bytes()
File "C:\python36\lib\site-packages\inputs.py", line 2330, in _pipe
self._listener.start()
File "C:\python36\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\python36\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\python36\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\python36\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\python36\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "C:\python36\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
Exception ignored in: <bound method InputDevice.__del__ of inputs.Keyboard("/dev/input/by-id/usb-A_Nice_Keyboard-event-kbd")>
Traceback (most recent call last):
File "C:\python36\lib\site-packages\inputs.py", line 2337, in __del__
File "C:\python36\lib\multiprocessing\process.py", line 116, in terminate
AttributeError: 'NoneType' object has no attribute 'terminate'
Define your steps inside a function, like:
import inputs

def func():
    while True:
        for device in inputs.devices:
            event = device.read()
            print(event)
And then call your function from inside the main guard:
if __name__ == '__main__':
    func()

Can't run process from another python .py file

I am creating a class that, during initialization, calls one of its own methods in a separate process (via multiprocessing). When I instantiate this class from the same .py file, everything works fine. But when I instantiate it from another .py file, it fails and throws an error... Why?
MyDummyClass.py:
import multiprocessing

class MyDummyClass:
    def __init__(self):
        print("I am in '__init__()'")
        if __name__ == 'MyDummyClass':
            p = multiprocessing.Process(target=self.foo())
            p.start()

    def foo(self):
        print("Hello from foo()")
example.py:
from MyDummyClass import MyDummyClass
dummy = MyDummyClass()
When I run example.py, I am getting this error:
I am in '__init__()'
Hello from foo()
I am in '__init__()'
Hello from foo()
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Python\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Python\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Python\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Python\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\Python\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Python\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Python\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "E:\Dropbox\Docs\Business\_NZTrackAlerts\Website\Current dev\NZtracker\cgi-bin\example1.py", line 2, in <module>
dummy = MyDummyClass()
File "...path to my file...\MyDummyClass.py", line 13, in __init__
p.start()
File "C:\Python\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Python\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Python\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Python\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Python\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "C:\Python\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
I can't work out how to fix this (not from other posts either).
Thank you very much for your help!!
This has to do with the way that Python actually executes scripts. The error it's throwing essentially says that you're trying to start a new process before Python has finished bootstrapping. This is because you're calling dummy = MyDummyClass() at module level in the main script, so the spawned child process runs that same line again when it re-imports the script. If you instead write example.py like this:
from MyDummyClass import MyDummyClass

if __name__ == "__main__":
    dummy = MyDummyClass()
This will produce your desired output:
C:\Python Scripts>python example.py
I am in '__init__()'
Hello from foo()
The if __name__ == "__main__": block tells Python "only execute this if the script is being run as the main script (i.e. python example.py)", and this forces Python to initialize everything properly before it runs that block.
Apologies if this answer doesn't describe in much detail what's going on in the back end of Python as it does the initializing; I don't know a ton about it myself :)
