How to fix multiprocessing problems in Python on Windows 10

I'm trying to use this tutorial to train my own car model recognition model: https://github.com/Helias/Car-Model-Recognition. I want to use CUDA and my GPU to speed up training (the preprocessing step completed without any errors), but when I try to train my model I get the following errors:
######### ERROR #######
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.
######### batch #######
######### ERROR #######
[Errno 32] Broken pipe
######### batch #######
Traceback (most recent call last):
  File "D:\Car-Model-Recognition\main.py", line 78, in train_model
    for i, batch in enumerate(loaders[mode]):
  File "C:\Program Files\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Program Files\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
    w.start()
  File "C:\Program Files\Python37\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Program Files\Python37\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Program Files\Python37\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Program Files\Python37\lib\multiprocessing\popen_spawn_win32.py", line 46, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "C:\Program Files\Python37\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
    _check_not_importing_main()
  File "C:\Program Files\Python37\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
    is not going to be frozen to produce an executable.''')
RuntimeError:
    An attempt has been made to start a new process before the
    current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.
I have used the exact code from the given link, and if I start the code under WSL everything works, but I can't use my GPU from WSL. Where should I insert the if __name__ == '__main__' check to prevent this error, or how can I disable the multiprocessing?

Looking at main.py, you run a lot of code at module level. On Windows, Python's multiprocessing module starts a new Python interpreter, imports your modules, unpickles a snapshot of the parent context, and then calls your worker function. The problem is that all of that module-level code executes merely by being imported, so you essentially run a new copy of your program instead of building a context for your worker.
The solution is two-fold. First, move all of the module-level code into functions. You want to be able to import your module without side effects. Second, call the function(s) that start your program from a conditional:
def main():
    ...  # the stuff you were doing at module level

if __name__ == "__main__":
    main()
The reason this works is in the module name. When you run the top-level script of a Python program (e.g., python main.py), it is a script called "__main__", not a module. If a different program imports main, it is a module called "main" (or whatever you named your script). That if stops your main code from executing when the file is imported by some other Python code - such as the multiprocessing module.
It's okay to have some executable code at module level, especially if you are setting up defaults and such. But don't do anything at module level that you wouldn't want done if some other code imports your script.
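As a rough sketch of what that looks like with a PyTorch DataLoader (the dataset, batch size, and worker count below are made-up stand-ins, not the actual contents of the repository's main.py):
import torch
from torch.utils.data import DataLoader, TensorDataset

def make_dataset():
    # stand-in for the real dataset produced by the preprocessing step
    images = torch.randn(256, 3, 64, 64)
    labels = torch.randint(0, 10, (256,))
    return TensorDataset(images, labels)

def train_model(dataset):
    # num_workers > 0 is what makes the DataLoader spawn worker processes on Windows
    loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)
    for i, batch in enumerate(loader):
        images, labels = batch
        # forward/backward pass would go here

def main():
    dataset = make_dataset()   # built inside a function, not at module level
    train_model(dataset)

if __name__ == "__main__":
    main()
As for the second half of the question: passing num_workers=0 to the DataLoader also makes the error go away, because it disables the worker processes entirely, but the __main__ guard is the proper fix if you want parallel data loading.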

Related

Multiprocessing Module Error: if __name__ == '__main__': [duplicate]

This question already has answers here:
RuntimeError on windows trying python multiprocessing
(10 answers)
Closed 1 year ago.
I'm trying to run some code in multiprocessing mode, using all the threads available to my CPU (MacBook Pro 16 inch, 2019), and I'm getting an error that I don't get when running the exact same code in a Linux environment for my Windows computer.
I have never seen this error before. I tried uninstalling and reinstalling the multiprocessing module, but that didn't help. What is causing this issue? Again, the exact same code works in a Linux environment.
    pool = Pool(processes = ncpus)
  File "/Users/aeglick/opt/anaconda3/lib/python3.8/multiprocessing/context.py", line 119, in Pool
    return Pool(processes, initializer, initargs, maxtasksperchild,
  File "/Users/aeglick/opt/anaconda3/lib/python3.8/multiprocessing/pool.py", line 212, in __init__
    self._repopulate_pool()
  File "/Users/aeglick/opt/anaconda3/lib/python3.8/multiprocessing/pool.py", line 303, in _repopulate_pool
    return self._repopulate_pool_static(self._ctx, self.Process,
  File "/Users/aeglick/opt/anaconda3/lib/python3.8/multiprocessing/pool.py", line 326, in _repopulate_pool_static
    w.start()
  File "/Users/aeglick/opt/anaconda3/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/Users/aeglick/opt/anaconda3/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/Users/aeglick/opt/anaconda3/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/Users/aeglick/opt/anaconda3/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/Users/aeglick/opt/anaconda3/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 42, in _launch
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "/Users/aeglick/opt/anaconda3/lib/python3.8/multiprocessing/spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "/Users/aeglick/opt/anaconda3/lib/python3.8/multiprocessing/spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError:
    An attempt has been made to start a new process before the
    current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.
You have to run your Pool inside the __main__ block:
from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == '__main__': # <- you have to execute your Pool from here
    with Pool(5) as p:
        print(p.map(f, [1, 2, 3]))
So you can't launch multiprocessing code directly from a Python/IPython console.
Read this: https://docs.python.org/3/library/multiprocessing.html#multiprocessing-programming

RuntimeError with multiprocessing module when trying to recursively compare lists

I'm generating a list filled with sublists of randomly generated 0s and 1s, and then trying to compare each list with every other list to determine their similarity, efficiently.
I know that my code works with a single process (i.e. without involving multiprocessing), but once I start involving multiprocessing.Pool() or multiprocessing.Process() everything starts to break.
I want to compare how long a single process would take compared to multiple processes. I've tried this with threading, but a single process actually ended up taking less time, probably due to the Global Interpreter Lock.
Here's my code:
import difflib
import secrets
import timeit
import multiprocessing
import numpy

random_lists = [[secrets.randbelow(2) for _ in range(500)] for _ in range(500)]
random_lists_split = numpy.array_split(numpy.array(random_lists), 5)

def get_similarity_value(lists_to_check, sublists_to_check) -> list:
    ratios = []
    matcher = difflib.SequenceMatcher()
    for sublist_major in sublists_to_check:
        try:
            sublist_major = sublist_major.tolist()
        except AttributeError:
            pass
        for sublist_minor in lists_to_check:
            if sublist_major == sublist_minor or [lists_to_check.index(sublist_major), lists_to_check.index(sublist_minor)] in [ratios[i][1] for i in range(len(ratios))] or [lists_to_check.index(sublist_minor), lists_to_check.index(sublist_major)] in [ratios[i][1] for i in range(len(ratios))]:  # or lists_to_check.index(sublist_major.tolist()) > lists_to_check.index(sublist_minor):
                pass
            else:
                matcher.set_seqs(sublist_major, sublist_minor)
                ratios.append([matcher.ratio(), sorted([lists_to_check.index(sublist_major), lists_to_check.index(sublist_minor)])])
    return ratios

def start():
    test = multiprocessing.Pool(4)
    data = [(random_lists, random_lists_split[i]) for i in range(len(random_lists_split))]
    print(test.map(get_similarity_value, data))

statement = timeit.Timer(start)
print(statement.timeit(1))

statement2 = timeit.Timer(lambda: get_similarity_value(random_lists, random_lists))
print(statement2.timeit(1))
And here's the error:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\ProgramData\Anaconda3\envs\Computing Coursework\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\ProgramData\Anaconda3\envs\Computing Coursework\lib\multiprocessing\spawn.py", line 114, in _main
    prepare(preparation_data)
  File "C:\ProgramData\Anaconda3\envs\Computing Coursework\lib\multiprocessing\spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\ProgramData\Anaconda3\envs\Computing Coursework\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    run_name="__mp_main__")
  File "C:\ProgramData\Anaconda3\envs\Computing Coursework\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "C:\ProgramData\Anaconda3\envs\Computing Coursework\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "C:\ProgramData\Anaconda3\envs\Computing Coursework\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "timings.py", line 38, in <module>
    print(statement.timeit(1))
  File "C:\ProgramData\Anaconda3\envs\Computing Coursework\lib\timeit.py", line 178, in timeit
    timing = self.inner(it, self.timer)
  File "<timeit-src>", line 6, in inner
  File "timings.py", line 32, in start
    test = multiprocessing.Pool(4)
  File "C:\ProgramData\Anaconda3\envs\Computing Coursework\lib\multiprocessing\context.py", line 119, in Pool
    context=self.get_context())
  File "C:\ProgramData\Anaconda3\envs\Computing Coursework\lib\multiprocessing\pool.py", line 174, in __init__
    self._repopulate_pool()
  File "C:\ProgramData\Anaconda3\envs\Computing Coursework\lib\multiprocessing\pool.py", line 239, in _repopulate_pool
    w.start()
  File "C:\ProgramData\Anaconda3\envs\Computing Coursework\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\ProgramData\Anaconda3\envs\Computing Coursework\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\ProgramData\Anaconda3\envs\Computing Coursework\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "C:\ProgramData\Anaconda3\envs\Computing Coursework\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
    _check_not_importing_main()
  File "C:\ProgramData\Anaconda3\envs\Computing Coursework\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
    is not going to be frozen to produce an executable.''')
RuntimeError:
    An attempt has been made to start a new process before the
    current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.
N.B. I have tried using multiprocessing.freeze_support() but it results in the same error. The code also seems to be attempting to run indefinitely, as the error appears over and over again.
Thanks!
The problem is that your top-level code, including the code that creates the child processes, is not protected from being run in those child processes.
As the docs explain, if you're not using the fork start method (and since you're on Windows, you're not):
Make sure that the main module can be safely imported by a new Python interpreter without causing unintended side effects (such as starting a new process).
In fact, it's nearly identical to the example that follows that warning. You're launching a whole pool of children instead of just one, but it's the same problem. Every child in the pool tries to launch a new pool, and, fortunately, multiprocessing figures out that something bad is going on and fails with a RuntimeError instead of exponentially spawning processes until Windows refuses to spawn any more or its scheduler simply falls over.
As the docs say:
Instead one should protect the “entry point” of the program by using if __name__ == '__main__':
In your case, that means this part:
if __name__ == '__main__':
    statement = timeit.Timer(start)
    print(statement.timeit(1))
    statement2 = timeit.Timer(lambda: get_similarity_value(random_lists, random_lists))
    print(statement2.timeit(1))

hdbscan Parallel Error When Running After Import

I am building and fitting an hdbscan model on my data. When I run the script directly it works well and quickly, but when I import the file and run it from 'outside' it goes into a strange loop that I don't understand, and I get the following error:
ImportError: [joblib] Attempting to do parallel computing without protecting your import on a system that does not support forking. To use parallel-computing in a script, you must protect your main loop using "if __name__ == '__main__'". Please see the joblib documentation on Parallel for more information
Here is an excerpt of the code:
df_pos_raw, df_pos_training = pre_process_data(df_pos)
df_pos_training_std = standardize_df(df_pos_training) # Standardized data, column-wise
print "generating model"
pos_cls = hdbscan.HDBSCAN(min_cluster_size=10, prediction_data=True)
print "fitting model to data"
pos_cls.fit(df_pos_training_std)
print 'done fitting model'
# sns.distplot(pos_cls.labels_, bins=len(set(pos_cls.labels_)))
df_filtered = filter_cons_types(df, [3, 5])
print "Done. returning variables"
return pos_cls, df_filtered
Here is the output when running from 'outside' the file:
generating model
fitting model to data
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\ProgramData\Anaconda2\Lib\multiprocessing\forking.py", line 380, in main
    prepare(preparation_data)
  File "C:\ProgramData\Anaconda2\Lib\multiprocessing\forking.py", line 510, in prepare
    '__parents_main__', file, path_name, etc
  File "C:\Users\sareetn\PycharmProjects\Arad\DataImputation\ClusteringExtrapolation\Dev\run_clustering_based_prediction.py", line 4, in <module>
    model, raw_df = clustering()
  File "C:\Users\sareetn\PycharmProjects\Arad\DataImputation\ClusteringExtrapolation\Dev\clustering_model_constype_3_5.py", line 86, in main
    pos_cls.fit(df_pos_training_std)
  File "C:\Users\sareetn\PycharmProjects\Arad\venv\lib\site-packages\hdbscan\hdbscan_.py", line 816, in fit
    self._min_spanning_tree) = hdbscan(X, **kwargs)
  File "C:\Users\sareetn\PycharmProjects\Arad\venv\lib\site-packages\hdbscan\hdbscan_.py", line 543, in hdbscan
    core_dist_n_jobs, **kwargs)
  File "C:\Users\sareetn\PycharmProjects\Arad\venv\lib\site-packages\sklearn\externals\joblib\memory.py", line 362, in __call__
    return self.func(*args, **kwargs)
  File "C:\Users\sareetn\PycharmProjects\Arad\venv\lib\site-packages\hdbscan\hdbscan_.py", line 239, in _hdbscan_boruvka_kdtree
    n_jobs=core_dist_n_jobs, **kwargs)
  File "hdbscan/_hdbscan_boruvka.pyx", line 375, in hdbscan._hdbscan_boruvka.KDTreeBoruvkaAlgorithm.__init__ (hdbscan/_hdbscan_boruvka.c:5195)
  File "hdbscan/_hdbscan_boruvka.pyx", line 411, in hdbscan._hdbscan_boruvka.KDTreeBoruvkaAlgorithm._compute_bounds (hdbscan/_hdbscan_boruvka.c:5915)
  File "C:\Users\sareetn\PycharmProjects\Arad\venv\lib\site-packages\sklearn\externals\joblib\parallel.py", line 749, in __call__
    n_jobs = self._initialize_backend()
  File "C:\Users\sareetn\PycharmProjects\Arad\venv\lib\site-packages\sklearn\externals\joblib\parallel.py", line 547, in _initialize_backend
    **self._backend_args)
  File "C:\Users\sareetn\PycharmProjects\Arad\venv\lib\site-packages\sklearn\externals\joblib\_parallel_backends.py", line 305, in configure
    '[joblib] Attempting to do parallel computing '
ImportError: [joblib] Attempting to do parallel computing without protecting your import on a system that does not support forking. To use parallel-computing in a script, you must protect your main loop using "if __name__ == '__main__'". Please see the joblib documentation on Parallel for more information
generating model
fitting model to data
generating model
fitting model to data
generating model
fitting model to data
Thank you very much in advance!!
A friend helped me figure it out:
Clustering uses a library called joblib that splits the job into parallel processes. When running such functions on a Windows machine, care needs to be taken to make sure we use
if __name__ == '__main__'
in order to protect the code and allow the parallel processing to work.
After adding
if __name__ == '__main__'
and placing all of the code under it, the clustering ran smoothly and quickly.
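A minimal sketch of that layout (the function name and the random data here are illustrative, not the original preprocessing code):
import numpy as np
import hdbscan

def run_clustering(data):
    # HDBSCAN uses joblib for its parallel work, so on Windows it must only be
    # called from code protected by the __main__ guard
    model = hdbscan.HDBSCAN(min_cluster_size=10, prediction_data=True)
    model.fit(data)
    return model

if __name__ == '__main__':
    data = np.random.rand(1000, 4)   # stand-in for the standardized training data
    model = run_clustering(data)
    print('done fitting model')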

Python 3.x multiprocessing tkinter mainloop

How do you use multiprocessing on root.mainloop? I am using Python 3.6. I need to run lines of code after it, some of which require the Tk object.
I do not want to create a second object, as some of the other answers to my question suggest.
Here is a little code snippet (sett being a JSON object):
from multiprocessing import Process
from tkinter import Tk

def check():
    try: sett['setup']
    except KeyError:
        sett['troubleshoot_file']=None
        check()
    else:
        if sett['setup'] is True: return
        elif type(sett['setup']) is not bool: raise TypeError('sett[\'setup\'] is not a type of boolean (\'bool\')')
        root=Tk()
        root['bg']='blue'
        mainloop=Process(target=root.mainloop)
        mainloop.start()
        mainloop.join()

check()
However, I get this traceback:
Traceback (most recent call last):
  File "(directory)/main.py", line 41, in <module>
    check()
  File "(directory)/main.py", line 39, in check
    mainloop.start()
  File "C:\Program Files (x86)\Python36-32\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Program Files (x86)\Python36-32\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Program Files (x86)\Python36-32\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Program Files (x86)\Python36-32\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Program Files (x86)\Python36-32\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _tkinter.tkapp objects
I have tried running:
from queue import Queue
from tkinter import Tk
from multiprocessing import Process
p=Process(target=q.get())
The interpreter then completely crashes.
You cannot use any tkinter objects across multiple processes or threads. If you need to share data between the gui and other processes you will need to set up a queue, and poll the queue from the GUI.
The reason for this is that tkinter is a wrapper around a tcl interpreter that knows nothing about python threads or processes.
You will find information on how to do this at:
https://docs.python.org/3.6/library/queue.html
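A minimal sketch of that pattern (the worker and widget names are hypothetical, not the asker's code) keeps every tkinter call in the main process and polls a multiprocessing.Queue from the Tk event loop with root.after:
import queue
import tkinter as tk
from multiprocessing import Process, Queue

def worker(q):
    # runs in a separate process; only plain picklable data goes onto the queue
    q.put("hello from the worker process")

def poll_queue(root, q, label):
    # runs in the GUI process; checks for new data without blocking the event loop
    try:
        label.config(text=q.get_nowait())
    except queue.Empty:
        pass
    root.after(100, poll_queue, root, q, label)

if __name__ == '__main__':
    q = Queue()
    root = tk.Tk()
    label = tk.Label(root, text="waiting...")
    label.pack()
    Process(target=worker, args=(q,)).start()
    poll_queue(root, q, label)
    root.mainloop()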

RuntimeError on windows trying python multiprocessing

I'm going to dump the error output I got while trying to run a Python script:
Preprocess validation data upfront
Using gpu device 0: Tesla K20c
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\SciSoft\WinPython-64bit-2.7.6.4\python-2.7.6.amd64\lib\multiprocessing\forking.py", line 380, in main
    prepare(preparation_data)
  File "C:\SciSoft\WinPython-64bit-2.7.6.4\python-2.7.6.amd64\lib\multiprocessing\forking.py", line 495, in prepare
    '__parents_main__', file, path_name, etc
  File "C:\Users\Administrator\Desktop\Galaxy Data\kaggle-galaxies-master\kaggle-galaxies-master\try_convnet_cc_multirotflip_3x69r45_maxout2048_extradense.py", line 133, in <module>
    for data, length in create_valid_gen():
  File "load_data.py", line 572, in buffered_gen_mp
    process.start()
  File "C:\SciSoft\WinPython-64bit-2.7.6.4\python-2.7.6.amd64\lib\multiprocessing\process.py", line 130, in start
    self._popen = Popen(self)
  File "C:\SciSoft\WinPython-64bit-2.7.6.4\python-2.7.6.amd64\lib\multiprocessing\forking.py", line 258, in __init__
    cmd = get_command_line() + [rhandle]
  File "C:\SciSoft\WinPython-64bit-2.7.6.4\python-2.7.6.amd64\lib\multiprocessing\forking.py", line 358, in get_command_line
    is not going to be frozen to produce a Windows executable.''')
RuntimeError:
    Attempt to start a new process before the current process
    has finished its bootstrapping phase.

    This probably means that you are on Windows and you have
    forgotten to use the proper idiom in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce a Windows executable.
As I understand it, I have to insert the line
if __name__ == '__main__':
somewhere to get it to work.
Can anyone tell me in which file I should insert it? I have included the affected files in the initial error log.
The affected files:
https://github.com/benanne/kaggle-galaxies/blob/master/try_convnet_cc_multirotflip_3x69r45_maxout2048_extradense.py
Lines 131-134
and
https://github.com/benanne/kaggle-galaxies/blob/master/load_data.py
line 572
The Python documentation is quite clear in this case.
The important part is Safe importing of main module.
Your try_convnet_cc_multirotflip_3x69r45_maxout2048_extradense.py script does a lot of things at module level. Without reading it in detail I can already say that you should wrap the workflow in a function and use it like this:
if __name__ == '__main__':
    freeze_support()  # Optional under circumstances described in docs
    your_workflow_function()
Besides the problem you have, it's a good habit not to surprise a possible user of your script with side effects if the user just wants to import it and reuse some of its functionality.
So don't put your code at module level. It's fine to have constants at module level, but the workflow should live in functions and classes.
If a Python module is intended to be used as a script (as in your case), you simply put the if __name__ == '__main__' block at the very end of the module, calling your_workflow_function() only if the module is the entry point for the interpreter, the so-called main module.
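As a self-contained illustration of that layout (all names and data here are hypothetical stand-ins, not taken from the kaggle-galaxies code), a module that is safe to import but still runs its workflow when executed as a script looks like this:
from multiprocessing import Process, freeze_support

BATCH_SIZE = 32  # constants at module level are fine

def load_batches():
    # stand-in for the data-loading generator the real script gets from load_data.py
    return [list(range(BATCH_SIZE)) for _ in range(3)]

def consume(batches):
    # stand-in worker; in the real script this is where the training loop would run
    print("processed", sum(len(b) for b in batches), "items")

def your_workflow_function():
    worker = Process(target=consume, args=(load_batches(),))
    worker.start()
    worker.join()

if __name__ == '__main__':
    freeze_support()  # only needed if the script is frozen into a Windows executable
    your_workflow_function()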
