RuntimeError on Windows trying Python multiprocessing

I'm going to dump the error output I got while trying to run a Python script:
Preprocess validation data upfront
Using gpu device 0: Tesla K20c
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\SciSoft\WinPython-64bit-2.7.6.4\python-2.7.6.amd64\lib\multiprocessing\forking.py", line 380, in main
prepare(preparation_data)
File "C:\SciSoft\WinPython-64bit-2.7.6.4\python-2.7.6.amd64\lib\multiprocessing\forking.py", line 495, in prepare
'__parents_main__', file, path_name, etc
File "C:\Users\Administrator\Desktop\Galaxy Data\kaggle-galaxies-master\kaggle-galaxies-master\try_convnet_cc_multirotflip_3x69r45_maxout2048_extradense.py", line 133, in <module>
for data, length in create_valid_gen():
File "load_data.py", line 572, in buffered_gen_mp
process.start()
`File "C:\SciSoft\WinPython-64bit-2.7.6.4\python-2.7.6.amd64\lib\multiprocessing\process.py", line 130, in start
self._popen = Popen(self)
File "C:\SciSoft\WinPython-64bit-2.7.6.4\python-2.7.6.amd64\lib\multiprocessing\forking.py", line 258, in init
cmd = get_command_line() + [rhandle]
File "C:\SciSoft\WinPython-64bit-2.7.6.4\python-2.7.6.amd64\lib\multiprocessing\forking.py", line 358, in get_command_line`
is not going to be frozen to produce a Windows executable.''')
RuntimeError:
Attempt to start a new process before the current process
has finished its bootstrapping phase.
This probably means that you are on Windows and you have
forgotten to use the proper idiom in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce a Windows executable.
As I understand it, I have to insert a line
if __name__ == '__main__':
somewhere to get it to work.
Can anyone tell me in which file I should insert it? I have included the affected files in the error log above.
The affected files:
https://github.com/benanne/kaggle-galaxies/blob/master/try_convnet_cc_multirotflip_3x69r45_maxout2048_extradense.py
Lines 131-134
and
https://github.com/benanne/kaggle-galaxies/blob/master/load_data.py
line 572

The Python documentation is quite clear in this case.
The important part is Safe importing of main module.
Your try_convnet_cc_multirotflip_3x69r45_maxout2048_extradense.py script does a lot of work at module level. Without reading it in detail, I can already say that you should wrap the workflow in a function and use it like this:
if __name__ == '__main__':
    freeze_support()  # Optional under circumstances described in the docs
    your_workflow_function()
Besides the problem you have, it's a good habit not to surprise potential users of your script with side effects if they just want to import it and reuse some of its functionality.
So don't put your code at module level. It's OK to have constants at module level, but the workflow should live in functions and classes.
If a Python module is intended to be used as a script (as in your case), you simply put the if __name__ == '__main__' block at the very end of the module, calling your_workflow_function() only if the module is the entry point for the interpreter, the so-called main module.
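For illustration, a minimal sketch of how the end of the script could be structured (your_workflow_function and the loop body are placeholders, not code from the repository; create_valid_gen is the generator visible in the traceback above):

def your_workflow_function():
    # everything that currently runs at module level goes here,
    # including the loop from lines 131-134 of the script
    for data, length in create_valid_gen():
        ...  # process the validation chunk

if __name__ == '__main__':
    # freeze_support() is only needed if the script will be frozen
    # into a Windows executable
    your_workflow_function()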

Related

RuntimeError on Mac trying python multiprocessing

Some strange problems occurred when executing the code:
# a lot more
bank_info = p.map(itr_banks, range(len(bank_sites)))
bank_info = sum(bank_info, [])
# a lot more
Stack trace:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/Cellar/python#3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
......
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
......
What makes it strange is that almost the same code ran a year ago without throwing any error. The project was created a year ago and this is only a slight refactor. The Python version may have changed, but I am still using PyCharm.
Any suggestions?
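One likely explanation, in line with the answer above: starting with Python 3.8 the default multiprocessing start method on macOS changed from fork to spawn, so a Pool created at module level now triggers this error even though the same code used to run. A minimal sketch, assuming itr_banks and bank_sites are defined at module level as in the excerpt:

from multiprocessing import Pool

if __name__ == '__main__':
    # with the spawn start method the workers re-import this module,
    # so the Pool must only be created inside this guard
    with Pool() as p:
        bank_info = p.map(itr_banks, range(len(bank_sites)))
    bank_info = sum(bank_info, [])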

How to fix multiprocessing problems in python in windows10

I am trying to use this tutorial to train my own car model recognition model: https://github.com/Helias/Car-Model-Recognition. I want to use CUDA and my GPU to speed up training (the preprocessing step completed without any errors). But when I try to train my model, I get the following errors:
######### ERROR #######
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
######### batch #######
Traceback (most recent call last):
File "D:\Car-Model-Recognition\main.py", line 78, in train_model
######### ERROR #######
[Errno 32] Broken pipe
for i, batch in enumerate(loaders[mode]):
######### batch #######
File "C:\Program Files\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
return _MultiProcessingDataLoaderIter(self)
Traceback (most recent call last):
File "C:\Program Files\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
File "main.py", line 78, in train_model
w.start()
File "C:\Program Files\Python37\lib\multiprocessing\process.py", line 112, in start
for i, batch in enumerate(loaders[mode]):
File "C:\Program Files\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
self._popen = self._Popen(self)
File "C:\Program Files\Python37\lib\multiprocessing\context.py", line 223, in _Popen
return _MultiProcessingDataLoaderIter(self)
File "C:\Program Files\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Program Files\Python37\lib\multiprocessing\context.py", line 322, in _Popen
w.start()
return Popen(process_obj)
File "C:\Program Files\Python37\lib\multiprocessing\popen_spawn_win32.py", line 46, in __init__
File "C:\Program Files\Python37\lib\multiprocessing\process.py", line 112, in start
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Program Files\Python37\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
self._popen = self._Popen(self)
File "C:\Program Files\Python37\lib\multiprocessing\context.py", line 223, in _Popen
_check_not_importing_main()
File "C:\Program Files\Python37\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Program Files\Python37\lib\multiprocessing\context.py", line 322, in _Popen
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
return Popen(process_obj)
I have used the exact code from the given link, and if I start my code using WSL everything is OK, but I can't use my GPU from WSL. Where should I insert this __name__ == '__main__' check to prevent this error, or how can I disable this multiprocessing?
Looking at main.py, you run a lot of code at module level. On Windows, Python's multiprocessing module starts a new Python interpreter, imports your modules, unpickles a snapshot of your parent context and then calls your worker function. The problem is that all of that module-level code executes merely by being imported, so you essentially run a new copy of your program instead of building a context for your worker.
The solution is two-fold. First, move all of the module-level code into functions. You want to be able to import your module without side effects. Second, call the function(s) that start your program from a conditional:
def main():
    ...  # the stuff you were doing at module level

if __name__ == "__main__":
    main()
The reason this works is in the module name. When you run the top-level script of a Python program (e.g., python main.py), it is a script called "__main__", not a module. If a different program imports main, it is a module called "main" (or whatever you named your script). That 'if' stops your main code from executing when it is imported by some other Python code, such as the multiprocessing module.
It's okay to have some executable code at module level, especially if you are setting up defaults and such. But don't do anything at module level that you wouldn't want done if some other code imports your script.
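As a concrete illustration, here is a minimal sketch of that structure (this is not the repository's actual code; the dataset and batch size are made up). The DataLoader loop that spawns the worker processes ends up inside functions:

import torch
from torch.utils.data import DataLoader, TensorDataset

def build_loader():
    # stand-in for the real dataset built by the tutorial
    data = TensorDataset(torch.randn(100, 3, 64, 64), torch.randint(0, 10, (100,)))
    # num_workers > 0 makes the loader start child processes on Windows
    return DataLoader(data, batch_size=16, num_workers=2)

def train_model():
    loader = build_loader()
    for i, (images, labels) in enumerate(loader):
        pass  # training step goes here

if __name__ == "__main__":
    train_model()  # only runs in the parent process, not when the workers re-import the module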

Multiprocessing using Pool class in Python giving Pickling error

I am trying to run a simple multiprocessing example in Python 3.6 in a Zeppelin notebook (on Windows), but I am not able to execute it. Below is the code that I used:
from multiprocessing import Pool

def sqrt(x):
    return x**0.5

numbers = [i for i in range(1000000)]

with Pool() as pool:
    sqrt_ls = pool.map(sqrt, numbers)
After running this code I am getting the following error:
Traceback (most recent call last):
File "/tmp/zeppelin_python-3196160128578820301.py", line 315, in <module>
exec(code, _zcUserQueryNameSpace)
File "<stdin>", line 6, in <module>
File "/usr/lib64/python3.6/multiprocessing/pool.py", line 266, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/usr/lib64/python3.6/multiprocessing/pool.py", line 644, in get
raise self._value
File "/usr/lib64/python3.6/multiprocessing/pool.py", line 424, in _handle_tasks
put(task)
File "/usr/lib64/python3.6/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/usr/lib64/python3.6/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function sqrt at 0x7f6f84f1a620>: attribute lookup sqrt on __main__ failed
I am not sure if it's just me who is facing this issue, as I have seen so many articles where people run this code easily. If you know the solution, please help.
Thanks
From the multiprocessing documentation:
Note: Functionality within this package requires that the main module be importable by the children. This is covered in Programming guidelines however it is worth pointing out here. This means that some examples, such as the Pool examples will not work in the interactive interpreter.
Notebooks run Python interactive interpreters behind the scenes, so that is probably why you get this error. You can try to run your code from within an if __name__ == '__main__': block.
A Zeppelin notebook does not emulate a normal module well enough to support the pickling that is used to identify the correct operation to another process. You can put all the functions you want to call into a proper module that you import in the usual fashion.
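For illustration, a minimal sketch of that separate-module approach (the file name workers.py is made up):

# workers.py
def sqrt(x):
    return x ** 0.5

# in the notebook cell (or main script)
from multiprocessing import Pool
from workers import sqrt  # importable by the child processes, unlike functions defined in the notebook itself

if __name__ == '__main__':
    numbers = [i for i in range(1000000)]
    with Pool() as pool:
        sqrt_ls = pool.map(sqrt, numbers)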

pyserial with multiprocessing gives me a ctype error

Hi, I'm trying to write a module that lets me read and send data via pyserial. I have to be able to read the data in parallel with my main script. With the help of a Stack Overflow user, I have a basic and working skeleton of the program, but when I tried adding a class I created that uses pyserial (it handles finding the port, speed, etc.) found here, I get the following error:
File "<ipython-input-1-830fa23bc600>", line 1, in <module>
runfile('C:.../pythonInterface1/Main.py', wdir='C:/Users/Daniel.000/Desktop/Daniel/Python/pythonInterface1')
File "C:...\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
execfile(filename, namespace)
File "C:...\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/Daniel.000/Desktop/Daniel/Python/pythonInterface1/Main.py", line 39, in <module>
p.start()
File "C:...\Anaconda3\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:...\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:...\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:...\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
reduction.dump(process_obj, to_child)
File "C:...\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
ValueError: ctypes objects containing pointers cannot be pickled
This is the code I am using to call the class in SerialConnection.py
import multiprocessing
from time import sleep
from operator import methodcaller

from SerialConnection import SerialConnection as SC

class Spawn:
    def __init__(self, _number, _max):
        self._number = _number
        self._max = _max
        # Don't call update here

    def request(self, x):
        print("{} was requested.".format(x))

    def update(self):
        while True:
            print("Spawned {} of {}".format(self._number, self._max))
            sleep(2)

if __name__ == '__main__':
    '''
    spawn = Spawn(1, 1)  # Create the object as normal
    p = multiprocessing.Process(target=methodcaller("update"), args=(spawn,))  # Run the loop in the process
    p.start()
    while True:
        sleep(1.5)
        spawn.request(2)  # Now you can reference the "spawn"
    '''
    device = SC()
    print(device.Port)
    print(device.Baud)
    print(device.ID)
    print(device.Error)
    print(device.EMsg)
    p = multiprocessing.Process(target=methodcaller("ReadData"), args=(device,))  # Run the loop in the process
    p.start()
    while True:
        sleep(1.5)
        device.SendData('0003')
What am I doing wrong for this class to be giving me problems? Is there some restriction on using pyserial and multiprocessing together? I know it can be done but I don't understand how...
Here is the traceback I get from Python:
Traceback (most recent call last):
File "C:...\Python\pythonInterface1\Main.py", line 45, in <module>
p.start()
File "C:...\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:...\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:...\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:...\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "C:...\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
ValueError: ctypes objects containing pointers cannot be pickled
You are trying to pass a SerialConnection instance to another process as an argument. For that, Python first has to serialize (pickle) the object, and that is not possible for SerialConnection objects.
As said in Rob Streeting's answer, a possible solution would be to allow the SerialConnection object to be copied to the other process' memory using the fork that occurs when multiprocessing.Process.start is invoked, but this will not work on Windows as it does not use fork.
A simpler, cross-platform and more efficient way to achieve parallelism in your code would be to use a thread instead of a process. The changes to your code are minimal:
import threading
p = threading.Thread(target=methodcaller("ReadData"), args=(device,))
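For context, a minimal sketch of how the main block from the question might look with the thread-based approach (class and method names taken from the question's code):

import threading
from time import sleep
from operator import methodcaller

from SerialConnection import SerialConnection as SC

if __name__ == '__main__':
    device = SC()
    # a thread shares memory with the main program, so device never needs to be pickled
    reader = threading.Thread(target=methodcaller("ReadData"), args=(device,), daemon=True)
    reader.start()
    while True:
        sleep(1.5)
        device.SendData('0003')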
I think the problem is due to something inside device being unpicklable (i.e., not serializable by python). Take a look at this page to see if you can see any rules that may be broken by something in your device object.
So why does device need to be picklable at all?
When a multiprocessing.Process is started, it uses fork() at the operating system level (unless otherwise specified) to create the new process. What this means is that the whole context of the parent process is "copied" over to the child. This does not require pickling, as it's done at the operating system level.
(Note: on Unix at least, this "copy" is actually a pretty cheap operation because it uses a feature called "copy-on-write". This means that both parent and child processes actually read from the same memory until one or the other modifies it, at which point the original state is copied over to the child process.)
However, the arguments of the function that you want the process to take care of do have to be pickled, because they are not part of the main process's context. So, that includes your device variable.
I think you might be able to resolve your issue by allowing device to be copied as part of the fork operation rather than passing it in as a variable. To do this though, you'll need a wrapper function around the operation you want your process to do, in this case methodcaller("ReadData"). Something like this:
if __name__ == "__main__":
device = SC()
def call_read_data():
device.ReadData()
...
p = multiprocessing.Process(target=call_read_data) # Run the loop in the process
p.start()

hdbscan Parallel Error When Running After Import

I am building and fitting an hdbscan model on my data. When I run the script from within the file itself it works well and quickly, but when I import the file and run it from 'outside' it goes into a weird loop that I don't understand how it started, and I get the following error:
ImportError: [joblib] Attempting to do parallel computing without protecting your import on a system that does not support forking. To use parallel-computing in a script, you must protect your main loop using "if __name__ == '__main__'". Please see the joblib documentation on Parallel for more information
Here is an excerpt of the code:
df_pos_raw, df_pos_training = pre_process_data(df_pos)
df_pos_training_std = standardize_df(df_pos_training) # Standardized data, column-wise
print "generating model"
pos_cls = hdbscan.HDBSCAN(min_cluster_size=10, prediction_data=True)
print "fitting model to data"
pos_cls.fit(df_pos_training_std)
print 'done fitting model'
# sns.distplot(pos_cls.labels_, bins=len(set(pos_cls.labels_)))
df_filtered = filter_cons_types(df, [3, 5])
print "Done. returning variables"
return pos_cls, df_filtered
Here is the output when running from 'outside' the file:
Traceback (most recent call last):
File "<string>", line 1, in <module>
generating model
File "C:\ProgramData\Anaconda2\Lib\multiprocessing\forking.py", line 380, in main
fitting model to data
prepare(preparation_data)
File "C:\ProgramData\Anaconda2\Lib\multiprocessing\forking.py", line 510, in prepare
'__parents_main__', file, path_name, etc
File "C:\Users\sareetn\PycharmProjects\Arad\DataImputation\ClusteringExtrapolation\Dev\run_clustering_based_prediction.py", line 4, in <module>
model, raw_df = clustering()
File "C:\Users\sareetn\PycharmProjects\Arad\DataImputation\ClusteringExtrapolation\Dev\clustering_model_constype_3_5.py", line 86, in main
pos_cls.fit(df_pos_training_std)
File "C:\Users\sareetn\PycharmProjects\Arad\venv\lib\site-packages\hdbscan\hdbscan_.py", line 816, in fit
self._min_spanning_tree) = hdbscan(X, **kwargs)
File "C:\Users\sareetn\PycharmProjects\Arad\venv\lib\site-packages\hdbscan\hdbscan_.py", line 543, in hdbscan
core_dist_n_jobs, **kwargs)
File "C:\Users\sareetn\PycharmProjects\Arad\venv\lib\site-packages\sklearn\externals\joblib\memory.py", line 362, in __call__
return self.func(*args, **kwargs)
File "C:\Users\sareetn\PycharmProjects\Arad\venv\lib\site-packages\hdbscan\hdbscan_.py", line 239, in _hdbscan_boruvka_kdtree
n_jobs=core_dist_n_jobs, **kwargs)
File "hdbscan/_hdbscan_boruvka.pyx", line 375, in hdbscan._hdbscan_boruvka.KDTreeBoruvkaAlgorithm.__init__ (hdbscan/_hdbscan_boruvka.c:5195)
File "hdbscan/_hdbscan_boruvka.pyx", line 411, in hdbscan._hdbscan_boruvka.KDTreeBoruvkaAlgorithm._compute_bounds (hdbscan/_hdbscan_boruvka.c:5915)
File "C:\Users\sareetn\PycharmProjects\Arad\venv\lib\site-packages\sklearn\externals\joblib\parallel.py", line 749, in __call__
n_jobs = self._initialize_backend()
File "C:\Users\sareetn\PycharmProjects\Arad\venv\lib\site-packages\sklearn\externals\joblib\parallel.py", line 547, in _initialize_backend
**self._backend_args)
File "C:\Users\sareetn\PycharmProjects\Arad\venv\lib\site-packages\sklearn\externals\joblib\_parallel_backends.py", line 305, in configure
'[joblib] Attempting to do parallel computing '
ImportError: [joblib] Attempting to do parallel computing without protecting your import on a system that does not support forking. To use parallel-computing in a script, you must protect your main loop using "if __name__ == '__main__'". Please see the joblib documentation on Parallel for more information
generating model
fitting model to data
generating model
fitting model to data
generating model
fitting model to data
Thank you very much in advance!!
A friend helped me figure it out:
Clustering uses a library called joblib that splits the job into parallel processes. When running such functions on a Windows machine, care needs to be taken to make sure we use
if __name__ == '__main__':
in order to protect the code and allow the parallel processing to work.
After adding
if __name__ == '__main__':
and placing all of the code under it, the clustering ran smoothly and quickly.
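For illustration, a minimal sketch of that structure (the clustering function name is a placeholder; the HDBSCAN call and the pre_process_data / standardize_df helpers mirror the excerpt above, and df_pos is assumed to be loaded elsewhere):

import hdbscan

def clustering(df_pos_training_std):
    # fit() runs joblib-backed parallel work, so on Windows it must only be
    # reached from a guarded entry point
    pos_cls = hdbscan.HDBSCAN(min_cluster_size=10, prediction_data=True)
    pos_cls.fit(df_pos_training_std)
    return pos_cls

if __name__ == '__main__':
    df_pos_raw, df_pos_training = pre_process_data(df_pos)
    model = clustering(standardize_df(df_pos_training))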
