multiprocessing launch from within module or class, not from main() - python

I want to use Python's multiprocessing module to make effective use of multiple CPUs to speed up my processing.
All seems to work, but I want to run Pool.map(f, [item, item]) from within a class, in a submodule somewhere deep in my program. The reason is that the program has to prepare the data first and wait for certain events to happen before there is anything to be processed.
The multiprocessing docs say you can only run it from within an if __name__ == '__main__': block. I don't understand the significance of that and tried it anyway, like so:
from multiprocessing import Pool

class Foo(object):
    n = 1000000
    def __init__(self, x):
        self.x = x + 1
        pass
    def run(self):
        for i in range(1, self.n):
            self.x *= 1.0 * i / self.x
        return self

class Bar(object):
    def __init__(self):
        pass
    def go_all(self):
        work = [Foo(i) for i in range(960)]
        def do(obj):
            return obj.run()
        p = Pool(16)
        finished_work = p.map(do, work)
        return

bar = Bar()
bar.go_all()
It indeed doesn't work! I get the following error:
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
I don't quite understand why, as everything seems to be perfectly pickleable. I have the following questions:
Can this be made to work without putting the p.map line in my main program?
If not, can "main" programs be called as sub-routines/modules, so that it still works?
Is there some handy trick to loop back from a submodule to the main program and run it from there?
I'm on Linux and Python 2.7

I believe you misunderstood the documentation. What the documentation says is to do this:
if __name__ == '__main__':
    bar = Bar()
    bar.go_all()
So your p.map line does not need to be inside your "main function", or whatever. Only the code that actually spawns the subprocesses has to be "guarded". This is unavoidable due to limitations of the Windows OS.
Moreover, the function that you pass to Pool.map has to be importable (functions are pickled simply by their names; the interpreter then has to be able to import them to rebuild the function objects when they are passed to the subprocess). So you should probably move your do function to the global (module) level to avoid pickling errors.
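For illustration, a hedged sketch of that suggestion, reusing the question's names: do is moved to module level so it can be pickled by name, and only the top-level calls are guarded.

from multiprocessing import Pool

class Foo(object):
    n = 1000000
    def __init__(self, x):
        self.x = x + 1
    def run(self):
        for i in range(1, self.n):
            self.x *= 1.0 * i / self.x
        return self

def do(obj):
    # module-level, so workers can import it by name
    return obj.run()

class Bar(object):
    def go_all(self):
        work = [Foo(i) for i in range(960)]
        p = Pool(16)
        finished_work = p.map(do, work)
        p.close()
        p.join()
        return finished_work

if __name__ == '__main__':
    # only the entry point is guarded; the Pool itself lives inside go_all
    bar = Bar()
    bar.go_all()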

The extra restrictions on the multiprocessing module on ms-windows stem from the fact that it doesn't have the fork system call. On UNIX-like operating systems, fork makes a perfect copy of a process and continues to run that next to the parent process. The only difference between them is that fork returns a different value in the parent and the child process.
On ms-windows, multiprocessing needs to start a new Python instance using a native method to start processes. Then it needs to bring that Python instance into the same state as the "parent" process.
This means (among other things) that the Python code must be importable without side effects like trying to start yet another process. Hence the use of the if __name__ == '__main__' guard.
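As a small illustrative sketch (hypothetical file name), this is the pattern the guard enforces: everything at module level must be safe to re-import, and only the guarded block actually creates worker processes.

# demo_guard.py (hypothetical name, for illustration only)
from multiprocessing import Pool

print('module level: runs in the parent and, under spawn, in every child')

def square(x):
    return x * x

if __name__ == '__main__':
    # Only the parent executes this block; a spawned child imports the
    # module under the name '__mp_main__' and therefore skips it.
    pool = Pool(2)
    print(pool.map(square, range(5)))
    pool.close()
    pool.join()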

Related

python pickle string in multiprocessing

I'm having a hard time using multiprocessing in Python. I have not used multiprocessing much on any platform and do not have a clear idea of how queues work in multi-process communication. (If anyone could provide a simple reference, that would be a big help.)
Coming to problem:
from concurrent.futures.process import ProcessPoolExecutor

class Sample:
    def __init__(self):
        self.bla_exec = ProcessPoolExecutor(max_workers=1)
    def blafunc(self, stt):
        print('sdasdadadsa::::::::', stt)
    def on_ticks(self, ticks):
        f = self.bla_exec.submit(self.blafunc, ticks)
        print(f.result())  # If I don't do .result(), nothing gets printed

if __name__ == '__main__':
    Sample().on_ticks('123')
This gives me error:
TypeError: cannot pickle 'weakref' object
Note that I am knowingly calling f.result() just to test my sample.
I think what it's doing is trying to pickle the ProcessPoolExecutor, because the function you're trying to run in the pool is a bound method. The object it is bound to is a Sample that includes a reference to your ProcessPoolExecutor. If I create the ProcessPoolExecutor as a top-level object, not in the class, your code runs fine.
Python serializes (pickles, by default) objects moved back and forth between the process spaces rather than relying on any kind of shared memory that might be possible between two processes. So you have to be careful what your function has access to that Python has to bring along into the other process your function will run in.
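One hedged rearrangement along those lines (Python 3, reusing the question's names; a module-level helper function would work just as well): the executor is created outside the class, so pickling the bound method no longer drags the executor along.

from concurrent.futures import ProcessPoolExecutor

class Sample:
    def blafunc(self, stt):
        print('worker got:', stt)

    def on_ticks(self, executor, ticks):
        # `executor` is not stored on self, so pickling the bound method
        # self.blafunc only has to pickle a plain, empty Sample instance.
        f = executor.submit(self.blafunc, ticks)
        print(f.result())

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=1) as bla_exec:
        Sample().on_ticks(bla_exec, '123')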

ThreadPoolExecutor, ProcessPoolExecutor and global variables

I am new to parallelization in general and concurrent.futures in particular. I want to benchmark my script and compare the differences between using threads and processes, but I found that I couldn't even get that running because when using ProcessPoolExecutor I cannot use my global variables.
The following code will output Hello as I expect, but when you change ThreadPoolExecutor to ProcessPoolExecutor, it will output None.
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

greeting = None

def process():
    print(greeting)
    return None

def main():
    with ThreadPoolExecutor(max_workers=1) as executor:
        executor.submit(process)
    return None

def init():
    global greeting
    greeting = 'Hello'
    return None

if __name__ == '__main__':
    init()
    main()
I don't understand why this is the case. In my real program, init is used to set the global variables to CLI arguments, and there are a lot of them. Hence, passing them as arguments does not seem recommended. So how do I pass those global variables to each process/thread correctly?
I know that I can change things around, which will work, but I don't understand why. E.g. the following works for both Executors, but it also means that the globals initialisation has to happen for every instance.
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

greeting = None

def init():
    global greeting
    greeting = 'Hello'
    return None

def main():
    with ThreadPoolExecutor(max_workers=1) as executor:
        executor.submit(process)
    return None

def process():
    init()
    print(greeting)
    return None

if __name__ == '__main__':
    main()
So my main question is: what is actually happening? Why does this code work with threads and not with processes? And how do I correctly pass the globals I set to each process/thread without having to re-initialise them for every instance?
(Side note: because I have read that concurrent.futures might behave differently on Windows, I have to note that I am running Python 3.6 on Windows 10 64 bit.)
I'm not sure of the limitations of this approach, but you can pass (serializable?) objects between your main process/thread. This would also help you get rid of the reliance on global vars:
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def process(opts):
    opts["process"] = "got here"
    print("In process():", opts)
    return None

def main(opts):
    opts["main"] = "got here"
    executor = [ProcessPoolExecutor, ThreadPoolExecutor][1]
    with executor(max_workers=1) as executor:
        executor.submit(process, opts)
    return None

def init(opts):  # Gather CLI opts and populate dict
    opts["init"] = "got here"
    return None

if __name__ == '__main__':
    cli_opts = {"__main__": "got here"}  # Initialize dict
    init(cli_opts)                       # Populate dict
    main(cli_opts)                       # Use dict
Works with both executor types.
Edit: Even though it sounds like it won't be a problem for your use case, I'll point out that with ProcessPoolExecutor, the opts dict you get inside process will be a frozen copy, so mutations to it will not be visible across processes nor will they be visible once you return to the __main__ block. ThreadPoolExecutor, on the other hand, will share the dict object between threads.
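A hedged illustration of that frozen-copy behaviour (a minimal hypothetical example, not from the answer itself): the worker mutates only its own pickled copy, so changes are visible in the parent only if you return them.

from concurrent.futures import ProcessPoolExecutor

def process(opts):
    opts["process"] = "got here"   # mutates only the worker's copy
    return opts                    # send the copy back explicitly

if __name__ == '__main__':
    opts = {"main": "got here"}
    with ProcessPoolExecutor(max_workers=1) as executor:
        worker_view = executor.submit(process, opts).result()
    print(opts)         # {'main': 'got here'} - unchanged in the parent
    print(worker_view)  # {'main': 'got here', 'process': 'got here'}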
Actually, the first code of the OP will work as intended on Linux (tested in Python 3.6-3.8) because
On Unix a child process can make use of a shared resource created in a
parent process using a global resource.
as explained in the multiprocessing docs. However, for some mysterious reason it won't work on my Mac running Mojave (which is supposed to be a UNIX-compliant OS; tested only with Python 3.8). And for sure, it won't work on Windows, and it is in general not a recommended practice with multiple processes.
Let's imagine a process is a box, while a thread is a worker inside a box. A worker can only access the resources in its box and cannot touch the resources in other boxes.
So when you use threads, you are creating multiple workers for your current box (the main process). But when you use processes, you are creating another box. In that case, the global variables initialised in this box are completely different from the ones in the other box. That's why it doesn't work as you expect.
The solution given by jedwards is good enough for most situations. You can explicitly package the resources in the current box (serialize the variables) and deliver them to the other box (transport them to another process) so that the workers in that box have access to the resources.
A process represents an activity that is run in a separate process in the OS meaning of the term, while threads all run in your main process. Every process has its own unique namespace.
Your main process sets the value of greeting by calling init() inside your __name__ == '__main__' condition, for its own namespace. In your new process, this does not happen (__name__ is '__mp_main__' there), hence greeting remains None and init() is never actually called unless you do so explicitly in the function your process executes.
While sharing state between processes is generally not recommended, there are ways to do so, as outlined in jedwards' answer.
You might also want to check Sharing State Between Processes from the docs.
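One hedged sketch of a way to get such globals set in every worker without passing them to each call: multiprocessing.Pool's initializer/initargs hook (ProcessPoolExecutor gained the same parameters in Python 3.7, so this exact form needs 3.7+ there).

from multiprocessing import Pool

greeting = None

def init_worker(value):
    # Runs once in each worker process, so the global exists there too.
    global greeting
    greeting = value

def process():
    print(greeting)

if __name__ == '__main__':
    with Pool(processes=2, initializer=init_worker, initargs=('Hello',)) as pool:
        pool.apply(process)  # prints 'Hello' from a worker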

Why must we explicitly pass constants into multiprocessing functions?

I have been working with the multiprocessing package to speed up some geoprocessing (GIS/arcpy) tasks that are redundant and need to be done the same for more than 2,000 similar geometries.
The splitting up works well, but my "worker" function is rather long and complicated, because the task itself from start to finish is complicated. I would love to break the steps down further, but I am having trouble passing information to/from the worker function, because for some reason ANYTHING that a worker function under multiprocessing uses needs to be passed in explicitly.
This means I cannot define constants in the body of if __name__ == '__main__' and then use them in the worker function. It also means that my parameter list for the worker function is getting really long - which is super ugly since trying to use more than one parameter also requires creating a helper "star" function and then itertools to rezip them back up (a la the second answer on this question).
I have created a trivial example below that demonstrates what I am talking about. Are there any workarounds for this - a different approach I should be using - or can someone at least explain why this is the way it is?
Note: I am running this on Windows Server 2008 R2 Enterprise x64.
Edit: I seem to have not made my question clear enough. I am not that concerned with how pool.map only takes one argument (although it is annoying) but rather I do not understand why the scope of a function defined outside of if __name__ == '__main__' cannot access things defined inside that block if it is used as a multiprocessing function - unless you explicitly pass it as an argument, which is obnoxious.
import os
import multiprocessing
import itertools

def loop_function(word):
    file_name = os.path.join(root_dir, word + '.txt')
    with open(file_name, "w") as text_file:
        text_file.write(word + " food")

def nonloop_function(word, root_dir):  # <------ PROBLEM
    file_name = os.path.join(root_dir, word + '.txt')
    with open(file_name, "w") as text_file:
        text_file.write(word + " food")

def nonloop_star(arg_package):
    return nonloop_function(*arg_package)

# Serial version
#
# if __name__ == '__main__':
#     root_dir = 'C:\\hbrowning'
#     word_list = ['dog', 'cat', 'llama', 'yeti', 'parakeet', 'dolphin']
#     for word in word_list:
#         loop_function(word)
#
## --------------------------------------------
# Multiprocessing version
if __name__ == '__main__':
    root_dir = 'C:\\hbrowning'
    word_list = ['dog', 'cat', 'llama', 'yeti', 'parakeet', 'dolphin']

    NUM_CORES = 2
    pool = multiprocessing.Pool(NUM_CORES, maxtasksperchild=1)
    results = pool.map(nonloop_star, itertools.izip(word_list, itertools.repeat(root_dir)),
                       chunksize=1)

    pool.close()
    pool.join()
The problem is, at least on Windows (although there are similar caveats with the *nix fork style of multiprocessing, too), that when you execute your script it (to greatly simplify it) effectively ends up as if you called two blank (shell) processes with subprocess.Popen() and then had them execute:
python -c "from your_script import nonloop_star; nonloop_star(('dog', 'C:\\hbrowning'))"
python -c "from your_script import nonloop_star; nonloop_star(('cat', 'C:\\hbrowning'))"
python -c "from your_script import nonloop_star; nonloop_star(('yeti', 'C:\\hbrowning'))"
python -c "from your_script import nonloop_star; nonloop_star(('parakeet', 'C:\\hbrowning'))"
python -c "from your_script import nonloop_star; nonloop_star(('dolphin', 'C:\\hbrowning'))"
one by one as soon as one of those processes finishes with the previous call. That means that your if __name__ == "__main__" block never gets executed (because it's not the main script, it's imported as a module) so anything declared within it is not readily available to the function (as it was never evaluated).
For the stuff outside your function you can at least cheat by accessing your module via sys.modules["your_script"] or even with globals(), but that works only for the already evaluated stuff, so anything that was placed within the if __name__ == "__main__" guard is not available as it never even had a chance to run. That's also a reason why you must use this guard on Windows - without it you'd be executing your pool creation, and other code that you nested within the guard, over and over again with each spawned process.
If you need to share read-only data in your multiprocessing functions, just define it in the global namespace of your script, outside of that __main__ guard, and all functions will have the access to it (as it gets re-evaluated when starting a new process) regardless if they are running as separate processes or not.
If you need data that changes then you need to use something that can synchronize itself over different processes - there is a slew of modules designed for that, but most of the time Python's own pickle-based, datagram communicating multiprocessing.Manager (and types it provides), albeit slow and not very flexible, is enough.
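A short Python 3 sketch of both points, assuming nothing beyond the standard library: a module-level constant is re-evaluated in each spawned child on import, while mutable shared state goes through a Manager proxy so the parent sees the changes.

import multiprocessing

ROOT_DIR = 'C:\\hbrowning'   # read-only: re-evaluated in every child on import

def worker(shared, word):
    # writes go through the Manager proxy, so the parent sees them
    shared[word] = ROOT_DIR + '\\' + word + '.txt'

if __name__ == '__main__':
    with multiprocessing.Manager() as manager:
        shared = manager.dict()
        with multiprocessing.Pool(2) as pool:
            pool.starmap(worker, [(shared, w) for w in ('dog', 'cat', 'llama')])
        print(dict(shared))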
Python » 3.6.1 Documentation: multiprocessing.pool.Pool
map(func, iterable[, chunksize])
A parallel equivalent of the map() built-in function (it supports only one iterable argument though)
There is no restriction on the type; it only has to be an iterable!
Try a class Container, for instance:
class WP(object):
    def __init__(self, name):
        self.root_dir = 'C:\\hbrowning'
        self.name = name

word_list = [WP('dog'), WP('cat'), WP('llama'), WP('yeti'), WP('parakeet'), WP('dolphin')]
results = pool.map(nonloop_star, word_list, chunksize=1)
Note: the attribute values stored on the class instances have to be pickleable!
Read about what-can-be-pickled-and-unpickled
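As a quick sanity check (a hedged aside, not part of the answer above), you can try pickling one element of the iterable yourself before handing it to pool.map; pickle.dumps raises the same kind of error the Pool would.

import pickle

class WP(object):
    def __init__(self, name):
        self.root_dir = 'C:\\hbrowning'
        self.name = name

# raises PicklingError / TypeError if the instance is not pickleable
pickle.dumps(WP('dog'))
print('WP instances pickle fine')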

Spawn another python process and get its return object

Say that you have a singleton to play with (which means that the only way to reinitialize it to its original state is to restart the whole script) and you want to perform a specific task on it multiple times and get the returned objects. Are there any ways I can do this without disk I/O? I know I can do it with subprocess.check_output() as in How to spawn another process and capture output in python? plus file I/O or piping stdio, but are there any cleaner solutions as simple as same-process communication (Edit: I mean, result = foo())?
#the singleton
def foo():
    crash_when_got_run_twice()
    result = something_fancy()
    return result

#the runner
def bar(times):
    for i in range(times):
        result = magic() # run foo()
        aggregate_result(result)
    return aggregated_result()
What do you think you can do in magic()?
On unix-like systems you can fork a subprocess and have it run the singleton. Assuming you've already imported everything needed by the singleton and the singleton itself doesn't touch the disk, it could work. Fair warning: whatever reason this thing was a singleton in the first place may nip you. You can let the multiprocessing package do the heavy lifting for you.
On windows, a new python interpreter is executed and there may be significant state passed between parent and child, which could have harmful effects. But again, it may work...
import multiprocessing as mp

#the singleton
def foo():
    crash_when_got_run_twice()
    result = something_fancy()
    return result

def _run_foo(i):
    return foo()

#the runner
def bar(times):
    with mp.Pool(min(mp.cpu_count(), times)) as pool:
        return pool.map(_run_foo, range(times))

How to use multiprocessing.Pool in an imported module?

I have not been able to implement the suggestion here: Applying two functions to two lists simultaneously.
I guess it is because the module is imported by another module and thus my Windows spawns multiple python processes?
My question is: how can I use the code below without the if __name__ == "__main__": guard?
args_m = [(mortality_men, my_agents, graveyard, families, firms, year, agent) for agent in males]
args_f = [(mortality_women, fertility, year, families, my_agents, graveyard, firms, agent) for agent in females]

with mp.Pool(processes=(mp.cpu_count() - 1)) as p:
    p.map_async(process_males, args_m)
    p.map_async(process_females, args_f)
Both process_males and process_females are functions.
args_m, args_f are iterators
Also, I don't need to return anything. Agents are class instances that need updating.
The reason you need to guard multiprocessing code with if __name__ == "__main__" is that you don't want it to run again in the child process. That can happen on Windows, where the interpreter needs to reload all of its state since there's no fork system call that would copy the parent process's address space. But you only need the guard around code that is supposed to run at the top level only when the file is executed as the main script; it's not the only way to guard your code.
In your specific case, I think you should put the multiprocessing code in a function. That won't run in the child process, as long as nothing else calls the function when it should not. Your main module can import the module, then call the function (from within an if __name__ == "__main__" block, probably).
It should be something like this:
some_module.py:

import multiprocessing as mp

def process_males(x):
    ...

def process_females(x):
    ...

args_m = [...]  # these could be defined inside the function below if that makes more sense
args_f = [...]

def do_stuff():
    with mp.Pool(processes=(mp.cpu_count() - 1)) as p:
        p.map_async(process_males, args_m)
        p.map_async(process_females, args_f)
main.py:

import some_module

if __name__ == "__main__":
    some_module.do_stuff()
In your real code you might want to pass some arguments or get a return value from do_stuff (which should also be given a more descriptive name than the generic one I've used in this example).
The idea of if __name__ == '__main__': is to avoid infinite process spawning.
When pickling a function defined in your main script, Python has to figure out which part of your main script is the function code, and to rebuild it the child basically re-runs your script as an import. If the code creating the Pool is in the same script and not protected by the "if main" guard, then importing the function will launch another Pool, which will try to launch another Pool....
Thus you should separate the function definitions from the actual main script:
from multiprocessing import Pool

# define test functions outside main
# so they can be imported without launching
# a new Pool
def test_func():
    pass

if __name__ == '__main__':
    with Pool(4) as p:
        r = p.apply_async(test_func)
        # ... do stuff
        result = r.get()
Cannot yet comment on the question, but a workaround I have used, which some have mentioned, is just to define the process_males etc. functions in a module different from the one where the processes are spawned, and then import that module where the multiprocessing spawns happen.
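For illustration only, a hedged sketch of that layout (file names and the run function are hypothetical; the async results are waited on before the with-block terminates the pool):

# workers.py - worker functions live here, importable by child processes
def process_males(args):
    ...

def process_females(args):
    ...

# simulation.py - the module that actually spawns the processes
import multiprocessing as mp
from workers import process_males, process_females

def run(args_m, args_f):
    with mp.Pool(processes=(mp.cpu_count() - 1)) as p:
        males = p.map_async(process_males, args_m)
        females = p.map_async(process_females, args_f)
        males.wait()    # make sure the work finishes before the
        females.wait()  # with-block terminates the pool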
I solved it by calling the module's multiprocessing function within the if __name__ == "__main__": block of the main script, as the function that involves multiprocessing is the last step in my module; others could try this if applicable.
