I am new to parallelization in general and concurrent.futures in particular. I want to benchmark my script and compare the differences between using threads and processes, but I found that I couldn't even get that running because when using ProcessPoolExecutor I cannot use my global variables.
The following code will output Hello as I expect, but if you change ThreadPoolExecutor to ProcessPoolExecutor, it will output None.
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

greeting = None

def process():
    print(greeting)
    return None

def main():
    with ThreadPoolExecutor(max_workers=1) as executor:
        executor.submit(process)
    return None

def init():
    global greeting
    greeting = 'Hello'
    return None

if __name__ == '__main__':
    init()
    main()
I don't understand why this is the case. In my real program, init is used to set the global variables to CLI arguments, and there are a lot of them, so passing them all around as arguments does not seem practical. So how do I pass those global variables to each process/thread correctly?
I know that I can change things around, which will work, but I don't understand why. E.g. the following works for both Executors, but it also means that the globals initialisation has to happen for every instance.
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

greeting = None

def init():
    global greeting
    greeting = 'Hello'
    return None

def main():
    with ThreadPoolExecutor(max_workers=1) as executor:
        executor.submit(process)
    return None

def process():
    init()
    print(greeting)
    return None

if __name__ == '__main__':
    main()
So my main question is: what is actually happening? Why does this code work with threads and not with processes? And how do I correctly pass the already-set globals to each process/thread without having to re-initialise them for every instance?
(Side note: because I have read that concurrent.futures might behave differently on Windows, I have to note that I am running Python 3.6 on Windows 10 64 bit.)
I'm not sure of the limitations of this approach, but you can pass (serializable?) objects from your main process/thread to the workers. This would also help you get rid of the reliance on global vars:
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def process(opts):
    opts["process"] = "got here"
    print("In process():", opts)
    return None

def main(opts):
    opts["main"] = "got here"
    executor = [ProcessPoolExecutor, ThreadPoolExecutor][1]
    with executor(max_workers=1) as executor:
        executor.submit(process, opts)
    return None

def init(opts):  # Gather CLI opts and populate dict
    opts["init"] = "got here"
    return None

if __name__ == '__main__':
    cli_opts = {"__main__": "got here"}  # Initialize dict
    init(cli_opts)                       # Populate dict
    main(cli_opts)                       # Use dict
Works with both executor types.
Edit: Even though it sounds like it won't be a problem for your use case, I'll point out that with ProcessPoolExecutor, the opts dict you get inside process will be a pickled copy, so mutations to it will not be visible across processes, nor will they be visible once you return to the __main__ block. ThreadPoolExecutor, on the other hand, will share the same dict object between threads.
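For anyone who wants to see that difference directly, here is a small sketch of my own (not part of the code above) that mutates the dict in the worker and then inspects it back in the parent:

from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def mutate(opts):
    opts["child"] = "got here"

if __name__ == '__main__':
    for executor_cls in (ProcessPoolExecutor, ThreadPoolExecutor):
        opts = {}
        with executor_cls(max_workers=1) as executor:
            executor.submit(mutate, opts).result()
        # ProcessPoolExecutor prints {} (the worker mutated a pickled copy);
        # ThreadPoolExecutor prints {'child': 'got here'} (same dict object shared)
        print(executor_cls.__name__, opts)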
Actually, the first code of the OP will work as intended on Linux (tested in Python 3.6-3.8) because
On Unix a child process can make use of a shared resource created in a
parent process using a global resource.
as explained in the multiprocessing docs. However, it did not work on my Mac running Mojave (which is supposed to be a UNIX-compliant OS) when I tested it with Python 3.8, most likely because Python 3.8 changed the default start method on macOS from fork to spawn, so the child no longer inherits the parent's globals. And for sure, it won't work on Windows, and in general it is not a recommended practice with multiple processes.
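A small sketch (mine, using the mp_context parameter that ProcessPoolExecutor gained in Python 3.7) that makes the start-method difference visible on a Unix-like system:

import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor

greeting = None

def process():
    print(greeting)

def init():
    global greeting
    greeting = 'Hello'

if __name__ == '__main__':
    init()
    for method in ('fork', 'spawn'):   # 'fork' is not available on Windows
        ctx = mp.get_context(method)
        with ProcessPoolExecutor(max_workers=1, mp_context=ctx) as executor:
            executor.submit(process).result()
        # prints 'Hello' under fork (the child inherits the parent's globals),
        # prints None under spawn (the child re-imports the module instead)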
Let's imagine a process is a box, while a thread is a worker inside a box. A worker can only access the resources in its own box and cannot touch resources in other boxes.
So when you use threads, you are creating multiple workers for your current box (the main process). But when you use processes, you are creating another box. In that case, the global variables initialised in this box are completely different from the ones in the other box. That's why it doesn't work as you expect.
The solution given by jedwards is good enough for most situations. You can explicitly package the resources in the current box (serialise the variables) and deliver them to another box (transport them to another process) so that the workers in that box have access to them.
A process represents activity that is run in a separate process (in the OS meaning of the term), while threads all run in your main process. Every process has its own unique namespace.
Your main process sets the value of greeting by calling init() inside your __name__ == '__main__' condition, but only for its own namespace. In your new process this does not happen (__name__ is '__mp_main__' there), hence greeting remains None and init() is never actually called unless you do so explicitly in the function your process executes.
While sharing state between processes is generally not recommended, there are ways to do so, as outlined in jedwards' answer.
You might also want to check Sharing State Between Processes from the docs.
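As a further option (it requires Python 3.7+, i.e. newer than the 3.6 mentioned in the question): both executors accept initializer and initargs, so each worker can set its globals exactly once instead of re-initialising them for every submitted task. A minimal sketch:

from concurrent.futures import ProcessPoolExecutor

greeting = None

def init(value):
    # runs once in every worker process, right after it starts
    global greeting
    greeting = value

def process():
    print(greeting)

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=2,
                             initializer=init,
                             initargs=('Hello',)) as executor:
        for _ in range(4):
            executor.submit(process)   # every task sees greeting == 'Hello'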
Related
I'm trying to reduce the processing time of reading a database with roughly 100,000 entries, but I need them to be formatted a specific way. In an attempt to do this, I tried to use Python's multiprocessing Pool.map function, which works perfectly except that I can't seem to get any form of queue reference to work across the workers.
I've been using information from Filling a queue and managing multiprocessing in python to guide me for using queues across multiple processes, and Using a global variable with a thread to guide me for using global variables across threads. I've gotten the software to work, but when I check the list/queue/dict/map length after running the process, it always returns zero.
I've written a simple example to show what I mean:
You have to run the script as a file; the Pool's initializer function does not work from the interpreter.
from multiprocessing import Pool
from collections import deque

global_q = deque()

def my_init(q):
    global global_q
    global_q = q
    q.append("Hello world")

def map_fn(i):
    global global_q
    global_q.append(i)

if __name__ == "__main__":
    with Pool(3, my_init, (global_q,)) as pool:
        pool.map(map_fn, range(3))
    for p in range(len(global_q)):
        print(global_q.pop())
Theoretically, when I pass the queue object reference from the main thread to the worker threads using the pool function, and then initialize each thread's global variable with the given function, then when I insert elements into the queue from the map function later, that reference should still point to the original queue object (long story short, everything should end up in the same queue, because they all point to the same location in memory).
So, I expect:
Hello World
Hello World
Hello World
1
2
3
Of course, the 1, 2, 3s would be in arbitrary order, but what you'll actually see as output is '' (nothing at all).
How come when I pass object references to the pool function, nothing happens?
Here's an example of how to share something between processes by extending the multiprocessing.managers.BaseManager class to support deques.
There's a Customized managers section in the documentation about creating them.
import collections
from multiprocessing import Pool
from multiprocessing.managers import BaseManager

class DequeManager(BaseManager):
    pass

class DequeProxy(object):
    def __init__(self, *args):
        self.deque = collections.deque(*args)
    def __len__(self):
        return self.deque.__len__()
    def appendleft(self, x):
        self.deque.appendleft(x)
    def append(self, x):
        self.deque.append(x)
    def pop(self):
        return self.deque.pop()
    def popleft(self):
        return self.deque.popleft()

# Currently only exposes a subset of deque's methods.
DequeManager.register('DequeProxy', DequeProxy,
                      exposed=['__len__', 'append', 'appendleft',
                               'pop', 'popleft'])

process_shared_deque = None  # Global only within each process.

def my_init(q):
    """ Initialize module-level global. """
    global process_shared_deque
    process_shared_deque = q
    q.append("Hello world")

def map_fn(i):
    process_shared_deque.append(i)  # deque's don't have a "put()" method.

if __name__ == "__main__":
    manager = DequeManager()
    manager.start()
    shared_deque = manager.DequeProxy()

    with Pool(3, my_init, (shared_deque,)) as pool:
        pool.map(map_fn, range(3))

    for p in range(len(shared_deque)):  # Show left-to-right contents.
        print(shared_deque.popleft())
Output:
Hello world
0
1
2
Hello world
Hello world
You can't use a global variable for multiprocessing.
Pass a multiprocessing queue to the function:
from multiprocessing import Queue

queue = Queue()

def worker(q):
    q.put(something)
Also, you are probably experiencing that the code is all right, but since the pool creates separate processes, even the errors are separated, and therefore you not only don't see that the code isn't working, you also don't see that it throws an error.
The reason your output is '' is that nothing was appended to your q/global_q. And if something was appended, then only to some variable that may be called global_q, but is a totally different object from the global_q in your main process.
Try a print('Hello world') inside the function you want to multiprocess and you will see for yourself that nothing is actually printed at all. That process is simply outside of your main process, and the only way to access it is via multiprocessing Queues. You put things into the Queue with queue.put('something') and get them back with something = queue.get().
Try to understand this code and you will do well:
import multiprocessing as mp

# This queue will be shared among all processes, but you need to pass it to each
# process as an argument; you CANNOT rely on it as a global variable. Understand
# that the functions run in totally different processes and nothing can really
# reach into them... except multiprocessing.Queue, which can be shared across
# all processes.
shared_queue = mp.Queue()

def channel(que, channel_num):
    que.put(channel_num)

if __name__ == '__main__':
    processes = [mp.Process(target=channel, args=(shared_queue, channel_num))
                 for channel_num in range(8)]
    for p in processes:
        p.start()
    for p in processes:  # wait for all results to close the pool
        p.join()
    for i in range(8):  # Get data from the Queue (you can actually get data out of it at any time).
        print(shared_queue.get())
I am relatively new to Python and definitely new to multiprocessing. I'm following this question/answer for the structure of my multiprocessing, but in def func_A I call a module function and pass a class instance as one of the arguments. In the module, I change an attribute of that object, which I would like the main program to see so that it can update the user with the attribute's value. The child processes run for very long times, so I need the main program to provide updates as they run.
My suspicion is that I'm not understanding namespace/object scoping or something similar, but from what I've read, passing an object (an instance of a class?) to a module as an argument passes a reference to the object and not a copy. I would have thought this meant that changing the attributes of the object in the child process/module would have changed the attributes in the main program object (since they're the same object). Or am I confusing things?
The code for my main program:
# MainProgram.py
import multiprocessing as mp
import time
from time import sleep
import sys
from datetime import datetime

import myModule

MYOBJECTNAMES = ['name1', 'name2']

class myClass:
    def __init__(self, name):
        self.name = name
        self.value = 0

myObjects = []
for n in MYOBJECTNAMES:
    myObjects.append(myClass(n))

def func_A(process_number, queue):
    start = datetime.now()
    print("Process {} (object: {}) started at {}".format(
        process_number, myObjects[process_number].name, start))
    myModule.Eval(myObjects[process_number])
    sys.stdout.flush()

def multiproc_master():
    queue = mp.Queue()
    proceed = mp.Event()
    processes = [mp.Process(target=func_A, args=(x, queue))
                 for x in range(len(myObjects))]
    for p in processes:
        p.start()

    for i in range(100):
        for o in myObjects:
            print("In main: Value of {} is {}".format(o.name, o.value))
        sleep(10)

    for p in processes:
        p.join()

if __name__ == '__main__':
    split_jobs = multiproc_master()
    print(split_jobs)
The code for my module program:
# myModule.py
from time import sleep

def Eval(myObject):
    for i in range(100):
        myObject.value += 1
        print("In module: Value of {} is {}".format(myObject.name, myObject.value))
        sleep(5)
That question/answer you linked to was probably a poor choice to use as a template, as it does many things that your code doesn't require (much less use).
I think your biggest misconception about how multiprocessing works is thinking that all the code is running in the same address-space. The main task runs in its own, and there are separate ones for each subtask. The way your code is written, each of them will end up with its own separate myObjects list. That's why the main task doesn't see any of the changes made by any of the other tasks.
While there are ways to share objects using the multiprocessing module, doing so often introduces significant overhead, because keeping them in sync between all the processes requires a lot of work "under the covers" to make them merely seem shared (they can't actually be shared, since the processes have separate address spaces). This overhead frequently cancels out any speed gained from parallel processing.
As stated in the documentation: "when doing concurrent programming it is usually best to avoid using shared state as far as possible".
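If all the main program needs is progress updates rather than truly shared objects, a cheaper pattern is to have the children report through the mp.Queue that multiproc_master already creates. A rough sketch under that assumption (the 5-iteration loop and the (worker, value) tuples are mine, not from the original code):

import multiprocessing as mp
from time import sleep

def func_A(process_number, queue):
    for i in range(5):
        queue.put((process_number, i))   # report (which worker, current value)
        sleep(1)
    queue.put((process_number, None))    # sentinel: this worker is finished

def multiproc_master():
    queue = mp.Queue()
    processes = [mp.Process(target=func_A, args=(x, queue)) for x in range(2)]
    for p in processes:
        p.start()
    remaining = len(processes)
    while remaining:
        worker, value = queue.get()      # blocks until some child reports
        if value is None:
            remaining -= 1
        else:
            print("In main: value of worker {} is {}".format(worker, value))
    for p in processes:
        p.join()

if __name__ == '__main__':
    multiproc_master()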
Say that you have a singleton to play with (meaning the only way to re-initialize it to its original state is to restart the whole script), and you want to perform a specific task on it multiple times and collect the returned objects. Are there any ways to do this without disk I/O? I know I can do it with subprocess.check_output() (as in How to spawn another process and capture output in python?) plus file I/O or piping stdio, but are there any cleaner solutions that are as simple as a same-process call (edit: I mean result = foo())?
# the singleton
def foo():
    crash_when_got_run_twice()
    result = something_fancy()
    return result

# the runner
def bar(times):
    for i in range(times):
        result = magic()  # run foo()
        aggregate_result(result)
    return aggregated_result()
What do you think you can do in magic()?
On Unix-like systems you can fork a subprocess and have it run the singleton. Assuming you've already imported everything the singleton needs and the singleton itself doesn't touch the disk, it could work. Fair warning: whatever reason made this thing a singleton in the first place may come back to bite you. You can let the multiprocessing package do the heavy lifting for you.
On windows, a new python interpreter is executed and there may be significant state passed between parent and child, which could have harmful effects. But again, it may work...
import multiprocessing as mp

# the singleton
def foo():
    crash_when_got_run_twice()
    result = something_fancy()
    return result

def _run_foo(i):
    return foo()

# the runner
def bar(times):
    with mp.Pool(min(mp.cpu_count(), times)) as pool:
        return pool.map(_run_foo, range(times))
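One caveat with the Pool approach: workers are reused, so if times is larger than the pool size, the same process will end up calling foo() twice and hit crash_when_got_run_twice(). A possible workaround, sketched here with a stand-in foo of my own and assuming that really is the failure mode, is maxtasksperchild=1 together with chunksize=1 so every call gets a fresh worker:

import multiprocessing as mp
import os

_already_ran = False

def foo():
    # stand-in for the real singleton: it must not run twice in one interpreter
    global _already_ran
    assert not _already_ran, "foo() ran twice in pid %d" % os.getpid()
    _already_ran = True
    return os.getpid()

def _run_foo(i):
    return foo()

def bar(times):
    # maxtasksperchild=1 plus chunksize=1: every task gets a brand-new worker,
    # so foo() never runs twice in the same process even when times > cpu_count()
    with mp.Pool(min(mp.cpu_count(), times), maxtasksperchild=1) as pool:
        return pool.map(_run_foo, range(times), chunksize=1)

if __name__ == '__main__':
    print(bar(16))   # typically 16 different pids, no assertion failures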
I have not been able to implement the suggestion here: Applying two functions to two lists simultaneously.
I guess it is because the module is imported by another module and thus my Windows spawns multiple python processes?
My question is: how can I use the code below without the if __name__ == "__main__": guard?
args_m = [(mortality_men, my_agents, graveyard, families, firms, year, agent) for agent in males]
args_f = [(mortality_women, fertility, year, families, my_agents, graveyard, firms, agent) for agent in females]

with mp.Pool(processes=(mp.cpu_count() - 1)) as p:
    p.map_async(process_males, args_m)
    p.map_async(process_females, args_f)
Both process_males and process_females are functions.
args_m and args_f are iterables (lists of tuples).
Also, I don't need to return anything. Agents are class instances that need updating.
The reason you need to guard multiprocessing code with if __name__ == "__main__" is that you don't want it to run again in the child process. That can happen on Windows, where the interpreter needs to reload all of its state, since there's no fork system call that copies the parent process's address space. But you only need the guard around code that runs at the top level of the main script; it's not the only way to guard your code.
In your specific case, I think you should put the multiprocessing code in a function. That won't run in the child process, as long as nothing else calls the function when it should not. Your main module can import the module, then call the function (from within an if __name__ == "__main__" block, probably).
It should be something like this:
some_module.py:
def process_males(x):
    ...

def process_females(x):
    ...

args_m = [...]  # these could be defined inside the function below if that makes more sense
args_f = [...]

def do_stuff():
    with mp.Pool(processes=(mp.cpu_count() - 1)) as p:
        p.map_async(process_males, args_m)
        p.map_async(process_females, args_f)
main.py:
import some_module

if __name__ == "__main__":
    some_module.do_stuff()
In your real code you might want to pass some arguments or get a return value from do_stuff (which should also be given a more descriptive name than the generic one I've used in this example).
The idea of if __name__ == '__main__': is to avoid infinite process spawning.
When pickling a function defined in your main script, Python has to figure out which part of your main script is the function's code, and to do so it will basically re-run your script. If the code creating the Pool is in the same script and not protected by the "if main" guard, then by trying to import the function you will try to launch another Pool, which will try to launch another Pool....
Thus you should separate the function definitions from the actual main script:
from multiprocessing import Pool

# define test functions outside main
# so they can be imported without launching
# a new Pool
def test_func():
    pass

if __name__ == '__main__':
    with Pool(4) as p:
        r = p.apply_async(test_func)
        ...  # do stuff
        result = r.get()
I cannot yet comment on the question, but a workaround I have used, and which some have mentioned, is to define the process_males etc. functions in a module that is different from the one where the processes are spawned, and then import that module where the multiprocessing is launched.
I solved it by calling the module's multiprocessing function within the if __name__ == "__main__": block of the main script, since the function that involves multiprocessing is the last step in my module; others could try this if applicable.
I want to use Python's multiprocessing module to make effective use of multiple CPUs to speed up my processing.
All seems to work, however I want to run Pool.map(f, [item, item]) from within a class, in a sub module somewhere deep in my program. The reason is that the program has to prepare the data first and wait for certain events to happen before there is anything to be processed.
The multiprocessing docs say you can only run it from within an if __name__ == '__main__': block. I don't understand the significance of that and tried it anyway, like so:
from multiprocessing import Pool

class Foo(object):
    n = 1000000
    def __init__(self, x):
        self.x = x + 1
        pass
    def run(self):
        for i in range(1, self.n):
            self.x *= 1.0 * i / self.x
        return self

class Bar(object):
    def __init__(self):
        pass
    def go_all(self):
        work = [Foo(i) for i in range(960)]
        def do(obj):
            return obj.run()
        p = Pool(16)
        finished_work = p.map(do, work)
        return

bar = Bar()
bar.go_all()
It indeed doesn't work! I get the following error:
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
I don't quite understand why, as everything seems to be perfectly pickleable. I have the following questions:
Can this be made to work without putting the p.map line in my main program?
If not, can "main" programs be called as sub-routines/modules, such to make it still work?
Is there some handy trick to loop back from a submodule to the main program and run it from there?
I'm on Linux and Python 2.7
I believe you misunderstood the documentation. What the documentation says is to do this:
if __name__ == '__main__':
    bar = Bar()
    bar.go_all()
So your p.map line does not need to be inside your "main function", or whatever. Only the code that actually spawns the subprocesses has to be "guarded". This is unavoidable due to limitations of the Windows OS.
Moreover, the function that you pass to Pool.map has to be importable (functions are pickled simply by their names; the interpreter then has to be able to import them to rebuild the function object when they are passed to the subprocess). So you should move your do function to module level to avoid the pickling error.
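A sketch of that restructuring (my rearrangement of the question's code, not the answerer's): do lives at module level so it can be pickled by name, and the spawning code sits behind the guard:

from multiprocessing import Pool

class Foo(object):
    n = 1000000
    def __init__(self, x):
        self.x = x + 1
    def run(self):
        for i in range(1, self.n):
            self.x *= 1.0 * i / self.x
        return self

def do(obj):            # module level, so it can be pickled by name
    return obj.run()

class Bar(object):
    def go_all(self):
        work = [Foo(i) for i in range(960)]   # Foo instances pickle fine; only the nested do() did not
        p = Pool(16)
        finished_work = p.map(do, work)
        p.close()
        p.join()
        return finished_work

if __name__ == '__main__':
    bar = Bar()
    results = bar.go_all()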
The extra restrictions on the multiprocessing module on ms-windows stem from the fact that it doesn't have the fork system call. On UNIX-like operating systems, fork makes a perfect copy of a process and continues to run it next to the parent process. The only difference between them is that fork returns a different value in the parent and in the child process.
On ms-windows, multiprocessing needs to start a new Python instance using a native method to start processes. Then it needs to bring that Python instance into the same state as the "parent" process.
This means (among other things) that the Python code must be importable without side effects like trying to start yet another process. Hence the use of the if __name__ == '__main__' guard.
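A tiny sketch (mine) that makes the side-effect point visible: under spawn, the unguarded print runs again in every worker process, while the guarded block runs only in the parent:

from multiprocessing import Pool
import os

# Under spawn (the only start method on ms-windows) this line runs again in
# every worker process when the module is re-imported; under fork it runs only once.
print("module-level code executed in pid", os.getpid())

def work(x):
    return x * x

if __name__ == '__main__':
    with Pool(2) as p:            # guarded, so only the parent ever creates the pool
        print(p.map(work, range(4)))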