Python multiprocessing.pool.map, passing arguments to spawned processes

import multiprocessing
import pickle

def content_generator(applications, dict):
    for app in applications:
        yield (app, dict[app])

with open('abc.pickle', 'r') as f:
    very_large_dict = pickle.load(f)
all_applications = set(very_large_dict.keys())

pool = multiprocessing.Pool()
for result in pool.imap_unordered(func_process_application, content_generator(all_applications, very_large_dict)):
    ...  # do some aggregation on result
I have a really large dictionary whose keys are strings (application names) and whose values are information about each application. Since applications are independent, I want to use multiprocessing to process them in parallel. Parallelization works when the dictionary is not that big, but all the Python processes get killed when the dictionary is too big. I used dmesg to check what went wrong and found they were killed because the machine ran out of memory. I ran top while the pool processes were running and found that they all occupy the same amount of resident memory (RES), 3.4G each. This confuses me, since it seems the whole dictionary has been copied into the spawned processes. I thought I was breaking the dictionary up and passing only what is relevant to each spawned process by yielding only dict[app] instead of dict. Any thoughts on what I did wrong?

The comments are becoming impossible to follow, so I'm pasting in my important comment here:
On a Linux-y system, new processes are created by fork(), so get a copy of the entire parent-process address space at the time they're created. It's "copy on write", so is more of a "virtual" copy than a "real" copy, but still ... ;-) For a start, try creating your Pool before creating giant data structures. Then the child processes will inherit a much smaller address space.
Then some answers to questions:
so in python 2.7, there is no way to spawn a new process?
On Linux-y systems, no. The ability to use "spawn" on those was first added in Python 3.4. On Windows systems, "spawn" has always been the only choice (no fork() on Windows).
The big dictionary is passed in to a function as an argument and I can only create the pool inside this function. How would I be able to create the pool before the big dictionary?
As simple as this: make these two lines the first two lines in your program:
import multiprocessing
pool = multiprocessing.Pool()
You can create the pool any time you like (just so long as it exists sometime before you actually use it), and worker processes will inherit the entire address space at the time the Pool constructor is invoked.
ANOTHER SUGGESTION
If you're not mutating the dict after it's created, try using this instead:
def content_generator(dict):
    for app in dict:
        yield app, dict[app]
That way you don't have to materialize a giant set of the keys either. Or, even better (if possible), skip all that and iterate directly over the items:
for result in pool.imap_unordered(func_process_application, very_large_dict.iteritems()):
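Putting those suggestions together, a minimal sketch of the restructured program might look like the following (assuming Python 2.7 as in the question; the body of func_process_application is a placeholder, and the worker function must be defined before the Pool is created so the forked children can see it):

import multiprocessing
import pickle

def func_process_application(item):
    app, info = item
    # ... process one application here (placeholder) ...
    return app

# Create the pool before loading the giant dict, so the forked workers
# inherit a small address space.
pool = multiprocessing.Pool()

with open('abc.pickle', 'rb') as f:
    very_large_dict = pickle.load(f)

for result in pool.imap_unordered(func_process_application,
                                  very_large_dict.iteritems()):
    ...  # do some aggregation on result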

Related

multiprocessing in python - what gets inherited by forkserver process from parent process?

I am trying to use forkserver and I encountered NameError: name 'xxx' is not defined in worker processes.
I am using Python 3.6.4, but I believe the documentation is the same. From https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods it says:
The fork server process is single threaded so it is safe for it to use os.fork(). No unnecessary resources are inherited.
Also, it says:
Better to inherit than pickle/unpickle
When using the spawn or forkserver start methods many types from multiprocessing need to be picklable so that child processes can use them. However, one should generally avoid sending shared objects to other processes using pipes or queues. Instead you should arrange the program so that a process which needs access to a shared resource created elsewhere can inherit it from an ancestor process.
So apparently a key object that my worker processes need to work on did not get inherited by the server process and then passed on to the workers. Why did that happen? I wonder what exactly gets inherited by the forkserver process from the parent process.
Here is what my code looks like:
import multiprocessing
import (a bunch of other modules)

def worker_func(nameList):
    global largeObject
    for item in nameList:
        # get some info from largeObject using item as index
        # do some calculation
    return [item, info]

if __name__ == '__main__':
    result = []
    largeObject  # This is my large object; it's read-only and no modification will be made to it.
    nameList  # A list variable; I need to get info for each item in it from largeObject.
    ctx_in_main = multiprocessing.get_context('forkserver')
    print('Start parallel, using forking/spawning/?:', ctx_in_main.get_context())
    cores = ctx_in_main.cpu_count()
    with ctx_in_main.Pool(processes=4) as pool:
        for x in pool.imap_unordered(worker_func, nameList):
            result.append(x)
Thank you!
Best,
Theory
Below is an excerpt from Bojan Nikolic's blog:
Modern Python versions (on Linux) provide three ways of starting the separate processes:
Fork()-ing the parent process and continuing with the same process image in both parent and child. This method is fast, but potentially unreliable when parent state is complex.
Spawning the child processes, i.e., fork()-ing and then execv-ing to replace the process image with a new Python process. This method is reliable but slow, as the process image is reloaded afresh.
The forkserver mechanism, which consists of a separate Python server that has relatively simple state and which is fork()-ed when a new process is needed. This method combines the speed of fork()-ing with good reliability (because the parent being forked is in a simple state).
Forkserver
The third method, forkserver, is illustrated in the blog post. Note that children retain a copy of the forkserver state. This state is intended to be relatively simple, but it is possible to adjust it through the multiprocessing API using the set_forkserver_preload() method.
Practice
Thus, if you want something to be inherited by child processes from the parent, it must be specified in the forkserver state by means of set_forkserver_preload(module_names), which sets a list of module names to try to load in the forkserver process. I give an example below:
# inherited.py
large_obj = {"one": 1, "two": 2, "three": 3}

# main.py
import multiprocessing
import os
from time import sleep

from inherited import large_obj

def worker_func(key: str):
    print(f'PID={os.getpid()}, obj id={id(large_obj)}')
    sleep(1)
    return large_obj[key]

if __name__ == '__main__':
    result = []
    ctx_in_main = multiprocessing.get_context('forkserver')
    ctx_in_main.set_forkserver_preload(['inherited'])
    cores = ctx_in_main.cpu_count()
    with ctx_in_main.Pool(processes=cores) as pool:
        for x in pool.imap(worker_func, ["one", "two", "three"]):
            result.append(x)
    for res in result:
        print(res)
Output:
# The PIDs are different but the address is always the same
PID=18603, obj id=139913466185024
PID=18604, obj id=139913466185024
PID=18605, obj id=139913466185024
And if we don't use preloading
...
ctx_in_main = multiprocessing.get_context('forkserver')
# ctx_in_main.set_forkserver_preload(['inherited'])
cores = ctx_in_main.cpu_count()
...
# The PIDs are different, the addresses are different too
# (but sometimes they can coincide)
PID=19046, obj id=140011789067776
PID=19047, obj id=140011789030976
PID=19048, obj id=140011789030912
So after an inspiring discussion with Alex I think I have sufficient info to address my question: what exactly gets inherited by forkserver process from parent process?
Basically, when the server process starts, it imports your main module, and everything before if __name__ == '__main__' is executed. That's why my code doesn't work: large_object is nowhere to be found in the server process, nor in any of the worker processes that fork from the server process.
Alex's solution works because large_object now gets imported into both the main process and the server process, so every worker forked from the server also gets large_object. Combined with set_forkserver_preload(modules_names), all workers might even get the same large_object, from what I saw. The reason for using forkserver is explicitly explained in the Python documentation and in Bojan's blog:
When the program starts and selects the forkserver start method, a server process is started. From then on, whenever a new process is needed, the parent process connects to the server and requests that it fork a new process. The fork server process is single threaded so it is safe for it to use os.fork(). No unnecessary resources are inherited.
The forkserver mechanism, which consists of a separate Python server that has relatively simple state and which is fork()-ed when a new process is needed. This method combines the speed of fork()-ing with good reliability (because the parent being forked is in a simple state).
So the concern here is more about being on the safe side.
On a side note, if you use fork as the start method you don't need to import anything, since every child process gets a copy of the parent process's memory (or a reference, if the system uses COW (copy-on-write); please correct me if I am wrong). In this case, using global large_object will give you access to large_object in worker_func directly.
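For illustration, here is a minimal sketch of that fork variant (assuming a Linux-y system where the 'fork' start method is available; large_obj is just a stand-in dict, as in the example above):

import multiprocessing

# Built before the pool exists, so forked children inherit it.
large_obj = {"one": 1, "two": 2, "three": 3}

def worker_func(key):
    # No import or preload needed: the forked child already has a
    # copy-on-write copy of large_obj from the parent.
    return large_obj[key]

if __name__ == '__main__':
    ctx = multiprocessing.get_context('fork')
    with ctx.Pool(processes=2) as pool:
        print(pool.map(worker_func, ["one", "two", "three"]))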
However, forkserver might not be a suitable approach for me, because the issue I am facing is memory overhead. All the operations that get me large_object in the first place are memory-consuming, so I don't want any unnecessary resources in my worker processes.
If I put all those calculations directly into inherited.py, as Alex suggested, they will be executed twice (once when I import the module in main and once when the server imports it; maybe even more when worker processes are born?). That would be fine if I just wanted a safe, single-threaded process that workers can fork from, but since I am trying to get workers to not inherit unnecessary resources and only get large_object, this won't work.
And putting those calculations inside if __name__ == '__main__' in inherited.py won't work either, since then none of the processes will execute them, including main and the server.
So, as a conclusion, if the goal here is to get workers to inherit minimal resources, I am better off breaking my code in two: run calculation.py first, pickle the large_object, exit the interpreter, and start a fresh one that loads the pickled large_object. Then I can just go nuts with either fork or forkserver.
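A rough sketch of that two-stage split, under the assumptions above (the file names and build_large_object() are hypothetical, not from the question):

# calculation.py -- run first, in its own interpreter, then exit
import pickle

large_object = build_large_object()   # hypothetical stand-in for the memory-hungry setup
with open('large_object.pickle', 'wb') as f:
    pickle.dump(large_object, f, protocol=pickle.HIGHEST_PROTOCOL)

# analysis.py -- started afterwards in a fresh interpreter; it loads the
# pickle at module level and then uses fork (or forkserver with preload)
# exactly as in the sketches above
import pickle

with open('large_object.pickle', 'rb') as f:
    large_object = pickle.load(f)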

Python - Merge data from multiple thread instances

I am currently working on a project that involves connecting two devices to a python script, retrieving data from them and outputting the data.
Code outline:
• Scans for paired devices
• Each paired device found creates a thread instance (two devices connected = two thread instances)
• Data is printed within the thread, i.e. each instance has a separate bundle of data
Basically, when two devices are connected, two instances of my thread class are created. Each thread instance returns a different bundle of data.
My question is: Is there a way I can combine the two bundles of data into one bundle of data?
Any help on this is appreciated :)
I assume you are using the threading module.
Threading in Python
Python is not multithreaded for CPU-bound work. The interpreter still uses the GIL (Global Interpreter Lock) for most operations, which effectively serializes operations in a Python script. Threading is good for I/O, however, as other threads can be woken up while one thread waits for I/O.
Idea
Because of the GIL we can just use a standard list to combine our data. The idea is to pass the same list or dictionary to every Thread we create using the args parameter. See pydoc for threading.
Our simple implementation uses two Threads to show how it can be done. In real-world applications you would probably use a thread pool or something similar.
Implementation
from threading import Thread

def worker(data):
    # retrieve data from device
    data.append(1)
    data.append(2)

l = []

# Let's pass our list to the target via args.
a = Thread(target=worker, args=(l,))
b = Thread(target=worker, args=(l,))

# Start our threads
a.start()
b.start()

# Join them and print result
a.join()
b.join()
print(l)
Further thoughts
If you want to be 100% correct and not rely on the GIL to serialize access to your list, you can use a simple mutex (lock) around the appends, or use the queue module, which implements correct locking for you.
Depending on the nature of the data a dictionary might be more convenient to join data by certain keys.
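Here is a sketch of the same merge using queue.Queue, which handles the locking for you (the worker body is a placeholder for the real device I/O, and device_id is a made-up key):

from queue import Queue
from threading import Thread

def worker(device_id, out_q):
    # read from the device here, then hand the whole bundle to the queue
    out_q.put((device_id, [1, 2]))

q = Queue()
threads = [Thread(target=worker, args=(i, q)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# merge: one bundle per device, keyed by device id
merged = {}
while not q.empty():
    device_id, bundle = q.get()
    merged[device_id] = bundle
print(merged)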
Other considerations
Threads should be considered carefully. Alternatives such as asyncio might be better suited.
My general advice: Avoid using any of these things
avoid threads
avoid the multiprocessing module in Python
avoid the futures module in Python.
Use a tool like http://python-rq.org/
Benefits:
You need to define the input and output data well, since only serializable data can be passed around.
You have distinct interpreters.
No deadlocks.
Easier to debug.

Parallel python loss of data

I have a python function that creates and stores an object instance in a global list, and this function is called by a thread. While the thread runs the list is filled up as it should be, but when the thread exits the list is empty and I have no idea why. Any help would be appreciated.
simulationResults = []

def run(width1, height1, seed1, prob1):
    global simulationResults
    instance = Life(width1, height1, seed1, prob1)
    instance.run()
    simulationResults.append(instance)
this is called in my main by:
for i in range(1, nsims + 1):
    simulations.append(multiprocessing.Process(target=run, args=(width, height, seed, prob)))
    simulations[(len(simulations) - 1)].start()
for i in simulations:
    i.join()
multiprocessing is based on processes, not threads. The important difference: Each process has a separate memory space, while threads share a common memory space. When first created, a process may (depending on OS, spawn method, etc.) be able to read the same values the parent process has, but if it writes to them, only the local values are changed, not the parent's copy. Only threads can rely on being able to access an arbitrary single shared global variable and have it behave as expected.
I'd suggest looking at either multiprocessing.Pool and its various methods to dispatch tasks and retrieve their results later, or if you must use raw Processes, look at the various ways to exchange data between processes; you can't just assign to a global variable, because globals stop being shared when the new Process is forked/spawned.
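A minimal sketch of the Pool-based version of this code (Life, width, height, seed, prob and nsims are taken from the question; Life instances must be picklable for the results to travel back to the parent):

import multiprocessing

def run(width1, height1, seed1, prob1):
    instance = Life(width1, height1, seed1, prob1)
    instance.run()
    return instance   # return the result instead of appending to a global

if __name__ == '__main__':
    with multiprocessing.Pool() as pool:
        simulationResults = pool.starmap(
            run, [(width, height, seed, prob)] * nsims)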
In your code you are creating new processes rather than threads. When a process is created, the new process gets deep copies of the variables in the main process, but they are independent of each other. I think for your case it makes sense to use processes rather than threads, because it allows you to utilise multiple cores, as opposed to threads, which are limited to a single core due to the GIL.
You will have to use interprocess communication techniques to communicate between processes. But since in your case the processes are not persistent daemons, it would make sense to have each process write its simulationResults to its own unique file and then read them back from the main process.
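A sketch of that file-per-process variant (the result_N.pickle file names are made up; Life and the parameters come from the question):

import multiprocessing
import pickle

def run(index, width1, height1, seed1, prob1):
    instance = Life(width1, height1, seed1, prob1)
    instance.run()
    # each process writes its own result file instead of touching a global
    with open('result_%d.pickle' % index, 'wb') as f:
        pickle.dump(instance, f)

if __name__ == '__main__':
    procs = [multiprocessing.Process(target=run, args=(i, width, height, seed, prob))
             for i in range(nsims)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # read the per-process results back in the main process
    simulationResults = []
    for i in range(nsims):
        with open('result_%d.pickle' % i, 'rb') as f:
            simulationResults.append(pickle.load(f))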

How to pass variables in parent to subprocess in python?

I am trying to have a parent python script send variables to a child script to help me speed up and automate video analysis.
I am now using the subprocess.Popen() call to start up 6 instances of a child script, but cannot find a way to pass variables and modules already defined in the parent to the child. For example, the parent file would have:
import os
import subprocess
import sys

parent_dir = os.path.realpath(sys.argv[0])
subprocess.Popen([sys.executable, 'analysis.py'])
but then import sys, import subprocess, and parent_dir have to be defined again in "analysis.py". Is there a way to pass them to the child?
In short, what I am trying to achieve is: I have a folder with a couple hundred video files. I want the parent python script to list the video files and start up to 6 parallel instances of an analysis script that each analyse one video file. If there are no more files to be analysed the parent file stops.
The simple answer here is: don't use subprocess.Popen, use multiprocessing.Process. Or, better yet, multiprocessing.Pool or concurrent.futures.ProcessPoolExecutor.
With subprocess, your program's Python interpreter doesn't know anything about the subprocess at all; for all it knows, the child process is running Doom. So there's no way to directly share information with it.* But with multiprocessing, Python controls launching the subprocess and getting everything set up so that you can share data as conveniently as possible.
Unfortunately "as conveniently as possible" still isn't 100% as convenient as all being in one process. But what you can do is usually good enough. Read the section on Exchanging objects between processes and the following few sections; hopefully one of those mechanisms will be exactly what you need.
But, as I implied at the top, in most cases you can make it even simpler, by using a pool. Instead of thinking about "running 6 processes and sharing data with them", just think about it as "running a bunch of tasks on a pool of 6 processes". A task is basically just a function—it takes arguments, and returns a value. If the work you want to parallelize fits into that model—and it sounds like your work does—life is as simple as could be. For example:
import multiprocessing
import os
import sys

import analysis

parent_dir = os.path.realpath(sys.argv[0])
# folderpath is the directory that holds the video files
paths = [os.path.join(folderpath, file)
         for file in os.listdir(folderpath)]

with multiprocessing.Pool(processes=6) as pool:
    results = pool.map(analysis.analyze, paths)
If you're using Python 3.2 or earlier (including 2.7), you can't use a Pool in a with statement. I believe you want this:**
pool = multiprocessing.Pool(processes=6)
try:
    results = pool.map(analysis.analyze, paths)
finally:
    pool.close()
    pool.join()
This will start up 6 processes,*** then tell the first one to do analysis.analyze(paths[0]), the second to do analysis.analyze(paths[1]), etc. As soon as any of the processes finishes, the pool will give it the next path to work on. When they're all finished, you get back a list of all the results.****
Of course this means that the top-level code that lived in analysis.py has to be moved into a function def analyze(path): so you can call it. Or, even better, you can move that function into the main script, instead of a separate file, if you really want to save that import line.
* You can still indirectly share information by, e.g., marshaling it into some interchange format like JSON and pass it via the stdin/stdout pipes, a file, a shared memory segment, a socket, etc., but multiprocessing effectively wraps that up for you to make it a whole lot easier.
** There are different ways to shut a pool down, and you can also choose whether or not to join it immediately, so you really should read up on the details at some point. But when all you're doing is calling pool.map, it really doesn't matter; the pool is guaranteed to shut down and be ready to join nearly instantly by the time the map call returns.
*** I'm not sure why you wanted 6; most machines have 4, 8, or 16 cores, not 6; why not use them all? The best thing to do is usually to just leave out the processes=6 entirely and let multiprocessing ask your OS how many cores to use, which means it'll still run at full speed on your new machine with twice as many cores that you'll buy next year.
**** This is slightly oversimplified; usually the pool will give the first process a batch of files, not one at a time, to save a bit of overhead, and you can manually control the batching if you need to optimize things or sequence them more carefully. But usually you don't care, and this oversimplification is fine.
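For completeness, here is a sketch of the same approach using concurrent.futures.ProcessPoolExecutor, which is mentioned at the top of this answer (analysis.analyze and folderpath are the same assumed names as in the earlier snippet):

import concurrent.futures
import os

import analysis

if __name__ == '__main__':
    paths = [os.path.join(folderpath, file)
             for file in os.listdir(folderpath)]
    # Defaults to one worker per core, like Pool() with no argument.
    with concurrent.futures.ProcessPoolExecutor() as executor:
        results = list(executor.map(analysis.analyze, paths))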

Python Manager Dictionary Efficiency

I have an object-oriented Python program where I am doing certain data operations in each object using multiprocessing. I am storing each object in a common manager dictionary. When I want to update an object, first I retrieve the object from the dictionary, and after the update I put it back. My class structure looks like:
from src.data_element import Data_element
from multiprocessing import freeze_support, Process, Manager
import pandas as pd

class Data_Obj(Data_element):
    def __init__(self, dataset_name, name_wo_fields, fields):
        Data_element.__init__(self, dataset_name, name_wo_fields, fields)
        self.depends = ['data_1', 'data_2']

    def calc(self, obj_dict_manager):
        data_1 = obj_dict_manager['data_1']
        data_2 = obj_dict_manager['data_2']
        self.df = pd.merge(
            data_1.df,
            data_2.df,
            on='week',
            suffixes=('', '_y')
        )[['week', 'val']]

def calculate(obj_dict_manager, data):
    data_obj = obj_dict_manager[data]
    data_obj.calc(obj_dict_manager)
    obj_dict_manager[data] = data_obj

if __name__ == '__main__':
    freeze_support()
    manager = Manager()
    obj_dict_manager = manager.dict()
    obj_dict_manager = create_empty_objects(obj_dict_manager)
    joblist = []
    for data in obj_dict_manager.keys():
        p = Process(target=calculate, args=(obj_dict_manager, data))
        joblist.append(p)
        p.start()
    for job in joblist:
        job.join()
During these operations, a significant amount of time is spent on
data_1 = obj_dict_manager['data_1']
data_2 = obj_dict_manager['data_2']
i.e., about 1 second is spent retrieving the objects from the manager dictionary, and the rest of the calculation takes another 1 second.
Is there any way I can reduce the time spent here? I will be doing thousands of such operations, and performance is critical for me.
An Important Note
You're doing something potentially dangerous: as you iterate over the keys in obj_dict_manager, you're launching processes that modify the very same dictionary. You should never modify something while you're iterating over it, and doing the modifications asynchronously from subprocesses could introduce especially strange results.
Possible Causes of your Issue
1) I can't tell how many objects are actually stored in your shared dictionary (because we don't have the code for create_empty_objects()), but if it is a significant number, your subprocesses may be competing for access to the shared dictionary. In particular, since you have both reads and writes to the dictionary, it is going to be locked by one process or another a lot of the time.
2) Since we can't see how many keys are in your shared dictionary, we also can't see how many processes are being launched. If you're creating more processes than cores on your system, you may be subjecting your CPU to a lot of context switching, which is going to slow everything down.
3) A combination of #1 & #2 - This could be especially problematic if the manager grants a lock to one process, then that process gets put to sleep because you have dozens of processes competing for CPU time on an 8-core machine, and now everyone has to wait until that process wakes up and releases the lock.
How to Fix It
1) If your issue is skewed towards #1, consider splitting up your dictionary instead of using a shared one: pass a chunk of the dictionary to each subprocess, let them do whatever they need, have them return the resulting dictionary, then recombine all the returned dictionaries as the processes complete (see the sketch after this list). Something like Pool.map_async() may work better for you if you can divide the dictionary up.
2) In most cases, try to limit the number of processes you spawn to the number of cores you have on your system, sometimes even fewer if you have a lot of other stuff running at the same time. An exception is when you're doing a lot of parallel processing AND you expect the subprocesses to get blocked a lot, such as when doing IO in parallel.
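A sketch of fix #1 under those assumptions (calc_one_chunk and the chunking itself are hypothetical, not from the question, and it assumes each object's dependencies live in the same chunk):

import multiprocessing

def calc_one_chunk(chunk):
    # chunk is a plain dict {name: Data_Obj}; each worker updates its own
    # copy and returns it, so no shared Manager dict is needed.
    for name, obj in chunk.items():
        obj.calc(chunk)   # assumes the object's dependencies are in the same chunk
    return chunk

if __name__ == '__main__':
    chunks = [...]        # split the full dictionary into one plain dict per worker
    with multiprocessing.Pool() as pool:
        combined = {}
        for updated in pool.map(calc_one_chunk, chunks):
            combined.update(updated)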
