Sharing an object without IPC communication - python

I want to share a multiprocessing.Array between the parent and its child processes:
self.procs = [MP.Process(target=worker, kwargs=dict(sh_array=array))
              for _ in range(num_workers)]
Does the code above do the right thing? I only want fast IPC based on shared memory / file mapping when I access the shared array. I don't want any message passing or copy-based IPC happening behind the scenes; that would defeat the purpose of the code I'm writing.
Also, I'd like to pass, in the same way, different instances of a class that all refer to the same shared array. Will this work correctly, or should I pass the shared array separately and rebuild the objects manually in the child processes?

Does the code above do the right thing?
Yes, that's exactly what multiprocessing.Array is for and how it is used.
I don't want any IPC communication when I access the shared array.
I don't think the term "IPC" is used correctly here. IPC stands for inter-process communication, and if you have an array that is shared between processes, then anything you write to the array can be read by the other processes. In other words, you are communicating between processes, which is exactly what IPC means. Shared memory is IPC, and if you don't want IPC, then you can't share things between processes.
You may mean something completely different. Maybe you don't want to pass messages back and forth, or something like that?
Also, I'd like to pass, in the same way, different instances of a class that all refer to the same shared array. Will this work correctly, or should I pass the shared array separately and rebuild the objects manually in the child processes?
Either way works. Do whichever option makes the code more natural to read.
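For completeness, here is a minimal runnable sketch of the pattern from the question; the worker body, the index argument and num_workers are illustrative assumptions, not part of the original code:

import multiprocessing as MP

def worker(sh_array, index):
    # Writes go straight into the shared-memory block; no message passing happens here.
    sh_array[index] = index * index

if __name__ == "__main__":
    num_workers = 4
    # 'd' = C double; the buffer lives in shared memory (an anonymous mmap on Linux).
    array = MP.Array('d', num_workers)
    procs = [MP.Process(target=worker, kwargs=dict(sh_array=array, index=i))
             for i in range(num_workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(list(array))  # [0.0, 1.0, 4.0, 9.0]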

Related

Do global variables get replicated in each process when doing multiprocessing in Python?

We parallelized our code by having several functions called through runInParallel, which you will find in this answer: https://stackoverflow.com/a/7207336/720484
All of these functions are supposed to have access to a single global variable which they should read.
This global variable is actually an instance of a class. This instance contains a member variable/attribute and all of the processes read and write to it.
However, that is not what happens. The object (class instance) seems to be replicated, and its attributes are independent in each process. So if one process changes a value, the change is not visible to the other processes.
Is this the expected behavior?
How to overcome it?
Thank you
All child processes will inherit that instance at the moment of forking from the parent process. Any changes made to the instance in the children or in the parent will NOT be seen by the others after the fork.
This is how processes work on Linux: every process has its own memory, protected from other processes (unless you intentionally share it). It is not Python-specific.
What you are looking for is called IPC (Inter-Process Communication). There are multiple ways in which processes can communicate with one another; you might want to use pipes or shared memory.
In Python, read this: https://docs.python.org/2/library/multiprocessing.html#sharing-state-between-processes
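As a minimal sketch of the "sharing state" approach described on that page, a multiprocessing.Value makes writes in the children visible to the parent (the increment function and the counter name are illustrative):

from multiprocessing import Process, Value

def increment(counter):
    # The Value wrapper carries a lock; use it to make the read-modify-write atomic.
    with counter.get_lock():
        counter.value += 1

if __name__ == "__main__":
    counter = Value('i', 0)  # shared C int, initialized to 0
    procs = [Process(target=increment, args=(counter,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)  # 4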

Python - Merge data from multiple thread instances

I am currently working on a project that involves connecting two devices to a python script, retrieving data from them and outputting the data.
Code outline:
• Scans for paired devices
• When a paired device is found, a thread instance is created (two devices connected = two thread instances)
• Data is printed within the thread, i.e. each instance has a separate bundle of data
Basically, when two devices are connected, two instances of my thread class are created. Each thread instance returns a different bundle of data.
My question is: Is there a way I can combine the two bundles of data into one bundle of data?
Any help on this is appreciated :)
I assume you are using the threading module.
Threading in Python
CPython threads do not run Python code in parallel for CPU-bound work. The interpreter uses a GIL (Global Interpreter Lock) for most operations, which effectively serializes the bytecode executed by a Python script. Threading is still good for I/O, however, because other threads can run while one thread waits for I/O.
Idea
Because of the GIL we can just use a standard list to combine our data. The idea is to pass the same list or dictionary to every Thread we create using the args parameter. See pydoc for threading.
Our simple implementation uses two Threads to show how it can be done. In real-world applications you would probably use a thread pool or something similar.
Implementation
from threading import Thread

def worker(data):
    # Retrieve data from the device and append it to the shared list.
    data.append(1)
    data.append(2)

l = []
# Pass the same list to both targets via args.
a = Thread(target=worker, args=(l,))
b = Thread(target=worker, args=(l,))
# Start our threads
a.start()
b.start()
# Join them and print the combined result
a.join()
b.join()
print(l)  # [1, 2, 1, 2]
Further thoughts
If you want to be 100% correct and not rely on the GIL to serialize access to your list, you can protect it with a simple mutex (threading.Lock) or use the queue module, which implements the locking for you (see the sketch below).
Depending on the nature of the data a dictionary might be more convenient to join data by certain keys.
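Here is a sketch of the queue-based variant; the worker just puts a dummy bundle, which you would replace with the real device read:

import queue
from threading import Thread

def worker(out_q):
    # Each thread pushes its own bundle of data onto the same thread-safe queue.
    out_q.put([1, 2])

q = queue.Queue()
threads = [Thread(target=worker, args=(q,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

merged = []
while not q.empty():
    merged.extend(q.get())
print(merged)  # [1, 2, 1, 2]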
Other considerations
Threads should be considered carefully. Alternatives such as asyncio might be better suited.
My general advice: avoid using any of the following:
• threads
• the multiprocessing module in Python
• the futures module (concurrent.futures) in Python
Use a tool like http://python-rq.org/ instead.
Benefits:
• You need to define the input and output data well, since only serializable data can be passed around.
• You have distinct interpreters.
• No deadlocks.
• Easier to debug.

Parallel python loss of data

I have a Python function that creates an object instance and stores it in a global list, and this function is called by a thread. While the thread runs, the list is filled up as it should be, but when the thread exits the list is empty and I have no idea why. Any help would be appreciated.
simulationResults = []

def run(width1, height1, seed1, prob1):
    global simulationResults
    instance = Life(width1, height1, seed1, prob1)
    instance.run()
    simulationResults.append(instance)
this is called in my main by:
for i in range(1, nsims + 1):
    simulations.append(multiprocessing.Process(target=run, args=(width, height, seed, prob)))
    simulations[len(simulations) - 1].start()
for i in simulations:
    i.join()
multiprocessing is based on processes, not threads. The important difference: Each process has a separate memory space, while threads share a common memory space. When first created, a process may (depending on OS, spawn method, etc.) be able to read the same values the parent process has, but if it writes to them, only the local values are changed, not the parent's copy. Only threads can rely on being able to access an arbitrary single shared global variable and have it behave as expected.
I'd suggest looking at either multiprocessing.Pool and its various methods to dispatch tasks and retrieve their results later, or if you must use raw Processes, look at the various ways to exchange data between processes; you can't just assign to a global variable, because globals stop being shared when the new Process is forked/spawned.
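As a rough sketch of the Pool-based approach (the Life class here is a toy stand-in for the one in the question, and the argument values are illustrative; the returned instances must be picklable):

import multiprocessing

class Life:
    # Toy stand-in for the real simulation class from the question.
    def __init__(self, width, height, seed, prob):
        self.params = (width, height, seed, prob)
    def run(self):
        self.result = sum(self.params)

def run(width1, height1, seed1, prob1):
    # Return the instance instead of appending to a global list.
    instance = Life(width1, height1, seed1, prob1)
    instance.run()
    return instance

if __name__ == "__main__":
    nsims = 4
    args = [(10, 10, 42, 0.5)] * nsims  # (width, height, seed, prob)
    with multiprocessing.Pool() as pool:
        # starmap pickles each returned instance back to the parent process.
        simulationResults = pool.starmap(run, args)
    print(len(simulationResults))  # 4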
In your code you are creating new processes rather than threads. When a process is created, it gets copies of the variables in the main process, but those copies are independent of each other. I think it makes sense for your case to use processes rather than threads, because it allows you to utilise multiple cores, as opposed to threads, which are limited to a single core due to the GIL.
You will have to use inter-process communication techniques to communicate between processes. But since in your case the processes are not persistent daemons, it would also be reasonable to have each process write its simulationResults to its own unique file and have the main process read them back.

Concurrently searching a graph in Python 3

I'd like to create a small p2p application that concurrently processes incoming data from other known / trusted nodes (it mostly stores it in an SQLite database). In order to recognize these nodes, upon connecting, each node introduces itself and my application then needs to check whether it knows this node directly or maybe indirectly through another node. Hence, I need to do a graph search which obviously needs processing time and which I'd like to outsource to a separate process (or even multiple worker processes? See my 2nd question below). Also, in some cases it is necessary to adjust the graph, add new edges or vertices.
Let's say I have 4 worker processes accepting and handling incoming connections via asynchronous I/O. What's the best way for them to access (read / modify) the graph? A single queue obviously doesn't do the trick for read access because I need to pass the search results back somehow.
Hence, one way to do it would be another queue which would be filled by the graph searching process and which I could add to the event loop. The event loop could then pass the results to a handler. However, this event/callback-based approach would make it necessary to also always pass the corresponding sockets to the callbacks and thus to the Queue – which is nasty because sockets are not picklable. (Let alone the fact that callbacks lead to spaghetti code.)
Another idea that's just crossed my mind might be to create a pipe to the graph process for each incoming connection and then, on the graph's side, do asynchronous I/O as well. However, in order to avoid callbacks, if I understand correctly, I would need an async I/O library making use of yield from (i.e. tulip / PEP 3156). Are there other options?
Regarding async I/O on the graph's side: This is certainly the best way to handle many incoming requests at once but doing graph lookups is a CPU intensive task, thus could profit from using multiple worker threads or processes. The problem is: Multiple threads allow shared data but Python's GIL somewhat negates the performance benefit. Multiple processes on the other hand don't have this problem but how can I share and synchronize data between them? (For me it seems quite impossible to split up a graph.) Is there any way to solve this problem in a nice way? Also, does it make sense in terms of performance to mix asynchronous I/O with multithreading / multiprocessing?
Answering your last question: it does! But, IMHO, the real question is: does it make sense to mix events and threads? You can check this article about hybrid concurrency models: http://bibliotecadigital.sbc.org.br/download.php?paper=3027
My tip: start with just one process and an event loop, like in the tulip model. I'll try to explain how you can use tulip to combine events and async I/O (with threads or other processes) without any callbacks at all.
You could have something like accept = yield from check_incoming(), where check_incoming is a tulip coroutine, and inside this function you could use loop.run_in_executor() to run your graph search in a thread/process pool (more on this below). run_in_executor() returns a Future, which you can wait on with yield from tasks.wait([future_returned_by_run_in_executor], loop=loop). The next step would be result = future_returned_by_run_in_executor.result(), and finally you return True or False.
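Roughly the same pattern in the modern asyncio API (the successor of tulip), as a minimal sketch; graph_search, the signature of check_incoming and the toy graph are illustrative assumptions:

import asyncio
from concurrent.futures import ProcessPoolExecutor

def graph_search(graph, node_id):
    # CPU-bound lookup; runs in a worker process, so it must be picklable and self-contained.
    return node_id in graph

async def check_incoming(loop, executor, graph, node_id):
    # Off-load the search to the pool and resume when the result is ready.
    future = loop.run_in_executor(executor, graph_search, graph, node_id)
    return await future

async def main():
    loop = asyncio.get_running_loop()
    graph = {"alice", "bob"}  # toy stand-in for the real graph
    with ProcessPoolExecutor() as executor:
        accept = await check_incoming(loop, executor, graph, "alice")
        print(accept)  # True

if __name__ == "__main__":
    asyncio.run(main())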
The process pool requires that only picklable objects can be executed and returned. This requirement is not a problem, but it implies that the graph operation must be self-contained in a function and must obtain the graph instance somehow. The thread pool has the GIL problem, since you mentioned CPU-bound tasks, which can lead to GIL contention, although this was improved by the new GIL in Python 3.x. Both solutions have limitations.
So, instead of a pool, you can have another single process with its own event loop just to manage all the graph work, and connect both processes with a Unix domain socket, for instance.
This second process, just like the first one, must also accept incoming connections (but now they are from a known source) and can use a thread pool just like I said earlier, but it won't "conflict" with the first event-loop process (the one that handles external clients), only with the second event loop. Threads sharing the same graph instance require some locking/unlocking.
Hope it helped!

Python fork(): passing data from child to parent

I have a main Python process, and a bunch of workers created by the main process using os.fork().
I need to pass large and fairly involved data structures from the workers back to the main process. What existing libraries would you recommend for that?
The data structures are a mix of lists, dictionaries, numpy arrays, custom classes (which I can tweak) and multi-layer combinations of the above.
Disk I/O should be avoided. If I could also avoid creating copies of the data -- for example by having some kind of shared-memory solution -- that would be nice too, but is not a hard constraint.
For the purposes of this question, it is mandatory that the workers are created using os.fork(), or a wrapper thereof that would clone the master process's address space.
This only needs to work on Linux.
multiprocessing's Queue implementation works. Internally, it pickles the data and sends it through a pipe.
import os
import multiprocessing

q = multiprocessing.Queue()
if os.fork() == 0:
    # Child: receive and print the value, then exit without running parent-only code.
    print(q.get())
    os._exit(0)
else:
    q.put(5)
# outputs: 5
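If you want to avoid the Queue's internal feeder thread, a multiprocessing.Pipe works the same way with fork(); a minimal sketch (the payload dict is illustrative):

import os
import multiprocessing

# A one-way pipe: the child sends, the parent receives.
recv_end, send_end = multiprocessing.Pipe(duplex=False)

if os.fork() == 0:
    # Child: any picklable structure can be sent back to the parent.
    send_end.send({"result": [1, 2, 3], "status": "ok"})
    os._exit(0)
else:
    print(recv_end.recv())  # {'result': [1, 2, 3], 'status': 'ok'}
    os.wait()               # reap the child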
