How to share object tree with process fork? - python

I don't have much experience with multithreading, and I'm trying to get something like the below working:
from multiprocessing import Process

class Node:
    def __init__(self):
        self.children = {}

class Test(Process):
    def __init__(self, tree):
        super().__init__()
        self.tree = tree

    def run(self):
        # infinite loop which does stuff to the tree
        self.tree.children[1] = Node()
        self.tree.children[2] = Node()

x = Node()
t = Test(x)
t.start()

print(x.children)  # random access to tree
I realize this shouldn't (and doesn't) work for a variety of very sensible reasons, but I'm not sure how to get it to work. Referring to the documentation, it seems that I need to do something with Managers and Proxies, but I honestly have no idea where to start, or whether that is actually what I'm looking for. Could someone provide an example of the above that works?

multiprocessing has limited support for implicitly shared objects, which can even share lists and dicts.
In general, multiprocessing is shared-nothing (after the initial fork) and relies on explicit communication between the processes. This adds overhead (how much really depends on the kind of interaction between the processes), but neatly avoids a lot of the pitfalls of multithreaded programming. The high-level building blocks of multiprocessing favor master/slave models (esp. the Pool class), with masters handing out work items, and slaves operating on them, returning results.
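For illustration, a minimal Pool sketch of that hand-out-work, collect-results pattern (a generic example, not tied to the question's tree):

import multiprocessing

def work(item):
    # stands in for real per-item processing
    return item * item

if __name__ == '__main__':
    with multiprocessing.Pool(4) as pool:
        # the parent hands out items, the workers return results,
        # and no mutable state is shared between processes
        print(pool.map(work, range(10)))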
Keeping state in sync across several processes may, depending on how often it changes, incur prohibitive overhead.
TL;DR: It can be done, but probably shouldn't.
import time, multiprocessing

class Test(multiprocessing.Process):
    def __init__(self, manager):
        super().__init__()
        self.quit = manager.Event()
        self.dict = manager.dict()

    def stop(self):
        self.quit.set()
        self.join()

    def run(self):
        self.dict['item'] = 0
        while not self.quit.is_set():
            time.sleep(1)
            self.dict['item'] += 1

m = multiprocessing.Manager()
t = Test(m)
t.start()

for x in range(10):
    time.sleep(1.2)
    print(t.dict)

t.stop()
The multiprocessing examples show how to create proxies for more complicated objects, which should allow you to implement the tree structure in your question.
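Short of writing a custom proxy, one simpler workaround (a rough sketch, not the Node class from the question) is to flatten the tree into a single Manager dict keyed by node path:

import multiprocessing

def grow(tree):
    # child process: add nodes under the root; each key is a tuple of child
    # keys from the root, so ('1',) and ('1', '2') form a parent/child pair
    tree[('1',)] = 'node'
    tree[('1', '2')] = 'node'

if __name__ == '__main__':
    manager = multiprocessing.Manager()
    tree = manager.dict()          # shared, proxy-backed mapping
    p = multiprocessing.Process(target=grow, args=(tree,))
    p.start()
    p.join()
    print(dict(tree))              # parent sees the nodes added by the child

Every mutation goes through the proxy, so both processes see it, at the cost of a round trip to the manager process per access.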

It seems to me that what you want is actual multithreading, rather than multiprocessing. With threads rather than processes, you can do precisely that, since threads run in the same process, sharing all memory and therefore data with each other.
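A minimal sketch of that threaded variant, reusing the names from the question (the infinite loop is reduced to two insertions for brevity):

import threading

class Node:
    def __init__(self):
        self.children = {}

class Test(threading.Thread):
    def __init__(self, tree):
        super().__init__()
        self.tree = tree

    def run(self):
        # runs in the same address space, so the caller sees these children
        self.tree.children[1] = Node()
        self.tree.children[2] = Node()

x = Node()
t = Test(x)
t.start()
t.join()
print(x.children)   # now populated by the worker thread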

Related

How to periodically call instance method from a separate process

I'm trying to write a class to help with buffering some data that takes a while to read in, and which needs to be periodically updated. The python version is 3.7.
There are 3 criteria I would like the class to satisfy:
Manual update: An instance of the class should have an 'update' function, which reads in new data.
Automatic update: An instance's update method should be periodically run, so the buffered data never gets too old. As reading takes a while, I'd like to do this without blocking the main process.
Self contained: Users should be able to inherit from the class and overwrite the method for refreshing data, i.e. the automatic updating should work out of the box.
I've tried having instances create their own subprocess for running the updates. This causes problems because simply passing the instance to another process seems to create a copy, so the desired instance is not updated automatically.
Below is an example of the approach I'm trying. Can anyone help getting the automatic update to work?
import multiprocessing as mp
import random
import time

def refresh_helper(buffer, lock):
    """Periodically calls the refresh method on a buffer instance."""
    while True:
        with lock:
            buffer._refresh_data()
        time.sleep(10)

class Buffer:
    def __init__(self):
        # Set up a helper process to periodically update the data
        self.lock = mp.Lock()
        self.proc = mp.Process(target=refresh_helper, args=(self, self.lock), daemon=True)
        self.proc.start()

        # Do an initial update
        self.data = None
        self.update()

    def _refresh_data(self):
        """Pretends to read in some data. This would take a while for real data."""
        numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9]
        data = [random.choice(numbers) for _ in range(3)]
        self.data = data

    def update(self):
        with self.lock:
            self._refresh_data()

    def get_data(self):
        return self.data


if __name__ == '__main__':
    buffer = Buffer()
    data_first = buffer.get_data()
    time.sleep(11)
    data_second = buffer.get_data()  # should be different from the first
Here is an approach that makes use of a multiprocessing queue. It's similar to what you had implemented, but your implementation was trying to assign to self within Buffer._refresh_data in both processes. Because self refers to a different Buffer object in each process, they did not affect each other.
To send data from one process to another you need to use shared memory, pipes, or some other such mechanism. Python's multiprocessing library provides multiprocessing.Queue, which simplifies this for us.
To send data from the refresh helper to the main process we need only use queue.put in the helper process and queue.get in the main process. The data being sent must be serializable with Python's pickle module to travel between processes through a multiprocessing.Queue.
Using a multiprocessing.Queue also saves us from having to manage locks ourselves, since the queue handles that internally.
To handle the helper process starting and stopping cleanly for the example, I have added __enter__ and __exit__ methods to make Buffer into a context manager. They can be removed if you would rather manually stop the helper process.
I have also changed your _refresh_data method into _get_new_data, which returns new data half the time, and has no new data to give the other half of the time (i.e. it returns None). This was done to make it more similar to what I imagine a real application for this class would be.
It is important that only static/class methods or external functions are called from the other process, as otherwise they may operate on a self attribute that refers to a completely different instance. The exception is if the attribute is meant to be sent across the process boundary, like with self.queue. That is why the update method can use self.queue to send data to the main process despite self being a different Buffer instance in the other process.
The method get_next_data will return the oldest item found in the queue. If there is nothing in the queue, it will wait until something is added to the queue. You can change this behaviour by giving the call to self.queue.get a timeout (which will cause a queue.Empty exception to be raised if it times out), or by using self.queue.get_nowait (which raises queue.Empty immediately if the queue is empty).
from __future__ import annotations

import multiprocessing as mp
import random
import time

class Buffer:
    def __init__(self):
        self.queue = mp.Queue()
        self.proc = mp.Process(target=self._refresh_helper, args=(self,))
        self.update()

    def __enter__(self):
        self.proc.start()
        return self

    def __exit__(self, ex_type, ex_val, ex_tb):
        self.proc.kill()
        self.proc.join()

    @staticmethod
    def _refresh_helper(buffer: "Buffer", period: float = 1.0) -> None:
        """Periodically calls refresh method in a buffer instance."""
        while True:
            buffer.update()
            time.sleep(period)

    @staticmethod
    def _get_new_data() -> list[int] | None:
        """Pretends to read in some data. This would take a while for real data."""
        if random.randint(0, 1):
            return random.choices(range(10), k=3)
        return None

    def update(self) -> None:
        new_data = self._get_new_data()
        if new_data is not None:
            self.queue.put(new_data)

    def get_next_data(self):
        return self.queue.get()


if __name__ == '__main__':
    with Buffer() as buffer:
        for _ in range(5):
            print(buffer.get_next_data())
Running this code will, as an example, start the helper process, then print out the first 5 pieces of data it gets from the buffer. The first one will be from the update that is performed when the buffer is initialized. The others will all be provided by the helper process running update.
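If blocking forever is not acceptable, here is a small sketch of the non-blocking variants mentioned above, on a bare multiprocessing queue (nothing has been put on it, so both reads fall through to None):

import multiprocessing as mp
import queue

q = mp.Queue()

try:
    item = q.get(timeout=2.0)     # wait up to 2 seconds for data
except queue.Empty:
    item = None                   # nothing arrived in time

try:
    item = q.get_nowait()         # return immediately if data is waiting
except queue.Empty:
    item = None                   # queue was empty right now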
Let's review your criteria:
Manual update: An instance of the class should have an 'update' function, which reads in new data.
The Buffer.update method can be used for this.
Automatic update: An instance's update method should be periodically run, so the buffered data never gets too old. As reading takes a while, I'd like to do this without blocking the main process.
This is done by a helper process which adds data to a queue for later processing. If you would rather throw away old data and only process the newest data, then the queue can be swapped out for a multiprocessing.Array, or whatever other multiprocessing-compatible shared memory wrapper you prefer.
Self contained: Users should be able to inherit from the class and overwrite the method for refreshing data, i.e. the automatic updating should work out of the box.
This works by overwriting the _get_new_data method. So long as it's a static or class method which returns the data, automatic updating should work with it without any changes.
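As a rough illustration, assuming the Buffer class from the example above, here is a hypothetical subclass whose "new data" is simply the current time:

import time

class TimestampBuffer(Buffer):
    @staticmethod
    def _get_new_data():
        return [time.time()]   # always has fresh data to report

if __name__ == '__main__':
    with TimestampBuffer() as tbuf:
        print(tbuf.get_next_data())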
All processes exist in different areas of memory from one another, each of which is meant to be fully separate from all others. As you pointed out, the additional process creates a copy of the instance on which it operates, meaning the updated version exists in a separate memory space from the instance you're running get_data() on. Because of this there is no easy way to perform this operation on this specific instance from a different process.
Given that you want updating the data not to block checking it, you may not want to use threading, since only one thread can execute Python code at a time in any given process. Instead, you need an object that lives in memory shared between all processes. For this you can use a multiprocessing.Value or a multiprocessing.Array, both of which store ctypes objects. Both of these are available in Python 3.7 (see the documentation for that version).
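A minimal sketch of that shared-memory route (not the full Buffer class): a multiprocessing.Array of three integers, refreshed by a helper process and read by the parent without copying:

import multiprocessing as mp
import random
import time

def refresher(shared):
    for _ in range(3):
        with shared.get_lock():            # the Array carries its own lock
            for i in range(len(shared)):
                shared[i] = random.randint(0, 9)
        time.sleep(1)

if __name__ == '__main__':
    data = mp.Array('i', 3)                # 'i' = C int, length 3
    p = mp.Process(target=refresher, args=(data,), daemon=True)
    p.start()
    time.sleep(1.5)
    print(data[:])                         # parent sees the helper's values
    p.join()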
If this approach does not work, consider examining these similar threads:
Sharing a complex object between processes?
multiprocessing: sharing a large read-only object between processes?
Good luck with your project!

Data inconsistency in multithreaded Python code

First time SO user here. I have a problem with my "thread-safe" Python singleton. The class stores data that is only read by the worker threads. The main thread updates some of the data. I was under the impression that a singleton will ensure that my worker threads will have access to the same data. However, in reality, some of the threads still process "old" data (pre data change in the main thread).
I used the singleton implementation from https://refactoring.guru:
import logging
from threading import Lock

logger = logging.getLogger(__name__)

class SingletonMeta(type):
    _instances = {}
    _lck = Lock()

    def __call__(cls, *args, **kwargs):
        with cls._lck:
            if cls not in cls._instances:
                instance = super().__call__(*args, **kwargs)
                cls._instances[cls] = instance
        return cls._instances[cls]

class Storage(metaclass=SingletonMeta):
    def __init__(self, shared=None):
        self._shared = shared

    @property
    def shared(self):
        return self._shared

    def refresh(self, update):
        if update:
            self._shared['managed'] = update
            logger.debug("[+] Shared data refreshed.")
The refresh() method is only called from the main thread. The worker threads read the dictionary via the shared property.
Is this the correct approach? I am afraid not, since the data is not consistent among the worker threads. Can anybody help me understand what I am doing wrong and why the data is not updated for all threads?
Thank you
Update
After more investigation and reading, it turns out the singleton is not the problem. My problem (and the part I didn't mention because I didn't think of it at first) is rooted in the different gunicorn worker processes. The thread(s) in the worker process that applies the data update see the correct data, while the threads in the other worker processes don't. I will have to think about how to synchronize the data between workers.

python multiprocessing and thread safety in shared objects

I am a bit uncertain regarding thread safety and multiprocessing.
From what I can tell, multiprocessing.Pool.map pickles the calling function or object but leaves members passed by reference intact.
This seems like it could be beneficial since it saves memory, but I haven't found any information about thread safety for those objects.
In my case I am trying to read numpy data from disk; however, I want to be able to modify the source without changing the implementation, so I've broken out the reading part into its own classes.
I roughly have the following situation:
import numpy as np
from pathlib import Path
from multiprocessing import Pool

class NpReader():
    def read_row(self, row_index):
        pass

class NpReaderSingleFile(NpReader):
    def read_row(self, row_index):
        return np.load(self._filename_from_row(row_index))

    def _filename_from_row(self, row_index):
        return Path(row_index).with_suffix('.npy')

class NpReaderBatch(NpReader):
    def __init__(self, batch_file, mmap_mode=None):
        self.batch = np.load(batch_file, mmap_mode=mmap_mode)

    def read_row(self, row_index):
        read_index = row_index
        return self.batch[read_index]

class ProcessRow():
    def __init__(self, reader):
        self.reader = reader

    def __call__(self, row_index):
        return self.reader.read_row(row_index).shape

readers = [
    NpReaderSingleFile(),
    NpReaderBatch('batch.npy'),
    NpReaderBatch('batch.npy', mmap_mode='r')
]

res = []
for reader in readers:
    with Pool(12) as mp:
        res.append(mp.map(ProcessRow(reader), range(100000)))
It seems to me that there are alot of things that could go wrong here but I, unfortunately does not have the knowledge to determine what of test for it.
Are there any obvious problems with the above approach?
Some things that occurred to me are:
np.load: it seems to work well for small single files, but how can I test whether it is safe to call from several processes?
Is NpReaderBatch safe or can read_index be modified at the same time by different processes?

python threading - best way to pass arguments to threads

I was wondering what would be the best way, performance-wise, to pass shared arguments to threads (e.g. an input Queue).
I used to pass them as arguments to the __init__ function, because that's what I saw in most of the examples out there on the internet.
But I was wondering whether it would be faster to set them as class variables; is there a reason not to do that?
Here is what I mean:
class Worker(threading.Thread):
    def __init__(self, in_q):
        self.in_q = in_q

or:

class Worker(threading.Thread):
    in_q = None

    def __init__(self):
        ...

...

def main():
    Worker.in_q = Queue.Queue()
Class attributes are sometimes called "static" for a reason: they are part of the static model structure and say something about the class itself, whereas instance attributes say something about a particular object at runtime. An input queue is runtime state, so the static variant does not fit your case.
For example, at some point you may want to have, say, two separate groups of workers running in parallel but consuming different queues; a sketch of that follows below. The design with the static attribute would prevent you from doing that. Basically, it is a slightly disguised global variable, with the same drawbacks (implicit coupling, encapsulation leakage, etc.).
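Here is a minimal sketch of that scenario (hypothetical names): two worker groups running in parallel, each consuming its own queue, which only works because the queue is passed per instance:

import queue
import threading

class Worker(threading.Thread):
    def __init__(self, in_q):
        super().__init__(daemon=True)
        self.in_q = in_q          # each instance gets its own input queue

    def run(self):
        while True:
            item = self.in_q.get()
            print(f'{self.name} got {item!r}')
            self.in_q.task_done()

fast_q, slow_q = queue.Queue(), queue.Queue()
group_a = [Worker(fast_q) for _ in range(2)]
group_b = [Worker(slow_q) for _ in range(2)]
for w in group_a + group_b:
    w.start()

fast_q.put('urgent job')
slow_q.put('batch job')
fast_q.join()                     # wait until each queue's items are processed
slow_q.join()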

Python Multiprocessing - apply class method to a list of objects

Is there a simple way to use Multiprocessing to do the equivalent of this?
for sim in sim_list:
sim.run()
where the elements of sim_list are "simulation" objects and run() is a method of the simulation class which does modify the attributes of the objects. E.g.:
import subprocess

class simulation:
    def __init__(self):
        self.state = {'done': False}
        self.cmd = "program"

    def run(self):
        subprocess.call(self.cmd)
        self.state['done'] = True
All the sims in sim_list are independent, so the strategy does not have to be thread-safe.
I tried the following, which is obviously flawed because the argument is passed by deepcopy and is not modified in-place.
from multiprocessing import Process

for sim in sim_list:
    b = Process(target=simulation.run, args=[sim])
    b.start()
    b.join()
One way to do what you want is to have your computing class (simulation in your case) be a subclass of Process. When initialized properly, instances of this class will run in separate processes and you can set off a group of them from a list just like you wanted.
Here's an example, building on what you wrote above:
import multiprocessing
import os
import random
import sys

class simulation(multiprocessing.Process):
    def __init__(self, name):
        # must call this before anything else
        multiprocessing.Process.__init__(self)

        # then any other initialization
        self.name = name
        self.number = 0.0
        sys.stdout.write('[%s] created: %f\n' % (self.name, self.number))

    def run(self):
        sys.stdout.write('[%s] running ... process id: %s\n'
                         % (self.name, os.getpid()))
        self.number = random.uniform(0.0, 10.0)
        sys.stdout.write('[%s] completed: %f\n' % (self.name, self.number))
Then just make a list of objects and start each one with a loop:
sim_list = []
sim_list.append(simulation('foo'))
sim_list.append(simulation('bar'))

for sim in sim_list:
    sim.start()
When you run this you should see each object run in its own process. Don't forget to call Process.__init__(self) as the very first thing in your class initialization, before anything else.
Obviously I've not included any interprocess communication in this example; you'll have to add that if your situation requires it (it wasn't clear from your question whether you needed it or not). A rough sketch of one way to add it appears at the end of this answer.
This approach works well for me, and I'm not aware of any drawbacks. If anyone knows of hidden dangers which I've overlooked, please let me know.
I hope this helps.
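As promised above, here is a rough sketch of one way to add that interprocess communication, assuming a result queue is enough to report each simulation's outcome back to the parent (the real work is replaced by a random number for brevity):

import multiprocessing
import random

class simulation(multiprocessing.Process):
    def __init__(self, name, result_queue):
        multiprocessing.Process.__init__(self)
        self.name = name
        self.result_queue = result_queue        # shared channel back to the parent

    def run(self):
        number = random.uniform(0.0, 10.0)      # stands in for the real work
        self.result_queue.put((self.name, number))

if __name__ == '__main__':
    results = multiprocessing.Queue()
    sims = [simulation(n, results) for n in ('foo', 'bar')]
    for sim in sims:
        sim.start()
    for _ in sims:
        print(results.get())                    # e.g. ('foo', 3.14...)
    for sim in sims:
        sim.join()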
For those who will be working with large data sets, a lazy iterable would be your solution here, for example Pool.imap:

import multiprocessing as mp

if __name__ == '__main__':
    with mp.Pool(mp.cpu_count()) as pool:
        # imap feeds the sims to the workers one at a time and yields
        # results lazily; consume the iterator to actually run everything
        results = list(pool.imap(simulation.run, sim_list))

Note that imap returns a lazy iterator, so the results must be consumed for all of the work to complete, and the simulation objects themselves must be picklable.
