AttributeError 'DupFd' in 'multiprocessing.resource_sharer' | Python multiprocessing + threading

I'm trying to communicate between multiple threading.Thread(s) doing I/O-bound tasks and multiple multiprocessing.Process(es) doing CPU-bound tasks. Whenever a thread finds work for a process, the work is put on a multiprocessing.Queue, together with the sending end of a multiprocessing.Pipe(duplex=False). The processes then do their part and send results back to the threads via the Pipe. This procedure works in roughly 70% of the cases; in the other 30% I receive an AttributeError: Can't get attribute 'DupFd' on <module 'multiprocessing.resource_sharer' from '/usr/lib/python3.5/multiprocessing/resource_sharer.py'>
To reproduce:
import multiprocessing
import threading
import time

def thread_work(work_queue, pipe):
    while True:
        work_queue.put((threading.current_thread().name, pipe[1]))
        received = pipe[0].recv()
        print("{}: {}".format(threading.current_thread().name, threading.current_thread().name == received))
        time.sleep(0.3)

def process_work(work_queue):
    while True:
        thread, pipe = work_queue.get()
        pipe.send(thread)

work_queue = multiprocessing.Queue()

for i in range(0, 3):
    receive, send = multiprocessing.Pipe(duplex=False)
    t = threading.Thread(target=thread_work, args=[work_queue, (receive, send)])
    t.daemon = True
    t.start()

for i in range(0, 2):
    p = multiprocessing.Process(target=process_work, args=[work_queue])
    p.daemon = True
    p.start()

time.sleep(5)
I had a look at the multiprocessing source code, but couldn't understand why this error occurs.
I tried using queue.Queue instead, and a Pipe with duplex=True (the default), but couldn't find a pattern in the error. Does anyone have a clue how to debug this?

You are forking an already multi-threaded main process here. That is known to be problematic in general.
It is in fact problem-prone (and not just in Python). The rule is "thread after you fork, not before". Otherwise, the locks used by the thread executor will get duplicated across processes. If one of those processes dies while it has the lock, all of the other processes using that lock will deadlock. - Raymond Hettinger
The trigger for the error you get is apparently that duplication of the file descriptor for the pipe fails in the child process.
To resolve this issue, either create your child processes while your main process is still single-threaded, or use another start_method for creating new processes, such as 'spawn' (the default on Windows) or 'forkserver', if available.
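A minimal sketch of the first option: reorder your reproduction script so the worker processes are forked while the main process is still single-threaded, and only then start the threads (thread_work, process_work and the imports as defined in the question above).

# assumes the imports and the thread_work / process_work functions from the script above
work_queue = multiprocessing.Queue()

for i in range(0, 2):                     # fork the worker processes first, while no threads exist yet
    p = multiprocessing.Process(target=process_work, args=[work_queue])
    p.daemon = True
    p.start()

for i in range(0, 3):                     # only now create the I/O threads
    receive, send = multiprocessing.Pipe(duplex=False)
    t = threading.Thread(target=thread_work, args=[work_queue, (receive, send)])
    t.daemon = True
    t.start()

time.sleep(5)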
forkserver
When the program starts and selects the forkserver start method, a server process is started. From then on, whenever a new process is needed, the parent process connects to the server and requests that it fork a new process. The fork server process is single threaded so it is safe for it to use os.fork(). No unnecessary resources are inherited.
Available on Unix platforms which support passing file descriptors over Unix pipes. docs
You can specify another start_method with:
multiprocessing.set_start_method(method)
Set the method which should be used to start child processes. method can be 'fork', 'spawn' or 'forkserver'.
Note that this should be called at most once, and it should be protected inside the if __name__ == '__main__' clause of the main module. docs
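For example, a minimal sketch (assuming a Unix platform where 'forkserver' is available): select the start method once, at the top of the main module, before any queues, pipes, threads, or processes are created.

import multiprocessing

if __name__ == '__main__':
    multiprocessing.set_start_method('forkserver')   # or 'spawn'
    # ... then create the Queue, Pipes, processes, and threads as in the script above ...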
For a benchmark of the specific start_methods (on Ubuntu 18.04) look here.

Related

Multithreading: How to avoid hanging caused by worker thread erroring out

I created a script that executes multiple threads, where each thread makes a request to an API to retrieve some data. Unfortunately, one of the threads might run into a disconnection error (perhaps due to overloading the site's API), and as a result the entire Python script hangs indefinitely. How can I force the script to exit gracefully when one of the worker threads has a disconnection error? I thought using terminate would close the thread.
My code:
import sys
from multiprocessing import pool
# TrThDownload is defined elsewhere in the script

runId = sys.argv[1]
trth = TrThDownload(runId)
data = trth.data
concurrences = min(len(data), 10)
p = pool.ThreadPool(concurrences)
p.map(trth.runDownloader, data)
p.terminate()
p.close()
p.join()
You really should try async programming. I prefer gevent. At the top of your script, before any other imports, just do this:
from gevent import monkey
monkey.patch_all()
Also, don't terminate() the pool before joining it; after map() returns, just close() it and then join().
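Putting that advice together with the question's script, a hedged sketch (TrThDownload and runDownloader are the asker's own names, not a library API):

from gevent import monkey
monkey.patch_all()          # must run before the other imports

import sys
from multiprocessing import pool

runId = sys.argv[1]
trth = TrThDownload(runId)  # user-defined class from the question
data = trth.data

p = pool.ThreadPool(min(len(data), 10))
try:
    p.map(trth.runDownloader, data)
finally:
    p.close()               # close first, then join; no terminate()
    p.join()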

What happened if I fork a process which has a daemon thread with it?

My question is: I have a parent process A, and I set up a daemon thread as an RPC server, like TaskRPCServer(Thread). Then I would like to spawn a child process using a Python multiprocessing.Process object, e.g. B = Process(); B.start(). Will B have the same daemon thread as A? Is there a way I can force B not to have the daemon thread running in A? Because there are some cases where a lot of processes will listen to the RPC ports. Or, if my design was wrong, how can I do it correctly? Thank you!
When you fork a child, it starts with only one thread. This is defined by POSIX [1]:
A process shall be created with a single thread. If a multi-threaded process calls fork(), the new process shall contain a replica of the calling thread and its entire address space, possibly including the states of mutexes and other resources.
So, your child process will not have the daemon thread. You don't have to do anything to force it not to.
You can test this yourself pretty easily:
import threading
import os
import time

def threadfunc():
    while True:
        print(os.getpid())
        time.sleep(1)

def main():
    t = threading.Thread(target=threadfunc)
    t.start()

    pid = os.fork()
    if pid:
        print(f'Forked {pid}; sleep time')
        time.sleep(5)
    else:
        print(f'Forked child; sleep time')
        time.sleep(5)

main()
If you run this, you'll see something like this:
12345
Forked 12346; sleep time
Forked child; sleep time
12345
12345
12345
Notice that the daemon thread printed 12345, the PID of the parent process, 5 times, and nobody ever printed 12346, the PID of the child process.
But meanwhile, even though the problem you're asking about doesn't exist, there are sometimes problems mixing fork and threads, and multiprocessing gives you a way around those problems, as described in Contexts and start methods.
multiprocessing.set_start_method('forkserver') guarantees that your child processes are spun up from a clean state as far as threading, mutexes, etc. are concerned [2]. It also protects you from accidentally sharing file handles. (The third option, spawn, is usually only needed if you want to make sure your code runs the same on Unix and Windows.)
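If you don't want to change the global default, a per-context start method does the same thing for just the processes you create from that context. A minimal sketch (some_worker is a placeholder name, not from the question):

import multiprocessing

def some_worker():
    print('running in a forkserver child')

if __name__ == '__main__':
    ctx = multiprocessing.get_context('forkserver')
    p = ctx.Process(target=some_worker)
    p.start()
    p.join()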
[1] This may not be true for some very old Unix platforms, but it will be true for any macOS, Linux, *BSD, etc. that support POSIX threading, at least back to 2004 and probably earlier, but I can't find the older POSIX/SUS specs free and legal online anywhere…
[2] Besides the problems the POSIX docs warn about, there are murkier problems with things like trying to run a Cocoa main loop while multiprocessing from a background thread.

threading and multithreading in python with an example

I am a beginner in Python and unable to get an idea about threading. Could someone please explain threading and multithreading in Python using a simple example?
-Thanks
Here is Alex Martelli's answer about multithreading, as linked above.
He uses a simple program that tries some URLs then returns the contents of first one to respond.
import Queue
import threading
import urllib2

# called by each thread
def get_url(q, url):
    q.put(urllib2.urlopen(url).read())

theurls = ["http://google.com", "http://yahoo.com"]
q = Queue.Queue()

for u in theurls:
    t = threading.Thread(target=get_url, args=(q, u))
    t.daemon = True
    t.start()

s = q.get()
print s
This is a case where threading is used as a simple optimization: each subthread is waiting for a URL to resolve and respond, in order to put its contents on the queue; each thread is a daemon (won't keep the process up if main thread ends -- that's more common than not); the main thread starts all subthreads, does a get on the queue to wait until one of them has done a put, then emits the results and terminates (which takes down any subthreads that might still be running, since they're daemon threads).
Proper use of threads in Python is invariably connected to I/O operations (since CPython doesn't use multiple cores to run CPU-bound tasks anyway, the only reason for threading is not blocking the process while there's a wait for some I/O). Queues are almost invariably the best way to farm out work to threads and/or collect the work's results, by the way, and they're intrinsically threadsafe so they save you from worrying about locks, conditions, events, semaphores, and other inter-thread coordination/communication concepts.
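The snippet above is Python 2; a roughly equivalent Python 3 sketch of the same pattern uses the renamed queue and urllib.request modules:

import queue
import threading
import urllib.request

# called by each thread
def get_url(q, url):
    q.put(urllib.request.urlopen(url).read())

theurls = ["http://google.com", "http://yahoo.com"]
q = queue.Queue()

for u in theurls:
    t = threading.Thread(target=get_url, args=(q, u))
    t.daemon = True
    t.start()

s = q.get()      # blocks until the first thread puts a result
print(s)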

Blocking threads in Python

I'm debugging some Python code which has a blocking issue. I have a hypothesis about what is going on there, but I do not know Python's thread mechanisms well enough to verify it.
Here is the code:
class Executor:
    def execute_many(self, commands):
        with_processes = zip(commands, self.process_cycle)

        def write():
            for command, process in with_processes:
                send_command_to_process(process, command)

        writing_thread = threading.Thread(target=write)
        writing_thread.start()

        for _, process in with_processes:
            yield receive_result_from_process(process)

        writing_thread.join()
and somewhere else:
foos = [make_foo(result) for result in executor.execute_many(commands)]
The process_cycle of Executor yields subprocess.Popen objects. The send_command_to_process and receive_result_from_process communicate with these processes by pipes.
The issue I'm debugging is that from time to time this code freezes: all Popen processes and the writing_thread are blocked on flushing after writing to the pipes.
I did not expect it to happen, since (even if buffers are full) the execute_many generator will yield receive_result_from_process(process) and unblock one of the processes (which does not happen - execute_many freezes inside the loop).
So I came up with a hypothesis, that if writing_thread is blocked by a full pipe buffer, the main thread is blocked too (they are in the same process).
Is that possible? If so, is it a Python feature or a Linux feature?
TL;DR
If a Python process has two threads and one of them is blocked on flushing after write to a full pipe buffer, could that block the other thread?
If so, is it a Python feature or a Linux feature?
There is something called the Global Interpreter Lock (GIL) in CPython, which prevents Python bytecode from being interpreted in more than one thread at a time.
Each thread has to release the GIL so that another one can execute, and the GIL is released automatically around blocking operations such as writes to a full pipe.
So if one thread is blocked on I/O, the other thread can certainly continue executing; a blocked write in writing_thread does not, by itself, stop the main thread.
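A minimal sketch to convince yourself (on Linux/macOS): a writer thread blocks on a full OS pipe while the main thread keeps running.

import os
import threading
import time

r, w = os.pipe()                      # the kernel pipe buffer is small (~64 KiB on Linux)

def writer():
    os.write(w, b'x' * (1 << 20))     # far more than the buffer: this call blocks
    print('writer finished')

t = threading.Thread(target=writer, daemon=True)
t.start()

for i in range(3):
    print('main thread still running:', i)   # keeps printing while the writer is blocked
    time.sleep(0.5)
# the writer is a daemon thread, so the program exits even though it is still blocked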

Python, Using Remote Managers and Multiprocessing

I want to use the remote manager functions in the multiprocessing module to distribute work among many machines. I know there are 3rd-party modules, but I want to stick with the core library as much as possible. I know that for a desktop (single machine) you can use the multiprocessing.Pool class to limit the number of CPUs, but I have a couple of questions about remote managers.
I have the following code for the remote manager:
from multiprocessing.managers import BaseManager
import Queue
queue = Queue.Queue()
class QueueManager(BaseManager): pass
QueueManager.register('get_queue', callable=lambda:queue)
m = QueueManager(address=('', 50000), authkey='abracadabra')
s = m.get_server()
s.serve_forever()
This works great, and I can even submit a job into the Queue using the following code:
QueueManager.register('get_queue')
m = QueueManager(address=('machinename', 50000), authkey='abracadabra')
m.connect()
queue = m.get_queue()
queue.put('hello')
You can also use queue.get() to get a single entry from the queue.
How do you get the items in the queue? When I tried to iterate through the queue, I enter an infinite loop.
On the workers, can you limit each machine to 1 job per machine?
Since this method seems to be a pull method, where the workers need to check whether a job exists, can there be a push method where the multiprocessing server triggers the workers?
Iterating over a queue is the same as doing:
while True:
    elem = queue.get()  # queue empty -> it blocks!!!
An elegant way to "iterate" over a queue and shut down your worker process when there are no more jobs to execute is to use None (or something else) as a sentinel and use iter(callable, sentinel):
for job in iter(queue.get, None):
    # execute the calculation
    output_queue.put(result)

# shutdown the worker process
Which is equivalent to:
while True:
    job = queue.get()
    if job is None:
        break
    # execute the calculation
    output_queue.put(result)

# shutdown the worker process
Note that you have to insert one sentinel into the queue for each worker subprocess, otherwise there will be subprocesses left waiting for it forever.
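A minimal self-contained sketch of the pattern (using local multiprocessing.Queue objects rather than the remote manager, and a placeholder calculation):

import multiprocessing

def worker(job_queue, output_queue):
    for job in iter(job_queue.get, None):   # stops at the None sentinel
        output_queue.put(job * 2)           # placeholder "calculation"

if __name__ == '__main__':
    job_queue = multiprocessing.Queue()
    output_queue = multiprocessing.Queue()

    workers = [multiprocessing.Process(target=worker, args=(job_queue, output_queue))
               for _ in range(2)]
    for w in workers:
        w.start()

    for job in range(10):
        job_queue.put(job)
    for _ in workers:                       # one sentinel per worker
        job_queue.put(None)

    results = [output_queue.get() for _ in range(10)]
    for w in workers:
        w.join()

    print(sorted(results))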
Regarding your second question, I don't understand what you are asking. The BaseManager provides one server that executes the calls from the clients, so, obviously, all requests are satisfied by the same machine.
Or do you mean allowing each client to do only one request at a time? I don't see any option for this, even though it could be implemented "by hand".
I don't understand your third question. What do you mean by a pull method? Can you rephrase it with a bit more detail on what you mean by "a push method where the multiprocessing server can be triggered"?
