Communication between parent and child processes - Python

I'm trying to create a Python 3 program that has one or more child processes.
The parent process spawns the child processes and then goes on with its own business; now and then I want to send a message to a specific child process, which catches it and takes action.
The child process also needs to be non-blocking while waiting for a message: it will run its own loop, maintaining a server connection and passing any received messages on to the parent.
I'm currently looking at the multiprocessing, threading, and subprocess modules in Python but have not been able to find a solution.
What I'm trying to achieve is a main part of the program that interacts with the user, handling user input and presenting information to the user.
This will be asynchronous from the child parts that talk to different servers, receiving messages from the servers and sending the correct messages from the user to a server.
The child processes will then send information back to the main part, where it will be presented to the user.
My questions are:
1. Am I going at this the wrong way?
2. Which module would be the best to use?
2.1 How would I set this up?

See Doug Hellmann's "Communication Between Processes" (multiprocessing), part of his Python Module of the Week series. It is fairly simple to use a Manager dictionary or list to communicate with a process.
import time
from multiprocessing import Process, Manager

def test_f(test_d):
    """ first process to run
        exit this process when dictionary's 'QUIT' == True
    """
    test_d['2'] = 2  ## change to test this
    while not test_d["QUIT"]:
        print("test_f", test_d["QUIT"])
        test_d["ctr"] += 1
        time.sleep(1.0)

def test_f2(name):
    """ second process to run. Runs until the for loop exits
    """
    for j in range(0, 10):
        print(name, j)
        time.sleep(0.5)
    print("second process finished")

if __name__ == '__main__':
    ##--- create a dictionary via Manager
    manager = Manager()
    test_d = manager.dict()
    test_d["ctr"] = 0
    test_d["QUIT"] = False
    ##--- start first process and send dictionary
    p = Process(target=test_f, args=(test_d,))
    p.start()
    ##--- start second process
    p2 = Process(target=test_f2, args=('P2',))
    p2.start()
    ##--- sleep 3 seconds and then change dictionary
    ##    to exit first process
    time.sleep(3.0)
    print("\n terminate first process")
    test_d["QUIT"] = True
    print("test_d changed")
    print("data from first process", test_d)
    time.sleep(5.0)
    p.terminate()
    p2.terminate()

It sounds like you might be familiar with multiprocessing, just not with Python.
os.pipe will supply you with pipes to connect parent and child, and semaphores can be used to coordinate/signal between parent and child processes. You might also want to consider queues for passing messages.
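A minimal sketch of the os.pipe approach (POSIX only, since it relies on os.fork; the message content is made up for illustration):

import os

read_fd, write_fd = os.pipe()

pid = os.fork()
if pid == 0:
    # Child: close the write end and wait for a message from the parent.
    os.close(write_fd)
    message = os.read(read_fd, 1024)  # blocks until the parent writes
    print('child received:', message.decode())
    os.close(read_fd)
    os._exit(0)
else:
    # Parent: close the read end, send a message, then reap the child.
    os.close(read_fd)
    os.write(write_fd, b'hello child')
    os.close(write_fd)
    os.waitpid(pid, 0)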

Related

Python multiprocessing without blocking parent process

I am attempting to create a simple application which continuously monitors an inbox, then calls various functions as child processes after categorising incoming mail.
I would like the parent process to continue its while loop without waiting for the child process to complete.
For example:
import time

def main():
    while 1:
        checkForMail()
        if mail:
            if mail['type'] == 'Type1':
                process1()
                '''
                spawn process1, as long as no other process1 process is running;
                however it's fine for a process2 to be currently running
                '''
            elif mail['type'] == 'Type2':
                process2()
                '''
                spawn process2, as long as no other process2 process is running;
                however it's fine for a process1 to be currently running
                '''
        # Wait a bit, then continue the loop regardless of whether child processes have finished or not
        time.sleep(10)

if __name__ == '__main__':
    main()
As commented above, there should never be more than one concurrent child process instance per function; however, processes can run concurrently if they are running different functions.
Is this possible to do with the multiprocessing package?
Following on from pdeubel's answer, which was very helpful, the completed skeleton script is as follows:
import time
from multiprocessing import Process, Queue

def func1(todo):
    # do stuff with current todo item from queue1
    pass

def func2(todo):
    # do stuff with current todo item from queue2
    pass

def listenQ1(q):
    while 1:
        # Fetch jobs from queue1
        todo = q.get()
        func1(todo)

def listenQ2(q):
    while 1:
        # Fetch jobs from queue2
        todo = q.get()
        func2(todo)

def main(queue1, queue2):
    while 1:
        checkForMail()
        if mail:
            if mail['type'] == 'Type1':
                # Add job to queue1
                queue1.put('do q1 stuff')
            elif mail['type'] == 'Type2':
                # Add job to queue2
                queue2.put('do q2 stuff')
        time.sleep(10)

if __name__ == '__main__':
    # Create 2 multiprocessing queues
    queue1 = Queue()
    queue2 = Queue()
    # Create and start two new processes, with separate targets and queues
    p1 = Process(target=listenQ1, args=(queue1,))
    p1.start()
    p2 = Process(target=listenQ2, args=(queue2,))
    p2.start()
    # Start main while loop and check for mail
    main(queue1, queue2)
    p1.join()
    p2.join()
You could use two Queues, one for mails of Type1 and one for mails of Type2, and two Processes, again one for each type.
Start by creating these Queues. Then create the Processes and give the first Queue to the first Process and the second Queue to the second Process. Both Process objects need a target parameter, which is the function that the Process executes. Depending on the logic you will probably need two functions (again, one for each type). Inside each function you want something like an infinite loop that takes items from the Queue (i.e. the mails) and then acts on them according to your logic. The main function would also consist of an infinite loop where the mails are retrieved and, depending on their type, placed on the correct Queue.
So start the two Processes before the main loop, then start the main loop, and the mails should get put on the Queues, where they get picked up in the subprocesses.

In Python multiprocessing, when a child process writes data to a Queue and no one reads it, why does the child process not exit?

I have Python code where the main process creates a child process. There is a shared queue between the two processes. The child process writes some data to this shared queue, and the main process join()s on the child process.
If the data in the queue is not removed with get(), the child process does not terminate and the main process blocks at join(). Why is this so?
Following is the code that I used:
from multiprocessing import Process, Queue
from time import sleep

def f(q):
    q.put([42, None, 'hello', [x for x in range(100000)]])
    print (q.qsize())
    #q.get()
    print (q.qsize())

q = Queue()
print (q.qsize())
p = Process(target=f, args=(q,))
p.start()
sleep(1)
#print (q.get())
print('bef join')
p.join()
print('aft join')
At present the q.get() is commented out, and so the output is:
0
1
1
bef join
and then the code blocks.
But if I uncomment one of the q.get() invocations, the code runs to completion with the following output:
0
1
0
bef join
aft join
Well, if you look at the multiprocessing documentation, it explicitly warns that a process which has put items on a Queue will wait before terminating until all the buffered items have been fed by the background "feeder" thread to the underlying pipe. It seems logical, then, that join() blocks your program if you don't empty the Queue: the child cannot exit until its data has been consumed.
You also need to understand the philosophy of multiprocessing. You have several tasks that don't depend on each other to run, and your program at the moment is too slow for you, so you use multiple processes.
But don't forget there will (trust me) come a time when you need to wait until some parallel computations are all done, because you need all of their results for your next task. And that's where, in your case, join() comes in. You are basically saying: I was doing things asynchronously, but now my next task needs to be synced with the items I computed before, so let's wait here until they are all ready.
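A minimal sketch of the resulting pattern, then, is simply to drain the Queue before joining (the same code as above, reordered):

from multiprocessing import Process, Queue

def f(q):
    # The child cannot exit until this item is flushed to the pipe.
    q.put([42, None, 'hello', [x for x in range(100000)]])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    data = q.get()  # drain the queue first...
    p.join()        # ...then join; this no longer blocks
    print('aft join, got', len(data), 'items')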

Running a Python multi-threaded process & interrupting a child thread with a signal

I am trying to write a Python multi-threaded script that does the following two things in different threads:
Parent: Start Child Thread, Do some simple task, Stop Child Thread
Child: Do some long-running task.
Below is a simple way to do it. And it works for me:
from multiprocessing import Process
import time

def child_func():
    while not stop_thread:
        time.sleep(1)

if __name__ == '__main__':
    child_thread = Process(target=child_func)
    stop_thread = False
    child_thread.start()
    time.sleep(3)
    stop_thread = True
    child_thread.join()
But a complication arises: in actuality, instead of the while-loop in child_func(), I need to run a single long-running task that doesn't stop unless it is killed by Ctrl-C, so I cannot periodically check the value of stop_thread in there. How can I tell my child process to end when I want it to?
I believe the answer has to do with using signals, but I haven't seen a good example of how to use them in this exact situation. Can someone please help by modifying my code above to use signals to communicate between the child and the parent, making the child terminate if and only if the user hits Ctrl-C?
There is no need to use the signal module here unless you want to do cleanup in your child process. It is possible to stop any child process using the terminate method (which has the same effect as sending SIGTERM):

from multiprocessing import Process
import time

def child_func():
    time.sleep(1000)

if __name__ == '__main__':
    child_thread = Process(target=child_func)
    child_thread.start()
    time.sleep(3)
    child_thread.terminate()
    child_thread.join()
The docs are here: https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Process.terminate
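If you do want cleanup, one option (a sketch, assuming a POSIX system, where terminate() delivers SIGTERM; on Windows the handler will not run) is to install a signal handler in the child before the long-running work starts:

import signal
import sys
import time
from multiprocessing import Process

def cleanup(signum, frame):
    # Release resources (close files, sockets, etc.) before exiting.
    print('child cleaning up')
    sys.exit(0)

def child_func():
    # terminate() sends SIGTERM, so this handler runs instead of the
    # process being killed outright.
    signal.signal(signal.SIGTERM, cleanup)
    time.sleep(1000)

if __name__ == '__main__':
    child_thread = Process(target=child_func)
    child_thread.start()
    time.sleep(3)
    child_thread.terminate()  # triggers the cleanup handler, then exits
    child_thread.join()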

How do I cleanly exit from a multiprocessing script?

I am building a non-blocking chat application for my website, and I decided to implement some multiprocessing to deal with DB querying and real-time messaging.
I assume that when a user lands on a given URL to see their conversation with the other person, I will fire off the script, the multiprocessing will begin, the messages will be added to a queue and displayed on the page, new messages will be sent to a separate queue that interacts with the DB, etc. (Regular message features ensue.)
However, what happens when the user leaves this page? I assume I need to exit these various processes, but currently this does not lend itself to a "clean" exit. I would have to terminate the processes, and according to the multiprocessing docs:
Warning: If this method (terminate()) is used when the associated process is using a pipe or queue, then the pipe or queue is liable to become corrupted and may become unusable by other processes. Similarly, if the process has acquired a lock or semaphore etc. then terminating it is liable to cause other processes to deadlock.
I have also looked into sys.exit(); however, it doesn't fully exit the script without the use of terminate() on the various processes.
Here is my code that is simplified for the purposes of this question. If I need to change it, that's completely fine. I simply want to make sure I am going about this appropriately.
import multiprocessing
import time
import sys

## Get all past messages
def batch_messages():
    # The messages list here will be attained via a db query
    messages = [">> This is the message.", ">> Hello, how are you doing today?", ">> Really good!"]
    for m in messages:
        print m

## Add messages to the DB
def add_messages(q2):
    while True:
        # Retrieve from the queue
        message_to_db = q2.get()
        # For testing purposes only; perform another DB query to add the message to the DB
        print message_to_db, "(Add to DB)"

## Receive new, inputted messages
def receive_new_message(q1, q2):
    while True:
        # Take the new message from the queue:
        new_message = q1.get()
        # Print the message to the (other user's) screen
        print ">>", new_message
        # Add the q1 message to q2 for database manipulation
        q2.put(new_message)

def shutdown():
    print "Shutdown initiated"
    p_rec.terminate()
    p_batch.terminate()
    p_add.terminate()
    sys.exit()

if __name__ == "__main__":
    # Set up the queues
    q1 = multiprocessing.Queue()
    q2 = multiprocessing.Queue()
    # Set up the processes
    p_batch = multiprocessing.Process(target=batch_messages)
    p_add = multiprocessing.Process(target=add_messages, args=(q2,))
    p_rec = multiprocessing.Process(target=receive_new_message, args=(q1, q2,))
    # Start the processes
    p_batch.start()  # Perform batch get
    p_rec.start()
    p_add.start()
    time.sleep(0.1)  # Test: sleep to allow proper formatting
    while True:
        # Enter a new message
        input_message = raw_input("Type a message: ")
        # TEST PURPOSES ONLY: shutdown
        if input_message == "shutdown_now":
            shutdown()
        # Add the new message to the queue:
        q1.put(input_message)
        # Let the processes catch up before printing "Type a message: " again. (Shell purposes only)
        time.sleep(0.1)
How should I deal with this situation? Does my code need to be fundamentally revised, and if so, what should I do to fix it?
Any thoughts, comments, revisions, or resources are appreciated.
Thank you!
Disclaimer: I don't actually know Python, but multithreading concepts are similar enough in all the languages I do know that I feel confident enough to try to answer anyway.
When using multiple threads/processes, each one should have a step in its loop that checks a variable (I often call the variable "active" or "keepGoing" or something; it's usually a boolean).
That variable is usually either shared between the threads or sent as a message to each thread, depending on your programming language and on when you want the processing to stop (finish your work first, y/n?).
Once the variable is set, all threads quit their processing loops and proceed to exit.
In your case you have "while True" loops, which never exit. Change them to exit when the variable is set, and each process will close itself when the end of its function is reached; see the sketch below.
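In multiprocessing terms, that shared variable can be a multiprocessing.Event. A minimal sketch, reusing one worker from the question's code (message handling simplified for illustration):

import multiprocessing
import time
try:
    from Queue import Empty   # Python 2
except ImportError:
    from queue import Empty   # Python 3

def receive_new_message(q1, q2, keep_going):
    # Loop until the parent clears the flag; the timeout on get() lets
    # the loop notice the flag even when the queue is idle.
    while keep_going.is_set():
        try:
            new_message = q1.get(timeout=1)
        except Empty:
            continue
        q2.put(new_message)

if __name__ == '__main__':
    q1 = multiprocessing.Queue()
    q2 = multiprocessing.Queue()
    keep_going = multiprocessing.Event()
    keep_going.set()
    p_rec = multiprocessing.Process(target=receive_new_message,
                                    args=(q1, q2, keep_going))
    p_rec.start()
    q1.put("hello")
    time.sleep(0.5)
    keep_going.clear()  # ask the worker to finish its loop
    p_rec.join()        # joins cleanly; no terminate() needed
    print(q2.get())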

Let parent process return before child process using python multiprocessing library

When one creates Processes with the multiprocessing library in Python, the parent process waits for its children to return before it returns itself. In fact, the documentation recommends joining all children.
But I would like to let the parent return before its child process finishes.
Is there any way to "detach" the child process?
I know that using subprocess.Popen it is possible to create detached child processes, but I would like to use the features of the multiprocessing library, like Queues, Locks and so on.
I made two examples to show the difference.
The first example uses the multiprocessing library. When this script is called, it prints the parent message, waits 5 seconds, prints the child message, and only then returns.
# Using multiprocessing, only returns after 5 seconds
from multiprocessing import Process
from time import sleep, asctime

def child():
    sleep(5.0)
    print 'Child end reached on', asctime()

if __name__ == '__main__':
    p = Process(target=child)
    p.start()
    # Detach child process here so parent can return.
    print 'Parent end reached on', asctime()
The second example uses subprocess.Popen. When this script is called, it prints the parent message, returns (!!!), and after 5 seconds prints the child message.
# Using Popen, returns immediately.
import sys
from subprocess import Popen
from time import sleep, asctime

def child():
    sleep(5)
    print 'Child end reached on', asctime()

if __name__ == '__main__':
    if 'call_child' in sys.argv:
        child()
    else:
        Popen([sys.executable] + [__file__] + ['call_child'])
        print 'Parent end reached on', asctime()
The second example would be acceptable if I could pass Queues, Pipes, Locks, Semaphores, etc.
IMO, the first example also leads to cleaner code.
I'm using Python 2.7 on Windows.
Just remove the process object from the _children set of the current process object, and the parent process will exit immediately.
The multiprocessing module manages child processes in a private set and joins them when the current process exits. You can remove children from the set if you don't care about them.
import multiprocessing

process = multiprocessing.Process(target=proc_main)
process.start()
# Forget the child so it is not joined at interpreter exit
multiprocessing.current_process()._children.discard(process)
exit(0)
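Note that _children is a private attribute of the multiprocessing internals, so this trick relies on an implementation detail and may break between Python versions.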
