Evaluating data passed through a multiprocessing pipe - Python

I noticed that data received through a multiprocessing pipe cannot be evaluated directly. In the example below, the code gets stuck in the child process.
import multiprocessing as mp

def child(conn):
    while True:
        if conn.recv() == 1:
            conn.send(1)
        if conn.recv() == 2:
            conn.send(2)
    conn.close()

def main():
    parent_conn, child_conn = mp.Pipe()
    p = mp.Process(target=child, args=(child_conn,))
    p.start()
    while True:
        parent_conn.send(1)
        print(parent_conn.recv())
    p.join()

if __name__ == '__main__':
    main()
But if I first assign the result of conn.recv() to a variable in the child process, as shown below, then everything works.
def child(conn):
    while True:
        x = conn.recv()
        if x == 1:
            conn.send(1)
        if x == 2:
            conn.send(2)
    conn.close()
I assume this is because the parent and child processes run concurrently, so the data being passed should only be evaluated as it is received. Is this the cause?
I am running Python 3.7 on Windows 10.
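Not exactly: nothing is evaluated lazily here. Each call to conn.recv() removes one message from the pipe and blocks until a message is available, so in the first version the second conn.recv() in the loop waits for a message the parent has not sent yet, while the parent is itself blocked waiting for a reply. A minimal standalone sketch of the recv() semantics:

from multiprocessing import Pipe

# Each recv() call consumes exactly one message and blocks until one
# is available -- it does not re-read the result of the previous recv().
a, b = Pipe()
a.send(1)
a.send(2)
print(b.recv())  # 1 -- first message
print(b.recv())  # 2 -- second message; a third recv() here would block

Assigning x = conn.recv() once per iteration works because the child then consumes exactly one message for each message the parent sends.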

Related

Starting multiple processes inside a function makes it loop for each process started

I'm writing a program and made a "pseudo" program that imitates what the main one does. The main idea is that the program starts and scans a game. The first part detects whether the game has started, then it opens 2 processes: one that scans the game all the time and sends info to the second process, which analyzes the data and plots it. In short, it's 2 infinite loops running simultaneously.
I'm trying to put it all into functions now so I can run it through tkinter and make a GUI for it.
The issue is, every time a process starts, it loops back to the start of the parent function, executes it again, and then goes on to start the second process. What is the issue here? In this test model, one process sends the value of x to the second process, which prints it out.
import multiprocessing
import time
from multiprocessing import Pipe

def function_start():
    print("GAME DETECTED AND STARTED")
    parent_conn, child_conn = Pipe()
    p1 = multiprocessing.Process(target=function_first_process_loop, args=(child_conn,))
    p2 = multiprocessing.Process(target=function_second_process_loop, args=(parent_conn,))
    function_load(p1)
    function_load(p2)

def function_load(process):
    if __name__ == '__main__':
        print("slept 1")
        process.start()

def function_first_process_loop(conn):
    x = 0
    print("FIRST PROCESS STARTED")
    while True:
        time.sleep(1)
        x += 1
        conn.send(x)
        print(x)

def function_second_process_loop(conn):
    print("SECOND PROCESS STARTED")
    while True:
        data = conn.recv()
        print(data)

function_start()
I've also tried rearranging the functions in different ways. This is one of them:
import multiprocessing
import time
from multiprocessing import Pipe

def function_load():
    if __name__ == '__main__':
        parent_conn, child_conn = Pipe()
        p1 = multiprocessing.Process(target=function_first_process_loop, args=(child_conn,))
        p2 = multiprocessing.Process(target=function_second_process_loop, args=(parent_conn,))
        p1.start()
        p2.start()

#FIRST
def function_start():
    print("GAME LOADED AND STARTED")
    function_load()

def function_first_process_loop(conn):
    x = 0
    print("FIRST PROCESS STARTED")
    while True:
        time.sleep(1)
        x += 1
        conn.send(x)
        print(x)

def function_second_process_loop(conn):
    print("SECOND PROCESS STARTED")
    while True:
        data = conn.recv()
        print(data)

#
function_start()
You should always tag a multiprocessing question with the platform you are running under, but I will infer that it is probably Windows or some other platform that uses the spawn method to launch new processes. That means that when a new process is created, a new Python interpreter is launched and the program source is processed from the top, and any code at global scope that is not protected by the check if __name__ == '__main__': will be executed. This means that each started process will execute the statement function_start().
So, as @PranavHosangadi rightly pointed out, you need the __name__ check in the correct place.
import multiprocessing
from multiprocessing import Pipe
import time

def function_start():
    print("GAME DETECTED AND STARTED")
    parent_conn, child_conn = Pipe()
    p1 = multiprocessing.Process(target=function_first_process_loop, args=(child_conn,))
    p2 = multiprocessing.Process(target=function_second_process_loop, args=(parent_conn,))
    function_load(p1)
    function_load(p2)

def function_load(process):
    print("slept 1")
    process.start()

def function_first_process_loop(conn):
    x = 0
    print("FIRST PROCESS STARTED")
    while True:
        time.sleep(1)
        x += 1
        conn.send(x)
        print(x)

def function_second_process_loop(conn):
    print("SECOND PROCESS STARTED")
    while True:
        data = conn.recv()
        print(data)

if __name__ == '__main__':
    function_start()
Let's do an experiment: Before function_start(), add this line:
print(__name__, "calling function_start()")
Now, you get the following output:
__main__ calling function_start()
GAME DETECTED AND STARTED
slept 1
slept 1
__mp_main__ calling function_start()
GAME DETECTED AND STARTED
__mp_main__ calling function_start()
GAME DETECTED AND STARTED
FIRST PROCESS STARTED
SECOND PROCESS STARTED
1
1
2
2
...
Clearly, function_start() is called by each child process when it starts. This is because Python loads the entire script and then calls the function you want from that script. The new processes get the name __mp_main__ to differentiate them from the main process, and you can make use of that to prevent the call to function_start() by these processes.
So instead of calling function_start() directly, call it this way:
if __name__ == "__main__":
    print(__name__, "calling function_start()")
    function_start()
and now you get what you wanted:
__main__ calling function_start()
GAME DETECTED AND STARTED
slept 1
slept 1
FIRST PROCESS STARTED
SECOND PROCESS STARTED
1
1
2
2
...

How to modify a variable in one thread and check it in another?

Below is the code which demonstrates the problem. Please note that this is only an example; I am using the same logic in a more complicated application, where I can't use sleep, as the amount of time it will take for process1 to modify the variable depends on the speed of the internet connection.
from multiprocessing import Process

code = False

def func():
    global code
    code = True

pro = Process(target=func)
pro.start()
while code == False:
    pass
pro.terminate()
pro.join()
print('Done!')
On running this, nothing appears on the screen. When I terminate the program by pressing CTRL-C, the stack trace shows that the while loop was being executed.
Python has a few concurrency libraries: threading, multiprocessing and asyncio (and more).
multiprocessing is a library which uses subprocesses to bypass Python's inability to concurrently run CPU-intensive tasks. To share variables between different multiprocessing.Processes, create them via a multiprocessing.Manager() instance. For example:
import multiprocessing
import time

def func(event):
    print("> func()")
    time.sleep(1)
    print("setting event")
    event.set()
    time.sleep(1)
    print("< func()")

def main():
    print("In main()")
    manager = multiprocessing.Manager()
    event = manager.Event()
    p = multiprocessing.Process(target=func, args=(event,))
    p.start()
    while not event.is_set():
        print("waiting...")
        time.sleep(0.2)
    print("OK! joining func()...")
    p.join()
    print('Done!')

if __name__ == "__main__":
    main()
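If all that needs to be shared is a single boolean flag like code in the question, a plain multiprocessing.Value also works and avoids the extra Manager process. A minimal sketch, with the sleep standing in for the real network work:

import multiprocessing
import time

def func(flag):
    time.sleep(1)        # stands in for the real work (e.g. a download)
    flag.value = True    # signal completion to the parent

def main():
    flag = multiprocessing.Value('b', False)  # shared byte used as a bool
    p = multiprocessing.Process(target=func, args=(flag,))
    p.start()
    while not flag.value:
        time.sleep(0.2)  # poll without spinning a CPU core at 100%
    p.join()
    print('Done!')

if __name__ == '__main__':
    main()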

multiprocess messaging queue between functions or process python

I'm trying to understand how processes message one another; an example is below.
I use the second function to do my main job, and the queue occasionally feeds the first function so it can do its own job, regardless of when it finishes. I have looked at many examples and tried different ways, but with no success. Can anyone explain, using my example, how I can do this?
from multiprocessing import Process, Queue, Manager
import time

def first(a, b):
    q.get()
    print a+b
    time.sleep(3)

def second():
    for i in xrange(10):
        print "seconf func"
        k += 1
        q.put=(i,k)

if __name__ == "__main__":
    processes = []
    q = Queue()
    manager = Manager()
    p = Process(target=first, args=(a,b))
    p.start()
    processes.append(p)
    p2 = Process(target=second)
    p2.start()
    processes.append(p2)
    try:
        for process in processes:
            process.join()
    except KeyboardInterrupt:
        print "Interupt"

Python Synchronization Multiprocessing Lock

I've gone through this SO thread (Synchronization issue using Python's multiprocessing module), but it doesn't provide the answer.
The following code:
from multiprocessing import Process, Lock

def f(l, i):
    l.acquire()
    print 'hello world', i
    l.release()
    # do something else

if __name__ == '__main__':
    lock = Lock()
    for num in range(10):
        Process(target=f, args=(lock, num)).start()
How do I get the processes to execute in order? I want each process to hold the lock for a few seconds and then release it, so that P2 moves into the lock after P1, then P3 after P2, and so on. How would I get the processes to execute in order?
It sounds like you just want to delay the start of each successive process. If that's the case, you can use a multiprocessing.Event to delay starting the next child in the main process. Just pass the event to the child, and have the child set the Event when it's done doing whatever should run prior to starting the next child. The main process can wait on that Event, and once it's signalled, clear it and start the next child.
from multiprocessing import Process, Event

def f(e, i):
    print 'hello world', i
    e.set()
    # do something else

if __name__ == '__main__':
    event = Event()
    for num in range(10):
        p = Process(target=f, args=(event, num))
        p.start()
        event.wait()
        event.clear()
This is not the purpose of locks, and your code architecture is bad for your use case. I think you should refactor your code to this:
from multiprocessing import Process

def f(i):
    pass  # do something here

if __name__ == '__main__':
    for num in range(10):
        print 'hello world', num
        Process(target=f, args=(num,)).start()
In this case it will print in order and then do the remaining part asynchronously.
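For contrast, a minimal sketch (Python 3) of what a Lock is actually for: making a critical section mutually exclusive. It guarantees that only one process is inside the section at a time, not which waiting process acquires it next:

from multiprocessing import Process, Lock

def f(lock, i):
    # Only one process at a time runs the block below; the acquisition
    # order across processes is not guaranteed.
    with lock:
        print('hello world', i)

if __name__ == '__main__':
    lock = Lock()
    procs = [Process(target=f, args=(lock, num)) for num in range(10)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()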

Python multiprocessing pipe recv() doc unclear or did I miss anything?

I have been learning how to use the Python multiprocessing module recently, and reading the official doc. In 16.6.1.2. Exchanging objects between processes there is a simple example about using a pipe to exchange data.
And, in 16.6.2.4. Connection Objects, there is this statement, quoted "Raises EOFError if there is nothing left to receive and the other end was closed."
So, I revised the example as shown below. IMHO this should trigger an EOFError exception: nothing sent and the sending end is closed.
The revised code:
from multiprocessing import Process, Pipe

def f(conn):
    #conn.send([42, None, 'hello'])
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=(child_conn,))
    p.start()
    #print parent_conn.recv()   # prints "[42, None, 'hello']"
    try:
        print parent_conn.recv()
    except EOFError:
        pass
    p.join()
But when I tried the revised example on my Ubuntu 11.04 machine with Python 2.7.2, the script hung.
If anyone can point out to me what I missed, I would be very appreciative.
When you start a new process with mp.Process, the child process inherits the pipes of the parent. When the child closes conn, the parent process still has child_conn open, so the reference count for the pipe file descriptor is still greater than 0, and so EOFError is not raised.
To get the EOFError, close the end of the pipe in both the parent and child processes:
import multiprocessing as mp

def foo_pipe(conn):
    conn.close()

def pipe():
    parent_conn, child_conn = mp.Pipe()
    proc = mp.Process(target=foo_pipe, args=(child_conn,))
    proc.start()
    child_conn.close()  # <-- Close the child_conn end in the main process too.
    try:
        print(parent_conn.recv())
    except EOFError as err:
        print('Got here')
    proc.join()

if __name__ == '__main__':
    pipe()
