from multiprocessing import Process, Pipe
def f(a1):
    print a1.name
    a1.conn2.send('why!?!?!?!?!!?!??')
    a1.conn2.close()

class test1:
    name = 'this is name in class'
    conn1, conn2 = Pipe()

if __name__ == '__main__':
    a1 = test1
    p = Process(target=f, args=(a1,))
    p.start()
    print a1.conn1.recv()
    p.join()
I need to pass multiple arguments to a child process and set up pipe communication between the parent and the child, so I tried passing a class that includes the pipe to the child process.
But with this code the child process can't send anything, so the parent process hangs at recv().
How can I solve this?
PS: This is Python 2.7.
Assuming you're using Windows:
You are initializing the Pipe statically inside the class test1, so when the new process is created (when you call start()), the module is re-imported in the child process, the test1 class is recreated, and with it a brand-new Pipe. That means your function running in the new process is using a different Pipe altogether.
This can be solved by creating the Pipes in an instance of test1 (or passing the connection directly):
from multiprocessing import Process, Pipe
def f(a1):
    print a1.name
    a1.conn2.send('why!?!?!?!?!!?!??')
    a1.conn2.close()

class Test1(object):
    def __init__(self):
        self.name = 'this is name in class'
        self.conn1, self.conn2 = Pipe()

if __name__ == '__main__':
    a1 = Test1()
    p = Process(target=f, args=(a1,))
    p.start()
    print a1.conn1.recv()
    p.join()
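If the class is only a container for the arguments, the alternative mentioned above, passing the connection (and any other arguments) directly, is even simpler. A minimal sketch, with illustrative names:

from multiprocessing import Process, Pipe

def f(name, conn):
    # The child gets only its own end of the pipe plus the extra argument.
    conn.send('hello from child, name is %s' % name)
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=('this is name in class', child_conn))
    p.start()
    print(parent_conn.recv())   # prints the message sent by the child
    p.join()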
Related
I am trying to achieve a multiprocessing scenario with Kubernetes.
I have Python-based code that regularly checks whether given processes exist, using psutil.
For example:
import psutil
import time
from multiprocessing import Process

class MyProcessChecker:
    def __init__(self):
        pass

    def task(self):
        self.job()

    def job(self):
        # Get the PIDs stored in some database
        pids = [26, 27, 30]
        for pid in pids:
            if not psutil.pid_exists(pid):
                # Then update the database and remove the PID from the list.
                pass

    def run(self):
        p = Process(target=self.task, name="MyProcessChecker")
        p.start()
        return p

class MyProcess:
    def __init__(self):
        pass

    def task(self):
        self.job()

    def job(self):
        # Long-running work
        pass

    def run(self):
        p = Process(target=self.task, name="MyProcess")
        p.start()
        return p

if __name__ == "__main__":
    # Running
    proc_1 = MyProcessChecker().run()
    for x in range(0, 10):
        proc = MyProcess().run()
        pid = proc.pid
        # Save the pid in some database...
        time.sleep(60)
This code works on a single instance. But with multiple replicas of the pod, e.g. 2 replicas,
the same for loop runs twice, so twice as many processes are started.
The checker is also launched twice.
Neither checker has access to all the Linux PIDs because they are not in the same process namespace.
How can I make the checker access the PIDs in the other replica's pod in order to check them?
I created a subclass of multiprocessing.Process.
Calling p.run() directly updates instance.ret_value from long_runtime_proc, but with p.start() ret_value is not updated even though long_runtime_proc is called and runs.
How can I get ret_value when using p.start()?
class myProcess(multiprocessing.Process):
    def __init__(self, pid, name, ret_value=0):
        multiprocessing.Process.__init__(self)
        self.id = pid
        self.ret_value = ret_value

    def run(self):
        self.ret_value = long_runtime_proc(self.id)
Calling Process.run() directly does not start a new process, i.e. the code in Process.run() is executed in the same process that invoked it. That's why changes to self.ret_value are effective. However, you are not supposed to call Process.run() directly.
When you start the subprocess with Process.start(), a new child process is created and the code in Process.run() is executed in that new process. When you assign the return value of long_runtime_proc to self.ret_value, this occurs in the child process, not the parent, and thus the parent's ret_value is not updated.
What you probably need to do is to use a pipe or a queue to send the return value to the parent process. See the documentation for details. Here is an example using a queue:
import time
import multiprocessing
def long_runtime_proc():
    '''Simulate a long running process'''
    time.sleep(10)
    return 1234

class myProcess(multiprocessing.Process):
    def __init__(self, result_queue):
        self.result_queue = result_queue
        super(myProcess, self).__init__()

    def run(self):
        self.result_queue.put(long_runtime_proc())

q = multiprocessing.Queue()
p = myProcess(q)
p.start()
ret_value = q.get()
p.join()
With this code, ret_value ends up being assigned the value taken off the queue, which will be 1234.
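The answer above mentions that a pipe works just as well. Here is a minimal sketch of that variant (not from the original answer; names such as MyPipeProcess are illustrative):

import multiprocessing

def long_runtime_proc():
    return 1234

class MyPipeProcess(multiprocessing.Process):
    def __init__(self, conn):
        super(MyPipeProcess, self).__init__()
        self.conn = conn

    def run(self):
        # Runs in the child: send the result back through the pipe.
        self.conn.send(long_runtime_proc())
        self.conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = multiprocessing.Pipe()
    p = MyPipeProcess(child_conn)
    p.start()
    ret_value = parent_conn.recv()   # 1234
    p.join()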
Is it okay to initialize the state of a multiprocessing.Process subclass in the __init__() method? Or will this result in duplicate resource utilization when the process forks? Take this example:
from multiprocessing import Process, Pipe
import time
class MyProcess(Process):
    def __init__(self, conn, bar):
        super().__init__()
        self.conn = conn
        self.bar = bar
        self.databuffer = []

    def foo(self, baz):
        return self.bar * baz

    def run(self):
        '''Process mainloop'''
        running = True
        i = 0
        while running:
            self.databuffer.append(self.foo(i))
            if self.conn.poll():
                m = self.conn.recv()
                if m == 'get':
                    self.conn.send((i, self.databuffer))
                elif m == 'stop':
                    running = False
            i += 1
            time.sleep(0.1)

if __name__ == '__main__':
    conn, child_conn = Pipe()
    p = MyProcess(child_conn, 5)
    p.start()
    time.sleep(2)
    # Touching the instance does not affect the process which has forked.
    p.bar = 1
    print(p.databuffer)
    time.sleep(2)
    conn.send('get')
    i, data = conn.recv()
    print(i, data)
    conn.send('stop')
    p.join()
As I note in the code, you cannot communicate with the process via the instance p, only via the Pipe. So if I do a bunch of setup in the __init__ method, such as creating file handles, how is this duplicated when the process forks?
Does this mean that subclassing multiprocessing.Process in the same way you would a threading.Thread is a bad idea?
Note that my processes are long running and meant to handle blocking IO.
This is easy to test. In __init__, add the following:
self.file = open('does_it_open.txt', 'w')
Then run:
$ strace -f python youprogram.py 2> test.log
$ grep does_it_open test.log
open("does_it_open.txt", O_WRONLY|O_CREAT|O_TRUNC|O_CLOEXEC, 0666) = 6
That means that at least on my system, copying your code and adding that call, the file was opened once, and only once.
For more about the wizardry that is strace, check out this fantastic blog post.
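If you want to be certain that a heavyweight resource is created only in the child, a common pattern (not part of the answer above) is to keep only cheap, picklable state in __init__ and defer the real setup to run(). A rough sketch, with illustrative names:

from multiprocessing import Process

class WorkerWithLateInit(Process):
    def __init__(self, path):
        super().__init__()
        self.path = path    # cheap, picklable state only
        self.file = None    # real resource created later, in the child

    def run(self):
        # Runs in the child process, so the handle exists only there.
        self.file = open(self.path, 'w')
        self.file.write('hello from the child\n')
        self.file.close()

if __name__ == '__main__':
    p = WorkerWithLateInit('child_output.txt')
    p.start()
    p.join()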
When you execute a python script, does the process/interpreter exit because it reads an EOF character from the script? [i.e. is that the exit signal?]
The follow up to this is how/when a python child process knows to exit, namely, when you start a child process by overriding the run() method, as here:
class Example(multiprocessing.Process):
    def __init__(self, task_queue, result_queue):
        multiprocessing.Process.__init__(self)
        self.task_queue = task_queue
        self.result_queue = result_queue

    def run(self):
        while True:
            next_task = self.task_queue.get()
            if next_task is None:
                print '%s: Exiting' % self.name
                break
            # more stuff...[assume there's some task_done stuff, etc]

if __name__ == '__main__':
    tasks = multiprocessing.JoinableQueue()
    results = multiprocessing.Queue()
    processes = [Example(tasks, results)
                 for i in range(5)]
    for i in processes:
        i.start()
    # more stuff...like populating the queue, etc.
Now, what I'm curious about is: Do the child processes automatically exit upon completion of the run() method? And if I kill the main thread during execution, will the child processes end immediately? Will they end if their run() calls can complete independently of the status of the parent process?
Yes, each child process terminates automatically after completion of the run method, even though I think you should avoid subclassing Process and use the target argument instead.
Note that on Linux the child process may remain in a zombie state if you do not read its exit status:
>>> from multiprocessing import Process
>>> def target():
...     print("Something")
...
>>> Process(target=target).start()
>>> Something
>>>
If we look at the process list after this (e.g. with ps), the child shows up as a zombie (<defunct>).
If instead we read the exit status of the process (with Process.exitcode, or by calling join()), this does not happen.
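A minimal sketch of what reading the exit status looks like in practice (illustrative, not part of the original session):

from multiprocessing import Process

def target():
    print("Something")

if __name__ == '__main__':
    p = Process(target=target)
    p.start()
    p.join()            # waits for the child and reaps it
    print(p.exitcode)   # 0 once the child has exited normally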
Each Process instance launches a new process in the background; how and when this subprocess is terminated is OS-dependent. Every OS provides some means of communication between processes. Child processes are usually not terminated if you kill the "parent" process.
For example doing this:
>>> from multiprocessing import Process
>>> import time
>>> def target():
...     while True:
...         time.sleep(0.5)
...
>>> L = [Process(target=target) for i in range(10)]
>>> for p in L: p.start()
...
The main python process will have 10 children (visible, for example, with ps or pstree).
Now if we kill that main process, the children keep running:
note how the child processes were inherited by init (their new parent has PID 1) and are still running.
But, as I said, this is OS specific. On some OSes killing the parent process will kill all child processes.
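If you do want the children to be cleaned up when the parent exits, one standard option (not covered in the answer above) is to mark them as daemon processes before starting them. A short sketch:

from multiprocessing import Process
import time

def target():
    while True:
        time.sleep(0.5)

if __name__ == '__main__':
    workers = [Process(target=target) for _ in range(10)]
    for p in workers:
        p.daemon = True   # daemon children are terminated when the parent exits
        p.start()
    time.sleep(2)
    # On exit, multiprocessing terminates the daemon children instead of
    # leaving them to be inherited by init.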
I would like to create a shell that will control a separate process that I created with the multiprocessing module. Is this possible? How?
EDIT:
I have already found a way to send commands to the secondary process: I created a code.InteractiveConsole in that process and attached it to an input queue and an output queue, so I can command the console from my main process. But I want it in a shell, probably a wx.py.shell.Shell, so a user of the program could use it.
First create the shell
Decouple the shell from your app by making its locals empty
Create your code string
Compile the code string and get a code object
Execute the code object in the shell
import wx
from wx.py.shell import Shell

app = wx.App(False)   # a wx.App must exist before any frames are created
frm = wx.Frame(None)
sh = Shell(frm)
frm.Show()
sh.interp.locals = {}
codeStr = """
from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

q = Queue()
p = Process(target=f, args=(q,))
p.start()
print q.get()    # prints "[42, None, 'hello']"
p.join()
"""
code = compile(codeStr, '', 'exec')
sh.interp.runcode(code)
app.MainLoop()
Note:
The codeStr I stole from the first poster may not work here due to some pickling issues. But the point is you can execute your own codeStr remotely in a shell.
You can create a Queue which you pass to the separate process. From the Python Docs:
from multiprocessing import Process, Queue
def f(q):
    q.put([42, None, 'hello'])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print q.get()    # prints "[42, None, 'hello']"
    p.join()
EXAMPLE: In the wx.py.shell.Shell docs, the constructor parameters are given as
__init__(self, parent, id, pos, size, style, introText, locals,
InterpClass, startupScript, execStartupScript, *args, **kwds)
I have not tried it, but locals might be a dictionary of local variables, which you can pass to the shell. So, I would try the following:
def f(cmd_queue):
    shell = wx.py.shell.Shell(parent, id, pos, size, style, introText, locals(),
                              ...)

q = Queue()
p = Process(target=f, args=(q,))
p.start()
Inside the shell, you should then be able to put your commands into cmd_queue, which then have to be read in the parent process and executed there.
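As a rough, hypothetical sketch of that arrangement (the wx setup is omitted; the child simply stands in for the shell putting commands on the queue):

from multiprocessing import Process, Queue

def shell_process(cmd_queue):
    # In the real program this would be the wx.py.shell.Shell process;
    # here we just simulate a shell pushing commands onto the queue.
    cmd_queue.put("print('hello from the shell')")
    cmd_queue.put(None)   # sentinel: no more commands

if __name__ == '__main__':
    cmd_queue = Queue()
    p = Process(target=shell_process, args=(cmd_queue,))
    p.start()
    while True:
        cmd = cmd_queue.get()
        if cmd is None:
            break
        exec(cmd)   # the parent executes commands sent by the child
    p.join()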