Can multiprocessing Process class be run from IDLE - python

A basic example of the multiprocessing Process class runs when executed from a file, but not from IDLE. Why is that, and can it be done?

from multiprocessing import Process

def f(name):
    print('hello', name)

p = Process(target=f, args=('bob',))
p.start()
p.join()

Yes. The following works, in that function f is run in a separate (third) process.

from multiprocessing import Process

def f(name):
    print('hello', name)

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
However, to see the print output, at least on Windows, one must start IDLE from a console like so.
C:\Users\Terry>python -m idlelib
hello bob
(Use idlelib.idle on 2.x.) The reason is that IDLE runs user code in a separate process. Currently the connection between the IDLE process and the user code process is via a socket. The fork done by multiprocessing does not duplicate or inherit the socket connection. When IDLE is started via an icon or Explorer (in Windows), there is nowhere for the print output to go. When started from a console with python (rather than pythonw), output goes to the console, as above.
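If starting IDLE from a console is not convenient, a workaround is to have the child write somewhere other than stdout. A minimal sketch, assuming Python 3; the filename child_output.txt is just an illustration:

from multiprocessing import Process

def f(name):
    # Under IDLE the child's stdout may go nowhere, so write to a file
    # instead; mode 'a' appends, preserving output from repeated runs.
    with open('child_output.txt', 'a') as out:
        print('hello', name, file=out)

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()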

Related

Why don't I see output from this function when calling it with multiprocessing? [duplicate]


how to start a process in python on windows?

I am trying to start a process using the multiprocessing.Process example from the Python documentation.
Here is the example code:

from multiprocessing import Process
import os

def info(title):
    print(title)
    print('module name:', __name__)
    print('parent process:', os.getppid())
    print('process id:', os.getpid())

def f(name):
    info('function f')
    print('hello', name)

if __name__ == '__main__':
    info('main line')
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
I would expect the console to show me the output of the function f('bob'), but I only get to see the output of info('main line').
So I think the process doesn't even start?
I have never worked with multiprocessing before; I bet it's a silly mistake I'm making.
I have also tried to set the start method with multiprocessing.set_start_method('spawn') (see here), as 'spawn' seems to be the only valid one for Windows.
But I only get a
RuntimeError: context has already been set
At the moment I think I can't get the process to start.
Any ideas how to solve this?
P.S. I am working on Windows 10 in Spyder 4.2.5 (maybe this has something to do with the IPython console? Because I have heard it is not a normal Python console).
But I have also tried the same example in the normal Python shell, and it also only showed the output of info('main line').
SOLVED: by running the script from cmd
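A note on the RuntimeError: context has already been set above: set_start_method may be called at most once per process, and an IPython-based console may have set the context already. A minimal sketch that sidesteps the global call by using multiprocessing.get_context, which is part of the documented multiprocessing API:

import multiprocessing as mp

def f(name):
    print('hello', name)

if __name__ == '__main__':
    # A context object carries its own start method, so the global
    # default is never touched and no RuntimeError can be raised.
    ctx = mp.get_context('spawn')
    p = ctx.Process(target=f, args=('bob',))
    p.start()
    p.join()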

not able to terminate the process in multiprocessing python (linux)

I am new to Python and to multiprocessing. I am starting one process and calling a shell script from it. After terminating this process, the shell script keeps running in the background. How do I kill it? Please help.
Python script (test.py):
#!/usr/bin/python
import time
import os
import sys
import multiprocessing

# test process
def test_py_process():
    os.system("./test.sh")
    return

p = multiprocessing.Process(target=test_py_process)
p.start()
print 'STARTED:', p, p.is_alive()
time.sleep(10)
p.terminate()
print 'TERMINATED:', p, p.is_alive()
Shell script (test.sh):

#!/bin/bash
for i in {1..100}
do
    sleep 1
    echo "Welcome $i times"
done
The reason is that the process spawned by multiprocessing spawns a child process of its own via the os.system call. As explained in the multiprocessing docs, descendant processes of the process will not be terminated – they will simply become orphaned. So p.terminate() kills the process you created, but the shell process (/bin/bash ./test.sh) is simply reparented to init and continues executing.
You could use subprocess.Popen instead:
import time
from subprocess import Popen

if __name__ == '__main__':
    p = Popen("./test.sh")
    print 'STARTED:', p, p.poll()
    time.sleep(10)
    p.kill()
    print 'TERMINATED:', p, p.poll()
Edit: @Florian Brucker beat me to it. He deserves the credit for answering the question first. Still keeping this answer for the alternate approach using subprocess, which is recommended over os.system() in the documentation for os.system() itself.
os.system runs the given command in a separate process. Therefore, you have three processes:
The main process in which your script runs
The process in which test_py_process runs
The process in which the bash script runs
Process 2 is a child process of process 1, and process 3 is a child of process 2.
When you call Process.terminate from within process 1, this will send the SIGTERM signal to process 2. That process will then terminate. However, the SIGTERM signal is not automatically propagated to the child processes of process 2! This means that process 3 is not notified when process 2 exits and hence keeps on running as a child of the init process.
The best way to terminate process 3 depends on your actual problem setting, see this SO thread for some suggestions.
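One common approach on Linux, sketched here with only the standard library (Python 3 syntax): start the script in its own process group via subprocess and signal the whole group, so the bash script dies together with its parent.

import os
import signal
import subprocess
import time

if __name__ == '__main__':
    # start_new_session=True runs test.sh in a new session and hence a
    # new process group, so killpg can reach every descendant.
    p = subprocess.Popen("./test.sh", start_new_session=True)
    time.sleep(10)
    os.killpg(os.getpgid(p.pid), signal.SIGTERM)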

Running a function as a background thread in Python and exiting before its completion

I'm executing a function as a thread in Python. Right now, the program waits for the function to finish and only then terminates.
My goal is to start the background thread and close the program that called it.
How can we do that? In the code below, the thread will take 30 minutes to execute. I want the main program to stop after starting the thread and let the thread run in the background.

import threading

thread = threading.Thread(target=function_that_runs_for_30_min)
thread.start()
print "Thread Started"
quit()
You cannot do that directly. A thread is just a part of a process. Once the process exits, all the threads are gone. You need to create a background process to achieve that.
You cannot use the multiprocessing module either because it is a package that supports spawning processes using an API similar to the threading module (emphasis mine). As such it has no provision to allow a process to run after the end of the calling one.
The only way I can imagine is to use the subprocess module to restart the script with a specific parameter. For a simple use case, adding a parameter is enough; for more complex command-line parameters, the argparse module should be used. Example code:
import subprocess
import sys
# only to wait some time...
import time

def f(name):
    "Function that could run in background for a long time (30')"
    time.sleep(5)
    print 'hello', name

if __name__ == '__main__':
    if (len(sys.argv) > 1) and (sys.argv[1] == 'SUB'):
        # Should be an internal execution: start the lengthy function
        f('bar')
    else:
        # normal execution: start a subprocess with same script to launch the function
        p = subprocess.Popen("%s %s SUB" % (sys.executable, sys.argv[0]))
        # other processing...
        print 'END of normal process'
Execution:
C:\>python foo.py
END of normal process
C:\>
and five seconds later:
hello bar
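A variation on the same restart idea, sketched for POSIX with Python 3 syntax (on Windows one would pass creationflags=subprocess.DETACHED_PROCESS instead): detaching the child from the parent's session makes it survive the parent's exit and the closing of the terminal.

import subprocess
import sys
import time

def f(name):
    time.sleep(5)
    print('hello', name)

if __name__ == '__main__':
    if len(sys.argv) > 1 and sys.argv[1] == 'SUB':
        f('bar')
    else:
        # start_new_session=True detaches the child from this process's
        # session, so it keeps running after the parent exits.
        subprocess.Popen([sys.executable, sys.argv[0], 'SUB'],
                         start_new_session=True)
        print('END of normal process')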

fabric run fork

I want to use fabric.api.run to directly start an application on a remote box. Since the application takes a really long time to finish, I wish to be able to fork a child process so that I don't need to wait for a long time.
The code is like:
from fabric.api import run
....
run("python ./myApp.py --fork=True >myApp.log 2>&1")
I used the following code to enable forking inside the code:
if settings.fork:
    child_pid = os.fork()
    if child_pid == 0:
        print "Starting Child Process: PID# %s" % os.getpid()
    else:
        print "Terminating Parent Process: PID# %s" % os.getpid()
        os._exit(0)
The problem is that after I issue the run command and ssh into the remote box, I find the program has quit for some unknown reason. I checked the log file and there is nothing in it.
Could somebody let me know how I can work around this? Many thanks!
Talking of forks, there is a fork of Fabric that enables parallel execution, apart from lots of other improvements.
http://tav.espians.com/fabric-python-with-cleaner-api-and-parallel-deployment-support.html
Depending on what you are doing, you may want to consider that.
Apart from that, I think you want to use multiprocessing:
from multiprocessing import Process

def f(name):
    print 'hello', name

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    #p.join()
http://docs.python.org/library/multiprocessing.html
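For the original question of backgrounding a command over Fabric's run, the workaround usually cited for Fabric 1.x is to combine nohup, full stream redirection, and pty=False so that Fabric does not kill the remote program when run returns. A sketch; the task name start_app is illustrative:

from fabric.api import run

def start_app():
    # nohup ignores the hangup signal; redirecting stdin, stdout and
    # stderr frees the ssh channel; pty=False keeps Fabric from
    # terminating the program when it tears down the pseudo-terminal.
    run("nohup python ./myApp.py > myApp.log 2>&1 < /dev/null &", pty=False)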
