Is there a way I can write a Python script that emulates the use of GNU Screen and Bash? I was originally going to write a simple Bash script, but I suspect learning the multiprocessing module will give me a little more flexibility down the road, not to mention that Python's modules are very well documented.
I have seen in the tutorials and documentation how a single function can be run in parallel, but I am a little lost on how to make use of this. Any reference would be extremely helpful.
Below is basically what I want:
If I have a bunch of experiments in different python files, then in Bash:
$ python experiment1.py &
$ python experiment2.py &
...
In Python, if I have a bunch of functions in the same script, would a main block like the following emulate the above? (This is really just a guess; I don't mean to offend anyone other than myself with my ignorance.)
import multiprocessing as mp

def experiment1():
    """Run a collection of simulations and collect relevant statistics."""
    ...

def experiment2():
    """Run a different collection of simulations and collect relevant statistics."""
    ...

if __name__ == '__main__':
    one = mp.Process(target=experiment1)
    two = mp.Process(target=experiment2)
    ...
    one.start()
    two.start()
    ...
    one.join()
    two.join()
I am not sure how I would test this except maybe with Activity Monitor on OS X, which doesn't seem to show how the work is distributed across cores, so suggestions for checking this Pythonically, without relying on runtime measurements, would be helpful. This last question might be too general, but I thought I would throw it in. Thank you for your help!
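As a quick sanity check, each worker can print its own process ID; distinct PIDs confirm that the functions really run in separate processes. A minimal sketch (the names here are illustrative):

import os
import multiprocessing as mp

def report(label):
    # Each worker prints its own process id; distinct pids show that
    # the functions really run in separate processes.
    print('{} running in process {}'.format(label, os.getpid()))

if __name__ == '__main__':
    procs = [mp.Process(target=report, args=(name,))
             for name in ('experiment1', 'experiment2')]
    for p in procs:
        p.start()
    for p in procs:
        p.join()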
The following program runs a bunch of scripts in parallel. For each one it prints a message when it starts and when it ends. If a script exits with an error, the error code and command line are printed, and the program continues.
It runs at most one script per CPU in the system at a time.
source
import multiprocessing as mp, subprocess

def run_script(script_name):
    curproc = mp.current_process()
    cmd = ['python', script_name]
    print curproc, 'start:', cmd
    try:
        return subprocess.check_output(
            cmd, shell=False)
    except subprocess.CalledProcessError as err:
        print '{} error: {}'.format(
            curproc, dict(
                status=err.returncode,
                command=cmd,
            )
        )
    finally:
        print curproc, "done"

scripts = ['zhello.py', 'blam']

pool = mp.Pool()  # default: one worker per CPU
print pool.map(
    run_script, scripts,
)
pool.close()
pool.join()
output
python: can't open file 'blam': [Errno 2] No such file or directory
<Process(PoolWorker-2, started daemon)> start: ['python', 'blam']
<Process(PoolWorker-2, started daemon)> error: {'status': 2, 'command': ['python', 'blam']}
<Process(PoolWorker-2, started daemon)> done
<Process(PoolWorker-1, started daemon)> start: ['python', 'zhello.py']
<Process(PoolWorker-1, started daemon)> done
['howdy\n', None]
Assume the following Python program from the official docs (the Process class):
from multiprocessing import Process

def f(name):
    print('hello', name)

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
Running this code with my 3.7.7 interpreter on my Windows machine works as expected, without any problems. However, running the same code in a subinterpreter created in C++ fails with the following error (no exception is actually raised; the error just gets printed to the console):
unrecognised option '-c'
I assume that the reason for this error is to be found in spawn.py (within the multiprocessing module, line 89):
...
return [_python_exe] + opts + ['-c', prog, '--multiprocessing-fork']
...
I could create my new process via Popen. This works, but the spawned process should be a child process, not a completely independent process.
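For context, the Popen route looks roughly like this (a sketch; child_script.py is a placeholder name):

import subprocess, sys

# Launches a separate interpreter process; this works in the embedded case,
# but the result is an independent process rather than a
# multiprocessing-managed child. child_script.py is a placeholder.
p = subprocess.Popen([sys.executable, 'child_script.py'])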
My question:
Why does this error occur? Is there any way to spawn a child process within a subinterpreter via multiprocessing.Process?
Thank you!
UPDATE 1
As suggested, adding freeze_support fixes the error, but a new one occurs:
unrecognised option '--multiprocessing-fork'
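For reference, adding freeze_support means a call right after the main guard, roughly like this sketch:

from multiprocessing import Process, freeze_support

def f(name):
    print('hello', name)

if __name__ == '__main__':
    # freeze_support() should be the first statement after the main guard;
    # it has no effect in an ordinary interpreter run.
    freeze_support()
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()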
Intro
Hi, I'm trying to run a Windows OS command in a loop using Python 3 multiprocessing, but when the loop gets too big (thousands of commands) I get memory errors and the process exits or never completes.
Why?
I need to run 65,000 commands as fast as possible, and running them one by one seems inefficient. The commands are normal Windows commands (dir, for example).
Note that I do not need the results of the commands; I just need them to run.
Code
import multiprocessing
import subprocess

def worker(num):
    print("worker:", num)
    subprocess.Popen('dir')  # or os.system('dir'), for example
    return

def main():
    jobs = []
    for i in range(1, 65535):
        i = str(i)
        p = multiprocessing.Process(target=worker, args=(i,))
        jobs.append(p)
        p.start()

if __name__ == '__main__':
    main()
Question
What am I doing wrong here? What's the correct way to run a Windows OS command multiple times with Python (while keeping things concurrent)?
You should limit the number of workers running at the same time.
You can use p.is_alive() to check how many of them are currently running.
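A minimal sketch of that idea, capping the number of live processes by polling is_alive() (the cap of 8 and the sleep interval are arbitrary choices):

import multiprocessing
import subprocess
import time

MAX_WORKERS = 8  # arbitrary cap; tune for your machine

def worker(num):
    print("worker:", num)
    # call (unlike Popen) waits, so the worker stays alive until dir finishes
    subprocess.call('dir', shell=True)

def main():
    running = []
    for i in range(1, 65535):
        # Block until a slot frees up before starting another process.
        while len(running) >= MAX_WORKERS:
            running = [p for p in running if p.is_alive()]
            time.sleep(0.05)
        p = multiprocessing.Process(target=worker, args=(str(i),))
        running.append(p)
        p.start()
    for p in running:
        p.join()

if __name__ == '__main__':
    main()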
I am working on a programming exercise that involves implementing a shell in Python on Linux. I am trying to run standard Unix commands using os.execvp(). I need to keep asking the user for commands, so I have used an infinite while loop. However, the infinite while loop doesn't work. I have tried searching online, but there isn't much available for Python. Any help would be appreciated. Thanks.
This is the code I have written so far:
import os
import shlex

def word_list(line):
    """Break the line into shell words."""
    lexer = shlex.shlex(line, posix=True)
    lexer.whitespace_split = False
    lexer.wordchars += '#$+-,./?#^='
    args = list(lexer)
    return args

def main():
    while True:
        line = input('psh>')
        split_line = word_list(line)
        if len(split_line) == 1:
            print(os.execvp(split_line[0], [" "]))
        else:
            print(os.execvp(split_line[0], split_line))

if __name__ == "__main__":
    main()
So when I run this and type "ls", I get the output "HelloWorld.py" (which is correct) and "Process finished with exit code 0". However, I never get the next "psh>" prompt waiting for another command. No exceptions are thrown when I run this code.
Your code does not work because it uses os.execvp. os.execvp completely replaces the current process image with the program being executed; your running process becomes the ls.
To execute a subprocess, use the aptly named subprocess module.
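For example, a minimal sketch of the loop using subprocess (assuming Python 3.5+ for subprocess.run; shlex.split stands in for the question's word_list helper):

import shlex
import subprocess

def main():
    while True:
        line = input('psh>')
        args = shlex.split(line)  # simple word splitting
        if args:
            # subprocess.run starts a child process and waits for it,
            # so the loop survives and prompts again.
            subprocess.run(args)

if __name__ == "__main__":
    main()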
If this is an ill-advised programming exercise and os.execvp is required, then you need to fork first:

# warning, never do this at home!
pid = os.fork()
if not pid:
    os.execvp(split_line[0], split_line)  # in child: replace the child's image
else:
    os.waitpid(pid, 0)                    # in parent: wait for the child
os.fork returns in both processes: it gives the pid of the child in the parent process, and zero in the child process.
If you want it to run like a shell, you are looking for os.fork(). Call it before you call os.execvp() and it will create a child process. os.fork() returns a process id: if it is 0, you are in the child process and can call os.execvp(); otherwise you are in the parent and continue with the rest of the code, which keeps the while loop running. The original process can either wait for the child to complete with os.wait(), or continue without waiting back to the top of the while loop. The pseudocode on page 2 of this link should help: https://www.cs.auckland.ac.nz/courses/compsci340s2c/assignments/A1/A1.pdf
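Putting that together with the question's loop, a minimal sketch (Linux only; word splitting simplified to shlex.split):

import os
import shlex

def main():
    while True:
        args = shlex.split(input('psh>'))
        if not args:
            continue
        pid = os.fork()
        if pid == 0:
            # Child: replace this process image with the command.
            os.execvp(args[0], args)
        else:
            # Parent: wait for the child, then loop back to the prompt.
            os.waitpid(pid, 0)

if __name__ == "__main__":
    main()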
Is it possible to start a subprocess without waiting for it to terminate?
I have a Windows program that I need to execute from a Python script, and I want to leave it running in the background without waiting for the subprocess to quit, since the program expects input to terminate itself (press q to quit).
I have tried many ways but none of them worked.
What I basically want to achieve is the following:
args = [os.path.join(path, 'myProgram.exe'), '/run']
p = subprocess.Popen(args)
and do some other stuff here with the myProgram.exe still running.
The myProgram.exe can also be installed as a service. When I tried this approach with
subprocess.call('net start myService', shell=True)
the service always fails to start. It fails with system error code 1067, which means the process terminated unexpectedly.
NOTE: I'm using Python 2.7.
Thanks for the advice.
Edit:
I have found a workaround which I don't understand...
As a workaround, I've created a myProgram.bat file that starts myProgram.exe.
BUT there's a catch: if the batch file contains only
start myProgram.exe
it behaves exactly the same as when calling subprocess - it terminates. However, if I do
timeout 0
start myProgram.exe
(i.e., wait 0 seconds first), the program starts normally.
You could try:
args = ['start', os.path.join(path, 'myProgram.exe'), '/run']
Note that start is a cmd.exe built-in rather than an executable, so the call has to go through the shell (shell=True). See start /? for help (or http://www.computerhope.com/starthlp.htm).
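A minimal sketch of that approach (the path value here is a placeholder for the question's path variable):

import os
import subprocess

path = r'C:\path\to'  # placeholder for the question's path variable

# 'start' is a cmd.exe built-in, so the command must go through the shell.
# The empty "" is the window-title argument start expects when the program
# path is quoted.
cmd = 'start "" "{}" /run'.format(os.path.join(path, 'myProgram.exe'))
subprocess.Popen(cmd, shell=True)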
If the program expects a q to quit, maybe send it one?
args = [os.path.join(path, 'myProgram.exe'), '/run']
p = subprocess.Popen(args, stdin=subprocess.PIPE)
...
p.stdin.write("q\n")
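Note that the pipe is buffered, so the q may not actually reach the program until the buffer is flushed; following the write with p.stdin.flush() (or closing the stream with p.stdin.close()) makes sure it is delivered.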
While working on a multi-threaded, cross-platform Python 3.3 application, I came across some weird behavior that I was not expecting and am not sure is normal. The issue is that on Windows 8, calling the input() method in one thread blocks other threads until it completes. I have tested the example script below on three Linux, two Windows 7, and one Windows 8 computers, and this behavior is only observed on the Windows 8 computer. Is this expected behavior for Windows 8?
test.py:
import subprocess, threading, time

def ui():
    i = input("-->")
    print(i)

def loop():
    i = 0
    f = 'sky.{}'.format(i)
    p = subprocess.Popen(['python', 'copy.py', 'sky1', f])
    t = time.time()
    while time.time() < t + 15:
        if p.poll() != None:
            print(i)
            time.sleep(3)
            i += 1
            f = 'sky.{}'.format(i)
            p = subprocess.Popen(['python', 'copy.py', 'sky1', f])
    p.terminate()
    p.wait()

def start():
    t1 = threading.Thread(target=ui)
    t2 = threading.Thread(target=loop)
    t1.start()
    t2.start()
    return t2

t2 = start()
t2.join()
print('done')
copy.py:
import shutil
import sys
src = sys.argv[1]
dst = sys.argv[2]
print('Copying \'{0}\' to \'{1}\''.format(src, dst))
shutil.copy(src, dst)
Update:
While trying out one of the suggestions, I realized that I had rushed to a conclusion and missed something obvious. I apologize for getting off to a false start.
As Schollii suggested, just using threads (no subprocess or Python files) results in all threads making forward progress, so the problem actually is that using input() in one Python process causes other Python processes to block or not run (I do not know exactly what is going on). Furthermore, it appears that only Python processes are affected. If I use the same code shown above (with some modifications) to execute non-Python executables with subprocess.Popen, they run as expected.
To summarize:
Using subprocess to execute a non-Python executable: works as expected, with and without any calls to input().
Using subprocess to execute a Python executable: the created processes appear not to run if a call to input() is made in the original process.
Using subprocess to create Python processes with the call to input() in a new process rather than the original one: the call to input() blocks all Python processes spawned by the 'main' process.
Side note: I do not have a Windows 8 machine, so debugging/tests can be a little slow.
Because there were several problems with input() in Python 3.0-3.2, this method has gone through a few changes.
It's possible that we have a new bug again.
Can you try the following variant, which emulates the Python 2.x behavior of input() (i.e., eval applied to the raw line read from the console):
...
i = eval(input("-->"))
...
It's a very good problem to work on.
Since you depend on the input() method, which needs the console, and you have several threads, all of the threads end up trying to communicate with the console.
So I advise you to either use the producer-consumer pattern, or define all your inputs in a text file and pass the text file to the program. A sketch of the first option follows.
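A minimal sketch of the producer-consumer idea (the names and the 'quit' sentinel are illustrative): one dedicated thread owns the console and feeds a queue, and the worker threads consume from the queue instead of calling input() themselves:

import queue
import threading

commands = queue.Queue()

def producer():
    # The only thread that touches the console.
    while True:
        line = input("-->")
        commands.put(line)
        if line == 'quit':
            break

def consumer():
    # Workers take their input from the queue instead of input().
    while True:
        line = commands.get()
        if line == 'quit':
            commands.put(line)  # let the other consumers see it too
            break
        print('consumed:', line)

threading.Thread(target=producer).start()
for _ in range(2):
    threading.Thread(target=consumer).start()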