I need to run two processes (Python scripts) at almost the same time, but I don't want the program to continue until both of them have finished. I am launching these processes from a C++ program using system().
Is this the right way to run script1 and script2 at the same time, continuing only after both are finished?
python ./script1.py & python ./script2.py
Thank you!
Your snippet won't work, because it will continue as soon as script2 finishes; script1 may still be working in the background.
If you are using the bash shell you can do the following:
python ./script1.py &
PID1=$!
python ./script2.py
wait $PID1
$! holds the process ID of the most recently backgrounded command. So we run script1 in the background, run script2 in the foreground until it completes, and then wait for script1 to finish (if it hasn't already).
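If you are driving this from Python rather than passing a shell string to system(), a minimal sketch of the same pattern with subprocess (script names as in the question) would be:
import subprocess
# start script1 in the background; Popen returns immediately
p1 = subprocess.Popen(["python", "./script1.py"])
# run script2 in the foreground and wait for it to complete
subprocess.call(["python", "./script2.py"])
# finally wait for script1 too, in case it is still running
p1.wait()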
My question seems quite easy, but for some reason I did not find a quick answer to it. I have a Python script that I run from the terminal command line (an Ubuntu Linux server), and it works. But then I can't use the command line until the script ends. The script takes a long time to run, and I would like to continue using the command line to perform other tasks. How can I run the script so that it keeps working without occupying the command line or showing its progress there? And how can I see the active processes running on the server, to check whether a given process is still running?
The script takes a long time to run, and I would like to continue using the command line to perform other tasks.
It seems that you want to run said process in the background: take your script command,
python script.py
and append & to it, followed by the next command you want to run, e.g. echo "123":
python script.py &
echo "123"
This should start script.py and then immediately print 123, without waiting for script.py to end.
how can I see the active processes that are running on the server to see if a process is running?
Use the ps command:
ps -ef
It will list all processes; you will probably want to filter the output to find the one you are interested in.
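If you would rather manage this from Python than from the shell, a minimal sketch with subprocess (script name as in the question) could look like this:
import subprocess
# launch script.py without blocking; Popen returns immediately
proc = subprocess.Popen(["python", "script.py"])
print("123")  # runs right away while script.py keeps working
# later: poll() returns None while the process is still alive
if proc.poll() is None:
    print("script.py is still running, pid %d" % proc.pid)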
When I run the following bash script via cron, the last line runs before the prior line has completed. Why? And how can I enforce the order I want?
I'm convinced that cron is the culprit here somehow, but for the sake of argument, here's a dummy bash script (Obviously, this is just an illustration. In the real world I'm doing some work in the python program and then trying to copy its work product to another place after it's done.):
#!/usr/bin/env bash
cd /tmp/kier/script
script output -c "./sleeper.py; echo '...and we are done'"
echo "This is the next line after invoking script..."
...and for completeness, here's the python script, sleeper.py:
#!/usr/bin/env python3
import time
print("python program starting")
time.sleep(5)
print("python program done")
When I run the bash script from the command line all is well. Specifically, the "This is the next line..." text prints at the very end, after the 5-second sleep.
But when I run it from cron, the output comes in the wrong order (this is the email that comes to me after cron runs the job):
Script started, file is output
Script done, file is output
This is the next line after invoking script...
python program starting
python program done
...and we are done
So you can see that "This is the next line..." prints before the python script has even really started, as though script were running the python program in the background.
I'm stumped. Why is this happening and how can I make the echo command wait until script has finished running the python program?
(Finally: yes, I could fold my extra command into the command string I send to script, and I am actually considering that. But come on! This is nuts!)
I should follow up and share the solution I came up with. In the end I never got a good answer as to WHY it behaves this way in my (RedHat) environment, so I settled on a workaround. I...
created a sentinel file before invoking "script",
included an extra command deleting the sentinel file in the script's command text, and then
waited for the sentinel file to go away before continuing.
Like this:
sentinel=$(mktemp)   # create a sentinel file
script output -c "./sleeper.py; rm '$sentinel'"
while [ -f "$sentinel" ]   # wait until the inner command removes it
do
    sleep 3
done
Yes, it's a hack, but I needed to move on.
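For what it's worth, the same sentinel pattern is straightforward to sketch in Python too, if the surrounding job is driven from Python rather than bash (file names as in the example above):
import os
import subprocess
import tempfile
import time
# create the sentinel file; mkstemp returns an open fd and the path
fd, sentinel = tempfile.mkstemp()
os.close(fd)
# have the inner command remove the sentinel when it is really done
subprocess.call(["script", "output", "-c", "./sleeper.py; rm %s" % sentinel])
# wait for the sentinel to disappear before continuing
while os.path.exists(sentinel):
    time.sleep(3)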
In my current application, I have a Python 2.7 script called main.py that launches another Python 2.7 script called calculator.py using GNU Parallel like the following:
os.system("seq 10000 | parallel -N0 -j 50 nohup python calculator.py &")
print "Done"
This works pretty well, with one exception: I need the rest of main.py (that is, the code after the os.system call, e.g. the print "Done" line) to resume only after all 10000 instances spawned with GNU Parallel have finished running.
Is there a proper way to do that? Solutions with os.spawn and Python 2.7 subprocess are both welcome, but using GNU Parallel is absolutely mandatory.
EDIT: Here are my requirements:
1) it is crucial to me that the many instances of calculator.py that are spawned keep running if the terminal closes (hence the nohup)
2) I need it to not block current terminal session (hence the &)
3) I need the print "Done" in the example above to be executed only after all 10000 jobs have finished
If achieving all of the above at the same time is not possible, I could manually keep a log of all launched processes and then force the rest of the main.py code to continue only after all those processes have ended. This, of course, is a cumbersome last-resort option.
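One possible way out, sketched under an assumption the thread itself doesn't state: parallel does not exit until all of its jobs have finished, so dropping the trailing & makes os.system block until all 10000 instances are done; requirements 1 and 2 could then be met by launching main.py itself in the background with nohup python main.py & instead of backgrounding the inner command:
import os
# blocks until parallel, and therefore every one of the 10000 jobs, finishes
# (nohup on calculator.py kept as in the question in case the session dies)
os.system("seq 10000 | parallel -N0 -j 50 nohup python calculator.py")
print "Done"  # only reached after all jobs have completed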
I'm using Python 3.4.2 on Windows. In script1.py I'm doing this:
myProc = subprocess.Popen([sys.executable, "script2.py", "argument"])
myProc.communicate()
it works and calls script2.py.
The problem is that script2.py contains an infinite loop (it has to), so script1.py waits forever for it to finish. How can I tell script1.py to just launch script2.py and not wait for the process to finish?
Just don't call myProc.communicate() if you don't want to wait: subprocess.Popen starts the process and returns without waiting for it.
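A minimal sketch, assuming script2.py sits next to script1.py as in the question:
import subprocess
import sys
# Popen returns as soon as the child has started; no communicate(), no wait()
myProc = subprocess.Popen([sys.executable, "script2.py", "argument"])
print("script1.py continues immediately")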
Call the script in another window.
myProc = subprocess.Popen("start " + sys.executable + " script2.py argument", shell=True)
myProc.communicate()
start is a Windows shell (cmd.exe) builtin that runs a program separately, allowing the current one to continue; because it is a builtin rather than an executable, Popen needs shell=True here. I haven't tested this as I've no access to a Windows OS, but the Linux equivalent (nohup) works as required.
If you need finer control over what happens with script2.py, refer to the multiprocessing module in the standard library.
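For completeness, a small sketch of the multiprocessing route; the run_script2 wrapper is hypothetical, not from the answer:
import multiprocessing
import subprocess
import sys

def run_script2():
    # hypothetical wrapper: run script2.py and discard its exit status
    subprocess.call([sys.executable, "script2.py", "argument"])

if __name__ == "__main__":
    worker = multiprocessing.Process(target=run_script2)
    worker.daemon = True  # daemon children are terminated when the parent exits
    worker.start()
    # script1.py carries on here; worker.is_alive() tells you if it still runs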
I have an issue using subprocess.call when calling scripts which in turn run background processes.
I am calling a bash script from a python script.
Python 2.7.3.
#!/bin/python
from subprocess import call
.
.
call(["run_exp",file_name])
print "exp complete!"
.
.
run_exp is a bash script which runs a process in the background.
#!/bin/bash
.
.
run_task auto_output 2>/dev/null &
.
.
echo "run_exp finished!"
The run_task command is another bash script; it always completes before run_exp finishes.
Running run_exp from the command line I see the expected behaviour, and all processes run to completion.
The issue arises when I call run_exp using Python's call. I see the output "run_exp finished!" but never "exp complete!". If I remove the run_task operation (and the code associated with it) from run_exp, the call runs to completion as expected. This leads me to believe there is an issue using call when the called script runs processes in the background.
Can anyone shed any light on why this might occur?
Thanks!
The output of the background script is still going to the same file descriptors as the child script's; that is why the parent is still waiting for it to finish.
You should close (or redirect) all file descriptors in your background script if you want to daemonize it:
(run_task auto_output >/dev/null 2>&1) &
(The parentheses run it in a subshell, which I have sometimes found to be necessary.)
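From the Python side, the same idea can be sketched by handing the child your own descriptors, so a background grandchild that inherits them cannot hold the parent's pipes open (a sketch, not part of the original answer; file_name stands in for the question's elided variable):
import os
from subprocess import call
file_name = "experiment_1"  # hypothetical value for the question's variable
# send the child's stdout/stderr to /dev/null so nothing left running
# in the background keeps a descriptor the parent might wait on
devnull = open(os.devnull, "wb")
call(["run_exp", file_name], stdout=devnull, stderr=devnull)
devnull.close()
print "exp complete!"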
Also it can help to wait explicitly at the end of your child script for the background process:
run_task auto_output 2>/dev/null & backgroundPid=$!
...
echo "run_exp finished!"
wait "$backgroundPid"
And if neither strategy works on its own, combining the two is also worth trying.