Stopping a program in terminal that has been started from Python - python

I have a Python script that starts another program via os.system (the shell script tag-bm.sh actually calls the program itself):
def tag(obt_path, corpora_path):
    print corpora_path
    os.system('cd ' + obt_path + ' && ./tag-bm.sh ' + corpora_path + ' > ' + corpora_path + '.obt')
    os.system('pwd')
Sometimes this program goes into an infinite loop, which creates a problem for my main program. Is there a way to set things up so that if the program has not ended within a set time, Python disrupts it?

Find out the name of the command; for instance, if it's cp then you can get the pid of the command using either pgrep cp or pidof cp. Note that this returns the pids of all cp processes. Call pgrep right after you start the command and it should be at the top, so the pid in that case is pgrep cp | head -n1 or pidof cp | cut -s -f1. Store this pid and kill it later at the desired time.
Alternatively you could run the command with timeout, giving it a duration after which the command automatically receives a kill signal. Example usage: timeout 500s cp large_file destination_file. The cp will be killed after 500 seconds in this case. See `man timeout` for more information.
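If the time limit has to live in the Python script itself rather than in the shell command, a minimal sketch (assuming Python 3; the function name and the 600-second limit are illustrative, not from the question) is to start the pipeline with subprocess.Popen and kill it when a wait times out:
import subprocess

def tag_with_timeout(obt_path, corpora_path, limit=600):
    # Same shell pipeline as in the question, but started with Popen so we keep a handle on it.
    cmd = './tag-bm.sh ' + corpora_path + ' > ' + corpora_path + '.obt'
    proc = subprocess.Popen(cmd, shell=True, cwd=obt_path)
    try:
        proc.wait(timeout=limit)      # block for at most `limit` seconds
    except subprocess.TimeoutExpired:
        proc.kill()                   # disrupt the runaway tagger
        proc.wait()                   # reap it so no zombie is left behind
Note that with shell=True the kill hits the intermediate shell; if tag-bm.sh spawns further children you may also need start_new_session=True and os.killpg to take down the whole group.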

Related

Kill a running subprocess with a button click

I'm running strandtest.py to power a NeoPixel matrix on a Django server.
A button click runs the program
I'd like a second button click to kill it.
I've read a bunch of different solutions but I am having trouble following the instructions. Is there another subprocess command I can execute while the program runs?
This is how I start the program:
import subprocess
def test(request):
    sp = subprocess.Popen('sudo python /home/pi/strandtest.py', shell=True)
    return HttpResponse()
I want to kill the process and clear the matrix.
You can try to run the command below with subprocess.Popen:
kill -9 `ps aux | grep /home/pi/strandtest.py | awk -F' ' '{print $2}'`
kill -9 - kills your process
ps aux combined with grep - finds only the process you need
awk - extracts only the id of that process
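An alternative that avoids the ps/grep round-trip is to keep the Popen handle from the first view and kill its process group from a second view. A rough sketch, assuming Django and a single-process server (the view names, the module-level variable, and the process-group handling are illustrative, not from the question):
import os
import signal
import subprocess
from django.http import HttpResponse

strand_proc = None  # module-level handle shared between the two views

def start(request):
    global strand_proc
    strand_proc = subprocess.Popen('sudo python /home/pi/strandtest.py',
                                   shell=True, preexec_fn=os.setsid)
    return HttpResponse()

def stop(request):
    if strand_proc is not None:
        # shell=True puts a shell between us and strandtest.py, so kill the
        # whole process group created by preexec_fn=os.setsid.
        os.killpg(os.getpgid(strand_proc.pid), signal.SIGTERM)
    return HttpResponse()
Since strandtest.py is started with sudo, the kill may itself need elevated rights, and clearing the matrix is still up to strandtest.py (for example in a signal handler or atexit hook).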

How to run a sequence of Python scripts one after another in a loop?

How do I run 20-30 scripts sequentially, one by one, and after the last one finishes run the first one again, repeating this on an hourly basis?
I tried to implement it using crontab, but it's a bulky way. I want to guarantee that only one script is running at any moment. The time of execution for each script is about 1 minute.
I wrote a bash script for such a goal and think to run it every hour by using cron:
if ps ax | grep $0 | grep -v $$ | grep bash | grep -v grep
then
echo "The script is already running."
exit 1
else
python script1.py
python script2.py
python script3.py
...
python script30.py
fi
but is it a good way?
From this question, I assume you only want to run the next program once the previous one has finished.
I suggest subprocess.call; it only returns once the called program has finished executing.
Here's an example. It will run script1, and then script2, when script1 has finished.
import subprocess
program_list = ['script1.py', 'script2.py']
for program in program_list:
    subprocess.call(['python', 'program'])
    print("Finished:" + program)
Correction to @twaxter's:
import subprocess
program_list = ['script1.py', 'script2.py']
for program in program_list:
    subprocess.call(['python', program])
    print("Finished:" + program)
You may use a for-loop:
scripts="script1.py script2.py script3.py"
for s in $scripts
do
    python $s
done
You can also use the exec function:
program_list = ["script1.py", "script2.py"]
for program in program_list:
    exec(open(program).read())
    print("\nFinished: " + program)
If your files match a glob pattern:
files=( python*.py )
for f in "${files[@]}"
do
python "$f"
done
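If the hourly repetition and the single-instance guarantee should live in Python rather than cron, a simple sketch (the script names and the one-hour period are illustrative) is to run the whole list inside an endless loop; each script only starts after the previous call returns, so no two can ever overlap:
import subprocess
import time

program_list = ['script1.py', 'script2.py', 'script3.py']  # ... up to script30.py

while True:
    start = time.time()
    for program in program_list:
        subprocess.call(['python', program])   # blocks until this script finishes
        print("Finished: " + program)
    # sleep out the rest of the hour before starting the first script again
    time.sleep(max(0, 3600 - (time.time() - start)))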

Run shutdown command in background after delay on windows

I am trying to run a command on a Windows machine running a Python client program. It needs to return a value to the server before executing the command.
def shutdown():
    if OS_LINUX:
        os.system("sleep 10s && sudo shutdown &")
    elif OS_WIN:
        os.system(<What to put HERE>)
    return (1, "Shutting down")
As you can see, the Unix command works just fine.
In the background, it runs sleep for 10 seconds, and after that is done it runs sudo shutdown. The function is allowed to return properly, and the server gets notice that the client is shutting down BEFORE sudo shutdown runs.
However, on the Windows side of things, I can't seem to run shutdown -s after a delay AND run it in the background.
This is what I have so far:
start /b /wait timeout 10 && shutdown -s
I have tried many variations of it: with/without /wait and /b, using ping instead of timeout, sending the output of ping/timeout to >nul, etc.
This one has been the closest to what I want, but timeout takes over the command line until it is done, which doesn't let the return statement in shutdown() run before shutdown -s executes. This leaves the server hanging until it times out, which is not what I want the user to see, especially because I can't guarantee that the client didn't just lose its connection at the same time the server told it to shut down.
I might be able to solve the problem by throwing timeout 10 && shutdown -s into a batch script and running that in the background using os.system("start /b shutdown_script.bat"), but this client program needs to be just a single portable file for distribution reasons.
The solution is easy on Unix; what am I missing on Windows?
EDIT: Running the os.system("shutdown -s") command (at least on Win10) shows the user a screen saying that the system is about to shut down. This allows my function to work properly and return a value to the server. This is NOT the case for other commands like hibernate ("shutdown -h"), and not necessarily the case on older versions of Windows either. The problem still remains for other commands, such as closing the client program remotely.
EDIT2: I also need to run commands for hibernate and logoff, shutdown -h and -l respectively. The -t parameter of shutdown only works with -s and -r (at least on Windows 10).
This command works fine for me:
shutdown /t 10 /s
So I take it that you want the command window to display the timeout, and as soon as the timer is up you want the shutdown command to execute and exit. Here is what I would do:
echo off
goto :a
cls
:a
cls
timeout 10
cls
:b
cls
shutdown -s >nul
cls
You could also use this command if you want a shorter version:
shutdown /t 10 /s
I ended up solving the problem using schtasks because shutdown only supports a timeout for /s and /r.
The python code adds a 1 minute offset to the current time (schtasks doesn't deal in seconds), then calls os.system("schtasks.....") with a /f parameter to avoid schtasks holding up the console asking for a y/n overwrite.
def get_time_offset(offset):
    now = datetime.datetime.now()
    offset_now = now + datetime.timedelta(minutes=offset)
    offset_now_time = offset_now.time()
    future_time = (str(offset_now_time.hour) + ":" + str(offset_now_time.minute) + ":" + str(offset_now_time.second))
    return future_time

def sch_task(name, time, task):
    os.system("schtasks /create /tn \"" + str(name) + "\" /sc once /st " + str(time) + " /tr \"" + str(task) + "\" /f")
The final call looks like this:
sch_task("client_hibernate", get_time_offset(1), "shutdown /h")
It will run 1 minute after the call is made.
The only problem with this solution is that schtasks only has precision by the minute, so you can't schedule a task for less than a minute in the future.
I will probably write a multithreaded timer in Python to run the command instead of relying on schtasks for Windows and sleep for Linux; a sketch of that idea follows.
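A rough version of that timer (the 10-second delay, the exact shutdown commands, and the platform checks are placeholders, not part of the question):
import os
import platform
import threading

OS_WIN = platform.system() == "Windows"
OS_LINUX = platform.system() == "Linux"

def delayed_command(command, delay=10):
    # Timer threads are non-daemonic by default, so the interpreter stays
    # alive until the scheduled command has fired.
    threading.Timer(delay, os.system, args=(command,)).start()

def shutdown():
    if OS_WIN:
        delayed_command("shutdown /s /t 0")
    elif OS_LINUX:
        delayed_command("sudo shutdown now")
    return (1, "Shutting down")
Because the timer fires in a background thread, shutdown() returns immediately and the server gets its reply before the command runs.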

How to run a background process and do *not* wait?

My goal is simple: kick off rsync and DO NOT WAIT.
Python 2.7.9 on Debian
Sample code:
rsync_cmd = "/usr/bin/rsync -a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3}".format(remote_user, remote_server, file1, file1)
rsync_cmd2 = "/usr/bin/rsync -a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3} &".format(remote_user, remote_server, file1, file1)
rsync_path = "/usr/bin/rsync"
rsync_args = shlex.split("-a -e 'ssh -i /home/mysuser/.ssh/id_rsa' {0}@{1}:'{2}' {3}".format(remote_user, remote_server, file1, file1))
#subprocess.call(rsync_cmd, shell=True) # This isn't supposed to work but I tried it
#subprocess.Popen(rsync_cmd, shell=True) # This is supposed to be the solution but not for me
#subprocess.Popen(rsync_cmd2, shell=True) # Adding my own shell "&" to background it, still fails
#subprocess.Popen(rsync_cmd, shell=True, stdin=None, stdout=None, stderr=None, close_fds=True) # This doesn't work
#subprocess.Popen(shlex.split(rsync_cmd)) # This doesn't work
#os.execv(rsync_path, rsync_args) # This doesn't work
#os.spawnv(os.P_NOWAIT, rsync_path, rsync_args) # This doesn't work
#os.system(rsync_cmd2) # This doesn't work
print "DONE"
(I've commented out the execution commands only because I'm actually keeping all of my trials in my code so that I know what I've done and what I haven't done. Obviously, I would run the script with the right line uncommented.)
What happens is this: I can watch the transfer on the server, and only when it's finished do I get a "DONE" printed to the screen.
What I'd like to have happen is a "DONE" printed immediately after issuing the rsync command and for the transfer to start.
Seems very straight-forward. I've followed details outlined in other posts, like this one and this one, but something is preventing it from working for me.
Thanks ahead of time.
(I have tried everything I can find on StackExchange and don't feel like this is a duplicate because I still can't get it to work. Something isn't right in my setup and I need help.)
Here is a verified example for the Python REPL:
>>> import subprocess
>>> import sys
>>> p = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(100)'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT); print('finished')
finished
How to verify that via another terminal window:
$ ps aux | grep python
Output:
user 32820 0.0 0.0 2447684 3972 s003 S+ 10:11PM 0:00.01 /Users/user/venv/bin/python -c import time; time.sleep(100)
Popen() starts a child process; it does not wait for it to exit. You have to call the .wait() method explicitly if you want to wait for the child process. In that sense, all subprocesses are background processes.
On the other hand, the child process may inherit various properties/resources from the parent, such as open file descriptors, the process group, its controlling terminal, and some signal configuration. This may prevent ancestor processes from exiting (e.g., Python subprocess .check_call vs .check_output), or the child may die prematurely on Ctrl-C (SIGINT is sent to the foreground process group) or if the terminal session is closed (SIGHUP).
To disassociate the child process completely, you should make it a daemon. Sometimes something in between is enough; e.g., it is enough to redirect the inherited stdout in a grandchild so that .communicate() in the parent returns when its immediate child exits.
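A minimal sketch of that middle ground, assuming Python 3 on a POSIX system (on 2.7 you would use preexec_fn=os.setsid and open(os.devnull) instead of start_new_session and DEVNULL); the remote_user/remote_server/file1 values below are placeholders standing in for the ones built earlier:
import subprocess

remote_user, remote_server, file1 = 'myuser', 'example.com', '/tmp/file1'  # placeholders

proc = subprocess.Popen(
    ['/usr/bin/rsync', '-a', '-e', 'ssh -i /home/myuser/.ssh/id_rsa',
     '{0}@{1}:{2}'.format(remote_user, remote_server, file1), file1],
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,    # nothing inherited that the parent could end up waiting on
    stderr=subprocess.DEVNULL,
    start_new_session=True,       # new session: no SIGHUP/SIGINT from the parent's terminal
)
print("DONE")                     # prints immediately; rsync keeps running in the background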
I encountered a similar issue while working with QNX devices and wanted a subprocess that runs independently of the main process and even keeps running after the main process terminates.
Here's the solution I found that actually works: creationflags=subprocess.DETACHED_PROCESS:
import subprocess
import time
pid = subprocess.Popen(["python", r"path_to_script\turn_ecu_on.py"], creationflags=subprocess.DETACHED_PROCESS)
time.sleep(15)
print("Done")
Link to the doc: https://docs.python.org/3/library/subprocess.html#subprocess.Popen
In Ubuntu the following commands keep working even after the Python app exits.
url = "https://www.youtube.com/watch?v=t3kcqTE6x4A"
cmd = f"mpv '{url}' && zenity --info --text 'you have watched {url}' &"
os.system(cmd)

timeout limit for holding exit status from system in perl/python

I have a simple Perl script that calls another Python script to deploy a server in the cloud.
I capture the exit status of the deployment inside perl to take any further action after success/failure setup.
It's like:
$cmdret = system("python script.py ARG1 ARG2");
Here the Python script runs for 3 to 7 hours.
The problem is that, irrespective of the success or failure return status, the system receives a SIGHUP at this step at random, even though the process is running in the background, and this breaks the steps that follow.
So does anyone know if there is any time limit for holding the return status from system() that leads to the hangup signal being sent?
Inside the Python script script.py, pexpect is used to execute scripts remotely:
doSsh(User,Passwd,Name,'cd '+OutputDir+';python host-bringup.py setup')
doSsh(User,Passwd,Name,'cd '+OpsHome+'/ops/hlevel;python dshost.py start')
....
And doSsh is a pexpect subroutine:
def doSsh(user, password, host, command):
    try:
        child = pexpect.spawn("ssh -o ServerAliveInterval=100 -n %s@%s '%s'" % (user, host, command), logfile=sys.stdout, timeout=None)
        i = child.expect(['password:', r'\(yes\/no\)', r'.*password for paasusr: ', r'.*[$#] ', pexpect.EOF])
        if i == 0:
            child.sendline(password)
        elif i == 1:
            child.sendline("yes")
            child.expect("password:")
            child.sendline(password)
        data = child.read()
        print data
        child.close()
        return True
    except Exception as error:
        print error
        return False
The first doSsh execution takes ~6 hours, and the session is killed after a few hours of execution with the message Signal HUP caught; exiting, but python host-bringup.py setup is still running on the remote host.
So on the local system the next doSsh never runs, and the remaining steps inside the Perl script never continue.
SIGHUP is sent when the terminal disconnects. When you want to create a process that's not tied to the terminal, you daemonize it.
Note that nohup doesn't daemonize.
$ nohup perl -e'system "ps", "-o", "pid,ppid,sid,cmd"'
nohup: ignoring input and appending output to `nohup.out'
$ cat nohup.out
PID PPID SID CMD
21300 21299 21300 -bash
21504 21300 21300 perl -esystem "ps", "-o", "pid,ppid,sid,cmd"
21505 21504 21300 ps -o pid,ppid,sid,cmd
As you can see,
perl's PPID is that of the program that launched it.
perl's SID is that of the program that launched it.
Since the session hasn't changed, the terminal will send SIGHUP to perl when it disconnects as normal.
That said, nohup changes how perl handles SIGHUP by causing it to be ignored.
$ perl -e'system "kill", "-HUP", "$$"; print "SIGHUP was ignored\n"'
Hangup
$ echo $?
129
$ nohup perl -e'system "kill", "-HUP", "$$"; print "SIGHUP was ignored\n"'
nohup: ignoring input and appending output to `nohup.out'
$ echo $?
0
$ tail -n 1 nohup.out
SIGHUP was ignored
If perl is killed by the signal, it's because something changed how perl handles SIGHUP.
So, either daemonize the process, or have perl ignore SIGHUP (e.g. by using nohup). But if you use nohup, don't re-enable the default SIGHUP behaviour!
If your goal is to make your Perl program ignore the HUP signal, you likely just need to set the HUP entry of the %SIG global signal handler hash:
$SIG{ 'HUP' } = 'IGNORE';
For the gory details, see
perldoc perlipc
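If instead you want the long-running Python side to shrug off a hangup of its own controlling terminal, the equivalent in Python (a sketch, not part of the original scripts) uses the signal module:
import signal

# Ignore terminal hangups, analogous to Perl's $SIG{'HUP'} = 'IGNORE'.
signal.signal(signal.SIGHUP, signal.SIG_IGN)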
