Run shutdown command in background after delay on Windows - Python

I am trying to run a command on a Windows machine that is running a Python client program. The client needs to return a value to the server before executing the command.
def shutdown():
    if OS_LINUX:
        os.system("sleep 10s && sudo shutdown &")
    elif OS_WIN:
        os.system(<What to put HERE>)
    return (1, "Shutting down")
As you can see, the Unix command works just fine.
In the background, it sleeps for 10 seconds and then runs sudo shutdown. The function returns properly and the server is notified that the client is shutting down BEFORE "sudo shutdown" is run.
However, on the Windows side of things, I can't seem to run shutdown -s after a delay AND run it in the background.
This is what I have so far:
start /b /wait timeout 10 && shutdown -s
I have tried many variations of it: with/without /wait and /b, using ping instead of timeout, sending the output of ping/timeout to ">nul", etc.
This one has been the closest to what I want, but timeout takes over the command line until it is done, so the return statement in shutdown() is never reached before "shutdown -s" runs. This leaves the server hanging until it times out, which is not what I want the user to see, especially because I can't guarantee that the client didn't just lose its connection at the same moment the server told it to shut down.
I might be able to solve the problem by throwing "timeout 10 && shutdown -s" in a batch script and running that in the background using os.system("start /b shutdown_script.bat"), but this client program needs to be just a single portable file for distribution reasons.
The solution is easy on Unix; what am I missing on Windows/DOS?
EDIT: Running os.system("shutdown -s") (at least on Windows 10) shows the user a screen saying that the system will be shut down, and the call returns, which lets my function return a value to the server. This is NOT the case for other commands such as hibernate ("shutdown -h"), and not necessarily the case on older versions of Windows either. The problem also remains for other commands, such as closing the client program remotely.
EDIT2: I also need to run commands for hibernate and logoff (shutdown -h and shutdown -l respectively). The -t parameter of shutdown only works with -s and -r (at least on Windows 10).

This command works fine for me:
shutdown /t 10 /s
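If you call that from the question's Python function, shutdown.exe registers the 10-second timer and returns right away, so the function can still report back to the server. A minimal sketch, reusing the OS_WIN flag from the question (untested by me):
import os

def shutdown():
    if OS_WIN:
        # shutdown.exe schedules the shutdown and exits immediately,
        # so the return statement below is still reached
        os.system("shutdown /t 10 /s")
    return (1, "Shutting down")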

So I take it that you want the command window to display the timeout, and as soon as the timer is up you want the SHUTDOWN command to execute and exit. Here is what I would do:
@echo off
cls
timeout 10
cls
shutdown -s >nul
cls
You could also use this command if you want a shorter version:
shutdown /t 10 /s

I ended up solving the problem using schtasks because shutdown only supports a timeout for /s and /r.
The Python code adds a 1-minute offset to the current time (schtasks doesn't deal in seconds), then calls os.system("schtasks.....") with the /f parameter so that schtasks doesn't hold up the console asking for a y/n overwrite.
import datetime
import os

def get_time_offset(offset):
    now = datetime.datetime.now()
    offset_now = now + datetime.timedelta(minutes=offset)
    offset_now_time = offset_now.time()
    # strftime keeps each field zero-padded, which schtasks expects (HH:MM:SS)
    future_time = offset_now_time.strftime("%H:%M:%S")
    return future_time

def sch_task(name, time, task):
    # /f overwrites an existing task of the same name without prompting y/n
    os.system("schtasks /create /tn \"" + str(name) + "\" /sc once /st " + str(time) + " /tr \"" + str(task) + "\" /f")
The final call looks like this:
sch_task("client_hibernate", get_time_offset(1), "shutdown /h")
It will run 1 minute after the call is made.
The only problem with this solution is that schtasks only has precision by the minute, so you can't schedule a task for less than a minute in the future.
I will probably write a multithreaded timer in Python to run the command instead of relying on schtasks on Windows and sleep on Linux.
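For reference, a rough sketch of that timer idea (my own, untested): threading.Timer fires the command from a separate thread after the delay, so shutdown() can return to the server immediately and the delay is not limited to whole minutes.
import os
import threading

def delayed_command(delay_seconds, command):
    # run the command from a timer thread so the caller returns first
    t = threading.Timer(delay_seconds, os.system, args=(command,))
    t.start()
    return t

def shutdown():
    if OS_LINUX:
        delayed_command(10, "sudo shutdown")
    elif OS_WIN:
        delayed_command(10, "shutdown /h")  # or "shutdown /s", "shutdown /l"
    return (1, "Shutting down")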

Related

Rebooting a server after Windows application is installed using Python

I am developing some Python (version 3.6.1) code to install an application in Windows 7. The code used is this:
winCMD = r'"C:\PowerBuild\setup.exe" /v"/qr /l C:\PowerBuild\TUmsi.log"'
output = subprocess.check_call(winCMD, shell = True)
The application is installed successfully. The problem is that it always requires a reboot when it finishes (a popup with the message "You must restart your system for the configuration changes to take effect. Click Yes to restart now or No if you plan to restart later.").
I tried to insert the "/forcerestart" parameter (source here) in the installation command, but it still stops to ask for the reboot:
def installApp():
    winCMD = r'"C:\PowerBuild\setup.exe" /v"/qr /forcerestart /l C:\PowerBuild\TUmsi.log"'
    output = subprocess.check_call(winCMD, shell=True)
Another attempt was to add a follow-up command like the one below, although since the previous command never finishes (as per my understanding) I realized it would never be called:
rebootSystem = 'shutdown -t 0 /r /f'
subprocess.Popen(rebootSystem, stdout=subprocess.PIPE, shell=True)
Has anyone had such an issue and managed to solve it?
As an ugly workaround, if you're not time-critical but you want to emphasise the "automatic" aspect, why not
run the installCMD in a thread
wait sufficiently long to be sure that the command has completed
perform the shutdown
like this:
import threading, time
import subprocess

def installApp():
    winCMD = r'"C:\PowerBuild\setup.exe" /v"/qr /l C:\PowerBuild\TUmsi.log"'
    output = subprocess.check_call(winCMD, shell=True)

t = threading.Thread(target=installApp)
t.start()
time.sleep(1800)  # half an hour should be enough
rebootSystem = 'shutdown -t 0 /r /f'
subprocess.Popen(rebootSystem, stdout=subprocess.PIPE, shell=True)
Another (safer) way would be to find out which file is created last in the installation, and monitor for its existence in a loop like this:
while not os.path.isfile("somefile"):
    time.sleep(60)
time.sleep(60)  # another minute for safety
# perform the reboot
To be really clean, you'd have to use subprocess.Popen for the installation process, keep the Popen object in a global, and call terminate() on it from the main process, but since you're calling a shutdown that's not strictly necessary.
(to be clean, we wouldn't have to do that hack in the first place)
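In case it helps, here is a rough sketch of that cleaner variant, combining the file-watch loop with a global Popen handle that the main code can terminate() before rebooting (the watched file name is just a placeholder):
import os
import subprocess
import threading
import time

install_proc = None  # global so the main code can reach the installer process

def installApp():
    global install_proc
    winCMD = r'"C:\PowerBuild\setup.exe" /v"/qr /l C:\PowerBuild\TUmsi.log"'
    install_proc = subprocess.Popen(winCMD, shell=True)
    install_proc.wait()

t = threading.Thread(target=installApp)
t.start()

while not os.path.isfile("somefile"):   # the last file the installer writes
    time.sleep(60)
time.sleep(60)                           # another minute for safety

if install_proc is not None and install_proc.poll() is None:
    install_proc.terminate()             # stop the installer if it is still hanging

subprocess.Popen('shutdown -t 0 /r /f', shell=True)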

How to run a background process and do *not* wait?

My goal is simple: kick off rsync and DO NOT WAIT.
Python 2.7.9 on Debian
Sample code:
rsync_cmd = "/usr/bin/rsync -a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3}".format(remote_user, remote_server, file1, file1)
rsync_cmd2 = "/usr/bin/rsync -a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3} &".format(remote_user, remote_server, file1, file1)
rsync_path = "/usr/bin/rsync"
rsync_args = shlex.split("-a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3}".format(remote_user, remote_server, file1, file1))
#subprocess.call(rsync_cmd, shell=True) # This isn't supposed to work but I tried it
#subprocess.Popen(rsync_cmd, shell=True) # This is supposed to be the solution but not for me
#subprocess.Popen(rsync_cmd2, shell=True) # Adding my own shell "&" to background it, still fails
#subprocess.Popen(rsync_cmd, shell=True, stdin=None, stdout=None, stderr=None, close_fds=True) # This doesn't work
#subprocess.Popen(shlex.split(rsync_cmd)) # This doesn't work
#os.execv(rsync_path, rsync_args) # This doesn't work
#os.spawnv(os.P_NOWAIT, rsync_path, rsync_args) # This doesn't work
#os.system(rsync_cmd2) # This doesn't work
print "DONE"
(I've commented out the execution commands only because I'm actually keeping all of my trials in my code so that I know what I've done and what I haven't done. Obviously, I would run the script with the right line uncommented.)
What happens is this: I can watch the transfer on the server, and only when it's finished do I get a "DONE" printed to the screen.
What I'd like to have happen is a "DONE" printed immediately after issuing the rsync command and for the transfer to start.
Seems very straight-forward. I've followed details outlined in other posts, like this one and this one, but something is preventing it from working for me.
Thanks ahead of time.
(I have tried everything I can find in StackExchange and don't feel like this is a duplicate because I still can't get it to work. Something isn't right in my setup and need help.)
Here is a verified example for the Python REPL:
>>> import subprocess
>>> import sys
>>> p = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(100)'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT); print('finished')
finished
How to verify that via another terminal window:
$ ps aux | grep python
Output:
user 32820 0.0 0.0 2447684 3972 s003 S+ 10:11PM 0:00.01 /Users/user/venv/bin/python -c import time; time.sleep(100)
Popen() starts a child process; it does not wait for it to exit. You have to call the .wait() method explicitly if you want to wait for the child process. In that sense, all subprocesses are background processes.
On the other hand, the child process may inherit various properties/resources from the parent, such as open file descriptors, the process group, its controlling terminal, and some signal configuration. This may prevent ancestor processes from exiting (e.g., Python subprocess .check_call vs .check_output), or the child may die prematurely on Ctrl-C (SIGINT is sent to the foreground process group) or when the terminal session is closed (SIGHUP).
To disassociate the child process completely, you should make it a daemon. Sometimes something in between is enough, e.g., it is enough to redirect the inherited stdout in a grandchild so that .communicate() in the parent returns when its immediate child exits.
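As an illustration of that in-between approach on Python 2.7 / POSIX (a sketch, not from the original post; remote_user, remote_server and file1 come from the question): redirect the inherited descriptors to /dev/null and put rsync in its own session with os.setsid, so the parent can print and exit while the transfer keeps running.
import os
import subprocess

rsync_cmd = ['/usr/bin/rsync', '-a', '-e', 'ssh -i /home/myuser/.ssh/id_rsa',
             '{0}@{1}:{2}'.format(remote_user, remote_server, file1), file1]

devnull = open(os.devnull, 'wb')
# os.setsid detaches the child from the terminal's session, so SIGINT/SIGHUP
# aimed at the parent no longer reach rsync; redirecting output drops the
# inherited pipes that would otherwise tie the two processes together
p = subprocess.Popen(rsync_cmd, stdout=devnull, stderr=devnull,
                     preexec_fn=os.setsid)
print "DONE"   # prints immediately; rsync keeps running on its own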
I encountered a similar issue while working with QNX devices and wanted a subprocess that runs independently of the main process and keeps running even after the main process terminates.
Here's the solution I found that actually works: creationflags=subprocess.DETACHED_PROCESS:
import subprocess
import time

# note the raw string: without it, "\t" in the path would be read as a tab character
pid = subprocess.Popen(["python", r"path_to_script\turn_ecu_on.py"], creationflags=subprocess.DETACHED_PROCESS)
time.sleep(15)
print("Done")
Link to the doc: https://docs.python.org/3/library/subprocess.html#subprocess.Popen
On Ubuntu the following commands keep working even if the Python app exits.
import os

url = "https://www.youtube.com/watch?v=t3kcqTE6x4A"
cmd = f"mpv '{url}' && zenity --info --text 'you have watched {url}' &"
os.system(cmd)

timeout limit for holding exit status from system in perl/python

I have a simple Perl script that calls a Python script to deploy a server in the cloud.
I capture the exit status of the deployment inside perl to take any further action after success/failure setup.
It's like:
$cmdret = system("python script.py ARG1 ARG2");
Here the Python script runs for 3 to 7 hours.
The problem is that, irrespective of the success or failure return status, the system randomly receives a HUP signal at this step, even though the process is running in the background, and this breaks the steps that follow.
So does anyone know if there is a time limit for holding the return status from system() that leads to the hangup signal being sent?
Inside the Python script script.py, pexpect is used to execute scripts remotely:
doSsh(User,Passwd,Name,'cd '+OutputDir+';python host-bringup.py setup')
doSsh(User,Passwd,Name,'cd '+OpsHome+'/ops/hlevel;python dshost.py start')
....
And doSsh is a pexpect subroutine:
def doSsh(user, password, host, command):
    try:
        child = pexpect.spawn("ssh -o ServerAliveInterval=100 -n %s@%s '%s'" % (user, host, command), logfile=sys.stdout, timeout=None)
        i = child.expect(['password:', r'\(yes\/no\)', r'.*password for paasusr: ', r'.*[$#] ', pexpect.EOF])
        if i == 0:
            child.sendline(password)
        elif i == 1:
            child.sendline("yes")
            child.expect("password:")
            child.sendline(password)
        data = child.read()
        print data
        child.close()
        return True
    except Exception as error:
        print error
        return False
The first doSsh execution takes ~6 hours, and the session is killed after a few hours of execution with the message: Signal HUP caught; exiting. But the execution of python host-bringup.py setup still runs on the remote host.
So on the local system the next doSsh never runs, and the remaining steps inside the Perl script never continue.
SIGHUP is sent when the terminal disconnects. When you want to create a process that's not tied to the terminal, you daemonize it.
Note that nohup doesn't daemonize.
$ nohup perl -e'system "ps", "-o", "pid,ppid,sid,cmd"'
nohup: ignoring input and appending output to `nohup.out'
$ cat nohup.out
PID PPID SID CMD
21300 21299 21300 -bash
21504 21300 21300 perl -esystem "ps", "-o", "pid,ppid,sid,cmd"
21505 21504 21300 ps -o pid,ppid,sid,cmd
As you can see,
perl's PPID is that of the program that launched it.
perl's SID is that of the program that launched it.
Since the session hasn't changed, the terminal will send SIGHUP to perl when it disconnects as normal.
That said, nohup changes how perl handles SIGHUP by causing it to be ignored.
$ perl -e'system "kill", "-HUP", "$$"; print "SIGHUP was ignored\n"'
Hangup
$ echo $?
129
$ nohup perl -e'system "kill", "-HUP", "$$"; print "SIGHUP was ignored\n"'
nohup: ignoring input and appending output to `nohup.out'
$ echo $?
0
$ tail -n 1 nohup.out
SIGHUP was ignored
If perl is killed by the signal, it's because something changed how perl handles SIGHUP.
So, either daemonize the process, or have perl ignore SIGHUP (e.g. by using nohup). But if you use nohup, don't re-enable the default SIGHUP behaviour!
If your goal is to make your perl program ignore the HUP signal, you likely just need to set the HUP entry of the $SIG global signal handler hash:
$SIG{ 'HUP' } = 'IGNORE';
for gory details, see
perldoc perlipc
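If the long-running Python script itself (script.py) should also survive a hangup, the equivalent on the Python side is the signal module; a small hedged sketch, not from the original answer:
import signal

# make the deployment script ignore SIGHUP itself, so a disconnecting
# terminal can no longer kill it mid-run
signal.signal(signal.SIGHUP, signal.SIG_IGN)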

Stopping a program in terminal that has been started from Python

I have a Python script that starts another program via os.system() (the shell script tag-bm.sh actually calls the program itself):
def tag(obt_path, corpora_path):
    print corpora_path
    os.system('cd ' + obt_path + ' && ./tag-bm.sh ' + corpora_path + ' > ' + corpora_path + '.obt')
    os.system('pwd')
Sometimes this program goes into an infinite loop, which creates a problem for my main program. Is there a way to set things up so that if the program has not ended within a set time, Python interrupts it?
Find out the name of the command; for instance, if it's cp, then you can get the PID of the command using either pgrep cp or pidof cp. Note that this will return the PIDs of all cp processes. Call pgrep right after you have started the command and yours should be on top, so in that case the PID is pgrep cp | head -n1 or pidof cp | cut -s -f1. Store this variable and kill the process later at the desired time.
Alternatively you could run the command with timeout, which lets you specify a value after which the command will automatically receive a kill signal. Example usage: timeout 500s cp large_file destination_file. The cp will get killed after 500 seconds in this case. See `man timeout` for more information.
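If you would rather keep everything inside Python instead of shelling out to pgrep or timeout, here is a hedged sketch of the same idea: run the tagger through subprocess.Popen so you hold a handle, then poll it and kill it once a time limit passes (the 500-second limit mirrors the timeout example above).
import subprocess
import time

def tag(obt_path, corpora_path, time_limit=500):
    # hypothetical rewrite of the original os.system() call
    out = open(corpora_path + '.obt', 'w')
    proc = subprocess.Popen(['./tag-bm.sh', corpora_path], cwd=obt_path, stdout=out)
    deadline = time.time() + time_limit
    while proc.poll() is None:      # tagger still running?
        if time.time() > deadline:
            proc.kill()             # give up after time_limit seconds
            break
        time.sleep(1)
    out.close()
    return proc.returncode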

Signals are not always seen by all children in a process group

I have a problem with the way signals are propagated within a process group. Here is my situation and an explanation of the problem:
I have an application that is launched by a shell script (with a su). This shell script is itself launched by a Python application using subprocess.Popen.
I pass os.setpgrp as preexec_fn and have verified using ps that the bash script, the su command, and the final application all have the same pgid.
Now when I send signal USR1 to the bash script (the leader of the process group), sometimes the application sees this signal and sometimes not. I can't figure out why I get this random behavior (the signal is seen by the app about 50% of the time).
Here is the example code I am testing against:
Python launcher:
#!/usr/bin/env python
p = subprocess.Popen(["path/to/bash/script"], stdout=…, stderr=…, preexec_fn=os.setpgrp)
# loop to write stdout and stderr of the subprocesses to a file
# note that I use fcntl.fcntl(p.stdXXX.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
p.wait()
Bash script:
#!/bin/bash
set -e
set -u
cd /usr/local/share/gios/exchange-manager
CONF=/etc/exchange-manager.conf
[ -f $CONF ] && . $CONF
su exchange-manager -p -c "ruby /path/to/ruby/app"
Ruby application:
#!/usr/bin/env ruby
Signal.trap("USR1") do
  puts "Received SIGUSR1"
  exit
end

while true do
  sleep 1
end
So when I send the signal to the bash wrapper (from a terminal or from the Python application), sometimes the Ruby application sees the signal and sometimes not. I don't think it's a logging issue, as I have tried replacing the puts with a method that writes directly to a different file.
Do you guys have any idea what could be the root cause of my problem and how to fix it?
Your signal handler is doing too much. If you exit from within the signal handler, you cannot be sure that your buffers are properly flushed; in other words, you may not be exiting your program gracefully. Also be careful of new signals arriving while the program is already inside a signal handler.
Try to modify your Ruby source to exit the program from the main loop as soon as an "exit" flag is set, and don't exit from the signal handler itself.
Your Ruby application becomes:
#!/usr/bin/env ruby
$done = false

Signal.trap("USR1") do
  $done = true
end

until $done do
  sleep 1
end
puts "** graceful exit"
Which should be much safer.
For real programs, you may consider using a Mutex to protect your flag variable.
