How to shut down a computer using Python

I've written a Python script which should eventually shut down the computer.
This line is part of it:
os.system("shutdown /p")
It performs some sort of shutdown, but the machine ends up at the Windows sign-in / user-switching screen instead of powering off.
Is there a way to fully shut down the computer?
I've tried other os.system("shutdown ___") variants with no success.
Is there another method which might help?

import os
os.system('shutdown -s')
This will shut down a Windows machine.
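By default shutdown -s waits about 30 seconds and shows a notification before powering off; a minimal sketch passing an explicit zero-second delay:
import os

# -s = shut down, -t 0 = no delay before shutdown starts
os.system("shutdown -s -t 0")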

For Linux:
import os
os.system('sudo shutdown now')
Or, if you want an immediate shutdown without being prompted for a sudo password, use the following on Ubuntu and similar distros:
os.system('systemctl poweroff')
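If the script has to run on both kinds of setups, a small sketch that prefers systemd's poweroff and falls back to the classic shutdown command (it assumes the script already runs with sufficient privileges):
import shutil
import subprocess

# Prefer systemd's poweroff; fall back to the traditional shutdown command.
if shutil.which("systemctl"):
    subprocess.run(["systemctl", "poweroff"], check=True)
else:
    subprocess.run(["shutdown", "-h", "now"], check=True)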

Using ctypes you could use the ExitWindowsEx function to shut down the computer.
Description from MSDN:
Logs off the interactive user, shuts down the system, or shuts down and restarts the system.
First some code:
import ctypes
user32 = ctypes.WinDLL('user32')
user32.ExitWindowsEx(0x00000008, 0x00000000)
Now the explanation line by line:
Import ctypes.
ExitWindowsEx is provided by user32.dll, so it needs to be loaded via WinDLL().
Call ExitWindowsEx() and pass the necessary parameters.
Parameters:
Both arguments are hexadecimal constants.
The first argument, 0x00000008 (EWX_POWEROFF), according to MSDN:
shuts down the system and turns off the power. The system must support the power-off feature.
There are many other possible flags; see the documentation for a complete list.
The second argument:
The second argument gives a reason for the shutdown, which is logged by the system. In this case I used 0x00000000 (SHTDN_REASON_MINOR_OTHER, "Other issue"), but there are many to choose from; see the documentation for a complete list.
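For reference, a sketch with the common flag values written out as named constants (values taken from the Windows SDK headers); combine EWX_FORCE with care, since it can discard unsaved work:
import ctypes

user32 = ctypes.WinDLL('user32')

# Flag values from winuser.h
EWX_LOGOFF   = 0x00000000
EWX_SHUTDOWN = 0x00000001
EWX_REBOOT   = 0x00000002
EWX_FORCE    = 0x00000004
EWX_POWEROFF = 0x00000008

SHTDN_REASON_MINOR_OTHER = 0x00000000

# Shut down and power off, logging "other issue" as the reason.
user32.ExitWindowsEx(EWX_POWEROFF, SHTDN_REASON_MINOR_OTHER)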
Making it cross platform:
This can be combined with other methods to make it cross platform. For example:
import sys
if sys.platform == 'win32':
    import ctypes
    user32 = ctypes.WinDLL('user32')
    user32.ExitWindowsEx(0x00000008, 0x00000000)
else:
    import os
    os.system('sudo shutdown now')
This is a Windows-dependent function (although Linux/macOS have equivalents), but it is a better solution than calling os.system(), since a batch script named shutdown.bat cannot shadow the command (which would also be a security hazard).
In addition, it does not bother users with a message saying "You are about to be signed out in less than a minute" like shutdown -s does; it executes silently.
As a side note, prefer subprocess over os.system() (see Difference between subprocess.Popen and os.system).
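For instance, a minimal sketch of the same shutdown call via subprocess.run, which avoids going through a shell:
import subprocess

# /s = shut down, /t 0 = no delay; check=True raises CalledProcessError on failure.
subprocess.run(["shutdown", "/s", "/t", "0"], check=True)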
On another note, I built WinUtils (Windows only), which simplifies this a bit; it should also be faster (and does not require ctypes) since it is built in C.
Example:
import WinUtils
WinUtils.Shutdown(WinUtils.SHTDN_REASON_MINOR_OTHER)

The only variant that really works for me without any problem is:
import os
os.system('shutdown /p /f')

The Python docs recommend using subprocess instead of os.system; the subprocess module is intended to replace older functions such as os.system.
So, use this:
import subprocess
subprocess.run(["shutdown", "-s"])
For Linux users, -s is not required; they can just use
import subprocess
subprocess.run(["shutdown"])

Try this code snippet:
import os
shutdown = input("Do you wish to shutdown your computer ? (yes / no): ")
if shutdown == 'no':
    exit()
else:
    os.system("shutdown /s /t 1")

This Python code may do the deed:
import os
os.system('sudo shutdown -h now')

Here's a sample to power off Windows:
import os
os.system("shutdown /s /t 1")
Here's a sample to power off Linux (requires root permissions):
import os
os.system("shutdown now -h")

Alternatively, the Win32 API can be used via pywin32.
import win32api, win32security, winnt

# Enable the shutdown privilege for this process, then request the shutdown.
token = win32security.OpenProcessToken(win32api.GetCurrentProcess(),
    win32security.TOKEN_ADJUST_PRIVILEGES | win32security.TOKEN_QUERY)
win32security.AdjustTokenPrivileges(token, False,
    [(win32security.LookupPrivilegeValue(None, winnt.SE_SHUTDOWN_NAME), winnt.SE_PRIVILEGE_ENABLED)])
# InitiateSystemShutdown(machine, message, timeout_seconds, force_close_apps, reboot)
win32api.InitiateSystemShutdown(None, "Shutting down", 30, True, False)
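Since the call above uses a timeout, a shutdown requested this way can still be cancelled before the countdown expires (a sketch; it needs the same shutdown privilege):
import win32api

# Cancel a shutdown previously requested with InitiateSystemShutdown on this machine.
win32api.AbortSystemShutdown(None)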

In case you also want to schedule the shutdown, install the schedule package first:
pip install schedule
Then:
import schedule
import time
import os

when = "20:11"
print("The computer will be shut down at " + when)

def job():
    os.system("shutdown /s /t 1")

schedule.every().day.at(when).do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
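If a fixed delay is enough (rather than a wall-clock time), a stdlib-only sketch lets shutdown.exe do the waiting itself:
import os

# Schedule a shutdown 3600 seconds (one hour) from now; "shutdown /a" aborts it.
os.system("shutdown /s /t 3600")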

Related


What is a pythonic way of running a Python routine as another user (e.g. root)?

In the spirit of this answer ("Don't call shell commands from Python. You can do everything in Python that shell commands can."), which is rather common advice given to many people, how do I run a selected individual routine (and ONLY this routine) within a Python program as another user, e.g. root, in a pythonic (and somewhat secure) manner?
For example, I have a routine like this in some code:
import os
import signal
import subprocess
def kill_proc(pid, k_signal=signal.SIGINT, sudo=False):
    if sudo:
        subprocess.Popen(['sudo', 'kill', '-%d' % k_signal, '%d' % pid]).wait()
    else:
        os.kill(pid, k_signal)
If I do not need to be root, I can just call os.kill(pid, k_signal) in this example. However, if I need super user privileges for sending a signal, I must send the signal through a command in a subprocess. How could I use os.kill instead?
You cannot run a Python function as another user.
A Unix-like OS associates each process with a particular user. One cannot reassign a process to a different user unless the original owner was root to begin with. A Python function is not a process and cannot have its own user.
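The usual workaround is therefore to keep the privileged work in its own process. A sketch (the kill_as_root helper is purely illustrative) that re-invokes the interpreter under sudo for just that one routine:
import subprocess
import sys

def kill_as_root(pid, sig=2):
    # Run only the privileged os.kill in a short-lived root child process.
    code = "import os, sys; os.kill(int(sys.argv[1]), int(sys.argv[2]))"
    subprocess.run(["sudo", sys.executable, "-c", code, str(pid), str(sig)], check=True)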

How do I make my script do something when it's killed by the user?

I want my python script to do something when it's killed by the user.
I tried using atexit, but it didn't work.
Sorry for my bad English and thanks.
On POSIX-based systems (OS X, Linux) you can catch the user's SIGINT (when they press Ctrl-C) and SIGTERM with the following code:
import signal
import sys

def signal_term_handler(signal, frame):
    print('got SIGTERM/SIGINT')
    sys.exit(0)

signal.signal(signal.SIGTERM, signal_term_handler)
signal.signal(signal.SIGINT, signal_term_handler)
On Windows, register handlers for SIGINT and SIGBREAK instead (CTRL_C_EVENT and CTRL_BREAK_EVENT are only valid as signals to send with os.kill):
signal.signal(signal.SIGINT, signal_term_handler)
signal.signal(signal.SIGBREAK, signal_term_handler)
Note that by design there are some signals you cannot catch (SIGKILL on POSIX systems) - this is to protect the OS from misbehaving programs.
You want to install a signal handler. The specific signals you want to handle depend on what OS you're running on: Windows uses a different set of signals than *nix/Mac.
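For example, a minimal cross-platform sketch that registers a handler only for the signals the current platform actually defines:
import signal
import sys

def handle_exit(signum, frame):
    print("cleaning up after signal", signum)
    sys.exit(0)

# SIGBREAK exists only on Windows; getattr skips names a platform does not define.
for name in ("SIGINT", "SIGTERM", "SIGBREAK"):
    sig = getattr(signal, name, None)
    if sig is not None:
        signal.signal(sig, handle_exit)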
import atexit
def sayHelloWorld():
    print("Hello, World!")

atexit.register(sayHelloWorld)
atexit does work, you just have to use it right. Head over to https://docs.python.org/2/library/atexit.html for more information about it

How to Close a program using python?

Is there a way that Python can close a Windows application (for example, Firefox)?
I know how to start an app, but now I need to know how to close one.
# I have used subprocess commands for a while;
# this program will try to close a Firefox window every ten seconds.
import subprocess
import time

# loop forever
while 1:
    subprocess.call("TASKKILL /F /IM firefox.exe", shell=True)
    time.sleep(10)
If you're using Popen, you should be able to terminate the app using either send_signal(SIGTERM) or terminate().
See docs here.
In Windows you could use taskkill within subprocess.call:
subprocess.call(["taskkill","/F","/IM","firefox.exe"])
/F forces process termination. Omitting it only asks firefox to close, which can work if the app is responsive.
Cleaner/more portable solution with psutil (well, for Linux you have to drop the .exe part or use .startswith("firefox")):
import os, signal, psutil

for pid in (process.pid for process in psutil.process_iter() if process.name() == "firefox.exe"):
    os.kill(pid, signal.SIGTERM)
That will kill all processes named firefox.exe.
By the way, os.kill is "overkill" (no pun intended): process already has a kill() method, so:
for process in (process for process in psutil.process_iter() if process.name() == "firefox.exe"):
    process.kill()
You probably want to use os.kill: http://docs.python.org/library/os.html#os.kill
In order to kill a python tk window named MyappWindow under MS Windows:
from os import system
system('taskkill /F /FI "WINDOWTITLE eq MyappWindow" ')
A star may be used as a wildcard:
from os import system
system('taskkill /F /FI "WINDOWTITLE eq MyappWind*" ')
Please, refer to "taskkill /?" for additional options.
On OS X:
Create a shell script and put:
killall Application
Replace Application with a running app of your choice.
In the same directory as this shell script, make a python file.
In the python file, put these two lines of code:
from subprocess import Popen
Popen('sh shell.sh', shell=True)
Replace shell.sh with the name of your created shell script.
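The intermediate shell script is not strictly necessary; a sketch calling killall directly from Python (the application name is just an example):
import subprocess

# Ask every process named "Safari" to quit; killall sends SIGTERM by default.
subprocess.run(["killall", "Safari"], check=False)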
An app (a running process) can be closed by its name, using its PID (process ID) via the psutil module. Install it from the command line with:
pip install psutil
After installing, run the code given below in any .py file:
import psutil

def close_app(app_name):
    running_apps = psutil.process_iter(['pid', 'name'])  # iterates over running processes with pid/name preloaded
    found = None
    for app in running_apps:
        sys_app = app.info.get('name').split('.')[0].lower()
        if sys_app in app_name.split() or app_name in sys_app:
            pid = app.info.get('pid')  # PID of the matching process
            try:
                # terminate the app if it is running (raises for some protected Windows apps)
                psutil.Process(pid).terminate()
                found = sys_app
            except psutil.Error:
                pass
    if not found:
        print(app_name + " not found running")
    else:
        print(app_name + '(' + found + ')' + ' closed')

close_app('chrome')
After running the code above you may see the following output if Google Chrome was running:
>>> chrome(xyz) closed
Feel free to comment in case of any error

How to start a background process in Python?

I'm trying to port a shell script to the much more readable Python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in Python? I'd like these processes not to die when the Python script completes. I am sure it's related to the concept of a daemon somehow, but I couldn't find how to do this easily.
While jkp's solution works, the newer way of doing things (and the way the documentation recommends) is to use the subprocess module. For simple commands it's equivalent, but it offers more options if you want to do something complicated.
Example for your case:
import subprocess
subprocess.Popen(["rm","-r","some.file"])
This will run rm -r some.file in the background. Note that calling .communicate() on the object returned from Popen will block until it completes, so don't do that if you want it to run in the background:
import subprocess
ls_output=subprocess.Popen(["sleep", "30"])
ls_output.communicate() # Will block for 30 seconds
See the documentation here.
Also, a point of clarification: "Background" as you use it here is purely a shell concept; technically, what you mean is that you want to spawn a process without blocking while you wait for it to complete. However, I've used "background" here to refer to shell-background-like behavior.
Note: This answer is less current than it was when posted in 2009. Using the subprocess module shown in other answers is now recommended in the docs
(Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions.)
If you want your process to start in the background you can either use system() and call it in the same way your shell script did, or you can spawn it:
import os
os.spawnl(os.P_DETACH, 'some_long_running_command')
(or, alternatively, you may try the less portable os.P_NOWAIT flag).
See the documentation here.
You probably want the answer to "How to call an external command in Python".
The simplest approach is to use the os.system function, e.g.:
import os
os.system("some_command &")
Basically, whatever you pass to the system function will be executed the same as if you'd passed it to the shell in a script.
I found this here:
On Windows (Win XP), the parent process will not finish until longtask.py has finished its work. It is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.
The solution is to pass DETACHED_PROCESS Process Creation Flag to the underlying CreateProcess function in win API. If you happen to have installed pywin32 you can import the flag from the win32process module, otherwise you should define it yourself:
import subprocess
import sys

DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
Use subprocess.Popen() with the close_fds=True parameter, which will allow the spawned subprocess to be detached from the Python process itself and continue running even after Python exits.
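A minimal sketch of that idea on POSIX, also adding start_new_session so the child is not tied to the parent's session:
import subprocess

# Detach a long-running child: new session, inherited descriptors closed,
# output discarded so the child never blocks on a dead pipe.
subprocess.Popen(
    ["sleep", "60"],
    close_fds=True,
    start_new_session=True,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)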
https://gist.github.com/yinjimmy/d6ad0742d03d54518e9f
import os, time, sys, subprocess

if len(sys.argv) == 2:
    time.sleep(5)
    print('track end')
    if sys.platform == 'darwin':
        subprocess.Popen(['say', 'hello'])
else:
    print('main begin')
    subprocess.Popen(['python', os.path.realpath(__file__), '0'], close_fds=True)
    print('main end')
Capture output and run in the background with threading
As mentioned in this answer, if you capture the output with stdout= and then try to read(), the process blocks.
However, there are cases where you need this. For example, I wanted to launch two processes that talk to each other over a port, and to save their stdout both to log files and to my own stdout.
The threading module allows us to do that.
First, have a look at how to do the output redirection part alone in this question: Python Popen: Write to stdout AND log file simultaneously
Then:
main.py
#!/usr/bin/env python3
import os
import subprocess
import sys
import threading
def output_reader(proc, file):
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            file.buffer.write(byte)
        else:
            break

with subprocess.Popen(['./sleep.py', '0'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc1, \
     subprocess.Popen(['./sleep.py', '10'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc2, \
     open('log1.log', 'w') as file1, \
     open('log2.log', 'w') as file2:
    t1 = threading.Thread(target=output_reader, args=(proc1, file1))
    t2 = threading.Thread(target=output_reader, args=(proc2, file2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
sleep.py
#!/usr/bin/env python3
import sys
import time
for i in range(4):
    print(i + int(sys.argv[1]))
    sys.stdout.flush()
    time.sleep(0.5)
After running:
./main.py
stdout is updated every 0.5 seconds, two lines at a time, to contain:
0
10
1
11
2
12
3
13
and each log file contains the respective log for a given process.
Inspired by: https://eli.thegreenplace.net/2017/interacting-with-a-long-running-child-process-in-python/
Tested on Ubuntu 18.04, Python 3.6.7.
You probably want to start investigating the os module for forking child processes (for example by opening an interactive session and issuing help(os)). The relevant functions are fork and any of the exec family. To give you an idea of how to start, put something like this in a function that performs the fork (the function needs to take a list or tuple 'args' as an argument that contains the program's name and its parameters; you may also want to define stdin, stdout and stderr for the new process):
try:
    pid = os.fork()
except OSError as e:
    ## some debug output
    sys.exit(1)

if pid == 0:
    ## optionally use os.putenv(...) to set environment variables
    ## os.execv uses args[0] as the program and passes args as its argument list
    os.execv(args[0], args)
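If the goal is a child that keeps running after this script exits, a sketch of the classic double-fork detach built on the same primitives (some_long_running_command is a placeholder, as in the earlier answer):
import os

pid = os.fork()
if pid == 0:                      # first child
    os.setsid()                   # new session: detach from the controlling terminal
    if os.fork() > 0:
        os._exit(0)               # first child exits straight away...
    # ...so the grandchild is re-parented to init and outlives this script
    os.execvp("some_long_running_command", ["some_long_running_command"])
else:
    os.waitpid(pid, 0)            # reap the short-lived first child; the script carries on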
You can use
import os

pid = os.fork()
if pid == 0:
    # child process: continue with the other code here
    ...
This will make the forked child run in the background while the parent process continues.
I haven't tried this yet, but using .pyw files instead of .py files should help. .pyw files don't have a console, so in theory the script should not show a window and should behave like a background process.
