Shutting down computer (Linux) using Python

I am trying to write a script that will shut down the computer if a few requirements are met, using the command
os.system("poweroff")
I also tried
os.system("shutdown now -h")
and a few others, but nothing happens when I run it: the computer goes through the code without crashing or producing any error messages and terminates the script normally, without shutting down.
How does one shut down the computer from Python?
Edit:
It seems that the commands I have tried require root access. Is there any way to shut down the machine from the script without elevated privileges?

import os
os.system("shutdown now -h")
Execute your script with root privileges.

Many of the Linux distributions out there require superuser privileges to execute shutdown or halt. But then, how come that if you're sitting at your computer you can power it off without being root? You open a menu, hit Shutdown, and it shuts down without you becoming root, right?
Well... the rationale behind this is that if you have physical access to the computer, you could pretty much pull the power cord and power it off anyway, so nowadays many distributions allow powering off through access to the local System Bus, accessible through d-bus. The problem with d-bus (or the services exposed through it, rather)? It's constantly changing. I'd recommend installing a d-bus viewer tool such as D-Feet (be advised: it's still pretty hard to visualize, but it may help).
Take a look at these D-Bus shutdown scripts.
If you still have HAL in your distribution (it is on its way to being deprecated), try this:
import dbus

sys_bus = dbus.SystemBus()
hal_srvc = sys_bus.get_object('org.freedesktop.Hal',
                              '/org/freedesktop/Hal/devices/computer')
pwr_mgmt = dbus.Interface(hal_srvc,
                          'org.freedesktop.Hal.Device.SystemPowerManagement')
shutdown_method = pwr_mgmt.get_dbus_method("Shutdown")
shutdown_method()
This works on Ubuntu 12.04 (I just powered off my computer to make sure it worked). If you have something newer... well, it may not work. That's the downside of this method: it is very distribution-specific.
You might have to install the dbus-python package for this to work (http://dbus.freedesktop.org/doc/dbus-python/doc/tutorial.html)
UPDATE 1:
I've been doing a little bit of research, and it looks like this is done in newer Ubuntu versions through ConsoleKit. I've tested the code below on my Ubuntu 12.04 (which has both the deprecated HAL and the newer ConsoleKit) and it did shut my computer off:
>>> import dbus
>>> sys_bus = dbus.SystemBus()
>>> ck_srv = sys_bus.get_object('org.freedesktop.ConsoleKit',
...                             '/org/freedesktop/ConsoleKit/Manager')
>>> ck_iface = dbus.Interface(ck_srv, 'org.freedesktop.ConsoleKit.Manager')
>>> stop_method = ck_iface.get_dbus_method("Stop")
>>> stop_method()
UPDATE 2:
Why you can do this without being root probably deserves a bit of a wider explanation. Let's focus on the newer ConsoleKit (HAL is way more complicated and messy, IMHO).
ConsoleKit is a service running as root on your system:
borrajax@borrajax:/tmp$ ps aux|grep console-kit
root 1590 0.0 0.0 1043056 3876 ? Sl Dec05 0:00 /usr/sbin/console-kit-daemon --no-daemon
Now, d-bus is just a message-passing system. You have a service, such as ConsoleKit, that exposes an interface to d-bus. One of the methods exposed is Stop (shown above). ConsoleKit's permissions are controlled with PolKit, which (despite being based on regular Linux permissions) offers a finer grain of control over "who can do what". For instance, PolKit can say things like "If the user is logged into the computer locally, then allow him to do something. If he's connected remotely, then don't." If PolKit determines that your user is allowed to call ConsoleKit's Stop method, that request will be passed by (or through) d-bus to ConsoleKit (which will subsequently shut down your computer because it can... because it's worth it... because it's root).
Further reading:
What are ConsoleKit and PolicyKit? How do they work?
ArchWiki PolKit
To summarize: you can't switch a computer off without being root. But you can tell a service that is running as root to shut down the system for you.
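As a small, hedged illustration of that split (CanStop belongs to the same org.freedesktop.ConsoleKit.Manager interface used above, but treat its exact behaviour as an assumption about those older ConsoleKit systems), you can first ask ConsoleKit whether PolKit will let your session power the machine off:
import dbus

sys_bus = dbus.SystemBus()
ck_srv = sys_bus.get_object('org.freedesktop.ConsoleKit',
                            '/org/freedesktop/ConsoleKit/Manager')
ck_iface = dbus.Interface(ck_srv, 'org.freedesktop.ConsoleKit.Manager')

# CanStop asks the root-owned daemon (which in turn asks PolKit) whether
# this session is allowed to stop the machine; Stop actually does it.
if ck_iface.CanStop():
    ck_iface.Stop()
else:
    print("PolKit says this session may not power off the machine")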
UPDATE 3:
In December 2021, seven years after the original answer was written, I had to do this again, this time on Ubuntu 18.04.
Unsurprisingly, things seem to have changed a bit:
The PowerOff functionality seems to be handled via a new org.freedesktop.login1 service, which is part of the """new""" (cough! cough!) systemd machinery.
The dbus Python package seems to have been deprecated and/or considered "legacy". There is, however, a new pydbus library to be used instead.
So we can still power off machines with an unprivileged script:
#!/usr/bin/python3
from pydbus import SystemBus

bus = SystemBus()
proxy = bus.get('org.freedesktop.login1', '/org/freedesktop/login1')
if proxy.CanPowerOff() == 'yes':
    proxy.PowerOff(False)  # False for 'NOT interactive'
Update 3.1:
It looks like it's not as new as I thought X-D
There's already an answer by @Roeften in this very same thread.
BONUS:
I read in one of your comments that you want to switch the computer off after a time-consuming task to prevent it from overheating... Did you know that you can probably power it back on at a given time using the RTC? (See this and this.) Pretty cool, huh? (I got so excited when I found out I could do this...) :-D
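If you want to experiment with that, here is a minimal sketch of the sysfs route (the /sys/class/rtc/rtc0/wakealarm path and the need for root to write it are assumptions about a typical Linux box; the rtcwake command-line tool does the same thing more conveniently):
import time

WAKEALARM = "/sys/class/rtc/rtc0/wakealarm"  # assumed sysfs path; writing it needs root

wake_at = int(time.time()) + 8 * 3600  # power back on in 8 hours (epoch seconds)

with open(WAKEALARM, "w") as f:
    f.write("0\n")              # clear any previously armed alarm
with open(WAKEALARM, "w") as f:
    f.write("%d\n" % wake_at)   # arm the new alarm, then power off as usual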

The best way to shut down a system is to use the following code:
import os
os.system('systemctl poweroff')

any way to shut down...without elevated privileges?
No, there isn't (fortunately!).
Keep in mind that you can use several system features to make privilege escalation easier for normal users (a sudoers sketch follows this list):
sudo
setuid
setcap
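For example, a hedged sketch of the sudo route (the file name, user name, and poweroff path below are placeholders chosen for illustration, not anything mandated by sudo): a NOPASSWD rule lets an otherwise unprivileged script trigger the shutdown through sudo.
# Hypothetical rule, added with: sudo visudo -f /etc/sudoers.d/poweroff
#   alice ALL=(root) NOPASSWD: /sbin/poweroff
# With that in place, the script itself can stay unprivileged:
import subprocess

subprocess.run(["sudo", "-n", "/sbin/poweroff"], check=True)  # -n: fail rather than prompt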

Just to add to @BorrajaX's answer, for those using logind (newer Fedora systems):
import dbus

sys_bus = dbus.SystemBus()
lg = sys_bus.get_object('org.freedesktop.login1', '/org/freedesktop/login1')
pwr_mgmt = dbus.Interface(lg, 'org.freedesktop.login1.Manager')
shutdown_method = pwr_mgmt.get_dbus_method("PowerOff")
shutdown_method(True)  # True = interactive: PolKit may show an authentication prompt

Linux:
import subprocess
cmdCommand = "shutdown -h now"
process = subprocess.Popen(cmdCommand.split(), stdout=subprocess.PIPE)
Windows:
import subprocess
cmdCommand = "shutdown -s"
process = subprocess.Popen(cmdCommand.split(), stdout=subprocess.PIPE)

In case you want to execute it as root, @Rahul R Dhobi's answer works great:
import os
os.system("shutdown now -h")
Execution: sudo python_script.py
In case you need to execute it without running the whole script as root:
import subprocess
import shlex
cmd = shlex.split("sudo shutdown -h now")
subprocess.call(cmd)
Execution: $ python_script.py # no sudo needed to launch the script (sudo is invoked inside it and may still prompt for a password unless passwordless sudo is configured)

Try this:
import os
os.system("sudo shutdown now -h")
This worked for me.

I'm using Fedora and it works for me. All I need to do is write this line in a terminal:
$ sudo python yourScript.py

There is a way to shut down the computer without using elevated permissions (this one uses AppleScript via osascript, so it is macOS-specific):
import subprocess
subprocess.call(['osascript', '-e', 'tell app "system events" to shut down'])

Related

how can I use polkit in python to run some shell command as root without using pkexec?

I have to call some shell commands in my Python script as root and I do not want to run the entire Python script as root. I can of course prefix those commands with sudo in Popen, however it does not really show which commands ask for sudo. Also, I heard the recommended way to do such things is through polkit. We were using pkexec. However, there were nasty bugs resulting in security holes, and pkexec seems to be on the brink of deprecation.
I know we could somehow wrap the commands into our own dbus service and then run our script as a dbus client. However, this solution is rather annoying and I do not know much about dbus. I wonder if there is a way to just call shell commands as root without creating our own dbus/systemd service. Is there already some dbus service that allows us to run shell commands?
If someone can give me an example similar to this but that also calls commands like echo test > /root/test.txt, it would be highly appreciated.
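To make the sudo-prefix approach mentioned in the question concrete (this is only a sketch of that workaround, not the PolKit-based solution being asked for), note that a redirection such as > /root/test.txt has to happen inside the privileged shell, so the command is wrapped in sh -c:
import subprocess

# Run one privileged command via sudo instead of elevating the whole script.
# The redirection runs inside the root shell; a bare `sudo echo test > /root/test.txt`
# would redirect as the unprivileged user and fail.
subprocess.check_call(["sudo", "--", "sh", "-c", "echo test > /root/test.txt"])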

How to use python subprocess.check_output with root / sudo

I'm writing a Python script that will run on a Raspberry Pi, read the temperature from a sensor, and log it to Thingspeak. I have this working with a bash script but want to do it with Python since it will be easier to manipulate and check the read values. The sensor reading is done with a library called loldht. I was trying to do it like this:
from subprocess import STDOUT, check_output
output = check_output("/home/pi/bin/lol_dht22/loldht", timeout=10)
The problem is that I have to run the library with sudo to be able to access the pins. I will run the script as a cron. Is it possible to run this with sudo?
Or could I create a bash script that executes 'sudo loldht' and then run the bash script from python?
I will run the script as a cron. Is it possible to run this with sudo?
You can put python script.py in the crontab of a user with sufficient privileges (e.g. root, or a user with permissions to the files and devices in question).
I don't know which OS you're using, but if Raspbian is close to Debian, there is no need for sudo or root; just use a user with sufficient permissions.
It seems I can also do this: check_output(["sudo", "/home/pi/bin/lol_dht22/loldht", "7"], timeout=10)
Sure, but the unix user that's going to invoke that Python script will need sudo privileges (otherwise it can't call sudo from subprocess), in which case you might as well do as above and run the cron job as a user with the required permissions.
You can run sudo commands with cron. Just use sudo crontab -e to set the cron and it should work fine.
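For example (the script path, schedule, and log file below are made up for illustration), the entry in root's crontab could look like this, so the script itself never needs to call sudo:
# Opened with: sudo crontab -e   (i.e. root's crontab)
*/5 * * * * /usr/bin/python /home/pi/log_temp.py >> /var/log/temp.log 2>&1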
You should be very careful with running things as root. Since root has access to everything, a simple error can potentially render the system unusable.
The proper way to have access to the hardware as a normal user is to change the permissions on the required device files.
It seems that the utility you mention uses the WiringPi library. Some digging in the source code indicates that it uses the /dev/gpiomem (or /dev/mem) devices.
On raspbian, device permissions are set with udev. See here and also here.
You could give every user access to /dev/gpiomem and other gpio devices by creating a file e.g. /etc/udev/rules.d/local.rules and putting the following text in it:
ACTION=="add", KERNEL=="gpio*", MODE="0666"
ACTION=="add", KERNEL=="i2c-[0-9]*", MODE="0666"
The first line makes the gpio devices available, the second one I2C devices.
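After creating such a rules file, the new rules normally only take effect for devices added afterwards; to apply them without a reboot you can ask udev to reload and re-trigger (standard udevadm invocations, shown here as a convenience):
sudo udevadm control --reload-rules
sudo udevadm trigger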

restart local computer from python

Is there any way to make the computer a Python program is running on restart itself? A generic solution is good, but in particular I'm on Windows.
There is no generic way of doing this, afaik.
For Windows, you need to access the Win32 API. Like so:
import win32api
# Arguments: machine (None = local), message to display, timeout in seconds,
# force applications to close, reboot after shutdown.
win32api.InitiateSystemShutdown(None, "Rebooting", 0, True, True)
The win32api module is a part of pywin32.
For linux/os x, I guess calling the "reboot" command is the easiest.
import os
os.system('reboot now')
Or something like that.
(Note to downvoters: os.system() has not been deprecated. The text is "The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function." For simple cases like this, when you aren't interested in retrieving the results, nor in multiprocessing, os.system() works just fine).
Why don't you just call the shutdown command using subprocess?
You could reboot a Windows system by using: os.system("shutdown -t 0 -r -f")
Example:
import os
print("REBOOTING")
os.system("shutdown -t 0 -r -f")
Change the number after -t to change the number of seconds before the shutdown.
There is nothing in the standard library that would directly allow you to do this, and I am unaware of any modules that provide a cross platform method, but if you are on Windows and don't want to install anything (like the win32api) then you could use ctypes default module and interact with the WinAPI directly.
You could use the ExitWindowsEx() function to restart the computer (it is almost the same as my other answer, How to shut down a computer using Python; however, the hexadecimal value has to be changed to restart instead of shut down).
Process:
First you need ctypes:
import ctypes
Next, load user32.dll, which the function's documentation lists as the required DLL.
So:
user32 = ctypes.WinDLL('user32')
Next you need to call the ExitWindowsEx() function and insert the correct hexadecimal values:
user32.ExitWindowsEx(0x00000002, 0x00000000)
The first argument (0x00000002) shuts down the system and then restarts (see documentation).
The second argument (0x00000000) gives a reason to be logged by the system. A complete list can be found here
Complete Code:
import ctypes
user32 = ctypes.WinDLL('user32')
user32.ExitWindowsEx(0x00000002, 0x00000000)
About the os.system() method on Windows:
The win32api or ctypes answers both will execute silently. os.system("shutdown -t 0 -r -f") will leave a message saying "You are about to be signed out in less than a minute" which may be undesirable in some cases.
A file alongside the script called shutdown.bat/shutdown.exe/shutdown.cmd will cause the command shutdown -t 0 -r -f to break, calling that file instead of the system command. The same goes for the wmic.exe described above.
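A hedged way around that shadowing problem is to call the shutdown utility by its absolute path instead of relying on the search path (the System32 location below is the usual one, but treat it as an assumption about the target machine):
import os
import subprocess

# Usual location of the real shutdown.exe; %SystemRoot% normally points to C:\Windows.
shutdown_exe = os.path.join(os.environ.get("SystemRoot", r"C:\Windows"),
                            "System32", "shutdown.exe")

# Reboot (-r) with no delay (-t 0), forcing applications to close (-f).
subprocess.call([shutdown_exe, "-t", "0", "-r", "-f"])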
As a side note: I built WinUtils (Windows only), which simplifies this a bit; it should also be faster (and does not require ctypes) since it is built in C.
Example:
import WinUtils
WinUtils.Restart(WinUtils.SHTDN_REASON_MINOR_OTHER)

Run pdb without stdin/stdout using FIFO

I am developing a FUSE filesystem with Python. The problem is that after mounting the filesystem I have no access to stdin/stdout/stderr from my FUSE script. I don't see anything, not even tracebacks. I am trying to launch pdb like this:
import pdb
pdb.Pdb(None, open('pdb.in', 'r'), open('pdb.out', 'w')).set_trace()
It all works fine but is very inconvenient. I want to make pdb.in and pdb.out FIFO files but don't know how to connect them correctly. Ideally I want to type commands and see output in one terminal, but I would be happy even with two terminals (type commands in one and see output in the other). Questions:
1) Is there a better/other way to run pdb without stdin/stdout?
2) How can I redirect stdin to the pdb.in FIFO (everything I type must go to pdb.in)? How can I redirect pdb.out to stdout (I had strange errors with "cat pdb.out", but maybe I don't understand something)?
OK. Exactly what I want has been done in http://pypi.python.org/pypi/rpdb/0.1.1.
Before starting the Python app:
mkfifo pdb.in
mkfifo pdb.out
Then, when pdb is called, you can interact with it using these two cat commands, one running in the background:
cat pdb.out & cat > pdb.in
Note that readline support does not work (i.e. the up arrow).
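Putting those pieces together, a minimal sketch (using the same pdb.in/pdb.out names as above) that creates the FIFOs from Python when they are missing and points Pdb at them:
import os
import pdb

# Create the named pipes once; run `cat pdb.out & cat > pdb.in` in another
# terminal to talk to the debugger through them.
for name in ("pdb.in", "pdb.out"):
    if not os.path.exists(name):
        os.mkfifo(name)

# Opening a FIFO blocks until the other end is opened, so have the cat
# commands ready in the other terminal when execution reaches this line.
pdb.Pdb(stdin=open("pdb.in", "r"), stdout=open("pdb.out", "w")).set_trace()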
I just ran into a similar issue in a much simpler use case:
debugging a simple Python program run from the command line that had a file piped into sys.stdin, meaning there was no way to use the console for pdb.
I ended up solving it by using wdb.
Quick rundown for my use-case. In the shell, install both the wdb server and the wdb client:
pip install wdb.server wdb
Now launch the wdb server with:
wdb.server.py
Now you can navigate to localhost:1984 with your browser and see an interface listing all Python programs running. The wdb project page above has instructions on what you can do if you want to debug any of these running programs.
As for a program under your control, you can debug it from the start with:
wdb myscript.py --script=args < and/stdin/redirection
Or, in your code, you can do:
import wdb; wdb.set_trace()
This will pop up an interface in your browser (if local) showing the traced program.
Or you can navigate to the wdb.server.py port to see all ongoing debugging sessions on top of the list of running Python programs, which you can then use to access the specific debugging session you want.
Notice that the commands for navigating the code during the trace are different from the standard pdb ones; for example, to step into a function you use .s instead of s, and to step over you use .n instead of n. See the wdb README in the link above for details.

How can I make a fake "active session" for gconf?

I've automated my Ubuntu installation - I've got Python code that runs automatically (after a clean install, but before the first user login - it's in a temporary /etc/init.d/ script) that sets up everything from Apache & its configuration to my personal Gnome preferences. It's the latter that's giving me trouble.
This worked fine in Ubuntu 8.04 (Hardy), but when I use this with 8.10 (Intrepid), the first time I try to access gconf, I get this exception:
Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://www.gnome.org/projects/gconf/ for information. (Details - 1: Not running within active session)
Yes, right, there's no Gnome session when this is running, because the user hasn't logged in yet - however, this worked before; this appears to be new with Intrepid's Gnome (2.24?).
Short of modifying the gconf's XML files directly, is there a way to make some sort of proxy Gnome session? Or, any other suggestions?
(More details: this is Python code that runs as root, but it setuids and setgids to my user before setting my preferences using the "gconf" module from the python-gconf package.)
I can reproduce this by installing GConf 2.24 on my machine. GConf 2.22 works fine, but 2.24 breaks it.
GConf is failing to launch because D-Bus is not running. Manually spawning D-Bus and the GConf daemon makes this work again.
I tried to spawn the D-Bus session bus by doing the following:
import dbus
dummy_bus = dbus.SessionBus()
...but got this:
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.Spawn.ExecFailed: dbus-launch failed to autolaunch D-Bus session: Autolaunch error: X11 initialization failed.
Weird. Looks like it doesn't like to come up if X isn't running. To work around that, start dbus-launch manually (IIRC use the os.system() call):
$ dbus-launch
DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-eAmT3q94u0,guid=c250f62d3c4739dcc9a12d48490fc268
DBUS_SESSION_BUS_PID=15836
You'll need to parse the output somehow and inject them into environment variables (you'll probably want to use os.putenv). For my testing, I just used the shell, and set the environment vars manually with export DBUS_SESSION_BUS_ADDRESS=blahblah..., etc.
Next, you need to launch gconftool-2 --spawn with those environment variables you received from dbus-launch. This will launch the GConf daemon. If the D-Bus environment vars are not set, the daemon will not launch.
Then, run your GConf code. Provided you set the D-Bus session bus environment variables for your own script, you will now be able to communicate with the GConf daemon.
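A hedged sketch of those three steps from Python (it assumes dbus-launch prints plain NAME=value lines as shown above, that gconftool-2 is on the PATH, and a Python new enough to have subprocess.check_output):
import os
import subprocess

# 1. Start a session bus and export its address/PID to this process's environment.
output = subprocess.check_output(["dbus-launch"], universal_newlines=True)
for line in output.splitlines():
    name, _, value = line.partition("=")
    if name.startswith("DBUS_SESSION_BUS_"):
        os.environ[name] = value

# 2. Spawn the GConf daemon; it finds the bus through those variables.
subprocess.check_call(["gconftool-2", "--spawn"])

# 3. GConf calls made from this process (or its children) should now reach the daemon.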
I know it's complicated.
gconftool-2 provides a --direct option that enables you to set GConf variables without needing to communicate with the server, but I haven't been able to find an equivalent option for the Python bindings (short of outputting XML manually).
Edit: For future reference, if anybody wants to run dbus-launch from within a normal bash script (as opposed to a Python script, as this thread is discussing), it is quite easy to retrieve the session bus address for use within the script:
#!/bin/bash
eval `dbus-launch --sh-syntax`
export DBUS_SESSION_BUS_ADDRESS
export DBUS_SESSION_BUS_PID
do_other_stuff_here
Well, I think I understand the question. It looks like your script just needs to start the dbus daemon, or make sure it's started. I believe "session" here refers to a dbus session (here is some evidence), not a Gnome session. Dbus and gconf both run fine without Gnome.
Either way, faking an "active session" sounds like a pretty bad idea. It would only look for it if it needed it.
Perhaps we could see the script in a pastebin? I really should have seen it before making any comment.
Thanks, Ali & Jeremy - both your answers were a big help. I'm still working on this (though I've stopped for the evening).
First, I took the hint from Ali and was trying part of Jeremy's suggestion: I was using dbus-launch to run "gconftool-2 --spawn". It didn't work for me; I now understand why (thanks, Jeremy) - I was trying to use gconf from within the same Python program that was launching dbus and gconftool, but its environment didn't have the environment variables - duh.
I set that strategy aside when I noticed gconftool-2's --direct option; internally, gconftool-2 is using API that isn't exposed by the gconf python bindings. So, I modified python-gconf to expose the extra method, and once that builds (I had some unrelated problems getting this to work), we'll see if that fixes things - if it doesn't (and maybe if it does, because building those bindings seems to build all of gnome!), I'll find a better way to manage the environment variables in that first strategy.
(I'll add another answer here tomorrow either way)
And it's the next day: I ran into a little trouble with my modified python-gconf, which inspired me to try Jeremy's simpler idea, which worked fine - before doing the first gconf operation, I simply ran "dbus-launch", parsed the resulting name-value pairs, and added them directly to python's environment. Having done that, I ran "gconftool-2 --spawn". Problem solved.
