From what I've read, any changes to the environment variables in a Python instance are only available within that instance, and disappear once the instance is closed. Is there any way to make them stick by committing them to the system?
The reason I need to do this is because at the studio where I work, tools like Maya rely heavily on environment variables to configure paths across multiple platforms.
My test code is
import os
os.environ['FAKE'] = 'C:\\'
Opening another instance of Python and requesting os.environ['FAKE'] yields a KeyError.
NOTE: Portability will be an issue, but the small API I'm writing will be able to check OS version and trigger different commands if necessary.
That said, I've gone the route of using the Windows registry technique and will simply write alternative methods that will call shell scripts on other platforms as they become requirements.
You can use SETX at the command line.
By default these actions go on the USER environment variables. To set and modify the SYSTEM variables, use the /M flag. Note that SETX stores the value in the registry for future processes; it does not change the environment of the process that runs it.
import os
env_var = "BUILD_NUMBER"
env_val = "3.1.3.3.7"
# /M writes to the SYSTEM variables and requires an elevated (administrator) prompt
os.system("SETX {0} {1} /M".format(env_var, env_val))
"make them stick by committing them to the system?"
I think you are a bit confused here. There is no 'system' environment. Each process has its own environment as part of its memory. A process can only change its own environment; a process can set the initial environment for the processes it creates.
If you really do think you need to set environment variables for the system, you will need to look at changing them in the location they are initially loaded from, like the registry on Windows or your shell configuration file on Linux.
Under Windows it's possible for you to make changes to environment variables persistent via the registry with this recipe, though it seems like overkill.
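As a rough illustration of the registry approach (this is a sketch, not the linked recipe itself): per-user variables live under HKEY_CURRENT_USER\Environment and can be written with the standard-library winreg module.
import winreg

# Per-user environment variables are stored under HKCU\Environment
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, 'Environment', 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, 'FAKE', 0, winreg.REG_SZ, 'C:\\')
# Already-running programs are not notified automatically; new processes
# will only see the change after a WM_SETTINGCHANGE broadcast or a logoff/logon.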
To echo Brian's question, what are you trying to accomplish? There is probably an easier way.
It seems there is a simpler solution for Windows:
import subprocess
# setx persists the variable for future processes (per-user by default);
# with an argument list there is no need for shell=True
subprocess.call(['setx', 'Hello', 'World!'])
I don't believe you can do this; there are two work-arounds I can think of.
The os.putenv function sets the environment for the processes you start, e.g. with os.system, popen, etc. Depending on what you're trying to do, perhaps you could have one master Python instance that sets the variable and then spawns new instances.
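Here is a minimal sketch of that master-instance idea (sys.executable just re-runs the current interpreter; the child command is purely illustrative):
import os
import subprocess
import sys

# copy this process's environment, modify it, and hand it to the child
child_env = os.environ.copy()
child_env['FAKE'] = 'C:\\'
subprocess.call(
    [sys.executable, '-c', "import os; print(os.environ['FAKE'])"],
    env=child_env)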
You could run a shell script or batch file to set it for you, but that becomes much less portable. See this article:
http://code.activestate.com/recipes/159462/
Think about it this way.
You're not setting shell environment variables.
You're spawning a subshell with some given environment variable settings; this subshell runs your application with the modified environment.
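In code, that looks something like this: the change to os.environ is inherited by the subshell, but it vanishes when your own process exits.
import os
import subprocess

os.environ['FAKE'] = 'C:\\'   # visible to this process and its children...
# ...including this subshell (cmd syntax shown; use $FAKE in a POSIX shell)
subprocess.call('echo %FAKE%', shell=True)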
According to this discussion, you cannot do it. What are you trying to accomplish?
You are forking a new process, and a child cannot change the environment of its parent, just as a shell started from another shell cannot change its parent's environment.
You might want to try the Python Win32 Extensions, developed by Mark Hammond, which are included in ActivePython (or can be installed separately). You can learn how to perform many Windows-related tasks in Hammond's and Robinson's book.
Using PyWin32 to access Windows COM objects, a Python program can use the Environment property (a collection of environment variables) of the WScript.Shell object.
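A hedged sketch of that approach (the exact item-assignment syntax may vary with your PyWin32 version):
import win32com.client

shell = win32com.client.Dispatch('WScript.Shell')
env = shell.Environment('USER')   # or 'SYSTEM' (needs admin), 'PROCESS', 'VOLATILE'
env['FAKE'] = 'C:\\'              # persisted for the user via the registry
print(env['FAKE'])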
Try py-setenv, which will allow you to set variables via the registry:
python -m pip install py-setenv
From within Python? No, it can't be done!
If you are not bound to Python, you should consider using shell scripts (sh, bash, etc.). The "source" command allows you to run a script that modifies the environment, and the changes will "stick" like you want in the shell you "sourced" the script in. What's going on here is that the shell executes the script directly rather than creating a sub-process to execute it.
This will be quite portable - you can use Cygwin on Windows to do this.
In case someone might need this info: I realize this was asked 7 years ago, but even I forget how sometimes.
Yes, there is a way to make them "stick" in Windows. Simply go to Control Panel, System, Advanced system settings; when the System Properties window opens you should see a button for Environment Variables. The process for getting to this is a little different depending on what OS version you're using (google it).
Choose that (click the button), and the Environment Variables window will open. It has two split panes; the top one should be "User variables for yourusername". Choose "New", then simply set the variable. For instance, one of mine is "Database_Password = mypassword".
Then in your app you can access it like this (note that pass is a reserved word in Python, so use another name):
import os
password = os.environ.get('Database_Password')
I have to call some shell commands in my Python script as root, and I do not want to run the entire Python script as root. I can of course prefix those commands with sudo in Popen, but that does not really show which commands ask for sudo. Also, I have heard the recommended way to do such things is through polkit. We were using pkexec; however, it had nasty bugs resulting in security holes, and pkexec seems to be on the brink of deprecation.
I know we could somehow wrap the commands into a D-Bus service of our own and then run our script as a D-Bus client. However, this solution is rather annoying, and I do not know much about D-Bus. I wonder if there is a way to just call shell commands as root without creating our own D-Bus/systemd service. Is there already some D-Bus service that lets us run shell commands?
If someone can give me an example similar to this, but that can also call commands like echo test > /root/test.txt, it would be highly appreciated.
(Note: I’ve Linux in mind, but the problem may apply on other platforms.)
Problem: Linux doesn’t do suid on #! scripts nor does it activate “Linux capabilities” on them.
Why do we have this problem? Because during the kernel's interpreter setup to run the script, an attacker may have replaced that file. How? The formerly trusted suid/capability-enabled script file may be in a directory he has control over (e.g. he can delete the not-owned trusted file, or the file is actually a symbolic link he owns).
Proper solution: make the kernel allow suid/cap scripts if: a) it is clear that the caller has no power over the script file, or b) like a couple of other operating systems do, pass the script as /dev/fd/x, referring to the originally kernel-opened trusted file.
Answer I’m looking for: for kernels which can’t do this (all Linux), I need a safe “now” solution.
What do I have in mind? A binary wrapper, which does what the kernel does not, in a safe way.
I would like to
hear from established wrappers for (Python) scripts that pass Linux capabilities and possibly suid from the script file to the interpreter to make them effective.
get comments on my wrapper proposed below
Problems with sudo: sudo is not a good wrapper, because it doesn’t help the kernel to not fall for that just explained “script got replaced” trap (“man sudo” under caveats says so).
Proposed wrapper
actually, I want a little program, which generates the wrapper
command line, e.g.: sudo suid_capability_wrapper ./script.py
script.py already has the suid bit and capabilities set (no function, just information)
the generator suid_capability_wrapper does
generate C(?) source and compile
name the compiled output: by default, basename script.py .py, or the argument of -o
set the wrapper owner, group, suid like script.py
set the permitted capabilities like script.py, ignore inheritable and effective caps
warn if the interpreter (e.g. /usr/bin/python) does not have the corresponding caps in its inheritable set (this is a system limitation: there is no way to pass on capabilities without suid-root otherwise)
the generated code does:
check if file descriptors 0, 1 and 2 are open, abort otherwise (possibly add more checks for too crazy environment conditions)
if the compiled-in target script path is relative, determine self's location via /proc/self/exe
combine own path with relative path to the script to find it
check if the target script's owner, group, permissions, caps, and suid are still like the original (compiled-in) [this is the only non-necessary safety check I want to include: otherwise I trust that script]
set the set of inherited capabilities equal to the set of permitted capabilities
execve() the interpreter similar to how the kernel does, but use the script-path we know, and the environment we got (the script should take care of the environment)
A bunch of notes and warnings may be printed by suid_capability_wrapper to educate the user about:
make sure nobody can manipulate the script (e.g. world writable)
be aware that suid/capabilities come from the wrapper, nothing cares about suid/xattr mounts for the script file
the interpreter (python) is execve()ed; it will get a dirty environment from here
it will also get the rest of the standard process environment passed through it, which is ... ... ... (read man-pages for exec to begin with)
use #!/usr/bin/python -E to immunize the python interpreter from environment variables
clean the environment yourself in the script or be aware that there is a lot of code you run as side-effect which does care about some of these variables
You don't want to use a shebang at all, on any file - you want to use a binary which invokes the Python interpreter, then tells it to start the script file for which you asked.
It needs to do three things:
Start a Python interpreter (from a trusted path, breaking chroot jails and so on). I suggest statically linking libpython and using the CPython API for this, but it's up to you.
Open the script file FD and atomically check that it is both suid and owned by root. Don't allow the file to be altered between the check and the execution - be careful.
Tell CPython to execute the script from the FD you opened earlier.
This will give you a binary which will execute all owned-by-root-and-suid scripts under Python only. You only need one such program, not one per script. It's your "suidpythonrunner".
As you surmised, you must clear the environment before running Python. LD_LIBRARY_PATH is taken care of by the kernel, but PYTHONPATH could be deadly.
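A sketch of the middle step in Python (the path is hypothetical; a C runner would use open/fstat the same way):
import os
import stat

fd = os.open('/usr/local/bin/script.py', os.O_RDONLY)   # hypothetical path
st = os.fstat(fd)   # fstat the open descriptor, not the path: no check-then-use race
if st.st_uid != 0 or not (st.st_mode & stat.S_ISUID):
    raise SystemExit('refusing to run: script is not root-owned and suid')
# From here on, execute from the descriptor (e.g. /proc/self/fd/<fd>), never
# from the original path, so the file you checked is the file that runs.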
I have been playing around with subprocess lately. As I do more and more, I find myself needing root access. I was wondering if there is an easy way to enter the root password for a command that needs it with the subprocess module, so that when I am prompted for the password, my script can provide it and run the command. I know this is bad practice, but where the code will be running is sandboxed and separate from the rest of the system; I also don't want to be running as root.
I would really appreciate a small example if possible. I know you can do this with expect, but I am looking for something more Python-centric. I know pexpect exists, but it's a bit overkill for this simple task.
Thanks.
It would probably be best to leverage sudo for the user running the Python program. You can specify specific commands and arguments that can be run from sudo without requiring a password. Here is an example:
There are many approaches but I prefer the one that assigns command sets to groups. So let's say we want to create a group to allow people to run tcpdump as root. So let's call that group tcpdumpers.
First you would create a group called tcpdumpers. Then modify /etc/sudoers (using the visudo command):
# Command alias for tcpdump
Cmnd_Alias TCPDUMP = /usr/sbin/tcpdump
# This is the group that is allowed to run tcpdump as root with no password prompt
%tcpdumpers ALL=(ALL) NOPASSWD: TCPDUMP
Now any user added to the tcpdumpers group will be able to run tcpdump like this:
% sudo tcpdump
From there you could easily run this command as a subprocess.
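For example (assuming the invoking user is in the tcpdumpers group configured above; the arguments are illustrative only):
import subprocess

# sudo will not prompt for a password thanks to the NOPASSWD rule above
output = subprocess.check_output(['sudo', 'tcpdump', '-c', '10'])
print(output)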
This eliminates the need to hard-code the root password into your program code, and it enables granular control over who can run what with root privileges on your system.
I need a better way to prevent normal users from executing my Python script. I'm doing something like this:
import os

if __name__ == '__main__':
    if os.getenv('USER') == 'root':
        addUser()
    else:
        print 'Only root can run that!'
It's working, but it's pretty ugly!
My script is about user management in a Debian system.
Python code can be viewed and edited to circumvent any protection you put in; your best bet is to restrict access in Debian so that only root can execute, view, or edit the script.
See chmod: for example, make the script owned by root with mode 700, so only root can read, edit, or execute it.
It's more normal to restrict access to the resources an executable needs to work than to enforce permissions at the level of the executable. For example, the mount(8) command can normally be run by any user, but the device files needed to actually mount real volumes are restricted to certain users or groups, and the mount command checks to see if the operation would be possible before even attempting to make the syscalls to perform the device operations.
This works as well with regular files. For instance, many Linux package managers require a database of installed programs. Before installing anything, the package manager will check the permissions on the database file to see if the calling user could write to it, and also checks the destination directories to see if the user could modify those. Even if the package manager did not perform these checks, it couldn't make those changes when it tried; the kernel simply prevents the program from performing an action the calling user is not permitted to perform.
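A small sketch of that check-before-acting pattern (the database path is hypothetical):
import os

db = '/var/lib/mypkg/installed.db'   # hypothetical database path
if not os.access(db, os.W_OK):
    raise SystemExit('insufficient permission to modify ' + db)
# Even without this check, an unprivileged write would simply fail with
# EACCES: the kernel enforces the permissions either way.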