Prevent normal users from executing a Python script

I need a better way to prevent normal users from executing my Python script. I'm doing something like this:
import os

if __name__ == '__main__':
    if os.getenv('USER') == 'root':
        addUser = addUser()
    else:
        print('Only root can run that!')
It's working, but it's pretty ugly!
My script is about user management on a Debian system.

Python code can be viewed and edited to circumvent any protection you put in it; your best bet is to restrict access to the script itself on Debian so that only root can execute, view, or edit it.
See chmod.
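If you do keep an in-script guard alongside the file permissions, comparing the effective UID is less fragile than reading the USER environment variable (which the caller can override); a minimal sketch:

import os
import sys

def require_root():
    # The effective UID is 0 only for root; unlike the USER environment
    # variable, it cannot simply be overridden by the caller.
    if os.geteuid() != 0:
        sys.exit('Only root can run that!')

if __name__ == '__main__':
    require_root()
    # ... user-management code goes here ...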

It's more normal to restrict access to the resources an executable needs to work than to enforce permissions at the level of the executable. For example, the mount(8) command can normally be run by any user, but the device files needed to actually mount real volumes are restricted to certain users or groups, and the mount command checks to see if the operation would be possible before even attempting to make the syscalls to perform the device operations.
This works with regular files as well. For instance, many Linux package managers keep a database of installed programs. Before installing anything, the package manager checks the permissions on the database file to see whether the calling user could write to it, and also checks the destination directories to see whether the user could modify those. Even if the package manager did not perform these checks, it still could not make those changes when it tried: the kernel simply prevents the program from performing an action the calling user is not permitted to make.
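A minimal sketch of that approach, with a hypothetical database path standing in for whatever resource your script actually manages:

import os
import sys

DB_PATH = '/var/lib/mytool/users.db'  # hypothetical database the tool must modify

def main():
    # Ask "can I write the resource I need?" instead of "am I root?".
    # This is only a courtesy check for a friendlier error message; the
    # kernel enforces the permissions regardless of what we do here.
    if not os.access(DB_PATH, os.W_OK):
        sys.exit('Cannot write %s; run with sufficient privileges.' % DB_PATH)
    with open(DB_PATH, 'a') as db:
        db.write('new entry\n')

if __name__ == '__main__':
    main()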

Related

Modify .bash_aliases with Python

So I've been trying to modify .bash_aliases programmatically for a while now, and I've been running into issues with every method I've tried.
Running my script using sudo python3 myscript.py causes the script to modify the .bash_aliases file of the root user. I can't find a way to determine what user ran the script to modify their file.
Trying to use a shell command such as sudo echo "my string" >> ~/.bash_aliases gets an error: sh: 1: cannot create /home/migue/.bash_aliases: Permission denied, presumably because sudo can't display its password prompt when I call it programmatically.
I can't find a way to temporarily get root permissions after determining the full path (i.e. expanding ~) of the file.
Basically, I'd love to know any reasonable method to modify and append to .bash_aliases through a Python script. I haven't found any questions on this where the solutions worked for me.
I'd prefer for this method to not require any non-standard modules, as installing them will just make the process less seamless for people who use the script.
I can't find a way to determine what user ran the script to modify their file.
You can reference the file as ~/.bash_aliases in your script and run it without sudo (unless your current user is root); ~ then expands to the home directory of the user who ran the script.
EDIT:
You simply need to add write privileges to .bash_aliases for every user it belongs to.
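For example, run without sudo, a plain append works because ~ expands to the invoking user's home directory; a minimal sketch (the alias line is only an example):

import os

alias_line = "alias ll='ls -alF'\n"           # example alias, not from the question
path = os.path.expanduser('~/.bash_aliases')  # expands to the invoking user's home

# Append mode creates the file if it is missing and never truncates it.
with open(path, 'a') as f:
    f.write(alias_line)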

python: sudo context manager?

Is there any possible way to implement a sudo context manager which runs the enclosing scope as another user, using the sudoers system?
system('whoami')      # same result as echo $USER
with sudo():
    system('whoami')  # root
I doubt that the sudo(8) executable will help me here, but maybe there is some C-level interface that I can bind to?
Motivation: I can port this shell script almost entirely to Python without any subprocesses, except that I currently have to call system('sudo sh -c "echo %i > /dev/thatfile"' % value). It would be so elegant if I could write: with sudo(), open('/dev/thatfile', 'w') as thatfile: thatfile.write(str(value)).
I suspect this is not possible in any simple way. Programs that escalate their privileges, like sudo, must have a flag set in their file system permissions (this is the "setuid" bit) in order to tell the operating system to run them as a different user than the one that started them. Unless you want your whole Python interpreter to be setuid root, there's no direct way to do something equivalent for just some small part of your Python code.
It might conceivably be possible to implement a sudo style context manager not by making your regular Python code run privileged, but rather by temporarily replacing the library code that makes various OS calls (such as opening a file) with some kind of proxy that connects it to a setuid helper program. But it would be a lot of work to get something like that to work, and a lot more work to make sure it was secure enough to use anywhere in production.
An idea, if you don't like your current solution of using a shell script from a system call: Write the file using regular Python code, with your regular user permissions. Then chown it (and move it, if necessary) with a sudo call.
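A minimal sketch of that idea for a regular target file; the paths and the assumption that sudo is available are mine:

import subprocess
import tempfile

value = 42                    # example value
target = '/path/to/thatfile'  # example destination; adjust to your case

# Write the content as the unprivileged user first.
with tempfile.NamedTemporaryFile('w', delete=False) as tmp:
    tmp.write(str(value))
    tmp_path = tmp.name

# Then let sudo do only the privileged part: move the file into place
# and fix its ownership. sudo may prompt for a password here.
subprocess.run(['sudo', 'mv', tmp_path, target], check=True)
subprocess.run(['sudo', 'chown', 'root:root', target], check=True)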

Changing user within python shell

I am using Ubuntu on my server.
I have two users, call them user1 and user2.
Each user has their own project folder with permissions set according to their needs. But user1 needs to run a Python script which is in the other user's project folder. I use subprocess.Popen for this. The Python file has the required access permissions, so I have no problem calling that script. But the log files (which are only accessible to user2) cause a permission denied error, since they belong to the other user, not the one I need to use.
So I tried to change the user with:
Popen("exit", shell=True, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE) #exit from current user, and be root again
Popen(["sudo", "user2"], shell=True, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)
Popen("/usr/bin/python /some/file/directory/somefile.py param1 param2 param3", shell=True, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)
But the second Popen fails with:
su: must be run from a terminal
Is there any way to change user within python shell?
PS: For certain reasons, I cannot change the permissions on the log files. I need to find a way to switch to the other user...
EDIT: I need to explain a bit more to make things clear...
I have two different Django projects running. Each project has its own user and its own folder where that project's code and logs are kept.
Now, in some cases, I need to transfer some data from project1 to project2. The easiest way, it seems to me, is to write a Python script that accepts parameters and does the relevant job [data insertion] on the second project.
So, when certain functions are called within project1, I want to call the Python script that is in the project2 project folder, so I can do my data update on the second project.
But since I call Popen from within project1, the current user is user1, and my script (which is in project2) has log files that I'm denied access to because of their permissions...
So, somehow, I need to switch from user1 to user2 so that I won't have permission problems when I call my Python file. Or find another way to do this...
Generally, there is no way to masquerade as another user without having root permission or knowing the second user's login details. This is by design, and no amount of Python will circumvent it.
Python-ey ways of solving the problem:
If you have the permission, you can use os.setuid(x) (where x is a numerical user id) to change your effective user id. However, this will require your script to be run as root. Have a look at the os module documentation (see the sketch after this list).
If you have the permission, you can also try su -c <command> instead of sudo. su -c will require you to provide the password on stdin (for which you can use the pexpect library)
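A minimal sketch of the os.setuid approach, assuming the calling process starts as root; the user name and log path are just examples:

import os
import pwd

# Assumes the calling script was started as root; only root may switch UIDs.
target = pwd.getpwnam('user2')   # look up user2's numeric uid/gid

os.setgid(target.pw_gid)         # drop the group first, then the user
os.setuid(target.pw_uid)

# From here on, files are opened with user2's permissions.
# The log path below is just an example.
with open('/home/user2/project2/logs/app.log', 'a') as log:
    log.write('entry written as user2\n')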
Unix-ey ways of solving the problem:
Set the group executable permission on the script and have both users in the same group.
Add the setgid bit on the script so that user1 can run it effectively with user2's group:
$ chmod g+s script
This will allow group members to run the script with user2's group. (The same could be done for 'all', but that probably wouldn't be a good idea...)
[EDIT]: Revised question
The answer is that you're being far too promiscuous. Project A and project B probably shouldn't interact by messing around with each other's files via Popening each other's scripts. Instead, the clean solution is to have a web interface on project A that exposes the functionality you need, and to call it from project B. That way, if in the future you want to move A to a different host, it's not a problem.
If you insist, you might be able to trick sudo (if you have permission) by running it inside a shell script instead. For example, have a script:
#!/bin/sh
sudo my/problematic/script.sh
Then chmod it +x and execute it from Python. This will get around the problem of not being able to run sudo outside a terminal.
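On the Python side that could look roughly like this, assuming the two-line script above is saved as wrapper.sh in the current directory:

import os
import subprocess

os.chmod('wrapper.sh', 0o755)                  # make the wrapper executable
subprocess.run(['./wrapper.sh'], check=True)   # sudo inside it may still ask for a password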

Is this a safe suid/capability wrapper for (Python) scripts?

(Note: I have Linux in mind, but the problem may apply to other platforms.)
Problem: Linux doesn’t do suid on #! scripts nor does it activate “Linux capabilities” on them.
Why do we have this problem? Because during the kernel's interpreter setup to run the script, an attacker may have replaced that file. How? The formerly trusted suid/capability-enabled script file may be in a directory he has control over (e.g. he can delete the trusted file he doesn't own, or the file is actually a symbolic link he owns).
Proper solution: make the kernel allow suid/cap scripts if a) it is clear that the caller has no power over the script file, or b) (as a couple of other operating systems do) the script is passed to the interpreter as /dev/fd/x, referring to the file originally opened by the kernel.
Answer I’m looking for: for kernels which can’t do this (all Linux), I need a safe “now” solution.
What do I have in mind? A binary wrapper, which does what the kernel does not, in a safe way.
I would like to
hear from established wrappers for (Python) scripts that pass Linux capabilities and possibly suid from the script file to the interpreter to make them effective.
get comments on my wrapper proposed below
Problems with sudo: sudo is not a good wrapper, because it doesn't keep the kernel from falling into the "script got replaced" trap just explained ("man sudo" says so under Caveats).
Proposed wrapper
Actually, I want a little program which generates the wrapper:
command line, e.g.: sudo suid_capability_wrapper ./script.py
script.py already has the suid bit and capabilities set (no function, just information)
the generator suid_capability_wrapper does
generate C(?) source and compile
compile the output into a file named, by default, basename script.py .py, or the name given with the -o argument
set the wrapper owner, group, suid like script.py
set the permitted capabilities like script.py, ignore inheritable and effective caps
warn if the interpreter (e.g. /usr/bin/python) does not have the corresponding caps in its inheritable set (this is a system limitation: there is no way to pass on capabilities without suid-root otherwise)
the generated code does:
check if file descriptors 0, 1 and 2 are open, abort otherwise (possibly add more checks for too crazy environment conditions)
if the compiled-in target script path is relative, determine the wrapper's own location via /proc/self/exe
combine its own path with the relative path to the script to find it
check if the target script's owner, group, permissions, caps, and suid are still like the original (compiled-in) values [this is the only non-necessary safety check I want to include; otherwise I trust that script]
set the set of inherited capabilities equal to the set of permitted capabilities
execve() the interpreter similar to how the kernel does, but use the script-path we know, and the environment we got (the script should take care of the environment)
A bunch of notes and warnings may be printed by suid_capability_wrapper to educate the user about:
make sure nobody can manipulate the script (e.g. world writable)
be aware that suid/capabilities come from the wrapper, nothing cares about suid/xattr mounts for the script file
the interpreter (python) is execve()ed; it will get a dirty environment from here
it will also get the rest of the standard process environment passed through it, which is ... ... ... (read man-pages for exec to begin with)
use #!/usr/bin/python -E to immunize the python interpreter from environment variables
clean the environment yourself in the script or be aware that there is a lot of code you run as side-effect which does care about some of these variables
You don't want to use a shebang at all, on any file - you want to use a binary which invokes the Python interpreter, then tells it to start the script file for which you asked.
It needs to do three things:
Start a Python interpreter (from a trusted path, breaking chroot jails and so on). I suggest statically linking libpython and using the CPython API for this, but it's up to you.
Open the script file FD and atomically check that it is both suid and owned by root. Don't allow the file to be altered between the check and the execution - be careful.
Tell CPython to execute the script from the FD you opened earlier.
This will give you a binary which will execute all owned-by-root-and-suid scripts under Python only. You only need one such program, not one per script. It's your "suidpythonrunner".
As you surmised, you must clear the environment before running Python. LD_LIBRARY_PATH is taken care of by the kernel, but PYTHONPATH could be deadly.
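The answer describes a small C binary, but the core check-then-exec-by-descriptor pattern can be sketched in Python to show the intent; this sketch gains no privileges by itself (only a setuid binary can) and assumes the interpreter lives at /usr/bin/python3:

import os
import stat
import sys

script = sys.argv[1]

# Open the script ONCE and do every check on that file descriptor, so the
# file cannot be swapped between the check and the execution (no TOCTOU gap).
fd = os.open(script, os.O_RDONLY)
st = os.fstat(fd)

if st.st_uid != 0 or not (st.st_mode & stat.S_ISUID):
    sys.exit('refusing to run: script is not root-owned and setuid')

# Hand exactly the descriptor we checked to the interpreter via /proc/self/fd,
# with a cleared environment and -E so PYTHON* variables are ignored.
os.set_inheritable(fd, True)  # descriptors are close-on-exec by default in Python 3
os.execve('/usr/bin/python3', ['python3', '-E', '/proc/self/fd/%d' % fd], {})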

Creating Read-only logs with python

I am writing a Python script that needs to make a log entry whenever it is invoked. The log created by the script must not be changeable by the (non-root) user who invoked the script. I tried the syslog module, and while it does exactly what I want in terms of file permissions, I need to be able to put the resulting log file in an arbitrary location. How would I go about doing this?
I see you are on Linux. Depending on which filesystem you are using, you may be able to use the chattr command: you can make files append-only by setting the a attribute.
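A minimal sketch of that approach, assuming the script itself already runs as root and the (example) log path sits on a filesystem such as ext4 that supports the append-only attribute:

import os
import subprocess

LOG_PATH = '/var/log/mytool/actions.log'   # example of an "arbitrary location"

def append_entry(line):
    if not os.path.exists(LOG_PATH):
        # Create the file, then mark it append-only. chattr +a needs root
        # (CAP_LINUX_IMMUTABLE); afterwards even the owner can only append
        # until root clears the attribute again.
        open(LOG_PATH, 'a').close()
        subprocess.run(['chattr', '+a', LOG_PATH], check=True)
    with open(LOG_PATH, 'a') as log:
        log.write(line + '\n')

append_entry('script invoked')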
Run your script with setuid root.
