I need a way to execute os.system() calls as different UIDs. They would need to behave similarly to the following Bash code (note these are not the exact commands I am executing):
su user1
ls ~
mv file1
su user2
ls ~
mv file1
The target platform is generic GNU/Linux.
Of course I could just pass these commands to os.system(), but how would I send the password? Of course I could run the script as root, but that's sloppy and insecure.
Preferably I would like to do this without requiring any passwords to be stored in plain text.
I think that's not trivial: you can do that with a shell because each command is launched in its own process, which has its own UID. But with Python, everything will run under the UID of the Python interpreter's process (assuming, of course, that you don't launch subprocesses using the subprocess module and co). I don't know of a way to change the user of a running process - I don't know if that's even possible - and even if it were, you would at least need admin privileges.
What are you trying to do exactly? This does not sound like the right approach for administrative purposes, for example. Generally, admin scripts run as a privileged user, because nobody knows the password of user2 except user2 (in theory). Being root means su user always works for a 'normal' user, without requesting a password.
Maybe sudo can help you here; otherwise you must be root to execute os.setuid.
Alternatively, if you want to have fun, you can use pexpect to do things.
Something like this - you can improve on it:
import sys
import pexpect

# Spawn su and answer its password prompt.
p = pexpect.spawn("su guest")
p.logfile = sys.stdout
p.expect('Password:')
p.sendline("guest")
The function you're looking for is called os.seteuid. I'm afraid you probably won't escape executing the script as root, in any case, but I think you can use the capabilities(7) framework to 'fence in' the execution a little, so that it can change users--but not do any of the other things the superuser can.
Alternatively, you might be able to do this with PAM. But generally speaking, there's no 'neat' way to do this, and David Cournapeau is absolutely right that it's traditional for admin scripts to run with privileges.
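To make the privilege-dropping concrete, here is a minimal sketch of the common pattern of starting as root and demoting a child process before it runs a command (the UID/GID value 1000 and the ls command are placeholders):

import os
import subprocess

def run_as(uid, gid, cmd):
    """Run cmd in a child process demoted to the given UID/GID."""
    def demote():
        # Drop the group first; after setuid() we would no longer
        # have permission to call setgid().
        os.setgid(gid)
        os.setuid(uid)
    # preexec_fn runs in the child just before exec, so only the
    # child changes identity; the parent keeps its privileges.
    return subprocess.call(cmd, preexec_fn=demote)

run_as(1000, 1000, ['ls', '/home/user1'])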
Somewhere along the line, some process or other is going to need an effective UID of 0 (root), because only such a process can set the effective UID to an arbitrary other UID.
At the shell, the su command is a SUID-root program; it is appropriately privileged (POSIX jargon) and can set the real and effective UID. Similarly, the sudo command can do the same job. With sudo, you can also configure which commands and UIDs are allowed. The crucial difference is that su requires the target user's password to let you in; sudo requires the password of the user running it.
There is, of course, the issue of whether a user should know the passwords of other users. In general, no user should know any other user's password.
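As an illustration of sudo's configurability, a single sudoers rule (edited via visudo; the user and command names here are made up) can let one user run one specific command as other users without any password at all:

alice ALL=(user1,user2) NOPASSWD: /usr/local/bin/somecmd

With that rule in place, alice can run sudo -u user2 /usr/local/bin/somecmd and no plain-text password ever enters the picture.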
Scripting UID changes is hard. You can do:
su altuser -c "commands to execute as altuser"
sudo -u altuser commands to execute as altuser
However, su will demand a password from the controlling terminal (and will fail if there is no controlling terminal). If you use sudo, it will cache credentials (or can be configured to do so) so you only get asked once for a password - but it will prompt the first time just like su does.
Working around the prompting is hard. You can use tools parallel to expect which handle pseudo-ttys for you. However, you are then faced with storing passwords in scripts (not a good idea) or somehow stashing them out of sight.
The tool I use for the job is one I wrote, called asroot. It allows me to control precisely the UID and GID attributes that the child process should have. But it is designed to only allow me to use it - that is, at compile time, the authorized username is specified (of course, that can be changed). However, I can do things like:
asroot -u someone -g theirgrp -C -A othergrp -m 022 -- somecmd arg1 ...
This sets the real and effective UID to 'someone', sets the primary group to 'theirgrp', removes all auxiliary groups, and adds 'othergrp' (so the process belongs to just two groups); it also sets the umask to 022 and then executes 'somecmd' with the arguments given.
For a specific user who needs limited (or not so limited) access to other user accounts, this works well. As a general solution, it is not so hot; sudo is better in most respects, but still requires a password (which asroot does not).
Related
I am trying to use Perl to SSH into a machine and issue a few commands...
use Net::SSH::Perl;

my $ssh = Net::SSH::Perl->new($host);
$ssh->login($user, $pass);
my ($stdout, $stderr, $exit) = $ssh->cmd($cmd);
print "$stdout\n";
This is the general idea, but I am still stuck at the prompt for the password. I have also attempted to use Python, and also cannot get past the password prompt. Expect, on the other hand, handles the password just fine. Does anyone have any idea what would cause this?
Edit: additional info
I am using PuTTY to connect to a Linux machine and then ssh into numerous other Linux machines. Using the Perl code above, the user name gets set correctly, but I end up having to manually enter the password each time.
use strict;
use warnings;
use Expect;
my $exp = Expect->spawn("ssh $host -l $user");
sleep(1);
$exp->expect($timeout,"Password:");
$exp->send("$pass\r");
$exp->expect($timeout,-re,'>');
$exp->send("ls -l\r");
$exp->expect($timeout,-re,'>');
$exp->send("mkdir aDir\r");
$exp->expect($timeout,-re,'>');
$exp->send("chmod 777 aDir\r");
$exp->expect($timeout,-re,'>');
$exp->send("exit\r");
I left out the variable declarations for obvious reasons... Not an exact answer to the question, but a viable workaround using only the Expect module.
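For the Python attempt mentioned in the question, the paramiko library can authenticate with a password directly, so no interactive prompt is involved. A minimal sketch (the host, credentials, and command are placeholders):

import paramiko

client = paramiko.SSHClient()
# Accept unknown host keys for this sketch; verify them properly in production.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('10.0.0.5', username='user', password='secret')

stdin, stdout, stderr = client.exec_command('ls -l')
print(stdout.read())
client.close()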
I have a situation where I am doing some computation in Python, and based on the outcomes I have a list of target files that are candidates to be passed to a second program.
For example, I have 50,000 files which contain ~2000 items each. I want to filter for certain items and call a command line program to do some calculation on some of those.
This Program #2 can be used via the shell command line, but it also requires a lengthy set of arguments. For performance reasons I would have to run Program #2 on a cluster.
Right now, I am running Program #2 via
subprocess.call("...", shell=True)
But I'd like to run it via qsub in the future.
I don't have much experience with how exactly this could be done in a reasonably efficient manner.
Would it make sense to write temporary qsub job files and run them via subprocess directly from the Python script? Is there a better, maybe more Pythonic, solution?
Any ideas and suggestions are very welcome!
It makes perfect sense, although I would go for another solution.
As far as I understand, you have programme #1 that determines which of your 50,000 files need to be computed by programme #2.
Both programme #1 and #2 are written in Python. Excellent choice.
Incidentally, I have a Python module that might come in handy: https://gist.github.com/stefanedwards/8841307
If you are running the same qsub system as I am (no idea what ours is called), you cannot pass command-line arguments to the submitted scripts. Instead, any options are submitted via the -v option, which puts them into environment variables, e.g.:
[me@local ~] $ python isprime.py 1
1: True
[me@local ~] $ head -n 5 isprime.py
#!/usr/bin/python
### This is a python script ...
import os
os.chdir(os.environ.get('PBS_O_WORKDIR','.'))
[me@local ~] $ qsub -v isprime='1 2 3' isprime.py
123456.cluster.control.com
[me@local ~]
Here, isprime.py could handle command line arguments using argparse. Then you just need to check whether the script is running as a submitted job, and then retrieve said arguments from the environment variables (os.environ).
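A sketch of that pattern, following the session above (the PBS_ENVIRONMENT check and the isprime variable name are assumptions about your batch system):

#!/usr/bin/python
import argparse
import os
import sys

# Under PBS, batch jobs have PBS_ENVIRONMENT set, and qsub -v
# isprime='1 2 3' delivers our options as an environment variable.
if 'PBS_ENVIRONMENT' in os.environ:
    os.chdir(os.environ.get('PBS_O_WORKDIR', '.'))
    argv = os.environ.get('isprime', '').split()
else:
    argv = sys.argv[1:]  # started directly: python isprime.py 1

parser = argparse.ArgumentParser()
parser.add_argument('numbers', nargs='+', type=int)
args = parser.parse_args(argv)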
When programme #2 is modified to run on the cluster, programme #1 can submit jobs by using subprocess.call(['qsub', '-v', 'options=...', 'programme2.py'], shell=False).
Another approach would be to queue all the files in a database (say, an SQLite database). Then you could have programme #1 check all non-processed entries in the database, determine the outcome (run, not run, run with special options).
You then have the opportunity to run programme #2 in parallel on the cluster, simply checking the database for files to analyse.
Edit: When Programme #2 is an executable
Instead of a Python script, we use a bash script that takes environment variables and puts them on the command line for the programme:
#!/bin/bash
# Return to the directory qsub was invoked from (falling back to '.').
cd "${PBS_O_WORKDIR:-.}"
# Put options into context/flags etc.; quote the tests so empty
# variables do not make [ -n ] succeed.
if [ -n "$option1" ]; then _opt1="--opt1 $option1"; fi
# We can even define our own defaults.
_opt2='--no-verbose'
if [ -n "$opt2" ]; then _opt2="-o $opt2"; fi
/path/to/exe $_opt1 $_opt2
If you are going for the database solution, then have a Python script that checks the database for unprocessed files, marks a file as being processed (do these two in a single transaction), gets the options, calls the executable with subprocess, marks the file as done when finished, checks for a new file, etc.
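A rough sketch of that worker loop with sqlite3 (the queue.db file, the files table, and its columns are made up for illustration):

import sqlite3
import subprocess

# Assumed schema: files(id INTEGER PRIMARY KEY, path TEXT,
#                       options TEXT, status TEXT DEFAULT 'new')
db = sqlite3.connect('queue.db')
db.isolation_level = None  # manage transactions explicitly

while True:
    # Claim a file: the SELECT and UPDATE share one transaction,
    # so two workers cannot grab the same entry.
    db.execute("BEGIN IMMEDIATE")
    row = db.execute(
        "SELECT id, path, options FROM files WHERE status = 'new' LIMIT 1"
    ).fetchone()
    if row is None:
        db.execute("COMMIT")
        break  # nothing left to process
    file_id, path, options = row
    db.execute("UPDATE files SET status = 'running' WHERE id = ?", (file_id,))
    db.execute("COMMIT")

    # Hypothetical path to programme #2.
    subprocess.call(['/path/to/programme2'] + options.split() + [path])

    db.execute("UPDATE files SET status = 'done' WHERE id = ?", (file_id,))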
You have obviously built yourself a string cmd containing a command that you could enter in a shell to run the 2nd program. You are currently using subprocess.call(cmd, shell=True) to execute the 2nd program from a Python script (it is then executed in a process on the same machine as the calling script).
I understand that you are asking how to submit a job to a cluster so that this 2nd program is run on the cluster instead of the calling machine. Well, this is pretty easy and the method is independent of Python, so there is no 'pythonic' solution, just an obvious one :-) : replace your current cmd with a command that defers the heavy work to the cluster.
First of all, dig into the documentation of your cluster's qsub command (the underlying batch system might be SGE or LSF or whatever; you need to get the corresponding docs) and try to find the shell command line that properly submits an example job of yours to the cluster. It might look as simple as qsub ...args... cmd, where cmd here is the content of the original cmd string. I assume that you now have the entire qsub command needed; let's call it qsubcmd (you have to come up with that on your own, we can't help there). Now all you need to do in your original Python script is call
subprocess.call(qsubcmd, shell=True)
instead of
subprocess.call(cmd, shell=True)
Note that qsub likely only works on very few machines, typically known as your cluster 'head node(s)'. This means that your Python script that wants to submit these jobs should run on this machine (if that is not possible, you need to add an ssh login procedure to the submission process that we don't want to discuss here).
Please also note that, if you have the time, you should look into the shell=True implications of your subprocess usage. If you can circumvent shell=True, this will be the more secure solution. This might however not be an issue in your environment.
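If your qsubcmd has a fixed structure, passing it as an argument list avoids the shell entirely. A sketch (the resource flag and job script name are illustrative):

import subprocess

# Each list element reaches qsub verbatim; no shell parses the command.
subprocess.call(['qsub', '-l', 'walltime=01:00:00', 'jobscript.sh'])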
I'm writing a GUI program that configures your system's settings. For this purpose, the whole program should not be run as root, otherwise it would configure the system for the root user. However, there is one subprocess command that needs to be run as root, and I'm not sure how to safely and properly incorporate this into my GUI, for the following reasons:
The user would almost have to enter their password into the GUI frontend.
I'm not sure how to verify that the user's password was indeed correct, or how to add error-proofing that alerts the user the password is incorrect without just letting the command fail miserably.
How to run this safely, since the user's password is going to be involved.
I've been recommended to create a daemon and pass commands to it. This seems like overkill, since it's just one command that needs to be run. And since the user can't just type it into the terminal, it needs to be handled by the frontend of the GUI.
Does anyone have any other ideas on how to incorporate this feature?
You can use pkexec.
For example:
import subprocess

# pkexec shows the system authentication dialog, then runs the command as root.
proc = subprocess.Popen(['/usr/bin/pkexec', command])
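pkexec also helps with the password-verification concern: it exits with distinctive status codes when authorization is not obtained, so the frontend can react. Continuing the snippet above (the 126/127 convention follows the pkexec man page, but verify it on your target systems; the print calls stand in for a GUI error dialog):

proc.wait()
if proc.returncode == 126:
    # The user dismissed the authentication dialog.
    print('Authentication was cancelled.')
elif proc.returncode == 127:
    # Authorization was not obtained (e.g. wrong password) or an error occurred.
    print('Authentication failed.')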
How can I "su -" and pass the root password with Fabric? My current job doesn't give us sudoers, but instead uses su - to root (stupid in my opinion). Googling, I haven't found a simple (or any working) answer to this.
My normal Fabric code looks like:
from fabric.api import *
env.host_string="10.10.10.10"
env.user="mahuser"
env.password="mahpassword"
run('whoami')
Need to be able to
run('su -')
and have it pass my password.
I hear you saying that your policy does not permit use of the "sudo" command. Understood.
But what HAPPENS when you try using Fabric sudo()? Please try it and report back.
I don't think sudo() "requires" a sudo prompt at the other end. What sudo() is, is a run() command that anticipates a password prompt and attempts to respond to it. That's all.
So in your case, try sudo('su -'). If it fails, try sudo('su - -c whoami') to see if you have any temporary success at all.
The point I want to make is that sudo() and run() are nearly identical EXCEPT that sudo() will anticipate a server prompt, then answer it. That's the difference.
Conversely, I had a different problem recently, where I was trying to suppress the prompt for sudo() using SSH keys. I couldn't get my head around why Fabric was prompting for the password when bash+ssh wasn't. The docs weren't clear, but eventually I realized that the prompt was MY doing, because I thought that sudo-level commands required sudo(). Untrue. If your command requires no prompt, use run(); if your command requires password input, use sudo().
Worst case, if sudo() doesn't work for you, it will still have created an attribute object for the SSH connection. You may or may not then be able to push some "input" into the stdin attribute of that object (I'm not sure this is correct; it's untested. But that's what you'd do with Paramiko: blindly send text down the connection's STDIN and it gets picked up by the prompt).
Absolute worst case, call sudo()/run() on the "expect" command, which WILL work but may not be the simplest, cleanest solution.
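Putting the suggestion above into code: in Fabric 1.x, env.password is used to answer password prompts, both for the SSH login and for the prompt that sudo() watches for. A sketch using the question's values (whether Fabric's prompt detection catches su's Password: prompt depends on your setup):

from fabric.api import env, sudo

env.host_string = "10.10.10.10"
env.user = "mahuser"
env.password = "mahpassword"  # also used to answer the password prompt

# Run one command as root; Fabric answers the prompt automatically.
sudo("su - -c 'whoami'")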
The documentation for os.getuid() says:
Return the current process’s user id.
And the documentation for os.geteuid() says:
Return the current process’s effective user id.
So what is the difference between user id and effective user id?
For me, both work the same (on both 2.x and 3.x). I am using them to check if the script is being run as root.
To understand how os.getuid and os.geteuid differ, you need to understand that they are not Python-specific functions (other than the os module prefix). Those functions wrap the getuid and geteuid system calls that are provided by essentially all Unix-like operating systems.
So, rather than looking at Python docs (which are not likely to give a lot of details), you should look at the docs for your operating system. Here is the relevant documentation for Linux, for example. Wikipedia also has a good article on Unix User IDs.
The difference between the regular UID and the Effective UID is that only the EUID is checked when you do something that requires special access (such as reading or writing a file, or making certain system calls). The UID indicates the actual user who is performing the action, but it is (usually) not considered when examining permissions. In normal programs they will be the same. Some programs change their EUID to add or subtract from the actions they are allowed to take. A smaller number also change their UID, to effectively "become" another user.
Here's an example of a program that changes its EUID: the passwd program (which is used to change your password) must write to the system's password file, which is owned by the root user. Regular users can't write to that file, since if they could, they could change everyone else's password too. To resolve this, the passwd program has a bit set in its file permissions (known as the setuid bit) that indicates to the OS that it should be run with the EUID of the program's owner (e.g. root) even when it is launched by another user. The passwd program would then see its UID as the launching user and its EUID as root. Writing to the system password file requires the EUID to be privileged. The UID is useful too, since passwd needs to know which user it's changing the password for.
There are a few other cases where the UID and EUID won't match, but they're not too common. For instance, a file server running as the super user might change its EUID to match a specific user who is requesting some file manipulations. Using the user's EUID allows the server to avoid accessing things that the user is not allowed to touch.
The function os.getuid() returns the ID of the user who runs your program. The function os.geteuid() returns the ID of the user whose permissions your program is using. In most cases these will be the same. A well-known case where the values differ is when the setuid bit is set on your program's executable file and the user running the program differs from the user who owns the executable. In that case os.getuid() will return the ID of the user who runs the program, while os.geteuid() will return the ID of the user who owns the executable.
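A quick way to watch the two values diverge without a setuid binary is to start as root and change the effective UID yourself (a sketch; 1000 is a placeholder for an ordinary user's UID):

import os

# Run this as root.
print(os.getuid(), os.geteuid())  # 0 0
os.seteuid(1000)                  # adopt an ordinary user's permissions
print(os.getuid(), os.geteuid())  # 0 1000 - the real UID is unchanged
os.seteuid(0)                     # the saved UID of 0 lets us switch back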