Modify .bash_aliases with Python

So I've been trying to modify .bash_aliases programmatically for a while now, and I've been running into issues with every method I've tried.
Running my script using sudo python3 myscript.py causes the script to modify the .bash_aliases file of the root user. I can't find a way to determine which user ran the script so that I can modify their file instead.
Trying to use a shell command such as sudo echo "my string" >> ~/.bash_aliases gets an error: sh: 1: cannot create /home/migue/.bash_aliases: Permission denied, presumably because sudo can't display its password prompt when I call it programmatically.
I can't find a way to temporarily get root permissions after determining the full path (i.e. expanding ~) of the file.
Basically, I'd love to know any reasonable method to modify and append to .bash_aliases through a Python script. I haven't found any questions on this where the solutions worked for me.
I'd prefer for this method to not require any non-standard modules, as installing them will just make the process less seamless for people who use the script.

"I can't find a way to determine what user ran the script to modify their file."
You can reference the file as ~/.bash_aliases in your script and run it without sudo; unless the current user is root, ~ then expands to the home directory of the user who ran the script.
EDIT:
You simply need to add write privileges to .bash_aliases for each user who should be able to modify it.
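For reference, a minimal sketch of that approach in Python; it also handles the sudo case by checking the SUDO_USER environment variable, and the alias written here is just a placeholder:
import os
import pwd

def invoking_user_home():
    # Under sudo, SUDO_USER names the user who actually invoked the script;
    # otherwise fall back to the current user's home directory.
    sudo_user = os.environ.get("SUDO_USER")
    if sudo_user:
        return pwd.getpwnam(sudo_user).pw_dir
    return os.path.expanduser("~")

aliases_path = os.path.join(invoking_user_home(), ".bash_aliases")

# Opening in append mode creates the file if it does not exist yet.
with open(aliases_path, "a") as f:
    f.write("alias hi='echo hello'\n")  # placeholder alias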

Related

How to implement multiple commands with root permissions with only one password prompt?

I'm working on a GUI application which calls two system commands in sequence.
Those two commands require root permissions to be executed.
The first approach I tried is to call gksu <command_1>, then gksu <command_2>.
This works fine, but the user must enter his password twice in a row, and I believe this is not a good idea from a UX perspective.
I tried to call gksu with the first command and sudo with the second, but I get this error:
sudo: no tty present and no askpass program specified
So I tried to separate those commands into a python file and call it from the original file with something like gksu python3 commands.py.
I'm not sure whether this would still work after I release a compiled version of the whole project, as I intend to use pyinstaller --onefile on it!
So, what I need exactly is a way for the app to run a specific script with superuser privileges, given that the final product will be an executable binary file, and without running the whole app with root permissions.
Thanks to Itz Wam, whose answer guided me to the correct solution, which is using pkexec instead of gksu, like this:
pkexec bash -c "command_1;command_2"
You could execute this:
gksu -- bash -c 'command1; command2; command3'
It will ask for your password once and execute the three commands as root.
Source : https://askubuntu.com/questions/183608/gksudo-2-commands-with-one-pw-entry
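If the GUI itself is written in Python, a minimal sketch of invoking pkexec from it (command_1 and command_2 are placeholders for the two privileged commands):
import subprocess

# pkexec shows a graphical authentication dialog, then runs the command as root.
subprocess.check_call(["pkexec", "bash", "-c", "command_1; command_2"])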

Handling permissions and data in a distributed python script

I'm trying to distribute a python script through PyPI. This script takes input from a user and stores it in a text file. However, because the script creates and writes to a text file, it requires "sudo" every time it runs, i.e.:
$ my_script
Permission error
$ sudo my_script
Success
I've run into this problem while working on another script and solved it by chmod-ing the newly created file. That way, sudo was required only once, to create a file with lowered permissions (one that could be written to without extra privileges). However, I can't believe that this is the best answer to such a problem: requiring users to give privileges to a no-name script seems awfully suspicious. Is there really not a cleaner way to handle recording data when distributing through PyPI?
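For reference, a minimal sketch of the chmod approach described above, assuming a hypothetical data path; the first run still needs sudo, but later runs do not:
from pathlib import Path

DATA_FILE = Path("/var/lib/my_script/data.txt")  # hypothetical location

if not DATA_FILE.exists():
    # First run (under sudo): create the file, then loosen its permissions
    # so later runs can write to it without elevated privileges.
    DATA_FILE.parent.mkdir(parents=True, exist_ok=True)
    DATA_FILE.touch()
    DATA_FILE.chmod(0o666)

with DATA_FILE.open("a") as f:
    f.write("some user input\n")
A common alternative that avoids sudo entirely is to store per-user data under the user's home directory (e.g. a path built with os.path.expanduser), since the user can always write there.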

subprocess.check_output ignoring euid

I have the following bash script:
echo "$(id -u)"
mkdir test
My own user id is 1000. Now, when I run python3.5 without root rights and invoke the script via subprocess.check_output, the script works as expected and creates a folder which is owned by me.
However, when I start python with sudo but then use os.setegid(1000); os.seteuid(1000), the script outputs 0 and the folder "test" is owned by root. While I get that the output of echo "$(id -u)" may be desired behavior, I do not get why this folder is owned by root. Shouldn't the os.seteuid() function prevent that?
My exact call is:
>>> os.setegid(1000)
>>> os.seteuid(1000)
>>> subprocess.check_output(["./script.sh"])
This results in the folder "test" being owned by root. Is this desired behavior, and if so, is there any way I can start the script as a normal user while still being able to go back to root rights in the python script (i.e., not setting the "real" uid)?
setegid only sets the effective group id of the current process. Same with seteuid.
check_output spawns a new process, which is apparently still run as root. This is likely because only the effective ids were changed: the child's real uid is still 0, and bash, when started with differing real and effective ids (and without -p), resets the effective uid back to the real uid, so the script runs as root again.
You might have more luck if you attempt to create your folder using python instead of shelling out to do it, but I imagine this is a simplified example, so that may not be appropriate. Is it possible to run the python script as the expected user? If not, you might need to do something like this:
subprocess.check_output(["sudo", "-uexpected_user", "./script.sh"])

Changing user within python shell

I am using Ubuntu on my server.
I have two users; let them be user1 and user2.
Each user has their own project folder, with permissions set according to their needs. But user1 needs to run a python script which is in the other user's project folder. I use subprocess.Popen for this. The python file has the required access permissions, so I do not have a problem calling that script. But the log files (which are only writable by user2) cause a permission denied error, since they belong to the other user, not the one I need to use.
So I tried to change the user with
Popen("exit", shell=True, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE) #exit from current user, and be root again
Popen(["sudo", "user2"], shell=True, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)
Popen("/usr/bin/python /some/file/directory/somefile.py param1 param2 param3", shell=True, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)
But the second Popen fails with
su: must be run from a terminal
Is there any way to change the user within a python shell?
PS: For various reasons, I cannot change the permissions on the log files. I need to find a way to switch to the other user...
EDIT: I need to give some explanation to make things clear...
I have two different django projects running. Each project has a user and its own folder where that project's code and logs are kept.
Now, in some cases, I need to transfer some data from project1 to project2. The easiest way, it seems to me, is writing a python script that accepts parameters and does the relevant job [data insertion] on the second project.
So, when certain functions are called within project1, I wish to call the py script that is in the project2 folder, so I can do my data update on the second project.
But, since I call Popen within project1, the current user is user1. And my script (which is in project2) has some log files which deny me access because of permissions...
So, somehow, I need to switch from user1 to user2 so I will not have the permission problem, and then call my python file. Or find another way to do this...
Generally, there is no way to masquerade as another user without having root permission or knowing the second user's login details. This is by design, and no amount of Python will circumvent it.
Python-ey ways of solving the problem:
If you have the permission, you can use os.setuid(x) (where x is a numerical user id) to change your effective user id. However, this will require your script to be run as root. Have a look at the os module documentation.
If you have the permission, you can also try su -c <command> instead of sudo. su will prompt for user2's password (which you can supply using the pexpect library).
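A rough sketch of that su -c route with pexpect (a third-party module; the script path and password handling here are placeholders):
import pexpect

# Spawn su on a pseudo-terminal; this also sidesteps the
# "su: must be run from a terminal" error seen above.
child = pexpect.spawn(
    "su user2 -c '/usr/bin/python /some/file/directory/somefile.py param1 param2 param3'"
)
child.expect("Password:")          # su prompts for user2's password
child.sendline("user2_password")   # placeholder: read this from a safe source instead
child.expect(pexpect.EOF)          # wait for the command to finish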
Unix-ey ways of solving the problem:
Set the group executable permission on the script and have both users in the same group.
Add the setgid bit on the script so that user1 can effectively run it with user2's group:
$ chmod g+s script
This will allow group members to run the script with the file's group privileges. (The same could be done for 'all', but that probably wouldn't be a good idea...)
[EDIT]: Revised question
The answer is that you're being far too promiscuous. Project A and project B probably shouldn't interact by messing around with each other's files via Popening each other's scripts. Instead, the clean solution is to have a web interface on project A that provides the functionality you need, and call it from project B. That way, if in the future you want to move A to a different host, it's not a problem.
If you insist, you might be able to trick sudo (if you have permission) by running it inside a shell script instead. For example, have a script:
#!/bin/sh
sudo my/problematic/script.sh
And then chmod it +x and execute it from python. This will get around the problem of not being able to run sudo outside a terminal.
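From Python, that wrapper can then be executed like any other program (the file name here is a placeholder):
import subprocess

# Assumes the wrapper above was saved as run_as_root.sh and made executable.
subprocess.check_call(["./run_as_root.sh"])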

VirtualEnv initialized from a bash script

I am trying to write what should be a super simple bash script: activate a virtualenv and then change to the working directory. It's a task I do a lot, and condensing it to one command just made sense.
Basically ...
#!/bin/bash
source /usr/local/turbogears/pps_beta/bin/activate
cd /usr/local/turbogears/pps_beta/src
However, when I run it, it just dumps me back to the shell: I am still in the directory I ran the script from, and the environment isn't activated.
All you need to do is run your script with the source command. This is because the cd command is local to the shell that runs it. When you run a script directly, a new shell is executed, which terminates when it reaches the end of the script file. By using the source command you tell your current shell to execute the script's instructions directly.
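For example, assuming the script above is saved as pps_beta.sh: source pps_beta.sh (or, equivalently, . pps_beta.sh).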
The effect of cd is local to the current script, which ends when you fall off the end of the file.
What you are trying to do is not "super simple" because you want to override this behavior.
Look at exec for replacing the current process with the process of your choice.
For feeding commands into an interactive Bash, look at the --rcfile option.
I imagine you want your script to be dynamic; however, as a quick fix when working on a new system, I create an alias.
For example, if the env is called 'py1', located at ~/envs/py1/, with a repository location at ~/proj/py1/:
alias py1='source ~/envs/py1/bin/activate; cd ~/proj/py1/'
You can now access your project and virtualenv by typing py1 from anywhere in the CLI.
I know that this is nowhere near ideal and violates DRY, among other programming principles. It is just a quick and dirty way of getting your env and project accessible without having to set up the variables.
I know that I'm late to the game here, but may I suggest using virtualenvwrapper? It provides a nice bash hook that appears to do exactly what you want.
Check out this tutorial: http://blog.fruiapps.com/2012/06/An-introductory-tutorial-to-python-virtualenv-and-virtualenvwrapper
