I have the following bash script:
echo "$(id -u)"
mkdir test
My own user id is 1000. Now, when I run python3.5 without root rights and invoke the script via subprocess.check_output, the script works as expected and creates a folder owned by me.
However, when I start Python with sudo but then use os.setegid(1000); os.seteuid(1000), the script outputs 0 and the folder "test" is owned by root. While I get that echo "$(id -u)" printing 0 may be desired behavior, I do not get why the folder is owned by root. Shouldn't the os.seteuid() function prevent that?
My exact call is:
>>> os.setegid(1000)
>>> os.seteuid(1000)
>>> subprocess.check_output(["./script.sh"])
Which results in the folder "test" being owned by root. Is this desired behavior, and if so, is there any way I can start the script as a normal user while still being able to go back to root rights in the Python script (i.e., not setting the "real" uid)?
setegid only sets the effective group id of the current process; the same goes for seteuid.
check_output spawns a new process, which still has a real uid of 0. When bash starts with an effective uid that differs from its real uid (and is not given -p), it resets the effective uid back to the real uid, so the script ends up running as root.
You might have more luck if you create the folder using Python instead of shelling out to do it, but I imagine this is a simplified example, so that may not be appropriate. Is it possible to run the Python script as the expected user? If not, you might need to do something like this:
subprocess.check_output(["sudo", "-uexpected_user", "./script.sh"])
Related
So I've been trying to modify .bash_aliases programmatically for a while now, and I've been running into issues with every method I've tried.
Running my script using sudo python3 myscript.py causes the script to modify the .bash_aliases file of the root user. I can't find a way to determine what user ran the script to modify their file.
Trying to use a shell command such as sudo echo "my string" >> ~/.bash_aliases gets an error: sh: 1: cannot create /home/migue/.bash_aliases: Permission denied, presumably because sudo can't display its password prompt when I call it programmatically.
I can't find a way to temporarily get root permissions after determining the full path (i.e., expanding ~) of the file.
Basically, I'd love to know any reasonable method to modify and append to .bash_aliases through a Python script. I haven't found any questions on this where the solutions worked for me.
I'd prefer for this method to not require any non-standard modules, as installing them will just make the process less seamless for people who use the script.
I can't find a way to determine what user ran the script to modify their file.
You can reference the file as ~/.bash_aliases in your script and run the script without sudo; unless your current user is root, ~ will then expand to the invoking user's home directory.
EDIT:
If the file isn't writable, you simply need to grant write permission on .bash_aliases to every user who should be able to modify it.
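A minimal sketch of that idea, run without sudo so that ~ resolves to the calling user (the alias appended here is purely illustrative):
import os

# Expand ~ for the user actually running the script (no sudo involved).
aliases_path = os.path.expanduser("~/.bash_aliases")

# Append an example alias to the file, creating it if necessary.
with open(aliases_path, "a") as f:
    f.write("alias ll='ls -la'\n")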
I want to set up a cron job to run a Python script each day within a virtual environment. So I've tried to set up a cron job, but it does not seem to execute.
If I were to run the program from terminal normally, I would type:
source ig/venv/bin/activate to activate my virtual environment
cd ig/mybot/src/ to navigate to my directory
python ultimate.py to run my program
So far this is my cron job. I've set it to 1 so that it runs every minute, just so I can see that it is working, but nothing happens.
1 * * * * source ig/venv/bin/activate && cd ig/mybot/src/ && python ultimate.py
Edit: I have updated my program so no command line prompts are required. I just need to run these three simple commands.
You can wrap this up in another Python script. Create a new Python script and have cron run that instead.
Have a look at the subprocess module.
Example:
Your command would become subprocess.call('source ig/venv/bin/activate && cd ig/mybot/src/ && python ultimate.py', shell=True, executable='/bin/bash')
inside the wrapper Python script (source is a shell builtin, so it needs shell=True with bash rather than an argument list).
Also, input("Enter the value") will prompt you for user input.
With the above two, your problem will be solved pythonically.
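Put together, the wrapper might look something like this — a sketch that assumes absolute paths under /home/me (cron jobs don't start in your home directory, so relative paths like ig/venv/... are a common cause of silently failing jobs):
#!/usr/bin/env python3
"""Wrapper for cron: activate the venv, cd, and run ultimate.py."""
import subprocess

# 'source' is a shell builtin, so run the whole pipeline through bash.
subprocess.call(
    "source /home/me/ig/venv/bin/activate"
    " && cd /home/me/ig/mybot/src"
    " && python ultimate.py",
    shell=True,
    executable="/bin/bash",
)
Cron would then call the wrapper directly, e.g. * * * * * /usr/bin/python3 /home/me/wrapper.py (every minute, for testing).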
I'm not sure if it's a good idea, but you could do a script like this.
#!/usr/bin/env bash
PYTHON_PROJECT_DIR=/path/to/python/project/dir

# pushd saves the current directory on a stack and moves into the project.
pushd "${PYTHON_PROJECT_DIR}"

# Each echoed line answers one input() prompt in the script, in order.
VALUES="first line of stream\nsecond line of stream\n"
pipenv run /path/to/your/script.py < <(echo -e "$VALUES")

# Return to wherever we started.
popd
pushd and popd are commands that operate on the directory stack: pushing a directory moves you into it (the previous one stays on the stack), and popping brings you back to the initial position.
Using pipenv lets you run the script inside the project's virtual environment (it's not that hard to configure); that way you get the environment variables from the project's .env file and use only this project's dependencies (Python related).
If you pass the values like this, then whenever the Python script requests a value from stdin it will consume the lines you echoed, one by one (the first line answers the first prompt, and so on).
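For illustration, the receiving end needs nothing special; a hypothetical script.py might just read from standard input:
# Each input() call consumes the next line piped to stdin.
first = input("First value: ")
second = input("Second value: ")
print("got", first, "and", second)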
This could be a way.
Personally, whenever I set up cron jobs I like to run bash scripts directly, because I can add extra logging, so having a wrapper script doesn't seem that unreasonable.
Another thing you could do is use the virtual environment's own Python executable as the interpreter, by replacing #!/usr/bin/env python with #!/path/to/virtualenv/bin/python,
but you won't get the .env variables that way (maybe there is a way to actually get them).
I am a beginner in programming/scripting and stuck with the following problem. I searched a lot on Stack Overflow and the net but could not resolve the issue. The detailed situation I face is described below, in case somebody has a completely different approach to solve the problem.
Is there a way to use an alias (that I call go_to) with multiple commands, where I can pass an argument to only the first command (command1)?
I want to execute go_to <argument> from something like alias go_to='command1 ; command2'.
command1 should evaluate a path based on the argument and command2 should cd there.
Situation in detail
I'm executing calculations on a computing facility that uses the slurm batch system to queue and start jobs. The command squeue shows all running and pending slurm jobs, including their Job_ID. With sjobs <Job_ID>, a bunch of information is displayed, including the path where the calculation was started. My goal is to go to that directory.
Of course I can do that:
squeue to see all jobs and their Job_ID,
Pick one of the Job_IDs,
sjobs <Job_ID> to display information,
Search with my eyes for the line that includes the path, copy the path and
cd path to arrive where I want to go.
That is a lengthy procedure if you want to check multiple calculations. Therefore, I want to use an alias and/or a bash or Python script (let's call it go_to) so that I simply need to type go_to <Job_ID> to arrive at the directory.
So far, using a Python script I have managed to run python script.py <Job_ID> to call sjobs and extract the path from its output (though piping sjobs to grep and sed would also be possible).
I've already read
Why doesn't "cd" work in a shell script? and Location of cd executable and understood that os.chdir(path) or subprocess.call(['cd',path]) will only change directories inside the python subshell and that as soon as the script is finished I will end up in the same directory where I started the script.
From what I understand only an alias can bring me to path. Therefore, my idea was to output the path from the python script into a file, e.g. Path.txt to be usable by an alias. Using the alias
alias go_to='cd $(head -1 /absolute/path/to/Path.txt)'
it is possible to change to the desired path. But that involves letting the python script run beforehand.
The actual problem for me now is to do that in one step, as I need to pass the <Job_ID> to the path evaluation first.
As I said, I am quite new to scripting, so any alternative ways are welcome.
Define a function to call the two commands instead. The argument to the function can be passed to the first command.
go_to () {
    command1 "$1"   # evaluate the path from the argument
    command2        # e.g. cd into the evaluated path
}
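To make this concrete for the slurm case, a hedged sketch: the Python script prints the job's working directory (the exact sjobs output format is an assumption here, so adjust the matching to what yours prints), and the shell function cds to whatever the script prints.
#!/usr/bin/env python3
"""Print the working directory reported by `sjobs <Job_ID>` (sketch)."""
import subprocess
import sys

job_id = sys.argv[1]
output = subprocess.check_output(["sjobs", job_id], universal_newlines=True)

# Assumption: sjobs prints a line like "WorkDir: /path/where/job/started".
for line in output.splitlines():
    if "WorkDir" in line:
        print(line.split()[-1])
        break
The function then becomes go_to () { cd "$(python3 script.py "$1")"; }, which also removes the need for the intermediate Path.txt file.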
I am using Ubuntu on my server.
I have two users, let them be user1 and user2.
Each user has their own project folder with permissions set according to their needs. But user1 needs to run a Python script which is in the other user's project folder. I use subprocess.Popen for this. The Python file has the required access permissions, so I do not have a problem calling that script. But the log files (which are writable only by user2) cause a permission denied error, since they belong to the other user, not the one I need to use.
So I tried to change the user with
Popen("exit", shell=True, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE) #exit from current user, and be root again
Popen(["sudo", "user2"], shell=True, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)
Popen("/usr/bin/python /some/file/directory/somefile.py param1 param2 param3", shell=True, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)
But second Popen fails with
su: must be run from a terminal
Is there any way to change the user from within Python?
PS: For various reasons, I cannot change the permissions on the log files. I need to find a way to switch to the other user...
EDIT: I need to give some explanation to make things clear...
I have two different Django projects running. Each project has its own user and its own folder where that project's code and logs are kept.
Now, in some cases, I need to transfer some data from project1 to project2. The easiest way, it seems to me, is to write a Python script that accepts parameters and does the relevant job [data insertion] on the second project.
So, when certain functions are called within project1, I wish to call the Python script that is in the project2 folder, so I can do my data update on the second project.
But since I call Popen from within project1, the current user is user1, and my script (which is in project2) writes to log files that deny me access because of their permissions...
So, somehow, I need to switch from user1 to user2 so that I will not have the permission problem, and then call my Python file. Or find another way to do this.
Generally, there is no way to masquerade as another user without having root permission or knowing the second user's login details. This is by design, and no amount of Python will circumvent it.
Python-ey ways of solving the problem:
If you have the permission, you can use os.setuid(x) (where x is a numeric user id) to change your user id. However, this will require your script to be run as root. Have a look at the os module documentation.
If you have the permission, you can also try su -c <command> instead of sudo. su will prompt for the target user's password, which you can supply with the pexpect library (see the sketch after this list).
Unix-ey ways of solving the problem:
Set the group executable permission on the script and have both users in the same group.
Add the setgid bit on the script so that user1 can run it with user2's group:
$ chmod g+s script
This will allow group members to run the script with user2's group (the same could be done for 'others', but that probably wouldn't be a good idea...).
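For the su -c route mentioned above, a minimal pexpect sketch (the command and password here are placeholders; you'd need user2's password available to the caller):
import pexpect

# su normally insists on a terminal; pexpect emulates one.
child = pexpect.spawn(
    "su user2 -c '/usr/bin/python /some/file/directory/somefile.py param1 param2 param3'"
)
child.expect("Password:")
child.sendline("user2-password")  # placeholder: obtain this securely
child.expect(pexpect.EOF)
print(child.before.decode())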
[EDIT]: Revised question
The answer is that you're being far too promiscuous. Project A and project B probably shouldn't interact by messing around with each other's files via Popening each other's scripts. Instead, the clean solution is to have a web interface on project A that exposes the functionality you need, and to call it from project B. That way, if in the future you want to move A to a different host, it's not a problem.
If you insist, you might be able to trick sudo (if you have permission) by running it inside a shell instead. For example, have a script:
#!/bin/sh
sudo my/problematic/script.sh
And then chmod it +x and execute it from Python. This will get around the problem of not being able to run sudo outside a terminal.
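Calling it from Python is then straightforward (wrapper.sh being a hypothetical name for the two-line script above):
import subprocess

# Assumes the shell script above was saved as wrapper.sh and made
# executable with `chmod +x wrapper.sh`.
subprocess.check_call(["./wrapper.sh"])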
I am trying to write what should be a super simple bash script: basically, activate a virtual env and then change to the working directory. It's a task I do a lot, and condensing it to one command just made sense.
Basically ...
#!/bin/bash
source /usr/local/turbogears/pps_beta/bin/activate
cd /usr/local/turbogears/pps_beta/src
However, when it runs it just dumps me back to the shell; I am still in the directory I ran the script from, and the environment isn't activated.
All you need to do is run your script with the source command, e.g. source ./myscript.sh (or the equivalent . ./myscript.sh). This is because the cd command is local to the shell that runs it. When you run a script directly, a new shell is executed, which terminates when it reaches the script's end of file. By using the source command you tell your current shell to execute the script's instructions directly, so the cd and the activation persist after it finishes.
The effect of cd is local to the current script's shell, which exits when it falls off the end of the file.
What you are trying to do is not "super simple" because you want to override this behavior.
Look at exec for replacing the current process with the process of your choice.
For feeding commands into an interactive Bash, look at the --rcfile option.
I imagine you want your script to be dynamic; however, as a quick fix when working on a new system, I create an alias. For example, say the env is called 'py1', located at ~/envs/py1/, with a repository location at ~/proj/py1/:
alias py1='source ~/envs/py1/bin/activate; cd ~/proj/py1/'
You can now access your project and virtualenv by typing py1 from anywhere in the CLI.
I know that this is nowhere near ideal and violates DRY and many other programming concepts. It is just a quick and dirty way of getting your env and project accessible quickly, without having to set up the variables.
I know that I'm late to the game here, but may I suggest using virtualenvwrapper? It provides a nice bash hook that appears to do exactly what you want.
Check out this tutorial: http://blog.fruiapps.com/2012/06/An-introductory-tutorial-to-python-virtualenv-and-virtualenvwrapper