Run shell script from Python with permissions

I have a very simple script called update.sh:
#!/bin/sh
cd /home/pi/circulation_of_circuits
git pull
When I call this from the terminal with ./update.sh, I get "Already up-to-date" or it updates the files as expected.
I also have a Python script; inside that script is:
subprocess.call(['./update.sh'])
When that calls the same script I get:
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
(I use SSH).
----------------- update --------------------
Someone else had a look for me:
OK, so some progress. When I boot your image I can't run git pull in
your repo directory, and the bash script also fails. It seems to be
because the Bitbucket repository is private and needs authentication
for pull (the one I was using was public, which is why I had no
issues). Presumably git remembers this after you type it in the first
time; bash somehow tricks git into thinking it's you typing the
command subsequently, but running it from Python isn't the same.
I'm not a git expert, but there must be some way of setting this up so
Python can provide the authentication.
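(A minimal diagnostic sketch for the environment theory above: if SSH_AUTH_SOCK prints as None here but is set in your terminal, git started from Python cannot reach ssh-agent, which would match the publickey failure.)
import os
import subprocess

# Check whether the agent socket the shell uses is visible to Python.
print(os.environ.get('SSH_AUTH_SOCK'))
subprocess.call(['./update.sh'])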

Sounds like you need to give your ssh command a private key it can access, perhaps:
ssh -i /backup/home/user/.ssh/id_dsa user@unixserver1.nixcraft.com
-i tells it where to look for the key.
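To apply that idea to git non-interactively, one option (a sketch; requires Git 2.3+, and the key path is an example rather than one from the question) is to set GIT_SSH_COMMAND in the environment passed to the script:
import os
import subprocess

# Sketch: make every ssh process git spawns use an explicit private key.
env = dict(os.environ, GIT_SSH_COMMAND='ssh -i /home/pi/.ssh/id_rsa')
subprocess.call(['./update.sh'], env=env)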

This problem is caused by the git repo authentication failing. You say you are using SSH, and git is complaining about publickey auth failing. Normally you can use git commands on a private repo without entering a password; this implies that git is using ssh, but when run from Python it cannot find the correct private key.
Since the problem only manifests itself when run through another script, it is very likely caused by something messing with the environment variables. subprocess.call should pass the environment as is, so there are a couple of usual suspects:
sudo
If you are using sudo, it will pass a mostly empty environment to the process.
The Python script itself
If the Python script changes its env, those changes will get propagated to the subprocess too.
sh -l or su -
These commands set up a login shell, which means their environment gets reset to defaults.
Any of these reasons could hide the environment variables ssh-agent (or some other key management tool) might need to work.
Steps to diagnose and fix:
Isolate the problem.
Create a minimal Python script that does nothing but run subprocess.call(['./update.sh']) (see the sketch after this list). Run both update.sh and the new script.
Diagnose the problem and fix accordingly:
a) If update.sh works, and the new script doesn't, you are probably experiencing some weird corner case of system misconfiguration. Try upgrading your system and python; if the problem persists, it probably requires additional debugging on the affected system itself.
b) If both update.sh and the new script work, then the problem lies within the outer Python script calling the shell script. Look for occurrences of sudo, su -, sh -l, env and os.environ; one of those is the most likely culprit.
c) If neither the update.sh nor the new script work, your problem is likely to be with ssh client configuration; a typical cause would be that you are using a non-default identity, did not configure it in ~/.ssh/config but used ssh-add instead, and after that, ssh-agent's cache expired. In this case, run ssh-add identityfile for the identity you used to authenticate to that git repo, and try again.
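For step 1, the minimal Python script can literally be this (a sketch; run it from the same directory as update.sh):
#!/usr/bin/env python
# Minimal isolation script: nothing but the subprocess call.
import subprocess
subprocess.call(['./update.sh'])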

I believe this answer will help you: https://serverfault.com/questions/497217/automate-git-pull-stuck-with-keychain?answertab=votes#tab-top
I didn't use ssh-agent, and it worked: change your script to the one that follows and try it.
#!/bin/bash
cd /home/pi/circulation_of_circuits
ssh-add /home/yourHomefolderName/.ssh/id_rsa
ssh-add -l
git pull
This assumes that you have configured your ssh key correctly.

It seems like your version control system needs authentication for the pull, so you can drive the prompt from Python with pexpect:
import pexpect

# Spawn the script, answer the prompt, then wait for it to finish.
child = pexpect.spawn('./update.sh')
child.expect('Password:')
child.sendline('SuperSecretPassword')
child.expect(pexpect.EOF)

Try using the sh package instead of the subprocess call: https://pypi.python.org/pypi/sh
I tried this snippet and it worked for me.
#!/usr/local/bin/python
import sh
sh.cd("/Users/siyer/workspace/scripts")
print sh.git("pull")
Output:
Already up-to-date.

import subprocess
subprocess.call("sh update.sh", shell=True)

With Git 1.7.9 or later, you can just use one of the following credential helpers:
With a timeout
git config --global credential.helper cache
... which tells Git to keep your password cached in memory for (by default) 15 minutes. You can set a longer timeout with:
git config --global credential.helper "cache --timeout=3600"
(That example was suggested in the GitHub help page for Linux.) You can also store your credentials permanently if so desired.
Saving indefinitely
You can use the git-credential-store via
git config credential.helper store
GitHub's help also suggests that if you're on Mac OS X and used Homebrew to install Git, you can use the native Mac OS X keystore with:
git config --global credential.helper osxkeychain
For Windows, there is a helper called Git Credential Manager for Windows or wincred in msysgit.
git config --global credential.helper wincred # obsolete
With Git for Windows 2.7.3+ (March 2016):
git config --global credential.helper manager
For Linux, you can use gnome-keyring (or another keyring implementation such as KWallet).
Finally, after executing one of the suggested commands once manually, you can execute your script without any changes to it. Note that credential helpers apply to HTTPS remotes; for SSH remotes like the one in the question, key management (ssh-agent or ~/.ssh/config) is what matters instead.

I can reproduce your fault. It has nothing to do with permissions; it depends on how ssh is set up on your system. To verify it's the same cause, I need the diff output.
Save the following to a file log_shell_env.sh:
#!/bin/bash
log="shell_env"$1
echo "create shell_env"$1
echo "shell_env" > $log
echo "whoami="$(whoami) >> $log
echo "which git="$(which git) >> $log
echo "git status="$(git status 2>&1) >> $log
echo "git pull="$(git pull 2>&1) >> $log
echo "ssh -vT git#github.com="$(ssh -T git#github.com 2>&1) >> $log
echo "ssh -V="$(ssh -V 2>&1) >> $log
echo "ls -al ~/.ssh="$(ls -a ~/.ssh) >> $log
echo "which ssh-askpass="$(which ssh-askpass) >> $log
echo "ps -e | grep [s]sh-agent="$(ps -e | grep [s]sh-agent ) >> $log
echo "ssh-add -l="$(ssh-add -l) >> $log
echo "set=" >> $log
set >> $log
Set the execute permission and run it twice:
1. From the console, without a parameter
2. From your Python script, with the parameter '.python'
Please run it really from the same Python script!
For instance:
import subprocess

try:
    output = subprocess.check_output(['./log_shell_env.sh', '.python'],
                                     stderr=subprocess.STDOUT)
    print(output.decode('utf-8'))
except subprocess.CalledProcessError as cpe:
    print('[ERROR] check_output: %s' % cpe)
Then run diff shell_env shell_env.python > shell_env.diff
The resulting shell_env.diff should show no more than the following diffs:
15,16c15,16
< BASH_ARGC=()
< BASH_ARGV=()
---
> BASH_ARGC=([0]="1")
> BASH_ARGV=([0]=".python")
48c48
< PPID=2209
---
> PPID=2220
72c72
< log=shell_env
---
> log=shell_env.python
If you get more diffs, come back and comment, and update your question with the diff output.

Use the following Python code. It imports the os module and makes a system call with sudo permissions.
#!/bin/python
import os
os.system("sudo ./update.sh")

Related

Cron, execute bash script as root, but one part (Python script) as user

I need to run a bash script periodically on a Jetson Nano (so, Ubuntu 18.04). The script should run system updates, pull some Python code from a repository, and run it as a specified user.
So, I created this script:
#! /bin/bash
## system updates
sudo apt update
sudo apt upgrade
## stop previous instances of the Python code
pkill python3
## move to python script folder
cd /home/user_name/projects/my_folder
## pull updates from repo
git stash
git pull
## create dummy folder to check bash script execution to this point
sudo -u user_name mkdir /home/user_name/projects/dummy_folder_00
## launch python script
sudo -u user_name /usr/bin/python3 python_script.py --arg01 --arg02
## create dummy folder to check bash script execution to this point
sudo -u user_name mkdir /home/user_name/projects/dummy_folder_01
I created a cron job running this script as root, by using
sudo crontab -e
and adding the entry
00 13 * * * /home/user_name/projects/my_folder/script.sh
Now, I can see that at the configured time, both the dummy folders are created, and they actually belong to user_name. However, the Python script isn't launched.
I tried creating the cron job as a non-root user (crontab -e), but at that point even if the Python script gets executed, I guess I wouldn't be able to run apt update/upgrade.
How can I fix this?
Well, if the dummy folders did get created, that means the sudo statements work, so I'd say there's a 99%+ chance that Python was in fact started.
I'm guessing the problem is that you haven't specified the path for the Python file, and your working directory likely isn't what you're expecting it to be.
change:
sudo -u user_name /usr/bin/python3 python_script.py --arg01 --arg02
to something like
sudo -u user_name /usr/bin/python3 /path/to/your/python_script.py --arg01 --arg02
then test.
If that didn't solve the problem, then enable some logging; change the line to:
sudo -u user_name /usr/bin/python3 /path/to/your/python_script.py --arg01 --arg02 \
1> /home/user_name/projects/dummy_folder_00/log.txt 2>&1 ;
and test again, it should log STDOUT and STDERR to that file then.

script does not switch to another user

I am working on a script that at a certain point needs to switch to the root user (executing "sudo rootsh" is the only accepted way to switch to root on our servers), after which it will execute a certain command.
I am not sure what I am missing, but the script simply ignores the part when it should switch to root and continues executing the commands with the user that started the script.
If you check the generated whoami.txt file, you will notice that the user is not root. Please keep in mind that the user executing the script can switch to root without any issue while executing the sudo rootsh command.
Here is the code I am using:
import subprocess

def switch_user():
    commands = '''
sudo rootsh
whoami > whoami.txt
sysctl -a | grep kernel.msgmni'''
    process = subprocess.Popen('/bin/bash', stdin=subprocess.PIPE,
                               stdout=subprocess.PIPE)
    out, err = process.communicate(commands.encode('utf-8'))

switch_user()
Any idea what I am doing wrong? Thanks.
Instead of Popening a subprocess to run bash, and from that opening a separate privileged shell, Popen the command sudo rootsh directly. If that succeeds (requires that the user be permitted to sudo rootsh without providing a password) then deliver the rest of the commands by communicating with the subprocess.
That would be something along these lines:
import subprocess

def switch_user():
    # These shell commands will be used as input to the root shell
    commands = '''whoami > whoami.txt
sysctl -a | grep kernel.msgmni'''
    # Launch the root shell (argument list form; a plain string without
    # shell=True would be treated as a single program name)
    process = subprocess.Popen(['/usr/bin/sudo', 'rootsh'],
                               stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    # Send the shell's input to it and receive back its output
    out, err = process.communicate(commands.encode('utf-8'))

switch_user()
You may need to modify that for your purposes. In particular, if your sudo command lives at a different location then you may need to modify the path to it. And I emphasize again that this approach depends on being able to obtain a root shell without providing a password. Sudo can be configured that way, but it is not the default.
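If you take this approach, it is also worth checking the exit status so a failed sudo doesn't pass silently; a sketch using subprocess.run (Python 3.5+):
import subprocess

# Sketch: run the root shell, feed it one command, and check the result.
result = subprocess.run(['/usr/bin/sudo', 'rootsh'],
                        input=b'whoami > whoami.txt\n',
                        stdout=subprocess.PIPE)
if result.returncode != 0:
    print('rootsh failed with exit code %d' % result.returncode)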
I finally managed to make this work after doing a more thorough investigation with the guys from the OS team. I'll post it here; maybe it will be useful to somebody in the future:
import os
os.system("sudo rootsh -i -u root 'sysctl -a | grep kernel.msgmni' > parameter_value.txt")
The key was to insert the -i and -u options:
-i [command]
The -i (simulate initial login) option runs the shell specified by the password database entry of the target user as a login shell. This means that login-specific resource files such as .profile or .login will be read by the shell. If a command is specified, it is passed to the shell for execution via the shell's -c option. If no command is specified, an interactive shell is executed. sudo attempts to change to that user's home directory before running the shell. The security policy shall initialize the environment to a minimal set of variables, similar to what is present when a user logs in. The Command Environment section in the sudoers(5) manual documents how the -i option affects the environment in which a command is run when the sudoers policy is in use.
-u user
The -u (user) option causes sudo to run the specified command as a user other than root. To specify a uid instead of a user name, use #uid. When running commands as a uid, many shells require that the # be escaped with a backslash ('\'). Security policies may restrict uids to those listed in the password database. The sudoers policy allows uids that are not in the password database as long as the targetpw option is not set. Other security policies may not support this.
Thank you all for your answers :)

ec2 run scripts every boot

I have followed a few posts on here trying to run either a Python or shell script on my EC2 instance after every boot, not just the first boot.
I have tried:
adding [scripts-user, always] to the /etc/cloud/cloud.cfg file
adding the script to the ./scripts/per-boot folder
adding the script to /etc/rc.local
Yes, the permissions were changed to 755 for /etc/rc.local.
I am attempting to redirect the output of the script into a file located in the /home/ubuntu/ directory, and the file does not contain anything after boot.
If I run the scripts (.sh or .py) manually they work.
Any suggestions, or requests for additional info?
So the current solution appears to be a method I wrote off in my initial question post, as I may not have performed the setup exactly as outlined in the link below...
This link -->
How do I make cloud-init startup scripts run every time my EC2 instance boots?
The link shows how to modify the /etc/cloud/cloud.cfg file to update scripts-user to [scripts-user, always]
Also, that link says to add your *.sh file to the /var/lib/cloud/scripts/per-boot directory.
Once you reboot your system, your script should have executed; you can verify this in: sudo cat /var/log/cloud-init.log
If your script still fails to execute, try erasing the instance state of your server with the following command: sudo rm -rf /var/lib/cloud/instance/*
--NOTE:--
It appears print output from a Python script does not redirect (>>) as expected, but echo output redirects easily.
Fails to redirect:
sudo python test.py >> log.txt
Redirects successfully:
echo "HI" >> log.txt
Is this something along the lines of what you want?
It copies the script to the instance, connects to the instance, and runs the script right away.
ec2 scp ~/path_to_script.py : instance_name -y && ec2 ssh instance_name -yc "python script_name.py" 1>/dev/null
I read that the use of rc.local is getting deprecated. One thing to try is a line in /etc/crontab like this:
@reboot full-path-of-script
If there's a specific user you want to run the script as, you can list it after @reboot.
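Note that /etc/crontab, unlike a per-user crontab, takes a user field between the schedule and the command, so a hypothetical entry would look like:
@reboot username /full/path/of/script.sh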

GitPython and SSH Keys?

How can I use GitPython along with specific SSH Keys?
The documentation isn't very thorough on that subject. The only thing I've tried so far is Repo(path).
The following worked for me on gitpython==2.1.1:
import os
from git import Repo
from git import Git

git_ssh_identity_file = os.path.expanduser('~/.ssh/id_rsa')
git_ssh_cmd = 'ssh -i %s' % git_ssh_identity_file

with Git().custom_environment(GIT_SSH_COMMAND=git_ssh_cmd):
    Repo.clone_from('git@....', '/path', branch='my-branch')
I'm on GitPython==3.0.5 and the below worked for me.
import os
from git import Repo

git_ssh_identity_file = os.path.join(os.getcwd(), 'ssh_key.key')
git_ssh_cmd = 'ssh -i %s' % git_ssh_identity_file

Repo.clone_from(repo_url, os.path.join(os.getcwd(), repo_name),
                env=dict(GIT_SSH_COMMAND=git_ssh_cmd))
Using repo.git.custom_environment to set the GIT_SSH_COMMAND won't work for the clone_from function. Reference: https://github.com/gitpython-developers/GitPython/issues/339
Please note that all of the following will only work in GitPython v0.3.6 or newer.
You can use the GIT_SSH environment variable to provide an executable to git which will call ssh in its place. That way, you can use any kind of ssh key whenever git tries to connect.
This works either per call using a context manager ...
ssh_executable = os.path.join(rw_dir, 'my_ssh_executable.sh')
with repo.git.custom_environment(GIT_SSH=ssh_executable):
    repo.remotes.origin.fetch()
... or more persistently using the set_environment(...) method of the Git object of your repository:
old_env = repo.git.update_environment(GIT_SSH=ssh_executable)
# If needed, restore the old environment later
repo.git.update_environment(**old_env)
As you can set any amount of environment variables, you can use some to pass information along to your ssh-script to help it pick the desired ssh key for you.
More information about how this feature came to be (new in GitPython v0.3.6) can be found in the respective issue.
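For reference, such a GIT_SSH executable is usually a tiny wrapper script along these lines (a sketch; the key path is hypothetical):
#!/bin/sh
# Hypothetical GIT_SSH wrapper: force one identity for every ssh call git makes.
exec ssh -i /home/user/.ssh/my_deploy_key "$@"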
In case of a clone_from in GitPython, the answer by Vijay doesn't work. It sets the git ssh command in a new Git() instance but then instantiates a separate Repo call. What does work is using the env argument of clone_from, as I learned from here:
Repo.clone_from(url, repo_dir, env={"GIT_SSH_COMMAND": 'ssh -i /PATH/TO/KEY'})
I've found this to make things a bit more like the way git works in the shell by itself.
import os
from git import Git, Repo
global_git = Git()
global_git.update_environment(
    **{k: os.environ[k] for k in os.environ if k.startswith('SSH')}
)
It basically is copying the SSH environment variables to GitPython's "shadow" environment. It then uses the common SSH-AGENT authentication mechanisms so you don't have to worry about specifying exactly which key it is.
A quicker alternative, which probably carries a lot of cruft with it but also works, is:
import os
from git import Git
global_git = Git()
global_git.update_environment(**os.environ)
That mirrors your entire environment, more like the way a subshell works in bash.
Either way, any future call to create a repo or clone picks up the 'adjusted' environment and does the standard git authentication.
No shim scripts necessary.
With Windows, be careful where you place the quotes. Say you have
git.Repo.clone_from(bb_url, working_dir, env={"GIT_SSH_COMMAND": git_ssh_cmd})
then this works:
git_ssh_cmd = r'ssh -p 6022 -i "C:\Users\mwb\.ssh\id_rsa_mock"'
but this does not:
git_ssh_cmd = r'ssh -p 6022 -i C:\Users\mwb\.ssh\id_rsa_mock'
(The raw-string prefix is needed either way, so that Python doesn't treat \U in the path as a unicode escape.)
Reason:
https://github.com/git-lfs/git-lfs/issues/3131
https://github.com/git-lfs/git-lfs/issues/1895
Here are the steps to clone a GitLab repository using GitPython:
Create an SSH key in GitLab.
Run the following script after adding the necessary details:
from git import Repo

Repo.clone_from("gitlab_ssh_url",
                "path_where_you_want_to_clone_repo",
                env={"GIT_SSH_COMMAND": 'ssh -i path_to_ssh_private_key'})

"""
Example:
Repo.clone_from("git@gitlab.com:some_group/some_repo.git",
                "empty_dir/some_repo",
                env={"GIT_SSH_COMMAND": 'ssh -i /home/some_user/.ssh/id_rsa'})
"""
This is what the latest documentation says:
ssh_cmd = 'ssh -i id_deployment_key'
with repo.git.custom_environment(GIT_SSH_COMMAND=ssh_cmd):
    repo.remotes.origin.fetch()
From: https://gitpython.readthedocs.io/en/stable/tutorial.html#handling-remotes

Supervisor and perlbrew

I'm trying to use supervisor with perlbrew, but I cannot make it work. For perlbrew I just tried to set the environment variables, which didn't go well; perhaps it is better to make a script that launches perlbrew and plackup. This is my configuration file:
[program:MahewinSimpleBlog]
command = perlbrew use perl-5.14.2 && plackup -E deployment -s Starman --workers=10 -p 4000 -a bin/app.pl -D
directory = /home/hobbestigrou/MahewinSimpleBlog
environment = PERL5LIB ='/home/hobbestigrou/MahewinBlogEngine/lib',PERLBREW_ROOT='/home/hobbestigrou/perl5/perlbrew',PATH='/home/hobbestigrou/perl5/perlbrew/bin:/home/hobbestigrou/perl5/perlbrew/perls/perl-5.14.2/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games',MANPATH='/home/hobbestigrou/perl5/perlbrew/perls/perl-5.14.2/man:',PERLBREW_VERSION='0.43',PERLBREW_PERL='perl-5.14.2',PERLBREW_MANPATH='/home/hobbestigrou/perl5/perlbrew/perls/perl-5.14.2/man',PERLBREW_SKIP_INIT='1',PERLBREW_PATH='/home/hobbestigrou/perl5/perlbrew/bin:/home/hobbestigrou/perl5/perlbrew/perls/perl-5.14.2/bin',SHLVL='2'
user = hobbestigrou
stdout_file = /home/hobbestigrou/mahewinsimpleblog.log
autostart = true
In the log I see it's not looking in the right place:
Error while loading bin/app.pl: Can't locate Type/Params.pm in @INC (@INC contains: /home/hobbestigrou/MahewinSimpleBlog/lib /home/hobbestigrou/MahewinBlogEngine/lib /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /home/hobbestigrou/MahewinBlogEngine/lib/MahewinBlogEngine/Article.pm line 5.
I do not see the problem; maybe perlbrew use does other things.
When you installed perlbrew, you added a command to your .bashrc. You're getting that message because that command wasn't run for the shell in question, since it's not an interactive shell. Note also that supervisor does not run the command through a shell, so perlbrew use perl-5.14.2 && plackup ... cannot work as written.
Why don't you explicitly use /home/hobbestigrou/perl5/perlbrew/perls/perl-5.14.2/bin/perl instead of using perlbrew use?
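Concretely, and assuming plackup was installed under that perlbrew perl (an assumption; adjust the path if yours lives elsewhere), the program section could drop perlbrew use entirely. Supervisor also expects the process to stay in the foreground, so the -D flag should be removed:
[program:MahewinSimpleBlog]
command = /home/hobbestigrou/perl5/perlbrew/perls/perl-5.14.2/bin/plackup -E deployment -s Starman --workers=10 -p 4000 -a bin/app.pl
directory = /home/hobbestigrou/MahewinSimpleBlog
user = hobbestigrou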
