python3 - subprocess with sudo to >> append to /etc/hosts

I've been wrestling with solutions from "How do I use sudo to redirect output to a location I don't have permission to write to?" and "append line to /etc/hosts file with shell script" with no luck.
I want to append "10.10.10.10 puppetmaster" at the end of /etc/hosts (Oracle/Red Hat Linux).
Been trying variations of:
subprocess.call("sudo -s", shell=True)
subprocess.call('sudo sh -c" "10.10.10.10 puppetmaster" >> /etc/hosts"', shell=True)
subprocess.call(" sed -i '10.10.10.10 puppetmaster' /etc/hosts", shell=True)
But the /etc/hosts file remains unchanged.
Can someone please point out what I'm doing wrong?

Simply use dd:
subprocess.Popen(['sudo', 'dd', 'if=/dev/stdin',
                  'of=/etc/hosts', 'conv=notrunc', 'oflag=append'],
                 stdin=subprocess.PIPE).communicate(b"10.10.10.10 puppetmaster\n")

You can do it in Python quite easily once you run the script with sudo:
with open("/etc/hosts", "a") as f:
    f.write('10.10.10.10 puppetmaster\n')
Opening with mode "a" appends.
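A minimal sketch combining both points; the geteuid guard is my assumption, not part of the answer, and just makes the failure mode clearer than a bare PermissionError:

import os

# Hypothetical guard: /etc/hosts is root-owned on these systems, so fail
# early if the script was not started with sudo.
if os.geteuid() != 0:
    raise SystemExit("Re-run this script with sudo.")

with open("/etc/hosts", "a") as f:
    f.write("10.10.10.10 puppetmaster\n")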

The problem you are facing lies in the scope of the sudo.
Your call hands sudo only the sh command itself; the redirection with the >> operator is performed by the surrounding shell, with its own (non-root) permissions. (The quoting of your attempt is also broken: sh -c" "10.10.10.10 puppetmaster" ... contains no echo, so even as root it would try to execute "10.10.10.10 puppetmaster" as a command.)
To achieve the effect you want, start a shell via sudo and give that shell the complete command line, including the redirection:
sudo bash -c 'echo "10.10.10.10 puppetmaster" >> /etc/hosts'
This does the trick because the bash you started with sudo has superuser permissions and thus will not fail when it performs the output redirection with >>.
To do this from within Python, use:
subprocess.call("""sudo bash -c 'echo "10.10.10.10 puppetmaster" >> /etc/hosts'""", shell=True)
But of course, if you already run your Python script with superuser permissions (start it with sudo), all this isn't necessary and a plain redirection works (without the additional sudo in the call):
subprocess.call('echo "10.10.10.10 puppetmaster" >> /etc/hosts', shell=True)

If you weren't escalating privileges for the entire script, I'd recommend the following:
p = subprocess.Popen(['sudo', 'tee', '-a', '/etc/hosts'],
                     stdin=subprocess.PIPE, stdout=subprocess.DEVNULL)
p.stdin.write(b'10.10.10.10 puppetmaster\n')
p.stdin.close()
p.wait()
Then you can write arbitrary content to the process's stdin (p.stdin).
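On Python 3.5 and later the same technique fits into a single call; a minimal sketch of that variant:

import subprocess

# Same idea with subprocess.run: tee -a does the privileged write under
# sudo, and the line to append is supplied on stdin.
subprocess.run(['sudo', 'tee', '-a', '/etc/hosts'],
               input=b'10.10.10.10 puppetmaster\n',
               stdout=subprocess.DEVNULL, check=True)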

Related

bash command won't run in python3

I made a python3 script and I need to run a bash command to make it work. I have tried os.system and subprocess, but neither of them fully works to run the whole command; when I run the command by itself in the terminal it works perfectly. What am I doing wrong?
os.system("fswebcam -r 640x480 --jpeg 85 -D 1 picture.jpg &> /dev/null")
os.system("echo -e "From: abc#gmail.com\nTo: abc1#gmail.com\nSubject: package for ryan\n\n"package for ryan|uuenview -a -bo picture.jpg|sendmail -t")
or
subprocess.run("fswebcam -r 640x480 --jpeg 85 -D 1 picture.jpg &> /dev/null")
subprocess.run("echo -e "From: abc#gmail.com\nTo: abc1#gmail.com\nSubject: package for ryan\n\n"package for ryan|uuenview -a -bo picture.jpg|sendmail -t")
This is supposed to take a picture and email it to me. With os.system it gives the error "the recipient has not been specified" (even though it works perfectly in the terminal by itself), and with subprocess it doesn't run anything.
Best Practice: Completely Replacing the Shell with Python
The best approach is to not use a shell at all.
subprocess.run(['fswebcam',
                '-r', '640x480',
                '--jpeg', '85',
                '-D', '1',
                'picture.jpg'],
               stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
Doing this with a pipeline is more complicated; see https://docs.python.org/3/library/subprocess.html#replacing-shell-pipeline, and many duplicates already on this site.
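For reference, a minimal shell-free sketch of the question's pipeline (the uuenview and sendmail invocations are copied from the question; the wiring is the standard Popen-to-Popen pattern from the linked docs):

import subprocess

# Hypothetical shell-free version of:
#   printf '%s\n' "$body" | uuenview -a -bo picture.jpg | sendmail -t
body = (b'From: abc@gmail.com\n'
        b'To: abc1@gmail.com\n'
        b'Subject: package for ryan\n'
        b'\n'
        b'package for ryan\n')

uuenview = subprocess.Popen(['uuenview', '-a', '-bo', 'picture.jpg'],
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE)
sendmail = subprocess.Popen(['sendmail', '-t'], stdin=uuenview.stdout)
uuenview.stdout.close()  # so uuenview sees SIGPIPE if sendmail exits early
uuenview.stdin.write(body)
uuenview.stdin.close()
sendmail.wait()
uuenview.wait()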
Second Choice: Using sh-compatible syntax
echo is poorly defined by the POSIX sh standard (the standard document itself advises against using it, and also fully disallows -e), so the reliable thing to do is to use printf instead.
Passing the text to be sent as a literal command-line argument ($1) gets us out of the business of figuring out how to escape it for the shell. (The preceding '_' is to fill in $0).
subprocess.run("fswebcam -r 640x480 --jpeg 85 -D 1 picture.jpg >/dev/null 2>&1",
shell=True)
string_to_send = '''From: abc#gmail.com
To: abc1#gmail.com
Subject: package for ryan
package for ryan
'''
p = subprocess.run(
[r'''printf '%s\n' "$1" | uuenview -a -bo picture.jpg | sendmail -t''',
"_", string_to_send],
shell=True)

Enable safari driver in terminal from python

I would like to enable safari driver in each session separately.
import os
os.system("sudo safaridriver --enable")
The code above asks for a password.
My question is basically how to provide that password from Python. I tried something like
import subprocess
p = subprocess.Popen(["sudo safaridriver --enable"], shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write(b"password\n") # Assume that the password is indeed "password"
result = p.stdout.read() # The program's output
print(result)
This code doesn't work; it throws the following error:
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
So I'm not sure how to handle it.
Additionally, it would be great if the password were not in clear text, but for a start clear text is OK.
Thanks for any ideas.
P.S. Related question without a useful answer here.
It seems sudo doesn't read the password from stdin by default, probably for security reasons.
But the error shows that you can use the -S option to make sudo read it from stdin.
The documentation (man sudo) shows:
-S, --stdin
Write the prompt to the standard error and read the password
from the standard input instead of using the terminal device.
I don't have safaridriver, so I tested on something simpler, like ls.
In bash, in a console, this works for me:
echo password | sudo -S ls
or
sudo -S ls <<< password
And the first method works for me in code:
import subprocess
p = subprocess.Popen(["echo password | sudo -S ls"], shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
result = p.communicate()[0].decode()
print(result)
But the version with <<< has a problem, because subprocess uses sh instead of bash.
It works, though, with executable='/bin/bash':
import subprocess
p = subprocess.Popen(["sudo -S ls <<< password"], executable='/bin/bash', shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE,)
result = p.communicate()[0].decode()
print(result)
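A minimal sketch of a slightly safer variant: feeding the password to sudo -S on stdin keeps it off the command line, where it would otherwise be visible in ps output (it still assumes the password is literally "password"):

import subprocess

# Hypothetical variant: the password travels through the pipe, not
# through the shell command string.
p = subprocess.Popen(['sudo', '-S', 'safaridriver', '--enable'],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
out, err = p.communicate(b'password\n')
print(out.decode())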

How to run the bash command as a system user without giving that user the right to run commands as any user

I have written a python script which includes this line:
response = subprocess.check_output(['/usr/bin/sudo /bin/su - backup -c "/usr/bin/ssh -q -o StrictHostKeyChecking=no %s bash -s" <<\'EOF\'\nPATH=/usr/local/bin:$PATH\nmvn --version|grep -i Apache|awk \'{print $3}\'|tr -d \'\n\'\nEOF' % i], shell=True)
This is in a for loop that iterates over a list of hostnames; for each one I want to check the result of the command on it. It works fine when I run it myself; however, the script is to be run by a system user (shinken, a Nagios fork), and at that point I hit an issue. It works if I give the shinken user unrestricted sudo:
shinken ALL=(ALL) NOPASSWD: ALL
However, I wanted to restrict the user to only allow it to run as the backup user:
shinken ALL=(backup) NOPASSWD: ALL
But when I run the script I get:
sudo: no tty present and no askpass program specified
I have read around this and tried a few things to fix it. I tried adding -t to my ssh command, but that didn't help. I believe I should be able to run the command with something similar to:
response = subprocess.check_output(['/usr/bin/sudo -u backup """ "/usr/bin/ssh -q -o StrictHostKeyChecking=no %s bash -s" <<\'EOF\'\nPATH=/usr/local/bin:$PATH\njava -version|grep -i version|awk \'{print $3}\'|tr -d \'\n\'\nEOF""" ' % i], shell=True)
But then I get this response:
subprocess.CalledProcessError: Command '['/usr/bin/sudo -u backup """ "/usr/bin/ssh -q -o StrictHostKeyChecking=no bamboo-agent-01 bash -s" <<\'EOF\'\nPATH=/usr/local/bin:$PATH\njava -version|grep -i version|awk \'{print $3}\'|tr -d \'\n\'\nEOF""" ']' returned non-zero exit status 1
If I run the command manually I get:
sudo: /usr/bin/ssh: command not found
Which is strange because that's where it lives.... I've no idea if what I'm trying is even possible. Thanks for any suggestions!
As for sudo:
shinken ALL=(backup) NOPASSWD: ALL
...only works when you switch directly from shinken to backup. You aren't doing that here. sudo su - backup tells sudo to switch to root and to run the command su - backup as root. Obviously, then, if you're going to use sudo su (which I've advised against elsewhere), you need your /etc/sudoers configuration to support that.
Because your /etc/sudoers isn't allowing the switch to root you're requesting, sudo tries to prompt for a password, which requires a TTY, and that is what causes the failure.
Below, I'm rewriting the script to switch directly from shinken to backup, without going through root and running su:
As for the script:
import subprocess

remote_script = '''
PATH=/usr/local/bin:$PATH
mvn --version 2>&1 | awk '/Apache/ { print $3 }'
'''

def maven_version_for_host(hostname):
    # Storing the command lets us pass it when constructing a
    # CalledProcessError later; it could move directly into the Popen
    # call if you don't need that.
    cmd = [
        'sudo', '-u', 'backup', '-i', '--',
        'ssh', '-q', '-o', 'StrictHostKeyChecking=no', str(hostname),
        # Arguments in remote-command position to ssh all get concatenated
        # together, so passing them as one string aids clarity.
        'bash -s',
    ]
    proc = subprocess.Popen(cmd,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            universal_newlines=True)  # text mode, so the
                                                      # script can be passed
                                                      # as a str on Python 3
    response, error_string = proc.communicate(remote_script)
    if proc.returncode != 0:
        raise subprocess.CalledProcessError(proc.returncode, cmd, error_string)
    return response.split('\n', 1)[0]
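The loop from the question then reduces to something like this (hostnames hypothetical):

for hostname in ('bamboo-agent-01', 'bamboo-agent-02'):  # hypothetical hosts
    print(hostname, maven_version_for_host(hostname))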

python popen rsync with rsh option

I'm trying to execute an rsync command via subprocess & Popen. Everything is fine until I add the rsh option, at which point things go wrong.
from subprocess import Popen
args = ['-avz', '--rsh="ssh -C -p 22 -i /home/bond/.ssh/test"', 'bond@localhost:/home/bond/Bureau', '/home/bond/data/user/bond/backups/']
p = Popen(['rsync'] + args, shell=False)
print p.wait()
#just printing generated command:
print ' '.join(['rsync']+args)
I've tried to escape the '--rsh="ssh -C -p 22 -i /home/bond/.ssh/test"' in many ways, but it seems that it's not the problem.
I'm getting the error
rsync: Failed to exec ssh -C -p 22 -i /home/bond/.ssh/test: No such file or directory (2)
If I copy and paste the command printed at the end into a shell, it executes correctly.
Thanks.
What happens if you use '--rsh=ssh -C -p 22 -i /home/bond/.ssh/test' instead (I removed the double quotes)?
I suspect that this should work. When you cut and paste your line into the command line, your shell sees the double quotes and removes them, using them only to keep -C, -p, etc. from being interpreted as separate arguments. When you call subprocess.Popen with a list, you've already partitioned the arguments without the help of the shell, so you no longer need the quotes to preserve where the arguments should be split.
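Concretely, a sketch of the corrected call (host and paths taken from the question):

from subprocess import Popen

# With the embedded double quotes removed, each list element is exactly
# one argument, so rsync receives the whole ssh command as the value of
# --rsh.
args = ['-avz', '--rsh=ssh -C -p 22 -i /home/bond/.ssh/test',
        'bond@localhost:/home/bond/Bureau',
        '/home/bond/data/user/bond/backups/']
p = Popen(['rsync'] + args, shell=False)
print(p.wait())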
Having the same problem, I googled this issue extensively. It would seem you simply cannot pass arguments to ssh with subprocess. Ultimately, I wrote a shell script to run the rsync command, which I could pass arguments to via subprocess.call(['rsyncscript', src, dest, sshkey]). The shell script was:
/usr/bin/rsync -az -e "ssh -i $3" $1 $2
This fixed the problem.

How to start a background process with nohup using Fabric?

Through Fabric, I am trying to start a celerycam process using the nohup command below. Unfortunately, nothing happens. Run manually, the same command starts the process, but through Fabric it does not. Any advice on how I can solve this?
def start_celerycam():
    '''Start celerycam daemon'''
    with cd(env.project_dir):
        virtualenv('nohup bash -c "python manage.py celerycam --logfile=%scelerycam.log --pidfile=%scelerycam.pid &> %scelerycam.nohup &> %scelerycam.err" &' % (env.celery_log_dir, env.celery_log_dir, env.celery_log_dir, env.celery_log_dir))
I'm using Erich Heine's suggestion to use 'dtach' and it's working pretty well for me:
def runbg(cmd, sockname="dtach"):
    return run('dtach -n `mktemp -u /tmp/%s.XXXX` %s' % (sockname, cmd))
This was found here.
In my experiments, the solution is a combination of two factors:
run process as a daemon: nohup ./command &> /dev/null &
use pty=False for fabric run
So, your function should look like this:
def background_run(command):
    command = 'nohup %s &> /dev/null &' % command
    run(command, pty=False)
And you can launch it with:
execute(background_run, your_command)
This is an instance of this issue. Background processes will be killed when the command ends. Unfortunately, CentOS 6 doesn't support pty-less sudo commands.
The final entry in the issue mentions using sudo('set -m; service servicename start'). This turns on job control, so background processes are put in their own process group and, as a result, are not terminated when the command ends.
For even more information see this link.
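As a minimal Fabric sketch of that workaround (the service name is a placeholder):

from fabric.api import sudo

def start_service(name):
    # set -m enables job control, so the service gets its own process
    # group and is not killed when the command's shell exits.
    sudo('set -m; service %s start' % name)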
You just need to run:
run("(nohup yourcommand >& /dev/null < /dev/null &) && sleep 1")
dtach is the way to go. It's a piece of software you need to install, like a lightweight version of screen.
This is a better version of the "dtach" method found above; it will install dtach if necessary. It comes from here, where you can also learn how to get the output of the process running in the background:
from fabric.api import run
from fabric.api import sudo
from fabric.contrib.files import exists

def run_bg(cmd, before=None, sockname="dtach", use_sudo=False):
    """Run a command in the background using dtach

    :param cmd: The command to run
    :param before: The command to run before the dtach. E.g. exporting
                   an environment variable
    :param sockname: The socket name to use for the temp file
    :param use_sudo: Whether or not to use sudo
    """
    if not exists("/usr/bin/dtach"):
        sudo("apt-get install dtach")
    if before:
        cmd = "{}; dtach -n `mktemp -u /tmp/{}.XXXX` {}".format(
            before, sockname, cmd)
    else:
        cmd = "dtach -n `mktemp -u /tmp/{}.XXXX` {}".format(sockname, cmd)
    if use_sudo:
        return sudo(cmd)
    else:
        return run(cmd)
May this help you as it helped me to run omxplayer via Fabric on a remote Raspberry Pi!
You can use:
run('nohup /home/ubuntu/spider/bin/python3 /home/ubuntu/spider/Desktop/baidu_index/baidu_index.py > /home/ubuntu/spider/Desktop/baidu_index/baidu_index.py.log 2>&1 &', pty=False)
nohup did not work for me, and I did not have tmux or dtach installed on all the boxes I wanted to use this on, so I ended up using screen like so:
run("screen -d -m bash -c '{}'".format(command), pty=False)
This tells screen to start a bash shell in a detached terminal that runs your command.
You could be running into this issue.
Try adding pty=False to the sudo command (I assume virtualenv is calling sudo or run somewhere?).
This worked for me:
sudo('python %s/manage.py celerycam --detach --pidfile=celerycam.pid' % siteDir)
Edit: I had to make sure the pid file was removed first, so this was the full code:
# Create new celerycam
sudo('rm celerycam.pid', warn_only=True)
sudo('python %s/manage.py celerycam --detach --pidfile=celerycam.pid' % siteDir)
I was able to circumvent this issue by running nohup ... & over ssh in a separate local shell script. In fabfile.py:
@task
def startup():
    local('./do-stuff-in-background.sh {0}'.format(env.host))
and in do-stuff-in-background.sh:
#!/bin/sh
set -e
set -o nounset
HOST=$1
ssh $HOST -T << HERE
nohup df -h 1>>~/df.log 2>>~/df.err &
HERE
Of course, you could also pass in the command and standard output / error log files as arguments to make this script more generally useful.
(In my case, I didn't have admin rights to install dtach, and neither screen -d -m nor pty=False / sleep 1 worked properly for me. YMMV, especially as I have no idea why this works...)
