bash command won't run in python3 - python

I made a Python 3 script and I need to run a bash command to make it work. I have tried os.system and subprocess, but neither of them fully works to run the whole command, even though the command works perfectly when I run it by itself in the terminal. What am I doing wrong?
os.system("fswebcam -r 640x480 --jpeg 85 -D 1 picture.jpg &> /dev/null")
os.system("echo -e "From: abc#gmail.com\nTo: abc1#gmail.com\nSubject: package for ryan\n\n"package for ryan|uuenview -a -bo picture.jpg|sendmail -t")
or
subprocess.run("fswebcam -r 640x480 --jpeg 85 -D 1 picture.jpg &> /dev/null")
subprocess.run("echo -e "From: abc#gmail.com\nTo: abc1#gmail.com\nSubject: package for ryan\n\n"package for ryan|uuenview -a -bo picture.jpg|sendmail -t")
This is supposed to take a picture and email it to me. With os.system it gives the error "the recipient has not been specified" (even though it works perfectly in the terminal by itself), and with subprocess it doesn't run anything.

Best Practice: Completely Replacing the Shell with Python
The best approach is to not use a shell at all.
import subprocess

subprocess.run(
    ['fswebcam',
     '-r', '640x480',
     '--jpeg', '85',
     '-D', '1',
     'picture.jpg'],
    stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
Doing this with a pipeline is more complicated; see https://docs.python.org/3/library/subprocess.html#replacing-shell-pipeline, and many duplicates already on this site.
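For reference, a minimal sketch of that documented pipeline replacement, adapted to the sendmail pipeline from the question (the addresses are the question's placeholders, and uuenview and sendmail are assumed to be on the PATH):

import subprocess

message = b'''From: abc@gmail.com
To: abc1@gmail.com
Subject: package for ryan

package for ryan
'''

p1 = subprocess.Popen(['uuenview', '-a', '-bo', 'picture.jpg'],
                      stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p2 = subprocess.Popen(['sendmail', '-t'], stdin=p1.stdout)
p1.stdout.close()    # let uuenview receive SIGPIPE if sendmail exits early
p1.stdin.write(message)
p1.stdin.close()     # send EOF so uuenview can finish
p2.wait()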
Second Choice: Using sh-compatible syntax
echo is poorly defined by the POSIX sh standard (the standard document itself advises against using it, and also fully disallows -e), so the reliable thing to do is to use printf instead.
Passing the text to be sent as a literal command-line argument ($1) gets us out of the business of figuring out how to escape it for the shell. (The preceding '_' is to fill in $0).
subprocess.run("fswebcam -r 640x480 --jpeg 85 -D 1 picture.jpg >/dev/null 2>&1",
shell=True)
string_to_send = '''From: abc#gmail.com
To: abc1#gmail.com
Subject: package for ryan
package for ryan
'''
p = subprocess.run(
[r'''printf '%s\n' "$1" | uuenview -a -bo picture.jpg | sendmail -t''',
"_", string_to_send],
shell=True)

Related

How to run the bash command as a system user without giving that user the right to run commands as any user

I have written a python script which includes this line:
response = subprocess.check_output(['/usr/bin/sudo /bin/su - backup -c "/usr/bin/ssh -q -o StrictHostKeyChecking=no %s bash -s" <<\'EOF\'\nPATH=/usr/local/bin:$PATH\nmvn --version|grep -i Apache|awk \'{print $3}\'|tr -d \'\n\'\nEOF' % i], shell=True)
This is in a for loop that goes through a list of hostnames, and for each one I want to check the result of the command on it. This works fine when I run it myself; however, this script is to be run by a system user (shinken, a Nagios fork), and at that point I hit an issue. It works if I give the user the right to run commands as any user in /etc/sudoers:
shinken ALL=(ALL) NOPASSWD: ALL
However, I wanted to restrict the user to only allow it to run as the backup user:
shinken ALL=(backup) NOPASSWD: ALL
But when I run the script I get:
sudo: no tty present and no askpass program specified
I have read around this and tried a few things to fix it. I tried adding -t to my ssh command, but that didn't help. I believe I should be able to run the command with something similar to:
response = subprocess.check_output(['/usr/bin/sudo -u backup """ "/usr/bin/ssh -q -o StrictHostKeyChecking=no %s bash -s" <<\'EOF\'\nPATH=/usr/local/bin:$PATH\njava -version|grep -i version|awk \'{print $3}\'|tr -d \'\n\'\nEOF""" ' % i], shell=True)
But then I get this response:
subprocess.CalledProcessError: Command '['/usr/bin/sudo -u backup """ "/usr/bin/ssh -q -o StrictHostKeyChecking=no bamboo-agent-01 bash -s" <<\'EOF\'\nPATH=/usr/local/bin:$PATH\njava -version|grep -i version|awk \'{print $3}\'|tr -d \'\n\'\nEOF""" ']' returned non-zero exit status 1
If I run the command manually I get:
sudo: /usr/bin/ssh: command not found
Which is strange because that's where it lives.... I've no idea if what I'm trying is even possible. Thanks for any suggestions!
As for sudo:
shinken ALL=(backup) NOPASSWD: ALL
...only works when you switch directly from shinken to backup. You aren't doing that here: sudo su - backup tells sudo to switch to root, and to run the command su - backup as root. Obviously, then, if you're going to use sudo su (which I've advised against elsewhere), you need your /etc/sudoers configuration to support that.
Because your /etc/sudoers isn't allowing the direct switch to root you're requesting, sudo tries to prompt for a password, which requires a TTY, and that is what causes the failure.
Below, I'm rewriting the script to switch directly from shinken to backup, without going through root and running su:
As for the script:
import subprocess

remote_script = '''
PATH=/usr/local/bin:$PATH
mvn --version 2>&1 | awk '/Apache/ { print $3 }'
'''

def maven_version_for_host(hostname):
    # Storing the command lets us pass it when constructing a CalledProcessError
    # later; it could move directly into the Popen call if you don't need that.
    cmd = [
        'sudo', '-u', 'backup', '-i', '--',
        'ssh', '-q', '-o', 'StrictHostKeyChecking=no', str(hostname),
        'bash -s',  # arguments in remote-command position to ssh all get
                    # concatenated together, so passing them as one aids clarity
    ]
    proc = subprocess.Popen(cmd,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            universal_newlines=True)  # text-mode pipes, so str works
    response, error_string = proc.communicate(remote_script)
    if proc.returncode != 0:
        raise subprocess.CalledProcessError(proc.returncode, cmd, error_string)
    return response.split('\n', 1)[0]
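For the for loop over hostnames described in the question, usage would look something like this (the hostname list is a hypothetical stand-in for whatever is read from the file; only bamboo-agent-01 appears in the question):

hostnames = ['bamboo-agent-01', 'bamboo-agent-02']  # hypothetical example list
for host in hostnames:
    try:
        print('%s: %s' % (host, maven_version_for_host(host)))
    except subprocess.CalledProcessError as e:
        print('%s failed: %s' % (host, e.output))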

python3 - subprocess with sudo to >> append to /etc/hosts

I've been wrestling with solutions from "How do I use sudo to redirect output to a location I don't have permission to write to?" and "append line to /etc/hosts file with shell script" with no luck.
I want to append "10.10.10.10 puppetmaster" at the end of /etc/hosts (Oracle/Red Hat Linux).
Been trying variations of:
subprocess.call("sudo -s", shell=True)
subprocess.call('sudo sh -c" "10.10.10.10 puppetmaster" >> /etc/hosts"', shell=True)
subprocess.call(" sed -i '10.10.10.10 puppetmaster' /etc/hosts", shell=True)
But the /etc/hosts file remains unchanged.
Can someone please point out what I'm doing wrong?
Simply use dd:
subprocess.Popen(['sudo', 'dd', 'if=/dev/stdin',
                  'of=/etc/hosts', 'conv=notrunc', 'oflag=append'],
                 stdin=subprocess.PIPE).communicate(b"10.10.10.10 puppetmaster\n")
You can do it in Python quite easily once you run the script with sudo:
with open("/etc/hosts", "a") as f:
    f.write('10.10.10.10 puppetmaster\n')
Opening with mode "a" appends.
The problem you are facing lies within the scope of sudo.
Your code calls sudo with the arguments sh and -c" "10.10.10.10 puppetmaster" (note there is no echo anywhere, so nothing would be written even with the right permissions). The redirection with the >> operator, however, is done by the surrounding shell, of course with its own permissions.
To achieve the effect you want, try starting a shell using sudo which then is given the command:
sudo bash -c 'echo "10.10.10.10 puppetmaster" >> /etc/hosts'
This will do the trick because the bash you started with sudo has superuser permissions and thus will not fail when it tries to perform the output redirection with >>.
To do this from within Python, use this:
subprocess.call("""sudo bash -c 'sh -c" "10.10.10.10 puppetmaster" >> /etc/hosts"'""", shell=True)
But of course, if you already run your Python script with superuser permissions (started with sudo), all this isn't necessary and a plain shell call will work (without the additional sudo):
subprocess.call('echo "10.10.10.10 puppetmaster" >> /etc/hosts', shell=True)
If you weren't escalating privileges for the entire script, I'd recommend the following:
p = subprocess.Popen(['sudo', 'tee', '-a', '/etc/hosts'],
                     stdin=subprocess.PIPE, stdout=subprocess.DEVNULL)
p.stdin.write(b'10.10.10.10 puppetmaster\n')
p.stdin.close()
p.wait()
Then you can write arbitrary content to the process's stdin (p.stdin).
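On Python 3.5 or newer, the same tee approach can be written more compactly with subprocess.run and its input argument; a sketch equivalent to the Popen version above:

import subprocess

subprocess.run(['sudo', 'tee', '-a', '/etc/hosts'],
               input=b'10.10.10.10 puppetmaster\n',
               stdout=subprocess.DEVNULL, check=True)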

Running interactive commands in docker in Python subprocess

When I use docker run in interactive mode I am able to run the commands I want to test some python stuff.
root@pydock:~# docker run -i -t dockerfile/python /bin/bash
[ root@197306c1b256:/data ]$ python -c "print 'hi there'"
hi there
[ root@197306c1b256:/data ]$ exit
exit
root@pydock:~#
I want to automate this from python using the subprocess module so I wrote this:
run_this = "print('hi')"
random_name = ''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(20))
command = 'docker run -i -t --name="%s" dockerfile/python /bin/bash' % random_name
subprocess.call([command],shell=True,stderr=subprocess.STDOUT)
command = 'cat <<\'PYSTUFF\' | timeout 0.5 python | head -n 500000 \n%s\nPYSTUFF' % run_this
output = subprocess.check_output([command],shell=True,stderr=subprocess.STDOUT)
command = 'exit'
subprocess.call([command],shell=True,stderr=subprocess.STDOUT)
command = 'docker ps -a | grep "%s" | awk "{print $1}" | xargs --no-run-if-empty docker rm -f' % random_name
subprocess.call([command],shell=True,stderr=subprocess.STDOUT)
This is supposed to create the container, run the python command on the container and exit and remove the container. It does all this except the command is run on the host machine and not the docker container. I guess docker is switching shells or something like that. How do I run python subprocess from a new shell?
It looks like you are expecting the second command cat <<... to send input to the first command. But the two subprocess commands have nothing to do with each other, so this doesn't work.
Python's subprocess library, and the popen command that underlies it, offer a way to get a pipe to stdin of the process. This way, you can send in the commands you want directly from Python and don't have to attempt to get another subprocess to talk to it.
So, something like:
from subprocess import Popen, PIPE
p = Popen("docker run -i -t --name="%s" dockerfile/python /bin/bash", stdin=PIPE)
p.communicate("timeout 0.5 python | head -n 500000 \n" % run_this)
(I'm not a Python expert; apologies for errors in string-forming. Adapted from this answer)
You actually need to spawn a new child in the new shell you are opening. So after creating the container, run docker enter, or try the same operation with pexpect instead of subprocess. pexpect spawns a new child, and that way you can send commands.
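For reference, a minimal pexpect sketch along those lines, assuming pexpect is installed and the dockerfile/python image from the question is available; the prompt pattern is a guess based on the transcript above:

import pexpect

child = pexpect.spawn('docker run -i -t dockerfile/python /bin/bash',
                      encoding='utf-8')
child.expect(r'\$')                  # wait for the container's shell prompt
child.sendline('python -c "print \'hi there\'"')
child.expect(r'\$')                  # wait for the command to finish
print(child.before)                  # output printed before the new prompt
child.sendline('exit')
child.expect(pexpect.EOF)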

How to save Iperf result in an output file

I am running iperf between a set of hosts that are read from a txt file, here's how I am running it:
h1,h2 = net.getNodeByName(node_id_1, node_id_2)
net.iperf((h1, h2))
It runs well and displays the results. But, I want to save the output of iperf result in a separate txt file. Does anyone know how I can apply it on the above code?
In order to store the results of an iperf test in a file, add | tee followed by the filename to your command line, for example:
iperf -c ipaddress -u -t 10 -i 1 | tee result.txt
Did you already try:
--output test.log
(in newer versions --logfile)
or using
youriperfexpr > test.log
I had this problem as well. Although the manpage specifies "-o" or "--output" to save your output to a file, this does not actually work.
It seems that this was marked as "WontFix":
https://code.google.com/p/iperf/issues/detail?id=24:
Looks like -o/--output existed in a previous version but is not in the
current version. The consensus in yesterday's meeting was that if
--output existed then we should fix it, otherwise people should just use shell redirection and we'll mark this WontFix. So, WontFix.
So maybe just use typescript or ">test.log" as suggested by Paolo.
I think the answer is given by Chiara Contoli in here: iperf result in output file
In summary:
h1.cmd('iperf -s > server_output.txt &')
h2.cmd('iperf -t 5 -c ' + h1.IP() + ' > client_output.txt &')
Since you are running it from Python, another method to save the result is to use popen:
popen('<command> > <filename>', shell=True)
For example:
popen('iperf -s -u -i 1 > outtest.txt', shell=True)
You can check this for further information:
https://github.com/mininet/mininet/wiki/Introduction-to-Mininet#popen
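Putting that together for the iperf server side, a minimal sketch (assuming h1 is a Mininet host object as in the question, and following the popen usage from the wiki page above):

# start the server and redirect its output inside the spawned shell
server = h1.popen('iperf -s -u -i 1 > outtest.txt', shell=True)
# ... run the client test against h1.IP() here ...
server.terminate()  # stop the iperf server; outtest.txt holds its output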
If you need to save the output in a txt file:
On the client machine, run cmd (as administrator) and enter:
cd c:\iperf3
iperf3.exe -c "your server address" -p "port" -P 10 -w 32000 -t 0 >> c:\iperf3\text.txt
(-t 0 means run indefinitely.)
On the client machine you will see a black screen in cmd; that's normal. You will see the whole process on the server machine. After your test, press Ctrl+C in cmd on the client machine, then confirm with (y).
The file c:\iperf3\text.txt will then contain everything collected during that period.
If you just close the cmd window instead, text.txt will be empty.
Opening the file in Notepad or WordPad is recommended for a correct view.
Server
iperf3 -s -p <port> -B <server_ip> >> <output_file> &
Client
iperf3 -p <port> -c <server_ip> -B <client_ip> -t 5 >> <output_file>
Make sure you kill the iperf process on the server when done.

python popen rsync with rsh option

I'm trying to execute an rsync command via subprocess & Popen. Everything's OK until I add the rsh option, at which point things go wrong.
from subprocess import Popen
args = ['-avz', '--rsh="ssh -C -p 22 -i /home/bond/.ssh/test"', 'bond@localhost:/home/bond/Bureau', '/home/bond/data/user/bond/backups/']
p = Popen(['rsync'] + args, shell=False)
print(p.wait())
# just printing the generated command:
print(' '.join(['rsync'] + args))
I've tried to escape the '--rsh="ssh -C -p 22 -i /home/bond/.ssh/test"' in many ways, but it seems that it's not the problem.
I'm getting the error
rsync: Failed to exec ssh -C -p 22 -i /home/bond/.ssh/test: No such file or directory (2)
If I copy/paste the same args that I output at the time, I'm getting a correct execution of the command.
Thanks.
What happens if you use '--rsh=ssh -C -p 22 -i /home/bond/.ssh/test' instead (I removed the double quotes)?
I suspect that this should work. What happens when you cut/paste your line into the command line is that your shell sees the double quotes and removes them, but uses them to prevent -C, -p, etc. from being interpreted as separate arguments. When you call subprocess.Popen with a list, you've already partitioned the arguments without the help of the shell, so you no longer need the quotes to preserve where the arguments should be split.
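Concretely, the suggestion amounts to this (a sketch based on the question's code):

from subprocess import Popen

args = ['-avz', '--rsh=ssh -C -p 22 -i /home/bond/.ssh/test',
        'bond@localhost:/home/bond/Bureau',
        '/home/bond/data/user/bond/backups/']
p = Popen(['rsync'] + args)
print(p.wait())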
Having the same problem, I googled this issue extensively. It would seem you simply cannot pass arguments to ssh with subprocess. Ultimately, I wrote a shell script to run the rsync command, which I could pass arguments to via subprocess.call(['rsyncscript', src, dest, sshkey]). The shell script was: /usr/bin/rsync -az -e "ssh -i $3" $1 $2
This fixed the problem.
