Broken pipe when executing bash in Python

Hi everyone, I encounter a "Broken pipe" error when trying to execute a bash script from Python.
Here is my bash file, run.sh
INPUT=`python -c "print 'uid='+'A'*0x4"`
TEST=$INPUT
LEN=$(echo -n "$INPUT" | wc -c)
cp $(which qemu-mipsel-static) ./qemu
echo "$INPUT" | chroot . ./qemu -E CONTENT_LENGTH=$LEN -E CONTENT_TYPE="application/x-www-form-urlencoded" -E REQUEST_METHOD="POST" -E HTTP_COOKIE=$TEST -E REQUEST_URI="/authentication.cgi" -E REMOTE_ADDR="192.168.1.1" htdocs/web/authentication.cgi 2>/dev/null
echo 'run ok'
rm -f ./qemu
Here is how I tried to execute the bash script from Python:
bash_file_path = "run.sh"
op = commands.getstatusoutput("bash %s" % bash_file_path)
print op[1]
However, I encounter this error on line 5 of run.sh:
run.sh: line 5: echo: write error: Broken pipe
I also tried subprocess, but got the same error:
p = subprocess.Popen(["bash", bash_file_path], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE )
print p.stdout.readlines()
print p.stderr.readlines()
the results:
[]
[run.sh: line 5: echo: write error: Broken pipe]
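A likely explanation (an assumption, since it depends on the Python version, but the `commands` module and `print op[1]` suggest Python 2): the Python interpreter starts with SIGPIPE set to SIG_IGN, and children spawned via `commands`/`subprocess` inherit that disposition. So when the CGI binary closes its stdin early, `echo` gets EPIPE and prints "write error: Broken pipe" instead of being killed silently as it would in a terminal. Restoring the default handler in the child suppresses the message (Python 3's subprocess already does this via its default `restore_signals=True`). A minimal sketch:

```python
import signal
import subprocess

def restore_sigpipe():
    # Python ignores SIGPIPE at interpreter startup; without restoring the
    # default handler, a child whose pipe reader exits early gets EPIPE and
    # prints "write error: Broken pipe" instead of dying silently.
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)

# `yes` writes forever; `head` exits after one line, closing the pipe early.
proc = subprocess.Popen(["bash", "-c", "yes | head -n 1"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        preexec_fn=restore_sigpipe)
out, err = proc.communicate()
print(out.decode().strip())  # y
```

With the handler restored, `yes` dies silently on SIGPIPE and stderr stays clean; without it, the same pipeline complains about a broken pipe.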

Related

How to use os.system() to run `> >(tee -a log)`?

I run the following command in the terminal.
sh -c "echo out; echo err 2>&1" > >(tee -a stdout.log) 2> >(tee -a stdout.log >&2)
output:
out
err
Using os.system in Python will report an error.
import os
cmd = """
sh -c "echo out; echo err 2>&1" > >(tee -a stdout.log) 2> >(tee -a stdout.log >&2)
"""
os.system(cmd)
sh: -c: line 1: syntax error near unexpected token `>'
sh: -c: line 1: `sh -c "echo out" > >(tee -a stdout.log) 2> >(tee -a stdout.log >&2)'
>(...) is bash-specific syntax. Make that bash -c instead of sh -c.
Also you should enclose the entire command in quotes since -c expects a single argument.
cmd = """
bash -c 'echo out > >(tee -a stdout.log) 2> >(tee -a stdout.log >&2)'
"""
To test writing to both stdout and stderr like your original example, try it like this with curly braces (note `>&2`, which actually sends the second echo to stderr; `echo err 2>&1` would write it to stdout):
cmd = """
bash -c '{ echo out; echo err >&2; } > >(tee -a stdout.log) 2> >(tee -a stdout.log >&2)'
"""
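Putting it together, a runnable sketch of the `os.system` call (the log file name is just illustrative, and both streams append to the same file as in the original example):

```python
import os

# os.system hands its command to /bin/sh, so the bash-only >(...) process
# substitution has to live inside an explicit bash -c invocation.
cmd = """bash -c '{ echo out; echo err >&2; } > >(tee -a stdout.log) 2> >(tee -a stdout.log >&2)'"""
status = os.system(cmd)  # 0 on success
```

One caveat: the `tee` processes started by `>(...)` run asynchronously, so the log file may be flushed slightly after `os.system` returns.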

Call docker command line to remove all containers from python

I am trying to port:
https://coderwall.com/p/ewk0mq/stop-remove-all-docker-containers
to a python script. So far I have:
def remove_all_containers():
    subprocess.call(['docker', 'stop', '$(docker ps -a -q)'])
    subprocess.call(['docker', 'rm', '$(docker ps -a -q)'])
    return;
But get:
Error response from daemon: No such container: $(docker ps -a -q)
I have also tried:
def remove_all_containers():
    subprocess.call(['docker', 'stop', $(docker ps -a -q)])
    subprocess.call(['docker', 'rm', $(docker ps -a -q)])
    return;
But that gives:
subprocess.call(['docker', 'stop',$(docker ps -a -q)])
SyntaxError: invalid syntax
It seems I need to nest another subprocess call inside the parent subprocess call. Or is there a simpler way to do this?
TL;DR: Command substitution $(...) is a shell feature, therefore you must run your commands on a shell:
subprocess.call('docker stop $(docker ps -a -q)', shell=True)
subprocess.call('docker rm $(docker ps -a -q)', shell=True)
Additional improvements:
It's not required, but I would suggest using check_call (or run(..., check=True), see below) instead of call(), so that if an error occurs it doesn't go unnoticed:
subprocess.check_call('docker stop $(docker ps -a -q)', shell=True)
subprocess.check_call('docker rm $(docker ps -a -q)', shell=True)
You can also go another route: parse the output of docker ps -a -q and then pass to stop and rm:
container_ids = subprocess.check_output(['docker', 'ps', '-aq'], encoding='ascii')
container_ids = container_ids.strip().split()
if container_ids:
    subprocess.check_call(['docker', 'stop'] + container_ids)
    subprocess.check_call(['docker', 'rm'] + container_ids)
If you're using Python 3.5+, you can also use the newer run() function:
# With shell
subprocess.run('docker stop $(docker ps -a -q)', shell=True, check=True)
subprocess.run('docker rm $(docker ps -a -q)', shell=True, check=True)
# Without shell
proc = subprocess.run(['docker', 'ps', '-aq'], check=True, stdout=subprocess.PIPE, encoding='ascii')
container_ids = proc.stdout.strip().split()
if container_ids:
    subprocess.run(['docker', 'stop'] + container_ids, check=True)
    subprocess.run(['docker', 'rm'] + container_ids, check=True)
There is also a nice official Python library that helps with Docker:
https://docker-py.readthedocs.io/en/stable/index.html
import docker

client = docker.DockerClient(Config.DOCKER_BASE_URL)
docker_containers = client.containers.list(all=True)
for dc in docker_containers:
    dc.remove(force=True)
This lists all containers and removes each of them, regardless of whether the container is running or stopped. The library is useful if you can import it into your code.

using awk with subprocess Popen

import subprocess
gpus_raw=subprocess.Popen(['sinfo -pmain -Ogres:100,nodelist -N -h -r -tidle,mix,alloc|
grep -v drng | awk -F " " '{print $1,$2}''],
stdout=subprocess.PIPE,
shell=True).communicate()[0].strip().split('\n')
I use subprocess to generate a file with two columns, piping through grep and awk to remove the third column, but I get an error:
File "/usr/bin/savail", line 10
gpus_raw = subprocess.Popen(['sinfo -pmain -Ogres:100,nodelist -N -h -r -tidle,mix,alloc|grep -v drng | awk -F " " '{print $1,$2}''], stdout=subprocess.PIPE,
shell=True).communicate()[0].strip().split('\n')
^ SyntaxError: invalid syntax
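The SyntaxError is a Python quoting problem, not an awk one: the single-quoted Python string ends at the `'` just before `{print`, so the rest is invalid Python. Use a double-quoted (or triple-quoted) Python string around the whole pipeline, and pass one string rather than a list when `shell=True`. A sketch, with a portable stand-in pipeline since `sinfo` is site-specific:

```python
import subprocess

# The real command, now inside a double-quoted Python string so the
# single quotes around the awk program survive:
cmd = "sinfo -pmain -Ogres:100,nodelist -N -h -r -tidle,mix,alloc | grep -v drng | awk '{print $1,$2}'"

# Same quoting pattern, demonstrated with a stand-in for sinfo's output:
demo = "printf 'gpu:4 node1 extra\n' | awk '{print $1,$2}'"
out = subprocess.Popen(demo, stdout=subprocess.PIPE,
                       shell=True).communicate()[0]
print(out.decode().strip())  # gpu:4 node1
```

Also note that with `shell=True` on POSIX, only the first element of a list reaches the shell as the command; the remaining elements become extra shell arguments, which is rarely what you want.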

How to run the bash command as a system user without giving that user the right to run commands as any user

I have written a python script which includes this line:
response = subprocess.check_output(['/usr/bin/sudo /bin/su - backup -c "/usr/bin/ssh -q -o StrictHostKeyChecking=no %s bash -s" <<\'EOF\'\nPATH=/usr/local/bin:$PATH\nmvn --version|grep -i Apache|awk \'{print $3}\'|tr -d \'\n\'\nEOF' % i], shell=True)
This is in a for loop that goes through a list of hostnames; for each one I want to check the result of the command on it. This works fine when I run it myself; however, the script is to be run by a system user (shinken, a Nagios fork), and at that point I hit an issue. It works if I give the user unrestricted sudo rights:
shinken ALL=(ALL) NOPASSWD: ALL
However, I wanted to restrict the user to only allow it to run as the backup user:
shinken ALL=(backup) NOPASSWD: ALL
But when I run the script I get:
sudo: no tty present and no askpass program specified
I have read around this and tried a few things to fix it. I tried adding -t to my ssh command, but that didn't help. I believe I should be able to run the command with something similar to:
response = subprocess.check_output(['/usr/bin/sudo -u backup """ "/usr/bin/ssh -q -o StrictHostKeyChecking=no %s bash -s" <<\'EOF\'\nPATH=/usr/local/bin:$PATH\njava -version|grep -i version|awk \'{print $3}\'|tr -d \'\n\'\nEOF""" ' % i], shell=True)
But then I get this response:
subprocess.CalledProcessError: Command '['/usr/bin/sudo -u backup """ "/usr/bin/ssh -q -o StrictHostKeyChecking=no bamboo-agent-01 bash -s" <<\'EOF\'\nPATH=/usr/local/bin:$PATH\njava -version|grep -i version|awk \'{print $3}\'|tr -d \'\n\'\nEOF""" ']' returned non-zero exit status 1
If I run the command manually I get:
sudo: /usr/bin/ssh: command not found
Which is strange because that's where it lives.... I've no idea if what I'm trying is even possible. Thanks for any suggestions!
As for sudo:
shinken ALL=(backup) NOPASSWD: ALL
...only works when you switch directly from shinken to backup. You aren't doing that here. sudo su - backup is telling sudo to switch to root, and to run the command su - backup as root. Obviously, then, if you're going to use sudo su (which I've advised against elsewhere), you need your /etc/sudoers configuration to support that.
Because your /etc/sudoers isn't allowing the direct switch to root that you're requesting, sudo tries to prompt for a password, which requires a TTY, which is thus causing the failure.
Below, I'm rewriting the script to switch directly from shinken to backup, without going through root and running su:
As for the script:
import subprocess

remote_script = '''
PATH=/usr/local/bin:$PATH
mvn --version 2>&1 | awk '/Apache/ { print $3 }'
'''

def maven_version_for_host(hostname):
    # Storing the command lets us pass it when constructing a CalledProcessError
    # later; it could move directly into the Popen call if you don't need that.
    cmd = [
        'sudo', '-u', 'backup', '-i', '--',
        'ssh', '-q', '-o', 'StrictHostKeyChecking=no', str(hostname),
        'bash -s',  # arguments in remote-command position to ssh all get
                    # concatenated together, so passing them as one word aids clarity
    ]
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    response, error_string = proc.communicate(remote_script)
    if proc.returncode != 0:
        raise subprocess.CalledProcessError(proc.returncode, cmd, error_string)
    return response.split('\n', 1)[0]

read -a unknown option

I'm executing shell commands using python script. This is the command:
ntpservlist=( $OMC_NTPSERV ) && IFS=',' read -ra ntplist <<< "$ntpservlist" && for i in "${ntplist[@]}" ; do echo "server $i" >> /etc/inet/ntp.conf ; done
When I execute the command using a script, I get the following error:
/bin/sh[1]: read: -a: unknown option
Usage: read [-ACprsv] [-d delim] [-u fd] [-t timeout] [-n count] [-N count]
[var?prompt] [var ...]
But if I execute the same command using the command line, it executes correctly without any errors.
I'm using:
proc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
(out, err) = proc.communicate()
to execute the command.
Your interactive shell is bash, but your system shell, used by Popen, is some flavor of ksh. To use bash instead, use the executable option:
proc = subprocess.Popen(command,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        shell=True,
                        executable="/bin/bash")  # or whatever the right path is
(out, err) = proc.communicate()
Most of your command appears to be valid ksh, but one difference is that read -A, not read -a, is used to populate an array.
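A quick way to sanity-check the `executable` fix (assuming bash lives at the usual /bin/bash): the here-string and `read -a` below are bash-only features, so they only work once the command is no longer routed through the system's default shell.

```python
import subprocess

# `read -a` and <<< are bash features; forcing bash via `executable`
# makes them work even when /bin/sh is ksh or dash.
cmd = 'IFS="," read -ra arr <<< "a,b,c"; echo "${arr[1]}"'
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True,
                        executable="/bin/bash")
out, _ = proc.communicate()
print(out.decode().strip())  # b
```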
