How to pass a Python variable into bash commands?

I've seen some answers about subprocess.call and Popen, but I have a list of commands and I don't think it's a good idea to make multiple separate calls. I also don't want to keep a separate script.sh with these commands.
My code looks like
bash_code=r'''
echo "/common_home/{context['nickname']} /tmp/back/{context['nickname']} none bind 0 0" | sudo tee --append /etc/fstab
sudo mkdir /tmp/{context['nickname']} /tmp/back/{context['nickname']}
'''
subprocess.run(['bash', '-c', bash_code], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
But it has many more lines containing {context['nickname']} and I don't know the best way to pass this variable into the bash commands.

You can use a so-called "f-string" to replace the variable references with their values.
context = {'nickname': 'foobar'}
bash_code = f'''
echo "/common_home/{context['nickname']} /tmp/back/{context['nickname']} none bind 0 0" | sudo tee --append /etc/fstab
sudo mkdir /tmp/{context['nickname']} /tmp/back/{context['nickname']}
'''
print(bash_code)
subprocess.run(['bash', '-c', bash_code], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Value printed:
echo "/common_home/foobar /tmp/back/foobar none bind 0 0" | sudo tee --append /etc/fstab
sudo mkdir /tmp/foobar /tmp/back/foobar
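If the value could ever contain spaces or other characters that are special to the shell, a safer variant (a sketch of the same idea, not part of the original answer) is to pass it to bash as a positional parameter instead of pasting it into the script text:
import subprocess

context = {'nickname': 'foobar'}

# "$1" is expanded by bash itself, so the value never needs to be escaped for the shell.
bash_code = '''
echo "/common_home/$1 /tmp/back/$1 none bind 0 0" | sudo tee --append /etc/fstab
sudo mkdir "/tmp/$1" "/tmp/back/$1"
'''
# The "_" fills in $0; the nickname becomes $1 inside the script.
subprocess.run(['bash', '-c', bash_code, '_', context['nickname']],
               stdout=subprocess.PIPE, stderr=subprocess.PIPE)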

Related

bash command won't run in python3

I made a python3 script and I need to run a bash command to make it work. I have tried os.system and subprocess, but neither of them fully works to run the whole command, yet when I run the command by itself in the terminal it works perfectly. What am I doing wrong?
os.system("fswebcam -r 640x480 --jpeg 85 -D 1 picture.jpg &> /dev/null")
os.system("echo -e "From: abc#gmail.com\nTo: abc1#gmail.com\nSubject: package for ryan\n\n"package for ryan|uuenview -a -bo picture.jpg|sendmail -t")
or
subprocess.run("fswebcam -r 640x480 --jpeg 85 -D 1 picture.jpg &> /dev/null")
subprocess.run("echo -e "From: abc#gmail.com\nTo: abc1#gmail.com\nSubject: package for ryan\n\n"package for ryan|uuenview -a -bo picture.jpg|sendmail -t")
This is supposed to take a picture and email it to me. With os.system it gives the error "the recipient has not been specified" (even though it works perfectly in the terminal by itself), and with subprocess it doesn't run anything.
Best Practice: Completely Replacing the Shell with Python
The best approach is to not use a shell at all.
subprocess.run([
    'fswebcam',
    '-r', '640x480',
    '--jpeg', '85',
    '-D', '1',
    'picture.jpg',
], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
Doing this with a pipeline is more complicated; see https://docs.python.org/3/library/subprocess.html#replacing-shell-pipeline, and many duplicates already on this site.
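For reference, here is a rough, untested sketch of that pipeline recipe applied to the uuenview/sendmail part of this command (the addresses are the poster's placeholders):
import subprocess

# Mirror the original  echo ... | uuenview -a -bo picture.jpg | sendmail -t  pipeline
# without a shell: write the message text to uuenview's stdin and feed its output to sendmail.
message = (b'From: abc#gmail.com\n'
           b'To: abc1#gmail.com\n'
           b'Subject: package for ryan\n'
           b'\n'
           b'package for ryan\n')

p1 = subprocess.Popen(['uuenview', '-a', '-bo', 'picture.jpg'],
                      stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p2 = subprocess.Popen(['sendmail', '-t'], stdin=p1.stdout)
p1.stdout.close()       # let p1 receive SIGPIPE if sendmail exits early
p1.stdin.write(message)
p1.stdin.close()
p1.wait()
p2.wait()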
Second Choice: Using sh-compatible syntax
echo is poorly defined by the POSIX sh standard (the standard document itself advises against using it, and also fully disallows -e), so the reliable thing to do is to use printf instead.
Passing the text to be sent as a literal command-line argument ($1) gets us out of the business of figuring out how to escape it for the shell. (The preceding '_' is to fill in $0).
subprocess.run("fswebcam -r 640x480 --jpeg 85 -D 1 picture.jpg >/dev/null 2>&1",
shell=True)
string_to_send = '''From: abc#gmail.com
To: abc1#gmail.com
Subject: package for ryan

package for ryan
'''
p = subprocess.run(
[r'''printf '%s\n' "$1" | uuenview -a -bo picture.jpg | sendmail -t''',
"_", string_to_send],
shell=True)

How to run docker commands from python?

I want to run a set of docker commands from python.
I tried creating a script like the one below and running it from Python using the paramiko ssh_client to connect to the machine where docker is running:
#!/bin/bash
# Get container ID
container_id="$(docker ps | grep hello | awk '{print $1}')"
docker exec -it $container_id sh -c "cd /var/opt/bin/ && echo $1 &&
echo $PWD && ./test.sh -q $1"
But docker exec ... never gets executed.
So I tried to run the Python script below directly on the machine where docker is running:
import subprocess
docker_run = "docker exec 7f34a9c1b78f /bin/bash -c \"cd
/var/opt/bin/ && ls -a\"".split()
subprocess.call(docker_run, shell=True)
I get a message: "Usage: docker COMMAND..."
But I get the expected results if I run the command
docker exec 7f34a9c1b78f /bin/bash -c "cd /var/opt/bin/ && ls -a"
directly on the machine.
How to run multiple docker commands from the python script? Thanks!
You have a mistake in your call to subprocess.call. subprocess.call expects a command with a series of parameters. You've given it a list of parameter pieces.
This code:
docker_run = "docker exec 7f34a9c1b78f /bin/bash -c \"cd
/var/opt/bin/ && ls -a\"".split()
subprocess.call(docker_run, shell=True)
Runs this:
subprocess.call([
'docker', 'exec', '7f34a9c1b78f', '/bin/bash', '-c',
'"cd', '/var/opt/bin/', '&&', 'ls', '-a"'
], shell=True)
Instead, I believe you want:
subprocess.call([
'docker', 'exec', '7f34a9c1b78f', '/bin/bash', '-c',
'"cd /var/opt/bin/ && ls -a"' # Notice how this is only one argument.
], shell=True)
You might need to tweak that second call. I suspect you don't need the quotes ('cd /var/opt/bin/ && ls -a' might work instead of '"cd /var/opt/bin/ && ls -a"'), but I haven't tested it.
The following are a few methods that worked:
Remove double quotes:
subprocess.call([
'docker', 'exec', '7f34a9c1b78f', '/bin/bash', '-c',
'cd /opt/teradata/tdqgm/bin/ && ./support-archive.sh -q 6b171e7a-7071-4975-a3ac-000000000241'
])
If you are not sure how the command should be split up to pass as arguments to a subprocess method, the shlex module can help:
https://docs.python.org/2.7/library/shlex.html#shlex.split
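For example, a quick sketch using the command from this question:
import shlex
import subprocess

# shlex.split keeps the quoted -c payload together as a single argument.
cmd = shlex.split('docker exec 7f34a9c1b78f /bin/bash -c "cd /var/opt/bin/ && ls -a"')
# cmd == ['docker', 'exec', '7f34a9c1b78f', '/bin/bash', '-c', 'cd /var/opt/bin/ && ls -a']
subprocess.call(cmd)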

python3 - subprocess with sudo to >> append to /etc/hosts

I've been wrestling with solutions from "How do I use sudo to redirect output to a location I don't have permission to write to?" and "append line to /etc/hosts file with shell script" with no luck.
I want to "append 10.10.10.10 puppetmaster" at the end of /etc/hosts. (Oracle/Red-Hat linux).
Been trying variations of:
subprocess.call("sudo -s", shell=True)
subprocess.call('sudo sh -c" "10.10.10.10 puppetmaster" >> /etc/hosts"', shell=True)
subprocess.call(" sed -i '10.10.10.10 puppetmaster' /etc/hosts", shell=True)
But the /etc/hosts file remains unchanged.
Can someone please point out what I'm doing wrong?
Simply use dd:
subprocess.Popen(['sudo', 'dd', 'if=/dev/stdin',
                  'of=/etc/hosts', 'conv=notrunc', 'oflag=append'],
                 stdin=subprocess.PIPE).communicate(b"10.10.10.10 puppetmaster\n")
You can do it in python quite easily once you run the script with sudo:
with open("/etc/hosts","a") as f:
f.write('10.10.10.10 puppetmaster\n')
opening with a will append.
The problem you are facing lies in the scope of sudo.
The code you are trying calls sudo with the arguments sh and -c" "10.10.10.10 puppetmaster". The redirection with the >> operator, however, is performed by the surrounding shell, with its own (non-root) permissions.
To achieve the effect you want, try starting a shell using sudo which then is given the command:
sudo bash -c 'sh -c" "10.10.10.10 puppetmaster" >> /etc/hosts"'
This will do the trick because the bash you started with sudo has superuser permissions and thus will not fail when it tries to perform the output redirection with >>.
To do this from within Python, use this:
subprocess.call("""sudo bash -c 'sh -c" "10.10.10.10 puppetmaster" >> /etc/hosts"'""", shell=True)
But of course, if you run your Python script with superuser permissions (start it with sudo) already, all this isn't necessary and the original code will work (without the additional sudo in the call):
subprocess.call('sh -c" "10.10.10.10 puppetmaster" >> /etc/hosts"', shell=True)
If you weren't escalating privileges for the entire script, I'd recommend the following:
p = subprocess.Popen(['sudo', 'tee', '-a', '/etc/hosts'],
stdin=subprocess.PIPE, stdout=subprocess.DEVNULL)
p.stdin.write(b'10.10.10.10 puppetmaster\n')
p.stdin.close()
p.wait()
Then you can write arbitrary content to the process's stdin (p.stdin).
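On Python 3.5+, a shorter equivalent sketch of the same tee approach uses subprocess.run with the input argument:
import subprocess

# run() writes the input bytes to tee's stdin and waits for it to finish.
subprocess.run(['sudo', 'tee', '-a', '/etc/hosts'],
               input=b'10.10.10.10 puppetmaster\n',
               stdout=subprocess.DEVNULL, check=True)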

Subprocess.call for cmd

I have the following command that works in the shell:
$ pv itunes20140910.tbz | sudo tar xpj -C /tmp
However, when I try and do it in python, it doesn't work:
>>> import subprocess
>>> import shlex
>>> cmd=shlex.split('pv itunes20140910.tbz | sudo tar xpj -C /tmp')
>>> subprocess.call(cmd)
pv: invalid option -- 'C'
Try `pv --help' for more information.
1
What am I doing wrong here, and what would be the correct command to run in python?
The above answers didn't have the net effect of what I was looking for (the progress bar), though the command would run without error. Here is what worked for me:
>>> import shlex, subprocess
>>> p1 = subprocess.Popen(shlex.split('pv /tmp/itunes20140910.tbz'), stdout=subprocess.PIPE)  # set up the pv command and direct its output to a pipe
>>> subprocess.Popen(shlex.split('sudo tar xpj -C /tmp'), stdin=p1.stdout) #send p1's output to p2
Use the shell=True argument; otherwise the | is not interpreted by a shell.
subprocess.call('pv itunes20140910.tbz | sudo tar xpj -C /tmp', shell=True)

How to do this Python subprocess call without using shell=True?

For example, in /tmp I have files ending in .txt, .doc, and .jpg that I'd like to delete in one step using shred and subprocess.
The following does the job:
subprocess.call('bash -c "shred -n 5 -uz /tmp/{*.txt,*.pdf,*.doc}"', shell=True)
How would I do this command without using shell=True? I've tried the following:
subprocess.call(['bash', '-c', '"shred -n 10 -uz /tmp/{*.txt,*.pdf,*.doc}"'])
subprocess.call(['bash', '-c', 'shred', '-n 10', '-uz', '/tmp/{*.txt,*.pdf,*.doc}'])
Any suggestions?
I believe that other guy is spot on (haven't tried it myself though). However, if you ever find yourself having similar issues again, shlex.split(s) might be helpful. It takes the string s and splits it "using shell-like syntax".
In [2]: s = 'bash -c "shred -n 5 -uz /tmp/{*.txt,*.pdf,*.doc}"'
In [3]: shlex.split(s)
Out[3]: ['bash', '-c', 'shred -n 5 -uz /tmp/{*.txt,*.pdf,*.doc}']
subprocess.call(['bash', '-c', 'shred -n 10 -uz /tmp/{*.txt,*.pdf,*.doc}'])
You can tell how a command is expanded and split up with:
$ printf "Argument: %s\n" bash -c "shred -n 5 -uz /tmp/{*.txt,*.pdf,*.doc}"
Argument: bash
Argument: -c
Argument: shred -n 5 -uz /tmp/{*.txt,*.pdf,*.doc}
In the more general case (but overkill here), if you're ever in doubt about what gets executed with which parameters, you can use strace:
$ cat script
import subprocess
subprocess.call('bash -c "shred -n 5 -uz /tmp/{*.txt,*.pdf,*.doc}"', shell=True)
$ strace -s 1000 -fe execve python script
...
execve("/bin/bash", ["bash", "-c", "shred -n 5 -uz /tmp/{*.txt,*.pdf,*.doc}"], [/* 49 vars */]) = 0
...
$
If the command is coming from a trusted source, e.g., it is hardcoded, then there is nothing wrong with using shell=True:
#!/usr/bin/env python
from subprocess import check_call
check_call("shred -n 10 -uz /tmp/{*.txt,*.pdf,*.doc}",
shell=True, executable='/bin/bash')
/bin/bash is used to support the {} brace expansion inside the command; this command does not run /bin/sh.
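For completeness, a shell-free variant (a sketch that is not part of any answer above) lets Python's glob module do the expansion, so neither bash nor sh is involved at all:
import glob
import subprocess

# Expand the patterns in Python instead of relying on bash brace/glob expansion.
files = [f for pattern in ('/tmp/*.txt', '/tmp/*.pdf', '/tmp/*.doc')
         for f in glob.glob(pattern)]
if files:  # shred fails with no file operands, so only call it if something matched
    subprocess.check_call(['shred', '-n', '10', '-uz'] + files)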
