Running a subprocess command with two string inputs - Python

I'm trying to validate a certificate against a CA bundle file. The original Bash command takes two file arguments, like this:
openssl verify -CAfile ca-ssl.ca cert-ssl.crt
I'm trying to figure out how to run the above command with Python's subprocess module while having ca-ssl.ca and cert-ssl.crt as variable strings (as opposed to files).
If I ran the command with variables (instead of files) in bash, then this would work:
ca_value=$(<ca-ssl.ca)
cert_value=$(<cert-ssl.crt)
openssl verify -CAfile <(echo "$ca_value") <(echo "$cert_value")
However, I'm struggling to figure out how to do the above with Python, preferably without needing to use shell=True. I have tried the following, but it doesn't work and instead prints openssl's 'help' output:
import subprocess

certificate = ''' cert string '''
ca_bundle = ''' ca bundle string '''

def ca_valid(cert, ca):
    ca_validation = subprocess.Popen(['openssl', 'verify', '-CAfile', ca, cert],
                                     stdin=subprocess.PIPE,
                                     stdout=subprocess.PIPE,
                                     bufsize=1)
    ca_validation_output = ca_validation.communicate()[0].strip()
    ca_validation.wait()

ca_valid(certificate, ca_bundle)
Any guidance/clues on what I need to look further into would be appreciated.

Bash process substitution <(...) ultimately supplies a file path as an argument to openssl.
You will need to write a helper function to recreate this functionality, since Python doesn't have any operator that lets you inline-pipe data into a file and present its path:
import subprocess

def validate_ca(cert, ca):
    with filearg(ca) as ca_path, filearg(cert) as cert_path:
        ca_validation = subprocess.Popen(
            ['openssl', 'verify', '-CAfile', ca_path, cert_path],
            stdout=subprocess.PIPE,
        )
        return ca_validation.communicate()[0].strip()
Where filearg is a context manager which creates a named temporary file with your desired text, closes it, hands the path to you, and then removes it after the with scope ends.
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def filearg(txt):
    # write the text to a named temporary file and close it
    with tempfile.NamedTemporaryFile('w', delete=False) as fh:
        fh.write(txt)
    try:
        yield fh.name       # hand the path to the caller
    finally:
        os.remove(fh.name)  # clean up once the with block ends
Anything accessing this temporary file (like the subprocess) needs to work inside the context manager.
By the way, the Popen.wait() call is redundant, since Popen.communicate() already waits for the process to terminate.
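For completeness, a minimal usage sketch (assuming the certificate and ca_bundle strings from the question hold valid PEM data):
# hypothetical usage of validate_ca/filearg from above; `certificate` and
# `ca_bundle` are the PEM strings defined in the question
result = validate_ca(certificate, ca_bundle)
print(result)  # something like b'/tmp/tmpab12cd34: OK' on success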

If you want to use process substitution, you will have to use shell=True. This is unavoidable. The <(...) process substitution syntax is bash syntax; you simply must call bash into service to parse and execute such code.
Additionally, you have to ensure that bash is invoked, as opposed to sh. On some systems, sh may refer to an old Bourne shell (as opposed to the Bourne-again shell, bash), in which case process substitution will definitely not work. On other systems, sh will invoke bash, but process substitution will still not work, because when invoked under the name sh, the bash shell enters something called POSIX mode. Here are some excerpts from the bash man page:
...
INVOCATION
... When invoked as sh, bash enters posix mode after the startup files are read. ....
...
SEE ALSO
...
http://tiswww.case.edu/~chet/bash/POSIX -- a description of posix mode
...
From the above web link:
Process substitution is not available.
/bin/sh seems to be the default shell in Python, whether you're using os.system() or subprocess.Popen(). So you'll have to specify the argument executable='bash', or executable='/bin/bash' if you want to specify the full path.
This is working for me:
subprocess.Popen('printf \'argument: "%s"\\n\' verify -CAfile <(echo ca_value) <(echo cert_value);',
                 executable='bash', shell=True).wait()
## argument: "verify"
## argument: "-CAfile"
## argument: "/dev/fd/63"
## argument: "/dev/fd/62"
## 0
Here's how you can actually embed the string values from variables:
bashEsc = lambda s: "'" + s.replace("'", "'\\''") + "'"
ca_value = 'x'
cert_value = 'y'
cmd = 'printf \'argument: "%%s"\\n\' verify -CAfile <(echo %s) <(echo %s);' % (bashEsc(ca_value), bashEsc(cert_value))
subprocess.Popen(cmd, executable='bash', shell=True).wait()
## argument: "verify"
## argument: "-CAfile"
## argument: "/dev/fd/63"
## argument: "/dev/fd/62"
## 0
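Applying the same pattern to the original openssl command would look roughly like this (a sketch, not tested against real certificates; ca_value and cert_value are assumed to hold the full PEM text, and bashEsc is the helper defined above):
import subprocess

# hypothetical application to the question's command
cmd = 'openssl verify -CAfile <(echo %s) <(echo %s)' % (bashEsc(ca_value), bashEsc(cert_value))
subprocess.Popen(cmd, executable='bash', shell=True).wait()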

Related

Using ssh and sed within a python script with os.system properly

I am trying to run an ssh command within a Python script, using os.system, to add a 0 at the end of a fully matched string on a remote server, using ssh and sed.
I have a file called nodelist on a remote server; it's a list that looks like this:
test-node-1
test-node-2
...
test-node-11
test-node-12
test-node-13
...
test-node-21
I want to use sed to make the following modification: search for test-node-1, and when a full match is found, add a 0 at the end. The file must end up looking like this:
test-node-1 0
test-node-2
...
test-node-11
test-node-12
test-node-13
...
test-node-21
However, when I run the first command:
hostname = 'test-node-1'
function = 'nodelist'
os.system(f"ssh -i ~/.ssh/my-ssh-key username#serverlocation \"sed -i '/{hostname}/s/$/ 0/' ~/{function}.txt\"")
the result ends up like this:
test-node-1 0
test-node-2
...
test-node-11 0
test-node-12 0
test-node-13 0
...
test-node-21
I tried adding a \b to the command, like this:
os.system(f"ssh -i ~/.ssh/my-ssh-key username@serverlocation \"sed -i '/\b{hostname}\b/s/$/ 0/' ~/{function}.txt\"")
The command doesn't work at all.
I have to manually type in the node name instead of using a variable, like so:
os.system(f"ssh -i ~/.ssh/my-ssh-key username@serverlocation \"sed -i '/\btest-node-1\b/s/$/ 0/' ~/{function}.txt\"")
to make my command work.
What's wrong with my command, why can't I do what I want it to do?
This code has serious security problems; fixing them requires reengineering it from scratch. Let's do that here:
#!/usr/bin/env python3
import os.path
import shlex  # note, quote is only here in Python 3.x; in 2.x it was in the pipes module
import subprocess
import sys

# can set these from a loop if you choose, of course
username = "whoever"
serverlocation = "whereever"
hostname = 'test-node-1'
function = 'somename'

desired_cmd = ['sed', '-i',
               f'/\\b{hostname}\\b/s/$/ 0/',
               f'{function}.txt']
desired_cmd_str = ' '.join(shlex.quote(word) for word in desired_cmd)
print(f"Remote command: {desired_cmd_str}", file=sys.stderr)

# could just pass the below direct to subprocess.run, but let's log what we're doing:
ssh_cmd = ['ssh', '-i', os.path.expanduser('~/.ssh/my-ssh-key'),
           f"{username}@{serverlocation}", desired_cmd_str]
ssh_cmd_str = ' '.join(shlex.quote(word) for word in ssh_cmd)
print(f"Local command: {ssh_cmd_str}", file=sys.stderr)  # log equivalent shell command

subprocess.run(ssh_cmd)  # but locally, run without a shell
If you run this (except for the subprocess.run at the end, which would require a real SSH key, hostname, etc.), the output looks like:
Remote command: sed -i '/\btest-node-1\b/s/$/ 0/' somename.txt
Local command: ssh -i /home/yourname/.ssh/my-ssh-key whoever@whereever 'sed -i '"'"'/\btest-node-1\b/s/$/ 0/'"'"' somename.txt'
That's correct/desired output; the funny '"'"' idiom is how one safely injects a literal single quote inside a single-quoted string in a POSIX-compliant shell.
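Incidentally, shlex.quote produces exactly that idiom whenever it has to quote a string containing a single quote (a quick illustration):
import shlex

# quote() wraps the string in single quotes and splices in the
# '"'"' idiom for each embedded single quote
print(shlex.quote("it's"))  # prints: 'it'"'"'s'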
What's different? Lots:
We're generating the commands we want to run as arrays, and letting Python do the work of converting those arrays to strings where necessary. This avoids shell injection attacks, a very common class of security vulnerability.
Because we're generating lists ourselves, we can change how we quote each one: We can use f-strings when it's appropriate to do so, raw strings when it's appropriate, etc.
We aren't passing ~ to the remote server: it's redundant and unnecessary, because ~ is the default place for a SSH session to start, and the security precautions we're using (to prevent values from being parsed as code by a shell) keep it from having any effect anyway. The replacement of ~ with the value of HOME is done not by sed itself but by the shell that invokes it; and because we aren't invoking any local shell at all, we also needed os.path.expanduser to make the ~ in ~/.ssh/my-ssh-key be honored.
Because we aren't using a raw string, we need to double the backslashes in \b to ensure that they're treated as literal rather than syntactic by Python (see the sketch just after this list).
Critically, we're never passing data in a context where it could be parsed as code by any shell, either local or remote.
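To make the backslash point concrete, a tiny sketch:
# '\\b' in a normal string and r'\b' in a raw string are the same two
# characters, while a bare '\b' is a single backspace character (0x08)
print(len('\\b'), len(r'\b'), len('\b'))  # prints: 2 2 1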

Execute bash-command with "at" (<<<) via python: syntax error, last token seen

I'm using a radio sender on my RPi to control some light-devices at home. I'm trying to implement a time control and had successfully used the program "at" in the past.
#!/usr/bin/python
import subprocess as sp
##### some code #####
sp.call(['at', varTime, '<<<', '\"sudo', './codesend', '111111\"'])
When I execute the program, I receive this error message:
syntax error. Last token seen: <
Garbled time
This code snippet works fine with every command by itself (as long as every parameter is a string).
It's necessary to call "at" in this way: at 18:25 <<< "sudo ./codesend 111111" to hold the command in the queue (viewable in "atq"),
because sudo ./codesend 111111 | at 18:25 just executes the command directly and writes down the execution in "/var/mail/user".
My question is: how can I avoid the syntax error?
I'm using a lot of other packages in this program, so I have to stay with Python.
I hope someone has a solution for this problem or can help me find my mistake.
Many thanks in advance.
Preface: Shared Code
Consider the following context to be part of both branches of this answer.
import subprocess as sp

try:
    from shlex import quote  # Python 3
except ImportError:
    from pipes import quote  # Python 2

# given the command you want to schedule, as an array...
cmd = ['sudo', './codesend', '111111']
# ...generate a safely shell-escaped string.
cmd_str = ' '.join(quote(x) for x in cmd)
Solution A: Feed Stdin In Python
<<< is shell syntax. It has no meaning to at, and it's completely normal and expected for at to reject it if given as a literal argument.
You don't need to invoke a shell, though -- you can do the same thing directly from native Python:
p = sp.Popen(['at', vartime], stdin=sp.PIPE)
p.communicate(cmd_str.encode())  # stdin expects bytes on Python 3
Solution B: Explicitly Invoke A Shell
Moreover, <<< isn't /bin/sh syntax -- it's an extension honored in bash, ksh, and others; so you can't reliably get it just by adding the shell=True flag (which uses /bin/sh and so guarantees only POSIX-baseline features). If you want it, you need to explicitly invoke a shell with the feature, like so:
bash_script = '''
at "$1" <<<"$2"
'''
sp.call(['bash', '-c', bash_script,
         '_',       # this is $0 for that script
         vartime,   # this is its $1
         cmd_str,   # this is its $2
        ])
In either case, note that we're using shlex.quote() or pipes.quote() (as appropriate for our Python release) when generating a shell command from an argument list; this is critical to avoid creating shell injection vulnerabilities in our software.

Bash: Tokenize string using shell rules without eval'ing it?

I'm writing a wrapper script. The original program's arguments are in a separate file, args. The script needs to split contents of args using shell parameter rules and then run the program. A partial solution (set + eval) was offered in Splitting a string to tokens according to shell parameter rules without eval:
#!/usr/bin/env bash
STDOUT="$1"
STDERR="$2"
( set -f; eval "set -- $(cat args)"; exec run_in_container "$@" >"$STDOUT" 2>"$STDERR" )
but in my case args is user-generated. One can easily imagine:
args: echo "Hello, 'world'! $(rm -rf /)" (not cool, but harmless: commands are run in, e.g., a Docker container)
args: bash -c "$JAVA_HOME/<...> > <...> && <...>" (harmful: $JAVA_HOME was intended to be the container's value of the environment variable JAVA_HOME, but will actually be substituted earlier, when the command is eval'd in the wrapper script's subshell.)
I tried Python, and this works:
#!/usr/bin/env python
import shlex, subprocess, sys

with open('args', 'r') as argsfile:
    args = argsfile.read()

with open(sys.argv[1], 'w') as outfile, open(sys.argv[2], 'w') as errfile:
    exit(subprocess.call(["run_in_container"] + shlex.split(args),
                         stdout=outfile, stderr=errfile))
Is there a way to do shlex in bash: tokenize the string using shell parameter rules, but don't substitute any variables' values, don't execute $(...) etc.?
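For what it's worth, the Python version above is safe because shlex.split tokenizes using shell quoting rules but performs no expansions at all, so the dangerous inputs above stay inert strings. A quick illustration:
import shlex

# the command substitution survives only as literal text inside one token
print(shlex.split('''echo "Hello, 'world'! $(rm -rf /)"'''))
# prints: ['echo', "Hello, 'world'! $(rm -rf /)"]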

Python rsync error in reading remote root-level files

I'm trying to set up a cron job to rsync remote files (including root-level files) onto my local server. If I run the command in a shell, it works. But if I run it from Python, I get a strange 'command not found' error.
This works if I run it in a shell:
rsync -ave ssh --rsync-path='sudo rsync' --delete root@192.168.1.100:/tmp/test2 ./test
But this Python script doesn't:
#!/usr/bin/python
from subprocess import call
....
for src_dir in backup_list:
    call(["rsync", "-ave", "ssh", "--rsync-path='sudo rsync'", "--delete", src_host + src_dir, dst_dir])
It fails with:
local server:$ backup.py
bash: sudo rsync: command not found
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: remote command not found (code 127) at io.c(226) [Receiver=3.1.0]
...
It is most likely a spacing error or something small; the way I debug commands is to print them out. os.system is a great alternative that's easier, although subprocess is better. I am not around my computer to test it, but you can either set up your subprocess like that, or use this example. This assumes you're on Linux or Mac.
import os

cmd = ('rsync -ave ssh --delete root@' + str(src_host) + ':' + str(src_directory)
       + ' ' + str(dst_dir))  # variable you can call anytime
os.system(cmd)  # actually performs the command
print(cmd)      # how to test and make sure
Quotes around an argument with spaces, like you have in "--rsync-path='sudo rsync'", are needed when the shell splits up a long string into arguments, to avoid treating rsync as a separate argument. In your call(), you're providing the individual arguments yourself, so that splitting of a string into arguments is not performed. With your code as-is, the quotes end up as part of the argument passed to rsync. Just drop them. Here's a working example of the list passed to call() for a very similar rsync invocation:
['rsync',
'-arvz',
'--delete',
'-e',
'ssh',
'--rsync-path=sudo rsync',
'192.168.0.17:/remote/directory/',
'/local/directory/']
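Applied to the loop from the question, that advice would look something like this (a sketch; backup_list, src_host, and dst_dir are assumed to be defined as in the question):
from subprocess import call

for src_dir in backup_list:
    # note: no extra quotes around 'sudo rsync'; the list form passes
    # the argument through verbatim, spaces and all
    call(["rsync", "-ave", "ssh", "--rsync-path=sudo rsync", "--delete",
          src_host + src_dir, dst_dir])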
I have been facing the same issue.
This piece of code works for me:
Join the command while passing it to call or Popen, and add shell=True.
from subprocess import call

for src_dir in backup_list:
    call(" ".join(["rsync", "-ave", "ssh", "--rsync-path='sudo rsync'", "--delete",
                   src_host + src_dir, dst_dir]),
         shell=True)

Why does subprocess.Popen() with shell=True work differently on Linux vs Windows?

When using subprocess.Popen(args, shell=True) to run "gcc --version" (just as an example), on Windows we get this:
>>> from subprocess import Popen
>>> Popen(['gcc', '--version'], shell=True)
gcc (GCC) 3.4.5 (mingw-vista special r3) ...
So it's nicely printing out the version as I expect. But on Linux we get this:
>>> from subprocess import Popen
>>> Popen(['gcc', '--version'], shell=True)
gcc: no input files
Because gcc hasn't received the --version option.
The docs don't specify exactly what should happen to the args under Windows, but it does say, on Unix, "If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional shell arguments." IMHO the Windows way is better, because it allows you to treat Popen(arglist) calls the same as Popen(arglist, shell=True) ones.
Why the difference between Windows and Linux here?
Actually on Windows, it does use cmd.exe when shell=True - it prepends cmd.exe /c (it actually looks up the COMSPEC environment variable but defaults to cmd.exe if not present) to the shell arguments. (On Windows 95/98 it uses the intermediate w9xpopen program to actually launch the command).
So the strange implementation is actually the UNIX one, which does the following (where each space separates a different argument):
/bin/sh -c gcc --version
It looks like the correct implementation (at least on Linux) would be:
/bin/sh -c "gcc --version" gcc --version
Since this would set the command string from the quoted parameters, and pass the other parameters successfully.
From the sh man page section for -c:
Read commands from the command_string operand instead of from the standard input. Special parameter 0 will be set from the command_name operand and the positional parameters ($1, $2, etc.) set from the remaining argument operands.
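You can watch those positional parameters being filled in from Python on a POSIX system (a small demonstration):
import subprocess

# with a list and shell=True, only the first item becomes the -c command
# string; the remaining items become the shell's $0, $1, ...
subprocess.Popen(['echo $0 $1', 'first', 'second'], shell=True).wait()
# prints: first second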
This patch seems to fairly simply do the trick:
--- subprocess.py.orig  2009-04-19 04:43:42.000000000 +0200
+++ subprocess.py       2009-08-10 13:08:48.000000000 +0200
@@ -990,7 +990,7 @@
             args = list(args)
         if shell:
-            args = ["/bin/sh", "-c"] + args
+            args = ["/bin/sh", "-c"] + [" ".join(args)] + args
         if executable is None:
             executable = args[0]
From the subprocess.py source:
On UNIX, with shell=True: If args is a string, it specifies the
command string to execute through the shell. If args is a sequence,
the first item specifies the command string, and any additional items
will be treated as additional shell arguments.
On Windows: the Popen class uses CreateProcess() to execute the child
program, which operates on strings. If args is a sequence, it will be
converted to a string using the list2cmdline method. Please note that
not all MS Windows applications interpret the command line the same
way: The list2cmdline is designed for applications using the same
rules as the MS C runtime.
That doesn't answer why, just clarifies that you are seeing the expected behavior.
The "why" is probably that on UNIX-like systems, command arguments are actually passed through to applications (using the exec* family of calls) as an array of strings. In other words, the calling process decides what goes into EACH command line argument. Whereas when you tell it to use a shell, the calling process actually only gets the chance to pass a single command line argument to the shell to execute: The entire command line that you want executed, executable name and arguments, as a single string.
But on Windows, the entire command line (according to the above documentation) is passed as a single string to the child process. If you look at the CreateProcess API documentation, you will notice that it expects all of the command line arguments to be concatenated together into a big string (hence the call to list2cmdline).
Plus there is the fact that on UNIX-like systems there actually is a shell that can do useful things, so I suspect that the other reason for the difference is that on Windows, shell=True does nothing, which is why it is working the way you are seeing. The only way to make the two systems act identically would be for it to simply drop all of the command line arguments when shell=True on Windows.
The reason for the UNIX behaviour of shell=True is to do with quoting. When we write a shell command, it will be split at spaces, so we have to quote some arguments:
cp "My File" "New Location"
This leads to problems when our arguments contain quotes, which requires escaping:
grep -r "\"hello\"" .
Sometimes we can get awful situations where \ must be escaped too!
Of course, the real problem is that we're trying to use one string to specify multiple strings. When calling system commands, most programming languages avoid this by allowing us to send multiple strings in the first place, hence:
Popen(['cp', 'My File', 'New Location'])
Popen(['grep', '-r', '"hello"'])
Sometimes it can be nice to run "raw" shell commands; for example, if we're copy-pasting something from a shell script or a Web site, and we don't want to convert all of the horrible escaping manually. That's why the shell=True option exists:
Popen(['cp "My File" "New Location"'], shell=True)
Popen(['grep -r "\"hello\"" .'], shell=True)
I'm not familiar with Windows so I don't know how or why it behaves differently.
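As a closing aside, if you have a raw shell string and want the list form instead, shlex.split performs that conversion (a quick sketch):
import shlex

# tokenizes using shell quoting rules, without executing anything
print(shlex.split('cp "My File" "New Location"'))
# prints: ['cp', 'My File', 'New Location']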
