I am trying to run an ssh command from a Python script using os.system, so that sed appends a 0 to a fully matched string in a file on a remote server.
I have a file called nodelist on the remote server that looks like this.
test-node-1
test-node-2
...
test-node-11
test-node-12
test-node-13
...
test-node-21
I want to use sed to search for test-node-1 and, when a full match is found, append a 0 at the end of that line. The file must end up looking like this.
test-node-1 0
test-node-2
...
test-node-11
test-node-12
test-node-13
...
test-node-21
However, when I run the first command,
hostname = 'test-node-1'
function = 'nodelist'
os.system(f"ssh -i ~/.ssh/my-ssh-key username#serverlocation \"sed -i '/{hostname}/s/$/ 0/' ~/{function}.txt\"")
The result looks like this,
test-node-1 0
test-node-2
...
test-node-11 0
test-node-12 0
test-node-13 0
...
test-node-21
I tried adding a \b to the command like this,
os.system(f"ssh -i ~/.ssh/my-ssh-key username#serverlocation \"sed -i '/\b{hostname}\b/s/$/ 0/' ~/{function}.txt\"")
The command doesn't work at all.
I have to manually type in the node name instead of using a variable like so,
os.system(f"ssh -i ~/.ssh/my-ssh-key username#serverlocation \"sed -i '/\btest-node-1\b/s/$/ 0/' ~/{function}.txt\"")
to make my command work.
What's wrong with my command? Why doesn't it do what I want?
This code has serious security problems; fixing them requires reengineering it from scratch. Let's do that here:
#!/usr/bin/env python3
import os.path
import shlex  # note: quote is only here in Python 3.x; in 2.x it was in the pipes module
import subprocess
import sys

# can set these from a loop if you choose, of course
username = "whoever"
serverlocation = "whereever"
hostname = 'test-node-1'
function = 'somename'

desired_cmd = ['sed', '-i',
               f'/\\b{hostname}\\b/s/$/ 0/',
               f'{function}.txt']
desired_cmd_str = ' '.join(shlex.quote(word) for word in desired_cmd)
print(f"Remote command: {desired_cmd_str}", file=sys.stderr)

# could just pass the below direct to subprocess.run, but let's log what we're doing:
ssh_cmd = ['ssh', '-i', os.path.expanduser('~/.ssh/my-ssh-key'),
           f"{username}@{serverlocation}", desired_cmd_str]
ssh_cmd_str = ' '.join(shlex.quote(word) for word in ssh_cmd)
print(f"Local command: {ssh_cmd_str}", file=sys.stderr)  # log the equivalent shell command
subprocess.run(ssh_cmd)  # but locally, run without a shell
If you run this (except for the subprocess.run at the end, which would require a real SSH key, hostname, etc.), the output looks like this:
Remote command: sed -i '/\btest-node-1\b/s/$/ 0/' somename.txt
Local command: ssh -i /home/yourname/.ssh/my-ssh-key whoever@whereever 'sed -i '"'"'/\btest-node-1\b/s/$/ 0/'"'"' somename.txt'
That's the correct, desired output; the funny-looking '"'"' idiom is how one safely embeds a literal single quote inside a single-quoted string in a POSIX-compliant shell.
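Incidentally, shlex.quote is what produced that idiom in the local command above; you can watch it do so directly:

>>> import shlex
>>> print(shlex.quote("don't"))
'don'"'"'t'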
What's different? Lots:
We're generating the commands we want to run as arrays, and letting Python do the work of converting those arrays to strings where necessary. This avoids shell injection attacks, a very common class of security vulnerability.
Because we're generating the lists ourselves, we can choose how to quote each element: f-strings where appropriate, raw strings where appropriate, and so on.
We aren't passing ~ to the remote server. It's redundant, because ~ is the default directory an SSH session starts in, and the security precautions we're using (preventing values from being parsed as code by a shell) would stop it from working anyway: the replacement of ~ with the value of HOME is done not by sed itself but by the shell that invokes it. Because we aren't invoking any local shell either, we also needed os.path.expanduser to make the ~ in ~/.ssh/my-ssh-key take effect.
Because we aren't using a raw string, we need to double the backslashes in \b so that Python treats them as literal characters rather than as an escape sequence (see the short demonstration after this list).
Critically, we're never passing data in a context where it could be parsed as code by any shell, either local or remote.
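Here's the promised demonstration of why the doubling matters: in a regular (non-raw) Python string, \b is an escape for a single backspace character (ASCII 0x08), so sed would never see a word-boundary pattern at all.

>>> len('\b'), len('\\b')
(1, 2)
>>> '\b'
'\x08'
>>> '\\b'
'\\b'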
I'm using a radio sender on my RPi to control some light-devices at home. I'm trying to implement a time control and had successfully used the program "at" in the past.
#!/usr/bin/python
import subprocess as sp
##### some code #####
sp.call(['at', varTime, '<<<', '\"sudo', './codesend', '111111\"'])
When I execute the program, I receive the error message:
syntax error. Last token seen: <
Garbled time
This code snippet works fine with every command by itself (as long as every parameter is a string).
It's necessary to call "at" in this way: at 18:25 <<< "sudo ./codesend 111111" to hold the command in the queue (viewable in "atq"),
because sudo ./codesend 111111 | at 18:25 just executes the command directly and writes down the execution in "/var/mail/user".
My question is: how can I avoid the syntax error?
I'm using a lot of other packages in this program, so I have to stay with Python.
I hope someone has a solution for this problem or can help me find my mistake.
Many thanks in advance.
Preface: Shared Code
Consider the following context to be part of both branches of this answer.
import subprocess as sp
try:
    from shlex import quote  # Python 3
except ImportError:
    from pipes import quote  # Python 2

# given the command you want to schedule, as an array...
cmd = ['sudo', './codesend', '111111']
# ...generate a safely shell-escaped string.
cmd_str = ' '.join(quote(x) for x in cmd)
Solution A: Feed Stdin In Python
<<< is shell syntax. It has no meaning to at, and it's completely normal and expected for at to reject it if given as a literal argument.
You don't need to invoke a shell, though -- you can do the same thing directly from native Python:
p = sp.Popen(['at', vartime], stdin=sp.PIPE)
p.communicate(cmd_str.encode())  # communicate() expects bytes on Python 3
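On Python 3.5 and newer, subprocess.run can do the same thing more compactly via its input parameter; a minimal equivalent sketch, reusing vartime and cmd_str from above:

sp.run(['at', vartime], input=cmd_str.encode(), check=True)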
Solution B: Explicitly Invoke A Shell
Moreover, <<< isn't /bin/sh syntax -- it's an extension honored in bash, ksh, and others; so you can't reliably get it just by adding the shell=True flag (which uses /bin/sh and so guarantees only POSIX-baseline features). If you want it, you need to explicitly invoke a shell with the feature, like so:
bash_script = '''
at "$1" <<<"$2"
'''
sp.call(['bash', '-c', bash_script,
         '_',       # this is $0 for that script
         vartime,   # this is its $1
         cmd_str,   # this is its $2
        ])
In either case, note that we're using shlex.quote() or pipes.quote() (as appropriate for our Python release) when generating a shell command from an argument list; this is critical to avoid creating shell injection vulnerabilities in our software.
I'm trying to validate a certificate with a CA bundle file. The original Bash command takes two file arguments, like this:
openssl verify -CAfile ca-ssl.ca cert-ssl.crt
I'm trying to figure out how to run the above command in python subprocess whilst having ca-ssl.ca and cert-ssl.crt as variable strings (as opposed to files).
If I ran the command with variables (instead of files) in bash, then this would work:
ca_value=$(<ca-ssl.ca)
cert_value=$(<cert-ssl.crt)
openssl verify -CAfile <(echo "$ca_value") <(echo "$cert_value")
However, I'm struggling to figure out how to do the above with Python, preferably without needing to use shell=True. I have tried the following, but it doesn't work; it just prints openssl's help output:
certificate = ''' cert string '''
ca_bundle = ''' ca bundle string '''

def ca_valid(cert, ca):
    ca_validation = subprocess.Popen(['openssl', 'verify', '-CAfile', ca, cert],
                                     stdin=subprocess.PIPE, stdout=subprocess.PIPE, bufsize=1)
    ca_validation_output = ca_validation.communicate()[0].strip()
    ca_validation.wait()

ca_valid(certificate, ca_bundle)
Any guidance/clues on what I need to look further into would be appreciated.
Bash process substitution <(...) ultimately supplies a file path as an argument to openssl.
You will need a helper function to recreate this functionality, since Python doesn't have any syntax for inline-piping data into a file and presenting its path:
import subprocess

def validate_ca(cert, ca):
    with filearg(ca) as ca_path, filearg(cert) as cert_path:
        ca_validation = subprocess.Popen(
            ['openssl', 'verify', '-CAfile', ca_path, cert_path],
            stdout=subprocess.PIPE,
        )
        return ca_validation.communicate()[0].strip()
Where filearg is a context manager which creates a named temporary file with your desired text, closes it, hands the path to you, and then removes it after the with scope ends.
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def filearg(txt):
    # write the text to a named temp file that survives closing...
    with tempfile.NamedTemporaryFile('w', delete=False) as fh:
        fh.write(txt)
    try:
        yield fh.name  # ...hand its path to the caller...
    finally:
        os.remove(fh.name)  # ...and clean it up when the with block ends
Anything accessing this temporary file (like the subprocess) needs to work inside the context manager.
By the way, Popen.wait() is redundant here, since Popen.communicate() already waits for the process to terminate.
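Tying it together, usage then matches the question's original call; e.g. with the certificate and ca_bundle strings defined as in the question:

result = validate_ca(certificate, ca_bundle)
print(result)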
If you want to use process substitution, you will have to use shell=True. This is unavoidable. The <(...) process substitution syntax is bash syntax; you simply must call bash into service to parse and execute such code.
Additionally, you have to ensure that bash is invoked, as opposed to sh. On some systems sh may refer to an old Bourne shell (as opposed to the Bourne-again shell bash) in which case process substitution will definitely not work. On some systems sh will invoke bash, but process substitution will still not work, because when invoked under the name sh the bash shell enters something called POSIX mode. Here are some excerpts from the bash man page:
...
INVOCATION
... When invoked as sh, bash enters posix mode after the startup files are read. ....
...
SEE ALSO
...
http://tiswww.case.edu/~chet/bash/POSIX -- a description of posix mode
...
From the above web link:
Process substitution is not available.
/bin/sh is the default shell Python uses when a shell is involved, whether via os.system() or subprocess.Popen() with shell=True. So you'll have to specify the argument executable='bash', or executable='/bin/bash' if you want to give the full path.
This is working for me:
subprocess.Popen(
    'printf \'argument: "%s"\\n\' verify -CAfile <(echo ca_value) <(echo cert_value)',
    executable='bash', shell=True).wait()
## argument: "verify"
## argument: "-CAfile"
## argument: "/dev/fd/63"
## argument: "/dev/fd/62"
## 0
Here's how you can actually embed the string values from variables:
bashEsc = lambda s: "'" + s.replace("'", "'\\''") + "'"
ca_value = 'x'
cert_value = 'y'
cmd = 'printf \'argument: "%%s"\\n\' verify -CAfile <(echo %s) <(echo %s)' % (bashEsc(ca_value), bashEsc(cert_value))
subprocess.Popen(cmd, executable='bash', shell=True).wait()
## argument: "verify"
## argument: "-CAfile"
## argument: "/dev/fd/63"
## argument: "/dev/fd/62"
## 0
I am trying to run a Gerrit cherry-pick query in Python:
query_to_run='git fetch https://gerritserver.com/projectname refs/changes/51/1151/1 ' + '&&' + ' git cherry-pick FETCH_HEAD'
I am getting this error:
fatal: Couldn't find remote ref &&
Unexpected end of command stream
My code works with other Gerrit queries but not this one. Is it the && that's causing the problem?
thanks
Pratibha
The && token has no meaning to Git or Gerrit but is interpreted by your shell. By default the subprocess module doesn't pass off commands to the shell but runs the process directly, so the string in query_to_run is sent as a single command. To force subprocess.Popen(), subprocess.check_call() or whatever you're using to pass the command to a shell, pass shell=True:
subprocess.check_call(query_to_run, shell=True)
However, the use of shell=True is discouraged and is unnecessary in this case. What && does is simply run one command and, if successful, run another command. It's basically equivalent to this sequence of Python statements:
subprocess.check_call(command1)
subprocess.check_call(command2)
Alternatively, if you prefer not to have exceptions thrown when either of the commands fails:
subprocess.call(command1) == 0 and subprocess.call(command2) == 0
In addition to this, I strongly recommend making a good habit out of passing lists of arguments to process execution functions instead of strings. Passing strings works fine a lot of the time, but when arguments contain spaces you suddenly need to think about quoting.
Putting everything together, this is what I think your code should look like:
try:
    subprocess.check_call(['git', 'fetch',
                           'https://gerritserver.com/projectname',
                           'refs/changes/51/1151/1'])
    subprocess.check_call(['git', 'cherry-pick', 'FETCH_HEAD'])
except (EnvironmentError, subprocess.CalledProcessError):
    # Suitable error handling here. I'm not sure about
    # the possibility of EnvironmentError exceptions.
    pass
Also, a note on terminology: You're talking about Gerrit queries, but using that language might confuse people. By Gerrit query one usually means the Lucene query string entered into the search box in the UI (or the equivalent REST API).
I'm using Python code to run a Hadoop program on a Linux (Cloudera) machine using SSH.
I'm having some trouble compiling Java files to class files. When I execute the command
javac -cp /usr/lib/hadoop/*:/usr/lib/hadoop/client-0.20/* remote_hadoop/javasrc/*
from the Linux terminal, all the files get compiled successfully.
When I execute the same command through my Python SSH client, I receive an 'invalid flag' error:
spur.results.RunProcessError: return code: 2
output: b''
stderr output: b'javac: invalid flag: remote_hadoop/javasrc\nUsage: javac \nuse -help for a list of possible options\n'
The Python code:
list_of_commands = ["javac", "-cp", r"/usr/lib/hadoop/*:/usr/lib/hadoop/client-0.20/*", input_folder + r"/*"]
print ' '.join(list_of_commands)
self.shell.run(list_of_commands)
The command is getting rendered correctly, since what is getting printed is javac -cp /usr/lib/hadoop/*:/usr/lib/hadoop/client-0.20/* remote_hadoop/javasrc/*.
UPDATE: It's pretty weird. I can compile one file at a time over ssh, but not all of them. Seems like something happens to the "*" over ssh.
You're passing a list of arguments, not a list of commands. It's not even an accurate list of arguments.
If your underlying tool expects a list of arguments, then pass:
['sh', '-c', 'javac -cp /usr/lib/hadoop/*:/usr/lib/hadoop/client-0.20/* remote_hadoop/javasrc/*']
If it expects a list of commands:
['javac -cp /usr/lib/hadoop/*:/usr/lib/hadoop/client-0.20/* remote_hadoop/javasrc/*']
If it expects something else -- read the documentation and determine what that something is!
Note that SSH doesn't provide a way to pass a literal argv array when running an arbitrary command; rather, it expects -- at the protocol level -- a string ready for parsing by the remote shell. If your self.shell.run code is doing shell quoting before joining the argument list given, then it would be passing the last argument as the literal string remote_hadoop/javasrc/* -- not expanding it into a list of filenames as a shell would.
Using the sh -c form forces the remote shell to perform the expansion on its end, assuming the command reaches it in a form that hasn't already had the wildcards quoted away.
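For illustration, driving ssh through subprocess directly (rather than through spur) might look like the sketch below; user@host is a placeholder, and the whole remote command is one string, so the remote shell expands the globs:

import subprocess

# ssh passes this single string to the remote shell, which performs the * expansion
remote_cmd = 'javac -cp /usr/lib/hadoop/*:/usr/lib/hadoop/client-0.20/* remote_hadoop/javasrc/*'
subprocess.check_call(['ssh', 'user@host', remote_cmd])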
The problem is the way that spur builds the command list into a command string. It takes every command token and encloses it in single quotes (["ls", "*.txt"] becomes 'ls' '*.txt'). There is no shell expansion of * inside single quotes, so the command doesn't work.
You can see the problem in spur's ssh.py on line 323:
def escape_sh(value):
    return "'" + value.replace("'", "'\\''") + "'"
I don't use spur, but it looks like it just doesn't allow you to do such things. The problem with "simplifiers" like spur is that if they simplify in a way you don't want, you can't use them.
I'm trying to write a function that will issue commands via ssh with Popen and return the output.
def remote(cmd):
    escaped = escape(cmd)
    return subprocess.Popen(escaped, ...).communicate()[0]
My trouble is how to implement the escape function. Is there a Python module (2.6) that has helpers for this? Google shows there's pipes.quote and re.escape but they seem like they only work for locally run commands. With commands passed to ssh, it seems the escaping needs to be more stringent:
For example on a local machine, this is ok:
echo $(hostname)
When passing to ssh it has to be:
ssh server "echo \$(hostname)"
Also, double quotes can be interpreted in different ways depending on the context. For literal quotes, you need the following:
ssh a4ipe511 "echo \\\"hello\\\""
To do variable expansion, double quotes are also used:
ssh a4ipe511 "echo \"\$(hostname)\""
As you can see, the rules for escaping a command that goes into SSH can get pretty complicated. Since this function will be called by anyone, I'm afraid some complex commands will cause the function to return incorrect output.
Is this something that can be solved with a built-in Python module or do I need to implement the escaping myself?
First:
pipes.quote and re.escape have nothing to do with locally run commands. They simply transform a string; what you do with it after that is your business. So either -- in particular pipes.quote -- is suitable for what you want.
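For instance, pipes.quote simply returns a safely quoted copy of whatever string you give it:

>>> import pipes
>>> pipes.quote('echo $(hostname)')
"'echo $(hostname)'"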
Second:
If you want to run the command echo $(hostname) on a remote host using ssh, you don't need to worry about shell escaping, because subprocess.Popen does not pass your commands into a shell by default. So, for example, this works just fine:
>>> import subprocess
>>> subprocess.call([ 'ssh', 'localhost', 'echo $(hostname)'])
myhost.example.com
0
Double quotes also work as you would expect:
>>> subprocess.call([ 'ssh', 'localhost', 'echo "one two"; echo three'])
one two
three
0
It's not clear to me that you actually have a problem.
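That said, if you still want the remote() helper from the question, here's a minimal sketch in the spirit of this answer (the host is passed in separately; everything else follows the question's design):

import subprocess

def remote(host, cmd):
    # pass the command string straight through: no local shell is involved,
    # so no local escaping is needed; the remote shell parses cmd as-is
    return subprocess.Popen(['ssh', host, cmd],
                            stdout=subprocess.PIPE).communicate()[0]

# e.g. remote('server', 'echo $(hostname)') returns the remote host's name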