Unit Test for Bash completion script - python

I would like to write a unit test for a (rather complex) Bash completion script, preferably with Python: just something that gets the values of a Bash completion programmatically.
The test should look like this:
def test_completion():
    # trigger_completion should return what a user should get on triggering
    # Bash completion like this: 'pbt createkvm<TAB>'
    assert trigger_completion('pbt createkvm') == "module1 module2 module3"
How can I simulate Bash completion programmatically to check the completion values inside a testsuite for my tool?

Say you have a bash-completion script in a file called asdf-completion, containing:
_asdf() {
    COMPREPLY=()
    local cur prev
    cur=$(_get_cword)
    COMPREPLY=( $( compgen -W "one two three four five six" -- "$cur") )
    return 0
}
complete -F _asdf asdf
This uses the shell function _asdf to provide completions for the fictional asdf command. If we set the right environment variables (described in the bash man page), we can get the same result: the potential expansions are placed into the COMPREPLY variable. Here's an example of doing that in a unittest:
import subprocess
import unittest

class BashTestCase(unittest.TestCase):
    def test_complete(self):
        completion_file = "asdf-completion"
        partial_word = "f"
        cmd = ["asdf", "other", "arguments", partial_word]
        cmdline = ' '.join(cmd)
        out = subprocess.Popen(
            ['bash', '-i', '-c',
             r'source {compfile}; COMP_LINE="{cmdline}" COMP_WORDS=({cmdline}) COMP_CWORD={cword} COMP_POINT={cmdlen} $(complete -p {cmd} | sed "s/.*-F \\([^ ]*\\) .*/\\1/") && echo ${{COMPREPLY[*]}}'.format(
                 compfile=completion_file, cmdline=cmdline, cmdlen=len(cmdline),
                 cmd=cmd[0], cword=cmd.index(partial_word))],
            stdout=subprocess.PIPE)
        stdout, stderr = out.communicate()
        self.assertEqual(stdout, b"four five\n")  # Popen output is bytes on Python 3

if __name__ == '__main__':
    unittest.main()
This should work for any completions that use -F, but may work for others as well.
je4d's comment to use expect is a good one for a more complete test.
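For example, a rough pexpect sketch along those lines (hedged: the prompt pattern, the --norc default prompt, and the double-Tab listing behavior are assumptions about the interactive shell):
import pexpect

# Spawn an interactive bash without rc files so the prompt stays predictable
child = pexpect.spawn('bash --norc -i', encoding='utf-8')
child.expect(r'\$ ')                      # wait for the default 'bash-X.Y$ ' prompt
child.sendline('source asdf-completion')
child.expect(r'\$ ')
child.send('asdf f\t\t')                  # two Tabs make bash list the completions
child.expect(r'\$ ')
print(child.before)                       # raw listing, e.g. 'four  five', to parse
child.close()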

bonsaiviking's solution almost worked for me; I had to change the bash script string. I added an extra ';' separator to the executed bash script, otherwise the execution wouldn't work on Mac OS X. Not really sure why.
I also generalized the initialization of the various COMP_* arguments a bit to handle the various cases I ended up with.
The final solution is a helper class to test bash completion from Python, so that the above test would be written as:
import unittest
from completion import BashCompletionTest

class AdsfTestCase(BashCompletionTest):
    def test_orig(self):
        self.run_complete("other arguments f", "four five")

    def run_complete(self, command, expected):
        completion_file = "adsf-completion"
        program = "asdf"
        super(AdsfTestCase, self).run_complete(completion_file, program,
                                               command, expected)

if __name__ == '__main__':
    unittest.main()
The completion lib is located under https://github.com/lacostej/unity3d-bash-completion/blob/master/lib/completion.py
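For reference, here is a minimal sketch of what such a helper might look like, reusing the COMP_* technique from above with ';' separators (names and details are assumptions; see the linked completion.py for the real implementation):
import subprocess
import unittest

class BashCompletionTest(unittest.TestCase):
    def run_complete(self, completion_file, program, command, expected):
        cmdline = program + ' ' + command
        script = (
            'source {compfile}; '
            'COMP_LINE="{cmdline}"; COMP_WORDS=({cmdline}); '
            'COMP_CWORD={cword}; COMP_POINT={cmdlen}; '
            "$(complete -p {prog} | sed 's/.*-F \\([^ ]*\\) .*/\\1/'); "
            'echo ${{COMPREPLY[*]}}'
        ).format(compfile=completion_file, cmdline=cmdline,
                 cword=len(cmdline.split()) - 1,  # assume the last word is completed
                 cmdlen=len(cmdline), prog=program)
        proc = subprocess.Popen(['bash', '-i', '-c', script],
                                stdout=subprocess.PIPE)
        stdout, _ = proc.communicate()
        self.assertEqual(stdout.decode().strip(), expected)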

Related

escape ampersand & in string when sent as argument (Python)

I have written two Python scripts, A.py and B.py. B.py gets called in A.py like this:
import json
import os
from collections import OrderedDict

config_object = {}
with open(config_filename) as data:
    config_object = json.load(data, object_pairs_hook=OrderedDict)

command = './scripts/B.py --config-file={} --token-a={} --token-b={}'.format(
    promote_config_filename, config_object['username'], config_object['password'])
os.system(command)
Here config_object['password'] contains an &, say something like S01S0lQb1T3&BRn2^Qt3.
Now when this value gets passed to B.py, the password arrives as S01S0lQb1T3: everything after the & is ignored.
How can I solve this?
os.system runs a shell. You can escape arbitrary strings for the shell with shlex.quote() ... but a much superior solution is to use subprocess instead, like the os.system documentation also recommends.
import subprocess

subprocess.run(
    ['./scripts/B.py',
     '--config-file={}'.format(promote_config_filename),
     '--token-a={}'.format(config_object['username']),
     '--token-b={}'.format(config_object['password'])])
Because there is no shell=True, the strings are now passed to the subprocess verbatim.
Perhaps see also Actual meaning of shell=True in subprocess
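For completeness, if os.system had to stay, the shlex.quote() route mentioned above would look something like this (reusing the question's variable names; shlex.quote needs Python 3.3+):
import os
import shlex

# Quote each interpolated value so characters like & reach B.py intact
command = './scripts/B.py --config-file={} --token-a={} --token-b={}'.format(
    shlex.quote(promote_config_filename),
    shlex.quote(config_object['username']),
    shlex.quote(config_object['password']))
os.system(command)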
@tripleee has good suggestions. In terms of why this is happening: if you are running Linux/Unix at least, the & starts a background process. You can search for "linux job control" for more info on that. The shortest (but not best) solution is to wrap the special characters in single or double quotes in the final command.
See this bash session for a simple example:
$ echo foo&bar
[1] 20054
foo
Command 'bar' not found, but can be installed with:
sudo apt install bar
[1]+ Done echo foo
$ echo "foo&bar"
foo&bar

Retrieving the cmd line arguments as-is in Python

I am writing a wrapper tool in Python. Invocation of the tool is as below:
<wrapper program> <actual program> <arguments>
The wrapper program just adds one more argument and executes the actual program:
<actual program> <arguments> <additional args added>
The tricky part is that the arguments contain some strings that are escaped and some that are not.
Example arguments format: -d \"abc\" -f "xyz" "pqr" and more args
The wrapper tool is generic and shouldn't know anything about the actual program and its parameters, other than adding the additional argument.
I understand that this is related to the shell. Any suggestions on how to implement the wrapper tool?
I tried escaping all the double quotes, but there are cases in which the quotes are not escaped in the invocation, so the tool is not able to execute the actual program correctly.
Is it possible to preserve the original arguments exactly as provided by the user?
Wrapper.py Source:
import sys
import os

if __name__ == '__main__':
    cmd = sys.argv[1] + " "
    args = sys.argv[2:]
    args.insert(0, "test")
    cmd_string = cmd + " ".join(args)
    print("Executing:", cmd_string)
    os.system(cmd_string)
Output:
wrapper.py tool -d "abc" -f \"pqr\" 123
Executing: tool test -d abc -f "pqr" 123
Expected execution: tool test -d "abc" -f \"pqr\" 123
Use subprocess.call here; then you're not dealing with strings or having to worry about escaping values, etc.:
import sys
import subprocess
import random

subprocess.call([
    sys.argv[1],    # the program to call
    *sys.argv[2:],  # the original arguments to pass through
    # extra args...
    '--some-argument', str(random.randint(1, 100)),  # arguments must be strings
    '--text-argument', 'some string with "quoted stuff"',
    '-o', 'string with no quoted stuff',
    'arg_x',
    'arg_y',
    # etc...
])
If you're after the stdout of the call, you can use result = subprocess.check_output(...) (optionally piping the callee's stderr into it as well) and then check the result. Note that from Python 3.5 onwards there's also another high-level helper, subprocess.run, that covers the majority of use cases.
It'll be worth checking out all the helper functions in subprocess
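For example, a sketch of the same wrapper using subprocess.run and capturing the output (capture_output needs Python 3.7+; the extra 'test' argument mirrors the question's wrapper):
import subprocess
import sys

# Pass the original arguments through verbatim, append one extra argument,
# and raise CalledProcessError on a non-zero exit status
result = subprocess.run([sys.argv[1], *sys.argv[2:], 'test'],
                        capture_output=True, text=True, check=True)
print(result.stdout, end='')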

autocomplete for test.py like git <tab>

When I issue git followed by Tab, it auto-completes with a list. I want to write a test.py such that when I type test.py followed by Tab, it auto-completes with a given list defined in test.py. Is that possible?
$ git [tab]
add branch column fetch help mv reflog revert stash
am bundle commit filter-branch imap-send name-rev relink rm status
annotate checkout config format-patch init notes remote send-email submodule
apply cherry credential fsck instaweb p4 repack shortlog subtree
archive cherry-pick describe gc log pull replace show tag
bisect clean diff get-tar-commit-id merge push request-pull show-branch whatchanged
blame clone difftool grep mergetool rebase reset stage
The method you are looking for is readline.set_completer. It hooks into the GNU readline library to complete input read interactively inside a Python process (it does not change how bash completes the script's own arguments). It's simple to implement. Examples: https://pymotw.com/2/readline/
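For illustration, a minimal sketch (the option list is made up; this completes input typed at a Python prompt, not the script's arguments in bash):
import readline

OPTIONS = ["arg1", "arg2", "arg3", "list_arguments"]  # sample completion list

def completer(text, state):
    # readline calls this with state = 0, 1, 2, ... until it returns None
    matches = [option for option in OPTIONS if option.startswith(text)]
    return matches[state] if state < len(matches) else None

readline.set_completer(completer)
readline.parse_and_bind("tab: complete")
line = input("> ")  # pressing Tab now completes from OPTIONS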
That's not a feature of the git binary itself; it's a bash completion 'hack' and as such has nothing to do with Python per se, but since you've tagged it as such, let's add a little twist. Let's say we create a script aware of its acceptable arguments - test.py:
#!/usr/bin/env python

import sys

# let's define some sample functions to be called on passed arguments
def f1():
    print("F1 called!")

def f2():
    print("F2 called!")

def f3():
    print("F3 called!")

def f_invalid():  # a simple invalid placeholder function
    print("Invalid command!")

def f_list():  # a function to list all valid arguments
    print(" ".join(sorted(arguments.keys())))

if __name__ == "__main__":  # make sure we're running this as a script
    arguments = {  # a simple argument map, use argparse or similar in real-world use
        "arg1": f1,
        "arg2": f2,
        "arg3": f3,
        "list_arguments": f_list
    }
    if len(sys.argv) > 1:
        for arg in sys.argv[1:]:  # loop through all passed arguments
            arguments.get(arg, f_invalid)()  # call the mapped or the invalid function
    else:
        print("At least one argument required!")
NOTE: Make sure you add an executable flag to the script (chmod +x test.py) so its shebang is used for executing instead of providing it as an argument to the Python interpreter.
Apart from all the boilerplate, the important argument is list_arguments - it lists all available arguments to this script and we'll use this output in our bash completion script to instruct bash how to auto-complete. To do so, create another script, let's call it test-completion.bash:
#!/usr/bin/env bash

SCRIPT_NAME=test.py
SCRIPT_PATH=/path/to/your/script

_complete_script()
{
    local cursor options
    options=$(${SCRIPT_PATH}/${SCRIPT_NAME} list_arguments)
    cursor="${COMP_WORDS[COMP_CWORD]}"
    COMPREPLY=( $(compgen -W "${options}" -- ${cursor}) )
    return 0
}

complete -F _complete_script ${SCRIPT_NAME}
Essentially, it registers the _complete_script function with complete, to be called whenever completion for test.py is invoked. The function itself first calls test.py with list_arguments to retrieve its acceptable arguments, and then uses compgen to build the structure complete needs to print them out.
To test, all you need is to source this script as:
source test-completion.bash
And then your bash will behave as:
$ ./test.py [tab]
arg1 arg2 arg3 list_arguments
And what's more, it's completely controllable from your Python script - whatever gets printed as a list on list_arguments command is what will be shown as auto-completion help.
To make the change permanent, you can simply add the source line to your .bashrc, or if you want a more structured solution you can follow the guidelines for your OS. There are a couple of ways described on the git-flow-completion page, for example. Of course, this assumes you actually have bash completion installed and enabled on your system, but your git autocompletion wouldn't work if you didn't.
Speaking of git autocompletion, you can see how it's implemented by checking git-completion.bash source - a word of warning, it's not for the fainthearted.

How to get full command executed using sh module?

I ran into an error while executing one of our devops scripts. The script uses the sh package (for executing common Unix commands; PyPI link). However, the commands that are executed are truncated in the messages printed by sh. How can I see the whole command that was executed?
example:
import sys
import sh

sh.ssh(host,
       'rsync -av {src} {dst}'.format(src=src, dst=dst),
       _out=sys.stdout)
Produces output like:
INFO:sh.command:<Command '/bin/ssh dbw@ny...(77 more)' call_args {'bg': False, 'timeo...(522 more)>: starting process
I'd like to see the full command executed, and all of the call_args.
sh.ssh returns an sh.RunningCommand object, which you can query to find the call args and the cmd:
import sys
import sh

a = sh.ssh(host,
           'rsync -av {src} {dst}'.format(src=src, dst=dst),
           _out=sys.stdout)
print(a.cmd)
print(a.call_args)
After peeking into the source code, it looks like this is controlled by the max_len parameter of the friendly_truncate function, so one option may be to edit the sh.py code directly and set a higher int value:
https://github.com/amoffat/sh/blob/master/sh.py#L424
https://github.com/amoffat/sh/blob/master/sh.py#L425
Or, possibly just remove points where that function is called.
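Alternatively, without editing sh at all, the untruncated information is already available on the returned object; a small sketch (assuming cmd is the argument list and call_args a plain dict, as shown above):
import shlex

# a is the sh.RunningCommand from above; decode bytes args for display
args = [p.decode() if isinstance(p, bytes) else str(p) for p in a.cmd]
print(' '.join(shlex.quote(arg) for arg in args))

# print every call argument in full, one per line
for key, value in sorted(a.call_args.items()):
    print('{}: {!r}'.format(key, value))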

Handling specific Python error within Bash call?

I am using line_profiler, which allows you to drop @profile decorators anywhere in a Python codebase and returns line-by-line timing output.
However, if you try to execute code containing such a @profile decorator without loading the line_profiler module, the code fails with a NameError, because the decorator is defined and injected by that external library.
I'd like a bash command that attempts to run my Python script with vanilla Python. Then, if and only if the error is a NameError, I want to give it a second try. This is what I have got so far:
python -u $file || python -m kernprof -l -v --outfile=/dev/null $file
The problem is of course that if my Python code has ANY error at all, be it ValueError or IndentationError or anything, it tries the profiler. I want to run the profiler ONLY if the string NameError: name 'profile' is not defined is found within stderr.
Wouldn't it be better to monkey-patch profile when line_profiler is not present?
Something like:
try:
    import line_profiler
except ImportError:
    import warnings
    warnings.warn("Profile disabled")

    def profile(fn):
        def wrapper(*args, **kw):
            return fn(*args, **kw)
        return wrapper
This way your code runs in either case without complicating matters.
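With that fallback in place, decorated code runs unchanged either way; a quick made-up example:
@profile  # injected by kernprof when profiling, the no-op wrapper above otherwise
def slow_sum(n):
    total = 0
    for i in range(n):
        total += i
    return total

print(slow_sum(10000))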
Here's a usable Bash solution that preserves stdout and stderr as separate streams (with the caveat that stderr appears after stdout) and only checks stderr for the error message (which probably is overkill though).
It goes the easy route and simply saves the stderr output to a file. It also handles script names that contain spaces (by properly quoting variable expansions where needed) and/or start with - (by passing -- before the filename to switch off flag processing) as it's an OCD pet peeve of mine.
On success or if there is an error that is not the expected error, the stderr of the first python command is shown. Otherwise (for the expected error), it is hidden.
Usage is $ ./check <script>.
#!/bin/bash

if [[ $# -ne 1 ]]; then
    echo "Expected one argument: the script" >&2
    exit 1
fi

script=$1

if [[ ! -f $script ]]; then
    echo "'$script' does not exist or is not a regular file" >&2
    exit 1
fi

if ! python -- "$script" 2>saved_stderr &&
       grep -q "NameError: name 'profile' is not defined" saved_stderr; then
    # Try again with the kernprof module.
    python -m kernprof -l -v --outfile=/dev/null -- "$script"
else
    # Either success or an unexpected error. Show stderr.
    cat saved_stderr >&2
fi

rm saved_stderr
To check if the return status of a command is zero (i.e., success), it suffices to do
if <cmd>; then <if successful>; fi
! negates the exit status, so if ! <cmd> ... can be used to check for failure. ! only applies to the python command above, not all of python ... && grep ....
>&2 redirects stdout to stderr. (It's the same as 1>&2 but saves a single character, which is a bit silly, but I included for illustrative purposes as it's a common idiom.)
Creating a simple Python wrapper would seem a lot more straightforward, because inside Python, you have access to the things which go wrong.
Assuming your $file uses the common __name__ == '__main__' idiom something like this:
if __name__ == '__main__':
    main()
you can create a wrapper something like
import yourfile

try:
    yourfile.main()
except NameError:
    import kernprof
    # hack hack, quickly constructed from looking at main() in kernprof.py
    prof = kernprof.ContextualProfile()
    execfile_ = execfile  # Python 2 builtin
    ns = locals()
    try:
        prof.runctx('execfile_(%r, globals())' % (yourfile.__file__,), ns, ns)
    finally:
        prof.print_stats()
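On Python 3, where execfile is gone, the same idea might look roughly like this (a hedged sketch; kernprof's internals differ between versions):
import kernprof

prof = kernprof.ContextualProfile()
ns = {'__name__': '__main__'}
try:
    with open('yourfile.py') as f:
        code = compile(f.read(), 'yourfile.py', 'exec')
    prof.runctx(code, ns, ns)  # Profile.runctx also accepts a code object
finally:
    prof.print_stats()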
