Add usage help of command line tool to README.rst - python

I wrote a little command line tool, and want to add the "--help" usage message to the docs.
Since I am lazy, I would like to make the update procedure as simple as possible. Here is what I want the update workflow to look like:
1. Update the code, which results in an updated usage message.
2. Run a script which updates the docs: the new usage message should be visible in the docs.
In other words: I don't want to copy+paste the usage message.
Step 1 comes from my own brain, but I want to reuse existing tools for Step 2.
Up to now the docs are just a simple README.rst file.
I would like to stick with a simple solution, where the docs are visible directly via GitHub. Up to now I don't need a more complicated solution (like readthedocs).
How can I avoid copy+pasting the --help usage message?
Here is the tool I am working on: https://github.com/guettli/reprec

As suggested in the comments, you could use a git pre-commit hook to generate the README.rst file on commit. You could use an existing tool such as cog, or you could just do something very simple with bash.
For example, create an RST "template" file:
README.rst.tmpl
Test Git pre-commit hook project
--------------------------------
>>> INSERTION POINT FOR HELP OUTPUT <<<
.git/hooks/pre-commit
#!/bin/bash
# Sensible to set -e to ensure we exit if anything fails
set -e

# Get the output from your tool.
# Paths are relative to the root of the repo.
output=$(tools/my-cmd-line-tool --help)

# IFS= and -r preserve leading whitespace and backslashes in each line.
while IFS= read -r line
do
    if [[ $line == ">>> INSERTION POINT FOR HELP OUTPUT <<<" ]]
    then
        echo "$output"
    else
        echo "$line"
    fi
done < README.rst.tmpl > README.rst

git add README.rst
This gets run before you are prompted for a commit message, if you didn't pass one on the command line. So when the commit takes place, if there were any changes to either README.rst.tmpl or the output from your tool, README.rst will be updated with them.
Edit
I believe this should work on Windows too, or something very similar should, since Git comes with a bash implementation on Windows, but I haven't tested it.

Consider using cog. It's meant for exactly this job.
Here's something that might just work (untested). And... there's a lot of scope for improvement.
reprec
======
The tool reprec replaces strings in text files:
.. [[[cog
.. import subprocess
.. import cog
..
.. def indent(text, width=4):
..     return "\n".join((" " * width + line) for line in text.splitlines())
..
.. text = subprocess.check_output("reprec --help", shell=True).decode()
.. cog.outl("""
..
.. ::
..
..     ==> reprec --help""",
..     dedent=True
.. )
.. cog.outl(indent(text))
.. ]]]

::

    ==> reprec --help
    <all-help-text>
.. [[[end]]]
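To regenerate the README, you would then run cog over the file, e.g. cog -r README.rst (the -r flag rewrites the file in place; check your cog version's docs for the exact invocation).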

For getting the usage text in Step 2, you can use the subprocess module:
import subprocess
usage_text = subprocess.check_output("reprec --help", shell=True).decode()

I would actually approach this from quite a different side. I think the workflow you described can be greatly simplified if you switch to using argparse instead of the getopt you use now. With this you will have:
- simpler, and probably safer, code in your argument-parsing function, because argparse can verify a lot of conditions on the given arguments, as long as you declare them (data types, number of arguments, etc.)
- the ability to use argparse features to document the arguments directly in the code, right where you declare them (e.g. help, usage, epilog and others); this effectively means that you could completely delete your own usage function, because argparse handles that task for you (just run with --help to see the result).
To sum up: arguments, their contracts and their help documentation become mostly declarative, and are managed together in one place only.
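For illustration, here is a minimal sketch of that declarative style (the option names are invented for the example - they are not reprec's real interface):
import argparse

parser = argparse.ArgumentParser(
    prog="reprec",
    description="Replace strings in text files.")
parser.add_argument("--pattern", required=True,
                    help="string to search for")
parser.add_argument("--replacement", required=True,
                    help="string to insert instead")
parser.add_argument("files", nargs="+",
                    help="files to process")

args = parser.parse_args()
# Running with --help prints a usage message generated
# entirely from the declarations above.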
OK, OK, I know, the question originally asks how to update the README. I understand that your intention is to take the laziest approach. So, I think, it is lazy enough to:
- maintain all your arguments and their documentation once, in a single place, as above
- then run something like myprogram --help > README.rst
- commit ;)
OK, you will probably need something a little more complex than just > README.rst. Here we can get as creative as we want, so the fun starts here. For example:
have a README.template.rst (where you actually maintain the README content) with a ## Usage header somewhere in it:
$ myprogram --help > USAGE.rst
$ sed -e '/## Usage/r USAGE.rst' -e '$G' README.template.rst > README.rst
And you get everything working from the same source code!
I think it will still need some polishing up in order to generate a valid rst document, but I hope it shows the general idea.
Gist: Include generated help into README

Related

autocomplete for test.py like git <tab>

When I issue git followed by Tab, it auto-completes with a list. I want to write a test.py such that when I type test.py followed by Tab, it auto-completes with a given list defined in test.py. Is this possible?
$ git [tab]
add branch column fetch help mv reflog revert stash
am bundle commit filter-branch imap-send name-rev relink rm status
annotate checkout config format-patch init notes remote send-email submodule
apply cherry credential fsck instaweb p4 repack shortlog subtree
archive cherry-pick describe gc log pull replace show tag
bisect clean diff get-tar-commit-id merge push request-pull show-branch whatchanged
blame clone difftool grep mergetool rebase reset stage
The method you are looking for is readline.set_completer. This method hooks into the GNU readline library (the same line-editing library bash uses). It's simple to implement. Examples: https://pymotw.com/2/readline/
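For a rough idea of what that looks like, here is a minimal sketch (the command list is made up); note this completes input inside your own program, not at the bash prompt:
import readline

COMMANDS = ["add", "branch", "commit", "status"]  # made-up completions

def completer(text, state):
    # readline calls this with state = 0, 1, 2, ... until it returns None
    matches = [c for c in COMMANDS if c.startswith(text)]
    return matches[state] if state < len(matches) else None

readline.set_completer(completer)
readline.parse_and_bind("tab: complete")

line = input("> ")  # press Tab at the prompt to complete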
That's not a feature of the git binary itself; it's a bash completion 'hack', and as such has nothing to do with Python per se. But since you've tagged the question as Python, let's add a little twist. Let's say we create a script aware of its acceptable arguments - test.py:
#!/usr/bin/env python
import sys

# let's define some sample functions to be called on passed arguments
def f1():
    print("F1 called!")

def f2():
    print("F2 called!")

def f3():
    print("F3 called!")

def f_invalid():  # a simple invalid placeholder function
    print("Invalid command!")

def f_list():  # a function to list all valid arguments
    print(" ".join(sorted(arguments.keys())))

if __name__ == "__main__":  # make sure we're running this as a script
    arguments = {  # a simple argument map, use argparse or similar in real-world use
        "arg1": f1,
        "arg2": f2,
        "arg3": f3,
        "list_arguments": f_list
    }
    if len(sys.argv) > 1:
        for arg in sys.argv[1:]:  # loop through all arguments
            arguments.get(arg, f_invalid)()  # call the mapped or invalid function
    else:
        print("At least one argument required!")
NOTE: Make sure you add an executable flag to the script (chmod +x test.py) so its shebang is used for executing instead of providing it as an argument to the Python interpreter.
Apart from all the boilerplate, the important argument is list_arguments - it lists all available arguments to this script and we'll use this output in our bash completion script to instruct bash how to auto-complete. To do so, create another script, let's call it test-completion.bash:
#!/usr/bin/env bash

SCRIPT_NAME=test.py
SCRIPT_PATH=/path/to/your/script

_complete_script()
{
    local cursor options
    options=$(${SCRIPT_PATH}/${SCRIPT_NAME} list_arguments)
    cursor="${COMP_WORDS[COMP_CWORD]}"
    COMPREPLY=( $(compgen -W "${options}" -- ${cursor}) )
    return 0
}

complete -F _complete_script ${SCRIPT_NAME}
What this does is essentially register the _complete_script function with complete, so that it is called whenever completion is invoked for test.py. The _complete_script function itself first calls list_arguments on test.py to retrieve its acceptable arguments, and then uses compgen to build the structure complete needs to print them out.
To test, all you need is to source this script as:
source test-completion.bash
And then your bash will behave as:
$ ./test.py [tab]
arg1 arg2 arg3 list_arguments
And what's more, it's completely controllable from your Python script - whatever gets printed as a list on list_arguments command is what will be shown as auto-completion help.
To make the change permanent, you can simply add the source line to your .bashrc, or if you want a more structured solution you can follow the guidelines for your OS. There are a couple of ways described on the git-flow-completion page, for example. Of course, this assumes you actually have bash-completion installed and enabled on your system, but your git autocompletion wouldn't work if you didn't.
Speaking of git autocompletion, you can see how it's implemented by checking git-completion.bash source - a word of warning, it's not for the fainthearted.

Processing exclusive command line switches using Python optparse

I need to accept one of three switches on the command line, --major, --minor or --patch, or none of them, in which case the default is minor. I'm doing this using optparse due to the limitations of the environment (Python 2.6.x), so I can't change that.
What I'd ideally like to achieve is that optparse does the heavy lifting so I don't have to write code to check that the options are exclusive etc. I'd also appreciate if it can give a neat and understandable output from --help such as, for example:
[--major|minor|patch] whether to build as a new major/minor/patch
version. minor is default.
Or something similar (i.e. ideally all on the same line). I tried the following:
parser.add_option('', '--major', dest='rel_type')
parser.add_option('', '--minor', dest='rel_type')
parser.add_option('', '--patch', dest='rel_type')
but for --help that gave me:
--major=REL_TYPE
--minor=REL_TYPE
--patch=REL_TYPE
I know it's possible to use:
... type='choice', choices=['major', 'minor', 'patch'] ...
but this isn't really what I want as these are value enumerations rather than switch options.
Is this possible, and if so how?
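One sketch that might get close (untested; it uses store_const so all three switches share one dest - note that by itself this does not enforce exclusivity, the last switch given simply wins):
from optparse import OptionParser

parser = OptionParser()
parser.add_option('--major', action='store_const', const='major',
                  dest='rel_type', help='build as a new major version')
parser.add_option('--minor', action='store_const', const='minor',
                  dest='rel_type', help='build as a new minor version (default)')
parser.add_option('--patch', action='store_const', const='patch',
                  dest='rel_type', help='build as a new patch version')
parser.set_defaults(rel_type='minor')

options, args = parser.parse_args()
print(options.rel_type)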

Ignore the rest of the python file

My python scripts often contain "executable code" (functions, classes, &c) in the first part of the file and "test code" (interactive experiments) at the end.
I want python, py_compile, pylint &c to completely ignore the experimental stuff at the end.
I am looking for something like #if 0 for cpp.
How can this be done?
Here are some ideas and the reasons they are bad:
- sys.exit(0): works for python, but not for py_compile and pylint
- put all experimental code under def test(): I can no longer copy/paste the code into a python REPL, because it has non-trivial indent
- put all experimental code between lines with """: emacs no longer indents and fontifies the code properly
- comment and uncomment the code all the time: I am too lazy (yes, this is a single key press, but I have to remember to do that!)
- put the test code into a separate file: I want to keep the related stuff together
PS. My IDE is Emacs and my python interpreter is pyspark.
Use IPython rather than python for your REPL. It has better code completion and introspection, and when you paste indented code it can automatically "de-indent" it.
Thus you can put your experimental code in a test function and then paste in parts without worrying about having to de-indent your code.
If you are pasting large blocks that should be treated as single blocks, then you will need to use the %paste or %cpaste magics.
eg.
for i in range(3):
    i *= 2
    # with the following blank line, a plain paste treats the block as complete

    print(i)
With a normal paste:
In [1]: for i in range(3):
   ...:     i *= 2
   ...:
In [2]: print(i)
4
Using %paste
In [3]: %paste
for i in range(3):
    i *= 2
    print(i)
## -- End pasted text --
0
2
4
In [4]:
PySpark and IPython
It is also possible to launch PySpark in IPython, the enhanced Python interpreter. PySpark works with IPython 1.0.0 and later. To use IPython, set the IPYTHON variable to 1 when running bin/pyspark:
$ IPYTHON=1 ./bin/pyspark
Unfortunately, there is no widely used (or any) standard describing what you are talking about, so getting a bunch of Python-specific tools to work like this will be difficult.
However, you could wrap these commands in such a way that they only read until a signifier. For example (assuming you are on a unix system):
cat $file | sed '/exit(0)/q' | sed '/exit(0)/d'
This command reads until 'exit(0)' is found and then drops that line. You could pipe the result into your checkers, or create a temp file that your checkers read. You could create wrapper executables on your path so this works with your editors.
Windows may be able to use a similar technique.
I might advise a different approach. Separate files might be best. You might explore iPython notebooks as a possible solution, but I'm not sure exactly what your use case is.
Follow something like option 2.
I usually put experimental code in a main function:
def main():
    # experimental code goes here
    ...
Then if you want to execute the experimental code, just call main:
main()
With python-mode.el, mark arbitrary chunks as sections - for example via py-sectionize-region.
Then call py-execute-section.
Updated after comment:
python-mode.el is delivered by MELPA.
M-x list-packages RET
Look for python-mode - the built-in python.el provides 'python, while python-mode.el provides 'python-mode.
Development just moved here: https://gitlab.com/python-mode-devs/python-mode
I think the standard ('Pythonic') way to deal with this is to do it like so:
class MyClass(object):
    ...

def my_function():
    ...

if __name__ == '__main__':
    # testing code here
    ...
Edit after your comment
I don't think what you want is possible using a plain Python interpreter. You could have a look at the IEP Python editor (website, bitbucket): it supports something like Matlab's cell mode, where a cell can be defined with a double comment character (##):
## main code
class MyClass(object):
    ...

def my_function():
    ...

## testing code
do_some_testing_please()
All code from a ##-beginning line until either the next such line or end-of-file constitutes a single cell.
Whenever the cursor is within a particular cell and you strike some hotkey (default Ctrl+Enter), the code within that cell is executed in the currently running interpreter. An additional feature of IEP is that selected code can be executed with F9; a pretty standard feature but the nice thing here is that IEP will smartly deal with whitespace, so just selecting and pasting stuff from inside a method will automatically work.
I suggest you use a proper version control system to keep the "real" and the "experimental" parts separated.
For example, using Git, you could only include the real code without the experimental parts in your commits (using add -p), and then temporarily stash the experimental parts for running your various tools.
You could also keep the experimental parts in their own branch which you then rebase on top of the non-experimental parts when you need them.
Another possibility is to put tests as doctests into the docstrings of your code, which admittedly is only practical for simpler cases.
This way, they are only treated as executable code by the doctest module, but as comments otherwise.
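A minimal sketch of that doctest idea (the function is invented for the example):
def double(x):
    """Return x times two.

    >>> double(3)
    6
    """
    return x * 2

# Run the docstring examples with: python -m doctest thisfile.py -v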

How can I see normal print output created during pytest run?

Sometimes I want to just insert some print statements in my code, and see what gets printed out when I exercise it. My usual way to "exercise" it is with existing pytest tests. But when I run these, I don't seem able to see any standard output (at least from within PyCharm, my IDE).
Is there a simple way to see standard output during a pytest run?
The -s switch disables per-test capturing (by default, captured output is shown only for failing tests).
-s is equivalent to --capture=no.
pytest captures the stdout from individual tests and displays it only under certain conditions, along with the summary of the tests it prints by default.
Extra summary info can be shown using the '-r' option:
pytest -rP
shows the captured output of passed tests.
pytest -rx
shows extra summary info for xfailed tests (the captured output of ordinary failed tests is shown by default).
The formatting of the output is prettier with -r than with -s.
When running the test, use the -s option. All print statements in exampletest.py will get printed on the console when the test is run.
py.test exampletest.py -s
In an upvoted comment to the accepted answer, Joe asks:
Is there any way to print to the console AND capture the output so that it shows in the junit report?
In UNIX, this is commonly referred to as teeing. Ideally, teeing rather than capturing would be the py.test default. Non-ideally, neither py.test nor any existing third-party py.test plugin (...that I know of, anyway) supports teeing – despite Python trivially supporting teeing out-of-the-box.
Monkey-patching py.test to do anything unsupported is non-trivial. Why? Because:
Most py.test functionality is locked behind a private _pytest package not intended to be externally imported. Attempting to do so without knowing what you're doing typically results in the public pytest package raising obscure exceptions at runtime. Thanks a lot, py.test. Really robust architecture you got there.
Even when you do figure out how to monkey-patch the private _pytest API in a safe manner, you have to do so before running the public pytest package run by the external py.test command. You cannot do this in a plugin (e.g., a top-level conftest module in your test suite). By the time py.test lazily gets around to dynamically importing your plugin, any py.test class you wanted to monkey-patch has long since been instantiated – and you do not have access to that instance. This implies that, if you want your monkey-patch to be meaningfully applied, you can no longer safely run the external py.test command. Instead, you have to wrap the running of that command with a custom setuptools test command that (in order):
Monkey-patches the private _pytest API.
Calls the public pytest.main() function to run the py.test command.
This answer monkey-patches py.test's -s and --capture=no options to capture stderr but not stdout. By default, these options capture neither stderr nor stdout. This isn't quite teeing, of course. But every great journey begins with a tedious prequel everyone forgets in five years.
Why do this? I shall now tell you. My py.test-driven test suite contains slow functional tests. Displaying the stdout of these tests is helpful and reassuring, preventing leycec from reaching for killall -9 py.test when yet another long-running functional test fails to do anything for weeks on end. Displaying the stderr of these tests, however, prevents py.test from reporting exception tracebacks on test failures. Which is completely unhelpful. Hence, we coerce py.test to capture stderr but not stdout.
Before we get to it, this answer assumes you already have a custom setuptools test command invoking py.test. If you don't, see the Manual Integration subsection of py.test's well-written Good Practices page.
Do not install pytest-runner, a third-party setuptools plugin providing a custom setuptools test command also invoking py.test. If pytest-runner is already installed, you'll probably need to uninstall that pip3 package and then adopt the manual approach linked to above.
Assuming you followed the instructions in Manual Integration highlighted above, your codebase should now contain a PyTest.run_tests() method. Modify this method to resemble:
class PyTest(TestCommand):
    .
    .
    .
    def run_tests(self):
        # Import the public "pytest" package *BEFORE* the private "_pytest"
        # package. While importation order is typically ignorable, imports can
        # technically have side effects. Tragicomically, that is the case here.
        # Importing the public "pytest" package establishes runtime
        # configuration required by submodules of the private "_pytest"
        # package. The former *MUST* always be imported before the latter.
        # Failing to do so raises obtuse exceptions at runtime... which is bad.
        import pytest
        from _pytest.capture import CaptureManager, FDCapture, MultiCapture

        # If the private method to be monkey-patched no longer exists, py.test
        # is either broken or unsupported. In either case, raise an exception.
        if not hasattr(CaptureManager, '_getcapture'):
            from distutils.errors import DistutilsClassError
            raise DistutilsClassError(
                'Class "pytest.capture.CaptureManager" method _getcapture() '
                'not found. The current version of py.test is either '
                'broken (unlikely) or unsupported (likely).'
            )

        # Old method to be monkey-patched.
        _getcapture_old = CaptureManager._getcapture

        # New method applying this monkey-patch. Note the use of:
        #
        # * "out=False", *NOT* capturing stdout.
        # * "err=True", capturing stderr.
        def _getcapture_new(self, method):
            if method == "no":
                return MultiCapture(
                    out=False, err=True, in_=False, Capture=FDCapture)
            else:
                return _getcapture_old(self, method)

        # Replace the old method with the new.
        CaptureManager._getcapture = _getcapture_new

        # Run py.test with all passed arguments.
        errno = pytest.main(self.pytest_args)
        sys.exit(errno)
To enable this monkey-patch, run py.test as follows:
python setup.py test -a "-s"
Stderr but not stdout will now be captured. Nifty!
Extending the above monkey-patch to tee stdout and stderr is left as an exercise to the reader with a barrel-full of free time.
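For readers wondering what teeing looks like concretely, here is a minimal sketch of the general idea in plain Python, independent of py.test (the log file name is arbitrary):
import sys

class Tee:
    """Write everything to a real stream and a log file at the same time."""
    def __init__(self, stream, logfile):
        self.stream = stream
        self.logfile = logfile

    def write(self, text):
        self.stream.write(text)
        self.logfile.write(text)

    def flush(self):
        self.stream.flush()
        self.logfile.flush()

with open("captured.log", "w") as log:
    sys.stdout = Tee(sys.__stdout__, log)
    print("this goes to the console AND captured.log")
    sys.stdout = sys.__stdout__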
According to the pytest documentation, version 3 of pytest can temporarily disable capture in a test:
def test_disabling_capturing(capsys):
    print('this output is captured')
    with capsys.disabled():
        print('output not captured, going directly to sys.stdout')
    print('this output is also captured')
pytest --capture=tee-sys was recently added (v5.4.0). You can capture as well as see the output on stdout/err.
Try pytest -s -v test_login.py for more info in the console.
-v is short for --verbose
-s means 'disable all capturing'
You can also enable live-logging by setting the following in pytest.ini or tox.ini in your project root.
[pytest]
log_cli = True
Or specify it directly on the CLI:
pytest -o log_cli=True
pytest test_name.py -v -s
Simple!
I would suggest using the -h option; there are quite a few interesting options it documents.
But for this particular case, -s (the shortcut for --capture=no) is enough:
pytest <test_file.py> -s
If you are using logging, you need to turn on logging output in addition to -s for generic stdout. Based on Logging within pytest tests, I am using:
pytest --log-cli-level=DEBUG -s my_directory/
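For example, a small test like this (names invented) shows both channels when run with the command above:
import logging

log = logging.getLogger(__name__)

def test_shows_both_outputs():
    log.debug("visible because of --log-cli-level=DEBUG")
    print("visible because of -s")
    assert True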
If you are using the PyCharm IDE, you can run an individual test or all tests using the Run toolbar. The Run tool window displays the output generated by your application, and you can see all the print statements there as part of the test output.
If anyone wants to run tests from code with output:
import pytest

if __name__ == '__main__':
    pytest.main(['--capture=no'])
The capsys, capsysbinary, capfd, and capfdbinary fixtures allow access to stdout/stderr output created during test execution. Here is an example test function that performs some output-related checks:
import sys

def test_print_something_even_if_the_test_pass(capsys):
    text_to_be_printed = "Print me when the test pass."
    print(text_to_be_printed)
    p_t = capsys.readouterr()
    sys.stdout.write(p_t.out)
    # the two lines above will print the text even if the test passes
Here is the result:
test_print_something_even_if_the_test_pass PASSED [100%]Print me when the test pass.

Using cdrecord through popen won't eject

So, I'm making a CD burning app and I need to eject the drive to let the user put the disc in. It's a little more complicated than that, but the simplest case I run into is this: I can use cdrecord via the command line to eject the cd tray using this command:
cdrecord --eject dev='/dev/sg1'
which should mean that I can do the same thing with subprocess.call, like this:
subprocess.call(["cdrecord", "--eject", "dev='/dev/sg1'"])
however, when I do that, I get this error:
wodim: No such file or directory.
Cannot open SCSI driver!
For possible targets try 'wodim --devices' or 'wodim -scanbus'.
For possible transport specifiers try 'wodim dev=help'.
For IDE/ATAPI devices configuration, see the file README.ATAPI.setup from
the wodim documentation.
and the tray doesn't open.
This is a very similar error to one I got before when trying to run it from the command line, but I fixed that error by loading the sg kernel module.
If I just run:
subprocess.call(["cdrecord", "--eject"])
it opens the tray just fine. However, this needs to work with possibly multiple cd trays, so that won't work.
How can I get this to eject the cd correctly?
Try this:
subprocess.call(["cdrecord", "--eject", "dev=/dev/sg1"])
The shell will take care of interpreting the quotes, but cdrecord will not.
The only reason you need the quotes in the first place is that the dev path might have spaces in it, causing the shell to split things into separate arguments. For example, if you type this:
cdrecord --eject dev=/dev/my silly cd name
The arguments to cdrecord will be --eject, dev=/dev/my, silly, cd, name. But if you do this:
cdrecord --eject dev='/dev/my silly cd name'
The arguments to cdrecord will be --eject, dev=/dev/my silly cd name.
When you're using subprocess.call, there's no shell to pull the arguments apart; you're passing them explicitly. So, if you do this:
subprocess.call(["cdrecord", "--eject", "dev=/dev/my silly cd name"])
The arguments to cdrecord will be --eject, dev=/dev/my silly cd name.
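A quick way to see this splitting behaviour from Python is shlex.split, which mimics the shell's word splitting and quote handling (a small illustrative sketch):
import shlex

print(shlex.split("cdrecord --eject dev='/dev/my silly cd name'"))
# ['cdrecord', '--eject', 'dev=/dev/my silly cd name']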
In some cases—e.g., because you get things in a hopelessly confused state in the first place (e.g., you're reading a config file that's meant to be used by your program or executed by the shell)—you really have no recourse but to run through the shell. If that happens, do this:
subprocess.call("cdrecord --eject dev='/dev/sg1'", shell=True)
But this generally isn't what you want, and it isn't what you want in this case.
You are not using cdrecord but a buggy fork called "wodim"; that might be the reason for your problems.
I recommend using the recent original software from:
ftp://ftp.berlios.de/pub/cdrecord/alpha/
