Python: Paging of argparse help text?

For a Python script that uses argparse and has a very long argument list, is it possible to make argparse page what it prints to the terminal when calling the script with the -h option?

I could not find a quick answer, so I wrote a little something:
# hello.py
import argparse
import os
import shlex
import stat
import subprocess as sb
import tempfile

def get_pager():
    """
    Get the path to your pager of choice, or less, or more.
    """
    pagers = (os.getenv('PAGER'), 'less', 'more')
    for path in (os.getenv('PATH') or '').split(os.path.pathsep):
        for pager in pagers:
            if pager is None:
                continue
            pager = iter(pager.split(' ', 1))
            prog = os.path.join(path, next(pager))
            args = next(pager, None) or ''
            try:
                md = os.stat(prog).st_mode
                if md & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH):
                    return '{p} {a}'.format(p=prog, a=args)
            except OSError:
                continue

class CustomArgParser(argparse.ArgumentParser):
    """
    A custom ArgumentParser class that prints help messages
    using either your pager, or less or more, if available.
    Otherwise, it does what ArgumentParser would do.
    Use the PAGER environment variable to force it to use your pager
    of choice.
    """
    def print_help(self, file=None):
        pager = get_pager()
        if pager is None:
            return super().print_help(file)
        fd, fname = tempfile.mkstemp(prefix='simeon_help_', suffix='.txt')
        with open(fd, 'w') as fh:
            super().print_help(fh)
        cmd = shlex.split('{p} {f}'.format(p=pager, f=fname))
        with sb.Popen(cmd) as proc:
            rc = proc.wait()
        if rc != 0:
            super().print_help(file)
        try:
            os.unlink(fname)
        except OSError:
            pass

if __name__ == '__main__':
    parser = CustomArgParser(description='Some little program')
    parser.add_argument('--message', '-m', help='Your message', default='hello world')
    args = parser.parse_args()
    print(args.message)
This snippet does two main things. First, it defines a function to get the absolute path to a pager. If you set the PAGER environment variable, it will try to use that to display the help messages. Second, it defines a custom class that inherits pretty much everything from argparse.ArgumentParser. The only method that gets overridden is print_help. When no valid pager is found, it defaults to super().print_help(). When one is found, it writes the help message to a temporary file and then opens a child process that invokes the pager with the path to that file. When the pager returns, the temporary file is deleted. That's pretty much it.
You are more than welcome to update get_pager to add as many pager programs as you see fit.
Call the script:
python3 hello.py --help ## Uses less
PAGER='nano --view' python3 hello.py --help ## Uses nano
PAGER=more python3 hello.py --help ## Uses more
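If you don't need that much control, the standard library already ships a pager helper: pydoc.pager pipes text through the pager named in the PAGER environment variable and falls back to plain printing when none is usable. A minimal sketch of the same idea (the class name here is my own):
# hello_pydoc.py -- a shorter variant built on pydoc.pager
import argparse
import pydoc

class PagedArgumentParser(argparse.ArgumentParser):
    def print_help(self, file=None):
        # pydoc.pager respects $PAGER and degrades to plain
        # output when no pager is available.
        pydoc.pager(self.format_help())

if __name__ == '__main__':
    parser = PagedArgumentParser(description='Some little program')
    parser.add_argument('--message', '-m', default='hello world')
    print(parser.parse_args().message)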

Related

Using unittest to check "--help" flag output

I have some code that parses command line options using argparse.
For example:
# mycode.py
import argparse

def parse_args():
    parser = argparse.ArgumentParser('my code')
    # list of arguments
    # ...
    # ...
    return vars(parser.parse_args())

if __name__ == "__main__":
    parse_args()
I would like to use unittest to check the output of the help function. I also don't want to change the actual code unless there is no other solution.
The help action has a SystemExit call built into it after printing to stdout, so I have had to try and catch it in the unittest.
Here is my unittest code with the following steps:
1) Set the sys.argv list to include the -h flag.
2) Wrap the function call in a context manager to prevent the SystemExit being viewed as an error.
3) Switch the sys.stdout temporarily to an io.StringIO object so I can inspect it without having it print to screen.
4) Call the function in a try...finally block so the SystemExit isn't fatal.
5) Switch sys.stdout back to the real stdout.
6) Open a file to which I had previously saved the help text (by entering python mycode.py -h > help_out.txt in the terminal) to verify it is the same as the captured output from the StringIO.
import unittest
import mycode
import sys
import io

class TestParams(unittest.TestCase):
    def setUp(self):
        pass

    def test_help(self):
        args = ["-h"]
        sys.argv[1:] = args
        with self.assertRaises(SystemExit):
            captured_output = io.StringIO()
            sys.stdout = captured_output
            try:
                mycode.parse_args()
            finally:
                sys.stdout = sys.__stdout__
            with open("help_out.txt", "r") as f:
                help_text = f.read()
            self.assertEqual(captured_output, help_text)

    def tearDown(self):
        pass
This code works, but the captured_output StringIO object is empty, so the test fails.
I am looking for an explanation as to what is going wrong with the captured output and/or an alternative solution.
I was very close. The captured_output wasn't actually empty - I just wasn't accessing its contents correctly.
Substitute captured_output.getvalue() for captured_output in the assertEqual call in my example code and it works perfectly.
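For reference, the same test can be written more compactly with contextlib.redirect_stdout, which swaps sys.stdout in and out for you. A sketch, assuming the same mycode module and help_out.txt file as above:
import contextlib
import io
import sys
import unittest
import mycode

class TestHelp(unittest.TestCase):
    def test_help(self):
        sys.argv[1:] = ["-h"]
        captured = io.StringIO()
        # redirect_stdout restores the real stdout even when SystemExit fires
        with self.assertRaises(SystemExit), contextlib.redirect_stdout(captured):
            mycode.parse_args()
        with open("help_out.txt", "r") as f:
            self.assertEqual(captured.getvalue(), f.read())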

cli: how-to initialise a dict and group click functions

I would like to initialise a global variable, in this case a dict called DOC, after passing a number of command line arguments and using the click library.
I have tried the following:
#!/usr/bin/python3
import os
import sys
import yaml
import logging
import click

DOC = {}

@click.group()
def cli():
    pass

@click.command()
@click.option("--logger-file", required=True,
              default='{}/blabla/cfg/logger.{}.yml'.format(
                  os.environ['HOME'], os.path.basename(__file__)),
              show_default=True, help="YAML logging configuration file")
def cli_logger_file(logger_file):
    if os.path.exists(logger_file):
        try:
            with open(logger_file, "rt") as f:
                DOC = yaml.safe_load(f.read())
            print("logger")
        except Exception as e:
            print(str(e))
            sys.exit()
    else:
        sys.exit()

if __name__ == '__main__':
    cli_logger_file()
    print("hi!")
    print(DOC)
But when I run it, the output is:
$ python3 etc.py --logger-file=/home/blabla/cfg/logger.src.app.component.yml
logger
{}
Could you please help me understand:
Why I do not see hi! being printed?
Why, if I replace @click.command() with @cli.command(), does it not recognise the command-line option --logger-file?
A couple of misunderstandings about how click works.
Why I do not see hi! being printed?
Click is a framework for writing cli programs. After the framework calls your handlers, it does not return...
What is @click.group()?
This question:
Why, if I replace @click.command() with @cli.command(), does it not recognize the command-line option --logger-file?
is related to what @click.group() does. A group is a special processor intended to implement subcommands. So in your case, when using a group, click will parse any --flags before the subcommand. But you don't have any subcommands, so the --flags are consumed by the group itself. Just remove the group, as you don't need it.
Code:
@click.command()
@click.option("--logger-file",
              default=os.path.join(os.path.expanduser("~"),
                                   'blabla/cfg/logger.{}.yml'.format(
                                       os.path.basename(__file__))),
              show_default=True,
              help="YAML logging configuration file")
def cli(logger_file):
    if os.path.exists(logger_file):
        try:
            with open(logger_file, "rt") as f:
                global DOC
                DOC = yaml.safe_load(f.read())
        except Exception as e:
            click.echo(str(e))
            sys.exit()
    click.echo('DOC: %s' % DOC)

if __name__ == '__main__':
    cli()
Notes:
You had set --logger-file to required while also specifying a default; since the default always satisfies the option, the required flag is pointless.
I used os.path.expanduser() instead of directly using an environment variable.
In setting the variable DOC, you need to tell python it is a global.
But why a global? Once you understand the answer to the first question at the top of this post, you will realize that any functionality this program implements will need to be called from the same function in which you parse the YAML. So you should likely just pass it along as a variable...
Assigning to a global variable from a function requires a global declaration.
Group commands are invoked by name, so when you use @cli.command you need to write:
$ python3 etc.py cli_logger_file --logger-file=foo.yml
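For completeness, here is a sketch of what the group layout would look like if you actually wanted a subcommand. The names mirror the question, and the explicit name= keeps the invocation above valid across click versions:
import click

@click.group()
def cli():
    pass

@cli.command(name='cli_logger_file')
@click.option("--logger-file", required=True, help="YAML logging configuration file")
def cli_logger_file(logger_file):
    click.echo('would load: %s' % logger_file)

if __name__ == '__main__':
    cli()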

Calling a python script with arguments using subprocess

I have a Python script which calls another Python script from another directory. To do that I use subprocess.Popen:
import os
import sys
import subprocess

# All arguments must be strings; convert them explicitly if needed.
arg_list = [project, profile, reader, file, str(loop)]

f = open(project_path + '/log.txt', 'w')
proc = subprocess.Popen([sys.executable, python_script] + arg_list,
                        stdin=subprocess.PIPE, stdout=f, stderr=f)
streamdata = proc.communicate()[0]
retCode = proc.returncode
f.close()
This part works well; thanks to the log file, I can see the errors that occur in the called script. Here's the Python script being called:
import time
import csv
import os

class loading(object):
    def __init__(self, project=None, profile=None, reader=None, file=None, loop=None):
        self.project = project
        self.profile = profile
        self.reader = reader
        self.file = file
        self.loop = loop

    def csv_generation(self):
        f = open(self.file, 'a')
        try:
            writer = csv.writer(f)
            if self.loop == True:
                writer.writerow((self.project, self.profile, self.reader))
            else:
                raise ValueError('File already completed')
        finally:
            f.close()

def main():
    p = loading(project, profile, reader, file, loop)
    p.csv_generation()

if __name__ == "__main__":
    main()
When I launch my subprocess.Popen, I get an error from the called script telling me that 'project' is not defined. It looks like the Popen method doesn't pass the arguments to that script. I think I'm doing something wrong; does anyone have an idea?
When you pass parameters to a new process, they are passed positionally; the names from the parent process do not survive, only the values. You need to add:
import sys

def main():
    if len(sys.argv) == 6:
        # argv values are always strings; convert loop back if you need a bool
        project, profile, reader, file, loop = sys.argv[1:]
    else:
        raise ValueError("incorrect number of arguments")
    p = loading(project, profile, reader, file, loop)
    p.csv_generation()
We are testing the length of sys.argv before the assignment (the first element is the name of the program).
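On the calling side, if you are on Python 3.5+, subprocess.run is a simpler way to express the same launch. A sketch; the values here are illustrative placeholders for the question's variables:
import subprocess
import sys

# Illustrative values; substitute your real script path and arguments.
python_script = 'loading.py'
arg_list = ['myproject', 'default', 'reader1', 'out.csv', 'True']  # strings only

with open('log.txt', 'w') as log:
    result = subprocess.run([sys.executable, python_script] + arg_list,
                            stdout=log, stderr=log)
print('exit code:', result.returncode)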

Python3: command not found, when running from cli

I am trying to run my Python module as a command; however, I always get the error: command not found.
#!/usr/bin/env python
import sys
import re
from sys import stdin
from sys import stdout

class Grepper(object):
    def __init__(self, pattern):
        self.pattern = pattern

    def pgreper(self):
        y = (str(self.pattern))
        for line in sys.stdin:
            regex = re.compile(y)
            x = re.search(regex, line)
            if x:
                sys.stdout.write(line)

if __name__ == "__main__":
    print("hello")
    pattern = str(sys.argv[1])
    Grepper(pattern).pgreper()
else:
    print("nope")
I am not sure whether it has something to do with the line:
if __name__ == "__main__":
However I just can't figure it out, this is a new area for me, and it's a bit stressful.
Your script name should have a .py extension, so it should be named something like pgreper.py.
To run it, you need to do either python pgreper.py pattern_string or, if it has executable permission, as explained by Gabriel, ./pgreper.py pattern_string. Note that you must give the script path (unless the current directory is in your command PATH); plain pgreper.py pattern_string will cause bash to print the "command not found" error message.
You can't pass the pattern data to it by piping; in other words, cat input.txt | ./pgreper.py "pattern_string" won't work: the pattern has to be passed as an argument on the command line. I guess you could do ./pgreper.py "$(cat input.txt)", but it'd be better to modify the script to read from stdin if you need that functionality.
Sorry, I didn't read the body of your script properly. :embarrassed:
I now see that your pgreper() method reads data from stdin. Sorry if the paragraph above caused any confusion.
By way of apology for my previous gaffe, here's a slightly cleaner version of your script.
#! /usr/bin/env python
import sys
import re

class Grepper(object):
    def __init__(self, pattern):
        self.pattern = pattern

    def pgreper(self):
        regex = re.compile(self.pattern)
        for line in sys.stdin:
            if regex.search(line):
                sys.stdout.write(line)

def main():
    print("hello")
    pattern = sys.argv[1]
    Grepper(pattern).pgreper()

if __name__ == "__main__":
    main()
else:
    print("nope")
Make sure you have something executable at /usr/bin/env.
When you try to run your Python module as a command, this is called as the interpreter. You may need to replace it with /usr/bin/python or /usr/bin/python3 if you don't have an env command.
Also, make sure your file is executable: chmod +x my_module.py, and try to run it with ./my_module.py.

execute python script with function from command line, Linux

I have a Python file called convertImage.py, and inside it a script that converts an image to my liking. The entire conversion script is set inside a function called convertFile(fileName).
My problem is that I need to execute this Python script from the Linux command line while passing the convertFile(fileName) function along with it.
example:
linux user$: python convertImage.py convertFile(fileName)
This should execute the python script passing the appropriate function.
example:
def convertFile(fileName):
    import os, sys
    import Image
    import string
    splitName = string.split(fileName, "_")
    endName = splitName[2]
    splitTwo = string.split(endName, ".")
    userFolder = splitTwo[0]
    imageFile = "/var/www/uploads/tmp/" + fileName
    ...rest of the script...
    return
What is the right way to execute this python script and properly pass the file name to the function from the liunx command line?
Thanks in advance.
This:
if __name__ == "__main__":
    import sys
    command = " ".join(sys.argv[1:])
    eval(command)
This will work. But it's insanely dangerous.
You really need to think about what your command-line syntax is. And you need to think about why you're breaking the long-established Linux standards for specifying arguments to a program.
For example, you should consider removing the useless ()'s in your example. Make it this, instead.
python convertImage.py convertFile fileName
Then, you can -- with little work -- use argparse to get the command ("convertFile") and the arguments ("fileName") and work within the standard Linux command line syntax.
import argparse

function_map = {
    'convertFile': convertFile,
    'conv': convertFile,
}

parser = argparse.ArgumentParser()
parser.add_argument('command')
parser.add_argument('fileName', nargs='+')
args = parser.parse_args()
function = function_map[args.command]
for name in args.fileName:
    function(name)
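With that in place, the invocation follows the usual Linux convention:
$ python convertImage.py convertFile the_image_file.png
$ python convertImage.py conv the_image_file.png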
Quick and dirty way:
linux user$: python convertImage.py convertFile fileName
and then in convertImage.py
if __name__ == '__main__':
    import sys
    function = getattr(sys.modules[__name__], sys.argv[1])
    filename = sys.argv[2]
    function(filename)
A more sophisticated approach would use argparse (for 2.7 or 3.2+) or optparse.
Create a top-level executable part of your script that parses the command-line argument(s) and then passes them to your function, like so:
import os
import sys
# import Image

def convertFile(fileName):
    splitName = fileName.split("_")
    endName = splitName[2]
    splitTwo = endName.split(".")
    userFolder = splitTwo[0]
    imageFile = "/var/www/uploads/tmp/" + fileName
    print(imageFile)  # (rest of the script)
    return

if __name__ == '__main__':
    filename = sys.argv[1]
    convertFile(filename)
Then, from a shell,
$ ./convertImage.py the_image_file.png
/var/www/uploads/tmp/the_image_file.png
