Best way to pass command line arguments via a file in Python

I have a lot of arguments to pass to my main.py, and it would be easier to store them in a text file. So, I would like to know the best way of using "config" files to pass command-line arguments.
A shell script is not what I need, unfortunately.

If you plan to use argparse, then fromfile_prefix_chars is designed to solve exactly this problem.
In your launching program, put all of the arguments, one per line, into a file. Pass @file.txt to your child program. In your child program, pass a fromfile_prefix_chars parameter to the ArgumentParser() constructor:
parser = argparse.ArgumentParser(fromfile_prefix_chars='@')
argparse takes care of the rest for you.
Here is an example:
from argparse import ArgumentParser
parser = ArgumentParser(fromfile_prefix_chars='@')
parser.add_argument('-f', '--foo')
parser.add_argument('--bar')
parser.add_argument('q', nargs='*')
ns = parser.parse_args()
print(ns)
The contents of foo.txt:
-f
1
--bar=2
q one
q two
The command line and the output:
$ python zz.py @foo.txt
Namespace(bar='2', foo='1', q=['q one', 'q two'])

Use configparser. It uses .ini files and it's really easy to use.
Config file:
[DEFAULT]
KeepAlive = 45
ForwardX11 = yes
Example Code:
>>> import configparser
>>> config = configparser.ConfigParser()
>>> config.sections()
[]
>>> config.read('example.ini')
['example.ini']
>>> default = config['DEFAULT']
>>> for key in default: print(key)
...
keepalive
forwardx11
>>> default['KeepAlive']
'45'
>>> default['ForwardX11']
'yes'
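If the goal is still to feed these values to a script's command-line parser, one rough sketch (assuming the [DEFAULT] section above and an argparse-based script; names here are illustrative) is to read the .ini file first and hand its values to argparse as defaults:
import argparse
import configparser

config = configparser.ConfigParser()
config.read('example.ini')                # the file shown above

parser = argparse.ArgumentParser()
parser.add_argument('--keepalive')
parser.add_argument('--forwardx11')
parser.set_defaults(**config['DEFAULT'])  # config values become the defaults
args = parser.parse_args()                # values given on the command line still win
print(args)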

Here is a simple function that converts any @foo argument into the contents of the file foo, one argument per line. After the conversion, you may use sys.argv in any of the normal ways.
import sys

def expand_arg_files(args):
    for arg in args:
        if arg.startswith('@'):
            # replace @file with the file's lines, recursively
            with open(arg[1:]) as f:
                file_args = f.read().splitlines()
            yield from expand_arg_files(file_args)
        else:
            yield arg

sys.argv[:] = expand_arg_files(sys.argv[:])
print(sys.argv)
Notes:
The generator delegation syntax (yield from) requires Python 3.3 or higher.
You may have @ args inside the argument file; the expansion is recursive.
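For example (file names here are purely illustrative), an argument file can itself reference another one:
The contents of base.txt:
--bar=2
@extra.txt
The contents of extra.txt:
-f
1
$ python script.py @base.txt
['script.py', '--bar=2', '-f', '1']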

Related

Python argparse requiring option, depending on the defined flags

I have a small python script, which uses argparse to let the user define options. It uses two flags for different modes and an argument to let the user define a file. See the simplified example below:
#!/usr/bin/python3
import argparse
from shutil import copyfile

def check_file(f):
    # Mock function: checks if the file exists, else raises argparse.ArgumentTypeError("file not found")
    return f

def main():
    aFile = "/tmp/afile.txt"
    parser = argparse.ArgumentParser(description="An example", formatter_class=argparse.RawTextHelpFormatter)
    parser.add_argument("-f", "--file", help="A file, used with method A.", default=aFile, type=check_file)
    parser.add_argument("-a", "--ay", help="Method A, requires file.", action='store_true')
    parser.add_argument("-b", "--be", help="Method B, no file required.", action='store_true')
    args = parser.parse_args()
    f = args.file
    a = args.ay
    b = args.be
    if a:
        copyfile(f, f + ".a")
    elif b:
        print("Method B")

if __name__ == "__main__":
    main()
Method A requires the file.
Method B does not.
If I run the script with method A, I either use the default file or one that is defined with -f/--file. The script checks if the file exists and everything is fine.
Now, if I run the script with method B, it shouldn't require the file, but the default option is checked and if it doesn't exist the argparse function raises the exception and the script exits.
How can I configure argparse to make -f optional, if -b is defined and require it, if -a is defined?
Edit: I just realized that it would be enough for me to make -f and -b mutually exclusive. But then, if I run -b only, check_file is executed anyway. Is there a way to prevent that?
#!/usr/bin/python3
import argparse
from shutil import copyfile

def check_file(f):
    # Mock function: checks if the file exists, else raises argparse.ArgumentTypeError("file not found")
    print("chk file")
    return f

def main():
    aFile = "/tmp/afile.txt"
    parser = argparse.ArgumentParser(description="An example", formatter_class=argparse.RawTextHelpFormatter)
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument("-f", "--file", help="A file, used with method A.", default=aFile, type=check_file)
    parser.add_argument("-a", "--ay", help="Method A, requires file.", action='store_true')
    group.add_argument("-b", "--be", help="Method B, no file required.", action='store_true')
    args = parser.parse_args()
    f = args.file
    a = args.ay
    b = args.be
    if a:
        print("File: " + str(f))
    elif b:
        print("Method B")
        print("file: " + str(f))

if __name__ == "__main__":
    main()
Output:
chk file
Method B
file: /tmp/afile.txt
You can define subparsers with ay/be as subcommands, or alternatively declare a second parser instance for -a. Something like:
import argparse
import sys

parser = argparse.ArgumentParser(
    description="An example",
    formatter_class=argparse.RawTextHelpFormatter
)
# ensure either option -a or -b only
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument("-a", "--ay", help="Method A, requires file.",
                   action='store_true')
group.add_argument("-b", "--be", help="Method B, no file required.",
                   action='store_true')
# define a parser for option -a (check_file as defined in the question)
parser_a = argparse.ArgumentParser()
parser_a.add_argument("-f", "--file", help="A file, used with method A.",
                      type=check_file, required=True)
parser_a.add_argument("-a", "--ay", help="Method A, requires file.",
                      action='store_true')
# first parse - get either -a/-b
args = parser.parse_known_args(sys.argv[1:])
# if -a, use the second parser to ensure -f is in the arguments;
# note parse_known_args returns a tuple, whose first element is the populated namespace
if args[0].ay:
    args = parser_a.parse_args(sys.argv[1:])
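For completeness, a rough sketch of the subcommand alternative mentioned above (this layout is an assumption, not code from the answer; required=True for add_subparsers needs Python 3.7+):
import argparse

parser = argparse.ArgumentParser(description="An example")
subparsers = parser.add_subparsers(dest='method', required=True)

# "ay" needs a file, "be" does not
parser_ay = subparsers.add_parser('ay', help='Method A, requires file.')
parser_ay.add_argument('-f', '--file', required=True)
subparsers.add_parser('be', help='Method B, no file required.')

args = parser.parse_args()
# e.g.  script.py ay -f /tmp/afile.txt   or   script.py be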
Your problem lies with how argparse handles defaults. You'd get this behavior even if -f was the only argument. If the default is a string value, it will be 'evaluated' if the Action isn't seen.
parser.add_argument("-f", "--file", help="A file, used with method A.", default=aFile, type=check_file)
At the start of parsing, defaults are put into the args namespace. During parsing, argparse keeps track of which Actions have been seen. At the end of parsing, it checks the Namespace values of Actions which haven't been seen; if a value matches the default (the usual case) and is a string, it passes that default through the type function.
In your -f case, the default is presumably a file name, i.e. a string, so it will be 'evaluated' if the user doesn't provide an alternative. In earlier argparse versions, defaults were evaluated regardless of whether they were used or not. For something like an int or float type that wasn't a problem, but for FileType it could result in unneeded file opening/creation.
Ways around this?
write check_file so it gracefully handles aFile.
make sure aFile is valid so check_file runs without error. This is the usual case.
use a non-string default, e.g. an already open file.
use the default default None, and add the default value after parsing.
if args.file is None:
    args.file = aFile
Combining this with -a and -b actions you have to decide whether:
if -a, is a -f value required? If -f isn't provided, what's the right default?
if -b, does it matter whether -f has a default or whether the user provides this argument? Could you just ignore it?
If -f is useful only when -a is True, why not combine them?
parser.add_argument('-a', nargs='?', default=None, const='valid_file', type=check_file)
With nargs='?', this works in 3 ways (see the docs on const):
no -a: args.a = default
bare -a: args.a = const
-a afile: args.a = afile
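A quick hedged illustration of those three cases (the const value 'valid_file' and the file name are just placeholders):
import argparse

p = argparse.ArgumentParser()
p.add_argument('-a', nargs='?', default=None, const='valid_file')

print(p.parse_args([]))                   # Namespace(a=None)          <- default
print(p.parse_args(['-a']))               # Namespace(a='valid_file')  <- const
print(p.parse_args(['-a', 'afile.txt']))  # Namespace(a='afile.txt')   <- supplied value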
An even simpler example of this behavior
In [956]: p = argparse.ArgumentParser()
In [957]: p.add_argument('-f',type=int, default='astring')
...
In [958]: p.parse_args('-f 1'.split())
Out[958]: Namespace(f=1)
In [959]: p.parse_args(''.split())
usage: ipython3 [-h] [-f F]
ipython3: error: argument -f: invalid int value: 'astring'
The string default is passed through int, resulting in an error. If I'd set the default to something else, like a list (default=[1,2,3]), it would have run even though int would have choked on that default.

How to use python argparse with args other than sys.argv?

Is there a way to use argparse with any list of strings, instead of only with sys.argv?
Here's my problem: I have a program which looks something like this:
# This file is program1.py
import argparse
import sys

def main(argv):
    parser = argparse.ArgumentParser()
    # Do some argument parsing

if __name__ == '__main__':
    main(sys.argv)
This works fine when this program is called straight from the command line. However, I have another python script which runs batch versions of this script with different commandline arguments, which I'm using like this:
import program1
arguments = ['arg1', 'arg2', 'arg3']
program1.main(arguments)
I still want to be able to parse the arguments, but argparse automatically defaults to using sys.argv instead of the arguments that I give it. Is there a way to pass in the argument list instead of using sys.argv?
You can pass a list of strings to parse_args:
parser.parse_args(['--foo', 'FOO'])
Just change the script to default to sys.argv[1:], and parse arguments omitting the first one (which is the name of the invoked command):
import argparse, sys

def main(argv=sys.argv[1:]):
    parser = argparse.ArgumentParser()
    parser.add_argument("--level", type=int)
    args = parser.parse_args(argv)

if __name__ == '__main__':
    main()
Or, if you cannot omit the first argument:
import argparse, sys

def main(args=None):
    # if None is passed, parse_args uses sys.argv[1:]; otherwise it uses the custom args
    parser = argparse.ArgumentParser()
    parser.add_argument("--level", type=int)
    args = parser.parse_args(args)
    # Do some argument parsing

if __name__ == '__main__':
    main()
Last one: if you cannot change the called program, you can still do something.
Let's suppose the program you cannot change is called argtest.py (I added a call to print the arguments).
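(The question does not show argtest.py; a minimal version like the following, with the extra print added, is assumed here.)
# argtest.py (assumed content)
import sys
import argparse

def main():
    print(sys.argv)                      # the added call that prints the arguments
    parser = argparse.ArgumentParser()
    parser.add_argument('args', nargs='*')
    parser.parse_args()

if __name__ == '__main__':
    main()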
Then just replace the argv value of the sys module as seen through argtest (argtest.sys is the same sys module the child program uses):
import argtest
argtest.sys.argv=["dummy","foo","bar"]
argtest.main()
output:
['dummy', 'foo', 'bar']
Python argparse now has a parameter nargs for add_argument (https://docs.python.org/3/library/argparse.html).
It allows us to have as many arguments as we want for a named parameter (here, alist).
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--alist", nargs="*")
args = parser.parse_args()
print(args.alist)
All command line values that follow --alist are added to a list.
Example:
$ python3 argparse-01.py --alist fred barney pebbles "bamm bamm"
['fred', 'barney', 'pebbles', 'bamm bamm']
As you see, it is allowed to quote the arguments, but not necessary unless you need to protect a space.

python argparse file extension checking

Can argparse be used to validate the filename extension of a filename command-line parameter?
For example, if I have a Python script I run from the command line:
$ script.py file.csv
$ script.py file.tab
$ script.py file.txt
I would like argparse to accept the first two filenames but reject the third.
I know you can do something like this:
parser = argparse.ArgumentParser()
parser.add_argument("fn", choices=["csv","tab"])
args = parser.parse_args()
to specify two valid choices for a command-line option.
What I'd like is this:
parser.add_argument("fn", choices=["*.csv","*.tab"])
to specify two valid file extensions for the command-line option. Unfortunately this doesn't work; is there a way to achieve this using argparse?
Sure -- you just need to specify an appropriate function as the type.
import argparse
import os.path

parser = argparse.ArgumentParser()

def file_choices(choices, fname):
    ext = os.path.splitext(fname)[1][1:]
    if ext not in choices:
        parser.error("file doesn't end with one of {}".format(choices))
    return fname

parser.add_argument('fn', type=lambda s: file_choices(("csv", "tab"), s))
parser.parse_args()
demo:
temp $ python test.py test.csv
temp $ python test.py test.foo
usage: test.py [-h] fn
test.py: error: file doesn't end with one of ('csv', 'tab')
Here's a possibly cleaner, more general way to do it:
import argparse
import os.path

def CheckExt(choices):
    class Act(argparse.Action):
        def __call__(self, parser, namespace, fname, option_string=None):
            ext = os.path.splitext(fname)[1][1:]
            if ext not in choices:
                option_string = '({})'.format(option_string) if option_string else ''
                parser.error("file doesn't end with one of {}{}".format(choices, option_string))
            else:
                setattr(namespace, self.dest, fname)
    return Act

parser = argparse.ArgumentParser()
parser.add_argument('fn', action=CheckExt({'csv', 'txt'}))
print parser.parse_args()
The downside here is that the code is getting a bit more complicated in some ways; the upshot is that the interface gets a good bit cleaner when you actually go to format your arguments.
Define a custom function which takes the name as a string - split the extension off for comparison and just return the string if it's okay, otherwise raise the exception that argparse expects:
import argparse
import os.path

def valid_file(param):
    base, ext = os.path.splitext(param)
    if ext.lower() not in ('.csv', '.tab'):
        raise argparse.ArgumentTypeError('File must have a csv or tab extension')
    return param
And then use that function, such as:
parser = argparse.ArgumentParser()
parser.add_argument('filename', type=valid_file)
No. You can provide a container object to the choices argument, or anything that supports the "in" operator; you can read more in the argparse docs.
You can always check it yourself and provide feedback to the user though.
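To illustrate the "anything that supports in" point, here is a hedged sketch (the class name is made up) of a container whose membership test checks the file extension:
import argparse
import os.path

class ExtChoices:
    # container whose "in" test accepts any filename with one of the given extensions
    def __init__(self, *exts):
        self.exts = exts
    def __contains__(self, fname):
        return os.path.splitext(fname)[1].lower() in self.exts
    def __iter__(self):
        # argparse iterates over choices when building help and error messages
        return iter(self.exts)

parser = argparse.ArgumentParser()
parser.add_argument('fn', choices=ExtChoices('.csv', '.tab'))
print(parser.parse_args())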

type=dict in argparse.add_argument()

I'm trying to set up a dictionary as an optional argument (using argparse); the following line is what I have so far:
parser.add_argument('-i','--image', type=dict, help='Generate an image map from the input file (syntax: {\'name\': <name>, \'voids\': \'#08080808\', \'0\': \'#00ff00ff\', \'100%%\': \'#ff00ff00\'}).')
But running the script:
$ ./script.py -i {'name': 'img.png','voids': '#00ff00ff','0': '#ff00ff00','100%': '#f80654ff'}
script.py: error: argument -i/--image: invalid dict value: '{name:'
Even though, inside the interpreter,
>>> a={'name': 'img.png','voids': '#00ff00ff','0': '#ff00ff00','100%': '#f80654ff'}
works just fine.
So how should I pass the argument instead?
Thanks in advance.
Necroing this: json.loads works here, too. It doesn't seem too dirty.
import json
import argparse
test = '{"name": "img.png","voids": "#00ff00ff","0": "#ff00ff00","100%": "#f80654ff"}'
parser = argparse.ArgumentParser()
parser.add_argument('-i', '--input', type=json.loads)
args = parser.parse_args(['-i', test])
print(args.input)
Returns:
{u'0': u'#ff00ff00', u'100%': u'#f80654ff', u'voids': u'#00ff00ff', u'name': u'img.png'}
For completeness, and similarly to json.loads, you could use yaml.load (available from PyYAML on PyPI; with newer PyYAML you would use yaml.safe_load or pass a Loader). This has the advantage over json that there is no need to quote individual keys and values on the command line, unless you are trying to, say, force integers into strings or otherwise overcome yaml conversion semantics. But obviously the whole string will need quoting as it contains spaces!
>>> import argparse
>>> import yaml
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('-fna', '--filename-arguments', type=yaml.load)
>>> data = "{location: warehouse A, site: Gloucester Business Village}"
>>> ans = parser.parse_args(['-fna', data])
>>> print ans.filename_arguments['site']
Gloucester Business Village
Although admittedly in the question given, many of the keys and values would have to be quoted or rephrased to prevent yaml from barfing. Using the following data seems to work quite nicely, if you need numeric rather than string values:
>>> parser.add_argument('-i', '--image', type=yaml.load)
>>> data = "{name: img.png, voids: 0x00ff00ff, '0%': 0xff00ff00, '100%': 0xf80654ff}"
>>> ans = parser.parse_args(['-i', data])
>>> print ans.image
{'100%': 4161164543L, 'voids': 16711935, 'name': 'img.png', '0%': 4278255360L}
Using simple lambda parsing is quite flexible:
parser.add_argument(
    '--fieldMap',
    type=lambda x: {k: int(v) for k, v in (i.split(':') for i in x.split(','))},
    help='comma-separated field:position pairs, e.g. Date:0,Amount:2,Payee:5,Memo:9'
)
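As a quick check (using the example values from the help text above):
args = parser.parse_args(['--fieldMap', 'Date:0,Amount:2,Payee:5,Memo:9'])
print(args.fieldMap)   # {'Date': 0, 'Amount': 2, 'Payee': 5, 'Memo': 9}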
I’ll bet your shell is messing with the braces, since curly braces are the syntax used for brace expansion features in many shells (see here).
Passing in a complex container such as a dictionary, requiring the user to know Python syntax, seems a bad design choice in a command-line interface. Instead, I'd recommend passing the options in one by one within an argument group, and then building the dict programmatically from the parsed group, as sketched below.
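A minimal sketch of that suggestion, with made-up option names (not from the original question):
import argparse

parser = argparse.ArgumentParser()
group = parser.add_argument_group('image')     # group the related options together
group.add_argument('--name')
group.add_argument('--voids')
group.add_argument('--zero-color')
args = parser.parse_args()

# build the dict programmatically from the parsed values
image = {'name': args.name, 'voids': args.voids, '0': args.zero_color}
print(image)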
Combining the type= piece from @Edd and the ast.literal_eval piece from @Bradley yields the most direct solution, IMO. It allows direct retrieval of the argval and even takes a (quoted) default value for the dict:
Code snippet
parser.add_argument('--params', '--p', help='dict of params ', type=ast.literal_eval, default="{'name': 'adam'}")
args = parser.parse_args()
Running the Code
python test.py --p "{'town': 'union'}"
Note the quotes around the dict value. This quoting works on Windows and Linux (tested with [t]csh).
Retrieving the Argval
params = args.params  # avoid shadowing the built-in name dict
You can definitely get something that looks like a dictionary literal into the argument parser, but you've got to quote it so that when the shell parses your command line, it comes in as
a single argument instead of many (the space character is the normal argument delimiter), and
properly quoted (the shell removes quotes during parsing, because it's using them for grouping).
So something like this can get the text you wanted into your program:
python MYSCRIPT.py -i "{\"name\": \"img.png\", \"voids\": \"#00ff00ff\",\"0\": \"#ff00ff00\",\"100%\": \"#f80654ff\"}"
However, this string is not a valid argument to the dict constructor; instead, it's a valid python code snippet. You could tell your argument parser that the "type" of this argument is eval, and that will work:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('-i','--image', type=eval, help='Generate an image map...')
args = parser.parse_args()
print args
and calling it:
% python MYSCRIPT.py -i "{\"name\": \"img.png\", \"voids\": \"#00ff00ff\",\"0\": \"#ff00ff00\",\"100%\": \"#f80654ff\"}"
Namespace(image={'0': '#ff00ff00', '100%': '#f80654ff', 'voids': '#00ff00ff', 'name': 'img.png'})
But this is not safe; the input could be anything, and you're evaluating arbitrary code. It would be equally unwieldy, but the following would be much safer:
import argparse
import ast
parser = argparse.ArgumentParser()
parser.add_argument('-i','--image', type=ast.literal_eval, help='Generate an image map...')
args = parser.parse_args()
print args
This also works, but is MUCH more restrictive on what it will allow to be eval'd.
Still, it's very unwieldy to have the user type out something, properly quoted, that looks like a python dictionary on the command line. And, you'd have to do some checking after the fact to make sure they passed in a dictionary instead of something else eval-able, and that it had the right keys in it. Much easier to use:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--image-name", required=True)
parser.add_argument("--void-color", required=True)
parser.add_argument("--zero-color", required=True)
parser.add_argument("--full-color", required=True)
args = parser.parse_args()
image = {
    "name": args.image_name,
    "voids": args.void_color,
    "0%": args.zero_color,
    "100%": args.full_color
}
print image
For:
% python MYSCRIPT.py --image-name img.png --void-color \#00ff00ff --zero-color \#ff00ff00 --full-color \#f80654ff
{'100%': '#f80654ff', 'voids': '#00ff00ff', 'name': 'img.png', '0%': '#ff00ff00'}
One of the simplest ways I've found is to parse the dictionary as a list, and then convert that to a dictionary. For example using Python3:
#!/usr/bin/env python3
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-i', '--image', type=str, nargs='+')
args = parser.parse_args()

if args.image is not None:
    i = iter(args.image)
    args.image = dict(zip(i, i))

print(args)
then you can type on the command line something like:
./script.py -i name img.png voids '#00ff00ff' 0 '#ff00ff00' '100%' '#f80654ff'
to get the desired result:
Namespace(image={'name': 'img.png', '0': '#ff00ff00', 'voids': '#00ff00ff', '100%': '#f80654ff'})
General Advice: DO NOT USE eval.
If you really have to ...
"eval" is dangerous. Use it if you are sure no one will knowingly input malicious input. Even then there can be disadvantages. I have covered one bad example.
Using eval instead of json.loads has some advantages as well though. A dict doesn't really need to be a valid json. Hence, eval can be pretty lenient in accepting "dictionaries". We can take care of the "danger" part by making sure that final result is indeed a python dictionary.
import json
import argparse

tests = [
    '{"name": "img.png","voids": "#00ff00ff","0": "#ff00ff00","100%": "#f80654ff"}',
    '{"a": 1}',
    "{'b':1}",
    "{'$abc': '$123'}",
    '{"a": "a" "b"}'  # Bad dictionary but still accepted by eval
]

def eval_json(x):
    dicti = eval(x)
    assert isinstance(dicti, dict)
    return dicti

parser = argparse.ArgumentParser()
parser.add_argument('-i', '--input', type=eval_json)

for test in tests:
    args = parser.parse_args(['-i', test])
    print(args)
Output:
Namespace(input={'name': 'img.png', '0': '#ff00ff00', '100%': '#f80654ff', 'voids': '#00ff00ff'})
Namespace(input={'a': 1})
Namespace(input={'b': 1})
Namespace(input={'$abc': '$123'})
Namespace(input={'a': 'ab'})
A minimal example to pass arguments as a dictionary from the command line:
# file.py
import argparse
import json

parser = argparse.ArgumentParser()
parser.add_argument("-par", "--parameters",
                    required=False,
                    default=None,
                    type=json.loads)
args = parser.parse_args()
print(args.parameters)
and in the terminal you can pass your arguments as a dictionary using a string format:
python file.py --parameters '{"a":1}'
 You could try:
$ ./script.py -i "{'name': 'img.png','voids': '#00ff00ff','0': '#ff00ff00','100%': '#f80654ff'}"
I haven't tested this; I'm on my phone right now.
Edit: BTW I agree with @wim, I think having each key/value of the dict as an argument would be nicer for the user.
Here is another solution, since I had to do something similar myself. I use the ast module to convert the dictionary, which is input to the terminal as a string, to a dict. It is very simple.
Code snippet
Say the following is called test.py:
import argparse
import ast

parser = argparse.ArgumentParser()
parser.add_argument('--params', '--p', help='dict of params', type=str)
options = parser.parse_args()

my_dict = options.params
my_dict = ast.literal_eval(my_dict)

print(my_dict)
for k in my_dict:
    print(type(my_dict[k]))
    print(k, my_dict[k])
Then in the terminal/cmd line, you would write:
Running the code
python test.py --p '{"name": "Adam", "lr": 0.001, "betas": (0.9, 0.999)}'
Output
{'name': 'Adam', 'lr': 0.001, 'betas': (0.9, 0.999)}
<class 'str'>
name Adam
<class 'float'>
lr 0.001
<class 'tuple'>
betas (0.9, 0.999)
TLDR Solution:
The simplest and quickest solution is as below:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-par", "--parameters",
                    default={},
                    type=str)
args = parser.parse_args()
In the parser.add_argument function:
Use a dictionary object for the default
Use str as the type
Then the default value of args.parameters will already be a dictionary, without any need for ast.literal_eval or json.loads.
Motivation:
The methods posted by @Galuoises and @frankeye appear not to work when the default is set to a JSON-encoded dictionary, such as below.
parser.add_argument("-par", "--parameters",
required=False, default="{\"k1\":v1, \"k2\":v2}",
type=json.loads)
This is because argparse passes a string default through the type function when the argument isn't supplied, and the string above is not valid JSON (the values v1 and v2 are unquoted), so json.loads fails on it.
The following works just fine:
parser = argparse.ArgumentParser()
parser.add_argument("-par", "--parameters",
                    required=False, default={"k1a": "v1a", "k2a": "v2a"},
                    type=json.loads)
args = parser.parse_args()
print(str(args.parameters))
result:
{'k1a': 'v1a', 'k2a': 'v2a'}
In other words, because json.loads returns a dictionary rather than a string, the default object should itself be given as a dictionary rather than as a string.
import argparse, json, sys

sys.argv.extend(['-par', '{"k1b":"v1b","k2b":"v2b"}'])

parser = argparse.ArgumentParser()
parser.add_argument("-par", "--parameters",
                    required=False, default={"k1": "v1", "k2": "v2"},
                    type=json.loads)
args = parser.parse_args()
print(str(args.parameters))
result:
{'k1b': 'v1b', 'k2b': 'v2b'}

Using a file to store optparse arguments

I've been using optparse for a while now, and would like to add the ability to load the arguments from a config file.
So far the best I can think of is a wrapper batch script with the arguments hardcoded... seems clunky.
What is the most elegant way to do this?
I agree with S.Lott's idea of using a config file, but I'd recommend using the built-in ConfigParser (configparser in Python 3) module to parse it, rather than a home-brewed solution.
Here's a brief script that illustrates ConfigParser and optparse in action.
import ConfigParser
from optparse import OptionParser

CONFIG_FILENAME = 'defaults.cfg'

def main():
    config = ConfigParser.ConfigParser()
    config.read(CONFIG_FILENAME)

    parser = OptionParser()
    parser.add_option("-l",
                      "--language",
                      dest="language",
                      help="The UI language",
                      default=config.get("Localization", "language"))
    parser.add_option("-f",
                      "--flag",
                      dest="flag",
                      help="The country flag",
                      default=config.get("Localization", "flag"))
    print parser.parse_args()

if __name__ == "__main__":
    main()
Output:
(<Values at 0x2182c88: {'flag': 'japan.png', 'language': 'Japanese'}>, [])
Run with "parser.py --language=French":
(<Values at 0x2215c60: {'flag': 'japan.png', 'language': 'French'}>, [])
Help is built in.
Run with "parser.py --help":
Usage: parser.py [options]

Options:
  -h, --help            show this help message and exit
  -l LANGUAGE, --language=LANGUAGE
                        The UI language
  -f FLAG, --flag=FLAG  The country flag
The config file:
[Localization]
language=Japanese
flag=japan.png
You can use the argparse module for that:
>>> open('args.txt', 'w').write('-f\nbar')
>>> parser = argparse.ArgumentParser(fromfile_prefix_chars='@')
>>> parser.add_argument('-f')
>>> parser.parse_args(['-f', 'foo', '@args.txt'])
Namespace(f='bar')
It might be included in the stdlib; see PEP 389 (it has since landed, in Python 2.7 and 3.2).
I had a similar problem, but also wanted to specific the config file as an argument. Inspired by S. Lott's answer, I came up with the following code.
Example terminal session:
$ python defaultconf.py # use hard-coded defaults
False
$ python defaultconf.py --verbose # verbose on command line
True
$ python defaultconf.py --loadconfig blah # load config with 'verbose':True
True
$ python defaultconf.py --loadconfig blah --quiet # Override configured value
False
Code:
#!/usr/bin/env python2.6
import optparse

def getParser(defaults):
    """Create and return an OptionParser instance, with supplied defaults."""
    o = optparse.OptionParser()
    o.set_defaults(**defaults)
    o.add_option("--verbose", dest="verbose", action="store_true")
    o.add_option("--quiet", dest="verbose", action="store_false")
    o.add_option("--loadconfig", dest="loadconfig")
    return o

def main():
    # Hard-coded defaults (including non-command-line-argument options)
    my_defaults = {'verbose': False, 'config_only_variable': 42}

    # Initially parse arguments
    opts, args = getParser(my_defaults).parse_args()

    if opts.loadconfig is not None:
        # Load config from disk, update the defaults dictionary, and reparse.
        # Could use ConfigParser, simplejson, yaml etc.
        config_file_values = {'verbose': True}  # the dict loaded from disk
        my_defaults.update(config_file_values)
        opts, args = getParser(my_defaults).parse_args()

    print opts.verbose

if __name__ == '__main__':
    main()
A practical implementation can be found on Github: The defaults dictionary, the argument parser and the main function
That's what the set_defaults function is for. http://docs.python.org/library/optparse.html#optparse.OptionParser.set_defaults
Create a file that's the dictionary of default values.
{ 'arg1': 'this',
'arg2': 'that'
}
Then read this file, eval it to convert the text to a dictionary, and provide this dictionary as the arguments to set_defaults.
If you're really worried about eval, then use JSON (or YAML) notation for this file. Or you could even make an .INI file out of it and use configparser to get your defaults.
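For example, a sketch of the JSON variant (the file name defaults.json and option names are illustrative):
import json
from optparse import OptionParser

with open('defaults.json') as f:
    defaults = json.load(f)        # e.g. {"arg1": "this", "arg2": "that"}

parser = OptionParser()
parser.add_option('--arg1')
parser.add_option('--arg2')
parser.set_defaults(**defaults)
print(parser.parse_args())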
Or you can use a simple list of assignment statements and exec.
Config File.
arg1 = 'this'
arg2 = 'that'
Reading the config file.
defaults = {}
with open('defaults.py', 'r') as config:
    exec config in {}, defaults
Read the arguments in the same command-line format from a file, e.g. @commands, then use your original parser to parse them.
options, args = parser.parse_args()
if args[0][0] == '@':  # script.py @optfile
    with open(args[0][1:]) as f:
        fa = [l.strip() for l in f]
    fa = fa + args[1:]  # put back any other positional arguments
    # Use your original parser to parse the new options
    options, args = parser.parse_args(args=fa, values=options)
I've built a lot of scripts with flags and options lately, and I've come up with the solution described here.
Basically I instantiate an OptionParser with a special flag that tells it to try to load options from a file, so you can use your script normally, specifying options on the command line, or provide them (or a set of them) from a file. A rough sketch of the idea follows.
Update: I have shared the code on GitHub.
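The shared code isn't reproduced here, but the described pattern looks roughly like this (the flag name and file format are assumptions, not the author's actual implementation):
import optparse
import shlex

parser = optparse.OptionParser()
parser.add_option('--optfile', help='read additional options from this file')
parser.add_option('--verbose', action='store_true')

opts, args = parser.parse_args()
if opts.optfile:
    with open(opts.optfile) as f:
        extra = shlex.split(f.read(), comments=True)
    # parse the file's options into the same Values object
    # (note: they are applied after the command-line ones here)
    opts, args = parser.parse_args(extra + args, values=opts)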
