I am writing a command-line plugin-based program where the plugins will provide additional functionality on top of whatever I provide.
So, for example, suppose I wrote a simple script that parses images and stores them, and that's all I do. Then someone else can write a set of scripts to manipulate the images, putting those scripts in a plugin.
The plugin would be loaded and users can access the plugin by specifying its name in the command line.
It is not uncommon for scripts to want to provide additional options for the user.
So suppose in some years, 20 different plugins have been written.
Now, all of the authors want to allow users to provide options, so the main engine should take the user's options and pass them to the plugin so that it can handle them however it wants.
To keep it uniform, they might agree that certain options should perform a similar operation. For example, "-o name" should set the output name to "name". They would then go about implementing their own options, which the main engine does not know about (of course, it shouldn't know what the plugins do).
I am using the deprecated getopt module, and it will throw exceptions whenever I specify an undefined option. I have heard of optparse and argparse, but I am not sure if these will allow the user to specify any options he wants without the code throwing an exception.
How can I make it so I can specify any command-line option?
argparse lets you partially parse an argument list with the parse_known_args method, returning what was parsed correctly, together with a list of the remaining arguments.
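For example, a minimal sketch (the plugin name "resize" and its --width option are made up for illustration):

import argparse

# The engine parses only the options it knows about and hands everything
# else, untouched, to the selected plugin.
parser = argparse.ArgumentParser()
parser.add_argument('plugin', help='name of the plugin to run')
parser.add_argument('-o', dest='output', help='output name (shared convention)')

args, remaining = parser.parse_known_args(['resize', '-o', 'out.png', '--width', '640'])

print(args.plugin, args.output)  # resize out.png
print(remaining)                 # ['--width', '640'] -> hand these to the plugin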
The solution you want is probably to treat the command-line arguments as a sort of in-process pipeline, where the position of an option determines which command it applies to:
command <global options> sub_command <sub_options> new_sub_command <new_sub_options>
Each command shifts options off of sys.argv until it finds one it doesn't understand, or one that cannot be a valid option; it then stops parsing arguments, does its job, and returns control to the plugin dispatcher.
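Here is a rough sketch of that dispatch loop, assuming each plugin exposes a handler that consumes its own options with parse_known_args and returns whatever it did not recognise (the "resize" plugin and its options are invented for the example):

import argparse
import sys

def cmd_resize(argv):
    parser = argparse.ArgumentParser(prog='resize')
    parser.add_argument('-o', dest='output')
    parser.add_argument('--width', type=int, default=100)
    opts, rest = parser.parse_known_args(argv)
    print('resize ->', opts)
    return rest  # hand anything we did not understand back to the dispatcher

PLUGINS = {'resize': cmd_resize}

def main(argv):
    while argv:
        handler = PLUGINS.get(argv[0])
        if handler is None:
            sys.exit('unknown sub-command: %r' % argv[0])
        argv = handler(argv[1:])

if __name__ == '__main__':
    main(sys.argv[1:])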
I would like to pass command-line arguments to a PyCharm script from a file.
I am aware command line arguments can be passed via run->edit configurations -> parameters.
This method is not good enough for me because:
In some cases the parameters line gets deleted. I'm not sure why; maybe a git pull, maybe some other reason.
I want several configurations, and I want to save them in source control
I want to set those parameters programmatically.
I think taking command-line arguments from some config file would solve all my problems.
How can I do that?
EDIT1:
Use case example, as it seems my point isn't perfectly clear:
I want to debug my code in PyCharm with some configuration: add some breakpoints, go line by line.
Next I want to change the configuration and debug again, with PyCharm.
Doing this with some script that hacks the PyCharm file where the run configurations are stored seems to me like going too far.
Does PyCharm offer no way to take command-line parameters from a file?
PyCharm lets you have unlimited named runtime configurations, as you appear to know, so I am a little puzzled that you ask. Click on the current configuration name to the left of the green Run arrow (top right), then Edit Configurations.
These configurations live in workspace.xml. Nothing stopping you from checking it in.
For programs that take complex command line parameters it is traditional to provide a way to read the values from a named file, typically introduced by #. In argparse you specify this as follows:
parser = argparse.ArgumentParser(fromfile_prefix_chars='#')
Arguments read from a file must by default be one per line.
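A minimal sketch of how that works, assuming an arguments file called args.txt and a couple of made-up options:

import argparse

# Write an arguments file; by default argparse expects one argument per line.
with open('args.txt', 'w') as f:
    f.write('--host\nexample.com\n--port\n8080\n')

parser = argparse.ArgumentParser(fromfile_prefix_chars='#')
parser.add_argument('--host')
parser.add_argument('--port', type=int)

# '#args.txt' is replaced by the arguments read from the file, so a PyCharm
# run configuration only needs to contain '#args.txt'.
args = parser.parse_args(['#args.txt'])
print(args.host, args.port)  # example.com 8080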
Create shell scripts calling the Python script the way you need.
I need some Python module to support forwarding command line arguments to other commands.
argparse allows you to parse arguments easily, but it doesn't provide any "deparsing" tool.
I could just forward os.sys.argv if I didn't need to delete or change the values of some of them, but I do.
I can imagine a class that just operates on an array of strings the whole time, without losing any information, but I couldn't find one.
Does anybody know of such a tool, or has anyone met a similar problem and found another nice way to handle it?
(Sorry for English :()
If you use the subprocess module to run the commands with the delegated arguments, you can specify your command as a list of strings that won't be subject to shell parsing (as long as you don't use shell=True). You therefore don't need to worry about quoting the way you would if you were reconstructing a command line. See https://docs.python.org/2/library/subprocess.html#frequently-used-arguments for further details.
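A small sketch of combining the two ideas: parse_known_args keeps the options you care about and leaves the rest as a list you can forward untouched (the wrapped command "other_tool" and its --debug flag are placeholders):

import argparse
import subprocess

parser = argparse.ArgumentParser()
parser.add_argument('--verbose', action='store_true')  # consumed here, not forwarded
args, passthrough = parser.parse_known_args()

# 'other_tool' stands in for whatever command you are wrapping.
cmd = ['other_tool'] + passthrough
if args.verbose:
    cmd.append('--debug')  # example of adding/changing an argument

# Passing a list (and not using shell=True) avoids any re-quoting problems.
subprocess.call(cmd)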
All I can find is this reference:
Is it possible to use POD(plain old documentation) with Python?
which looks like you have to generate a whole separate set of docs to go with the code.
I would like to try Python for making cmdline utils, but when I do this with Perl I can embed the docs directly in the source, and use the Pod2Usage module along with Getopt so that any of my scripts can be run like this:
cmd --man
and this triggers the pod system to dump documentation that is embedded in the script in man-page format. It can also generate shorter (synopsis), or medium formats.
It looks like I could use the pydoc code and kind of reverse engineer it to sort-of do the task (at least showing the full documentation), but I am hoping something better already exists.
The python-modargs package lets you create self-documenting command line interfaces. You define a function for each command you want to make available, and the function's docstring becomes the help text for that function. The function's keyword arguments become named arguments and python-modargs will parse inline comments after the keyword arguments to be help text for that argument.
I use python-modargs to generate the command line interface for dexy, here is the module which defines the commands:
https://github.com/ananelson/dexy/blob/027954f9234363d506225d40b675b3d6478994f4/dexy/commands.py#L144
You need to implement a help_command method to get the generated help, it's a 1-liner.
I think pydoc may be what you're looking for.
It certainly isn't quite the same as POD, as you have to call pydoc itself (e.g. pydoc myscript.py), but I guess it can be a good starting point.
Of course, you can always add pydoc support to your script by importing pydoc and using its functions/classes.
Check out pydoc's own CLI implementation for the best example.
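If you only need something roughly like Perl's --man, one possible approach (not a drop-in equivalent of POD/Pod2Usage) is to keep the documentation in the module docstring and hand it to pydoc on demand; the --man flag and the docstring here are just illustrative:

"""mytool - example utility.

The long documentation you would otherwise write as POD lives in this
module docstring.
"""
import argparse
import pydoc
import sys

parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument('--man', action='store_true',
                    help='show the full embedded documentation')
args = parser.parse_args()

if args.man:
    # render_doc formats the docstring much like `pydoc mytool` would;
    # pydoc.pager pipes it through a pager, roughly like a man page.
    pydoc.pager(pydoc.render_doc(sys.modules[__name__]))
    sys.exit(0)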
I am working on a series of command line tools which connect to the same server and do related but different things. I'd like users to be able to have a single configuration file where they can place common arguments such as connection information that can be shared across all the tools. Ideally, I'd like something that does the following for me:
If the server address is specified at the command line, use this and ignore any other values.
If the server address is not specified at the command line but is in a config file that is specified at the command line, use this address. Ignore any other values.
If the server address is not specified at the command line or in a config file specified at the command line, but is available in a config file in the user's home directory (say .myapprc), use this value.
If the server address is not specified by any of the above mechanisms, exit with an error message.
The closest I've seen to this is the configparse module, which from what I can tell offers an option parser that will also look at config files, but does not seem to have the notion of "Must be specified somewhere" which I need.
Does anyone know of an existing module that can cover my use case above? If not, a simple extension to optparse, configparse, or some other module I have not reviewed would also be greatly appreciated.
The third-party module configparse is written to extend optparse from the standard Python library. As the optparse docs mention, "optparse doesn't prevent you from implementing required options, but doesn't give you much help at it either" (though they follow with a couple of URLs that show you ways to do it). Simplest is to use the default-value functionality: specify a default value that's not actually a legal value (for something like a server's address, that's pretty easy); then, once options are processed, verify that the specified value is legal (which is a good idea anyway!-) and raise the appropriate exception otherwise.
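A minimal sketch of that sentinel-default-plus-check idea with optparse (the --server option name is just an example):

import optparse

parser = optparse.OptionParser()
# Use a default that can never be a real server address.
parser.add_option('--server', default=None)
options, args = parser.parse_args()

# ... after also consulting the config file(s) ...
if options.server is None:
    parser.error('a server address must be given on the command line or in a config file')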
I've used opster's middleware feature together with SafeConfigParser to achieve an effect similar to (but slightly simpler than) what you ask for. You have to implement the specific logic you described yourself, but it assists you enough to make it relatively painless. An example of opster's middleware use is in its test/test.py example.
Use a dict to store the options to your program.
First, parse the option file in the user's directory and store every option in a dict (configparse or any other module is welcome). Then parse the command line (using any module you want; optparse might fit well). If an argument specifies a config file, parse the specified file into a dict and update your options from what you read (dict.update is really handy for merging two dicts). Then store all the other arguments in another dict and merge them again (dict.update again...).
This way, you are sure that the dict in which you stored the options contains the values you want, each read either from the user's file, from the specified config file, or directly from the command line. If it does not contain a required value, exit with an error.
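A sketch of that layering, with made-up file names and a deliberately naive key=value reader standing in for whatever config format you actually use:

import optparse
import os

def read_config(path):
    # Naive key=value reader; returns {} if the file is missing.
    opts = {}
    if os.path.exists(path):
        with open(path) as f:
            for line in f:
                if '=' in line:
                    key, value = line.split('=', 1)
                    opts[key.strip()] = value.strip()
    return opts

parser = optparse.OptionParser()
parser.add_option('--server')
parser.add_option('--config')
cli, _ = parser.parse_args()

options = read_config(os.path.expanduser('~/.myapprc'))   # lowest priority
if cli.config:
    options.update(read_config(cli.config))               # config file named on the command line
options.update((k, v) for k, v in vars(cli).items() if v is not None)  # command line wins

if 'server' not in options:
    parser.error('no server address given on the command line or in any config file')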
When your application takes a few (~ 5) configuration parameters, and the application is going
to be used by non-technology users (i.e. KISS), how do you usually handle reading
configuration options, and then passing around the parameters between objects/functions
(multiple modules)?
Options examples: input and output directories/file names, verbosity level.
I generally use optparse (Python) and pass around the options/parameters as
arguments; but I'm wondering if it's more common to use a configuration text
file that is read directly by all modules' objects (but then, isn't this
like having 'global' variables, without anyone 'owning' the state?).
Another typical issue is unit testing; if I want to unit test each
single module independently, a particular module may only require
1 out of the 5 configuration options; how do you usually decouple individual
modules/objects from the rest of the application, and yet still allow it to
accept 1 or 2 required parameters (does the unit test framework somehow
invoke or take over the configuration functionality)?
My guess is that there is not a unique correct way to do this, but it'd
be interesting to read about various approaches, or well-known patterns.
Do you usually read config options via:
- command-line/gui options
- a config text file
Both. We use Django's settings.py and logging.ini. We also use command-line options and arguments for the options that change most frequently.
How do multiple modules/objects have access to these options?
settings.py; logging.ini -- can't say.
Our options are private to the main program, and used to build
arguments to functions or object initializers.
[Sharing the optparse options is a big pain in the neck and needlessly binds a lot of things into an untestable mess.]
When doing unit-testing of a single module (NOT the "main" module):
(e.g. read option specifying input filename)
[I can't parse the question. I assume this is "how do you test when there are options?"]
The answer is -- we don't. Since only the main method parses command-line options, no other module, function or class has any idea of the command-line options. No module "requires 1 out of the 5 config options"; the module's classes (or functions) have ordinary arguments and that's that.
We only use optparse.
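To illustrate that split, a small sketch (class and option names are invented): only main() knows about optparse, and the class takes ordinary constructor arguments, so a unit test can build it directly without any option machinery.

import optparse

class ImageLoader(object):
    # Knows nothing about command-line options: it takes ordinary arguments.
    def __init__(self, input_dir, verbose=False):
        self.input_dir = input_dir
        self.verbose = verbose

def main():
    parser = optparse.OptionParser()
    parser.add_option('-i', '--input-dir', dest='input_dir')
    parser.add_option('-v', '--verbose', action='store_true', default=False)
    options, args = parser.parse_args()
    loader = ImageLoader(options.input_dir, verbose=options.verbose)
    # ... use loader ...

# A unit test never touches optparse; it just builds the object directly:
#     loader = ImageLoader('/tmp/testdata', verbose=True)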
"Counts answer"
Please update these counts and feel free to add/modify.
Do you usually read config options via:
- command-line/gui options : 1
- a config text file : 0
How do multiple modules/objects have access to these options?
- they receive them from the caller as an argument: 1
- read them directly from the config text file: 0
When doing unit-testing of a single module (NOT the "main" module)
and the module uses one option, e.g. input filename:
- unit-test framework provides own "simplified" config functionality: 0
- unit-test framework invokes main app's config functionality: 1
Do you use:
- optparse: 1
- getopt: 0
- others?
Please list any config management "design pattern"
(usable in Python) and add a count if you use it - thanks.
-
-