Pass command-line arguments to a Python script installed with Poetry

The Poetry documentation says that the scripts section can be used to install scripts or executables when the package is installed, but it does not show any example of how to pass arguments to the script.
How can you receive the arguments in that function with argparse?

First a little project setup:
Start from a new Poetry project created with poetry new example_script (and add a main.py file inside the example_script dir), with a structure like this:
├── example_script
│   ├── __init__.py
│   ├── main.py
├── pyproject.toml
├── README.rst
└── tests
    ├── __init__.py
    └── test_poetry_example.py
Then add, in the [tool.poetry.scripts] section of pyproject.toml, the config for the script that we are going to install:
# pyproject.toml
[tool.poetry]
name = "example_script"
# some lines excluded
[tool.poetry.scripts]
my-script = "example_script.main:start"
# some lines excluded
And finally the main.py file, which must contain a start function (as referenced in the toml). The argument parser goes inside this function, since it is the function that will run when we execute the script:
import argparse


def some_function(target, end="!"):
    """Some example function"""
    msg = "hi " + target + end
    print(msg)


def start():
    # All the logic of argparse goes in this function
    parser = argparse.ArgumentParser(description='Say hi.')
    parser.add_argument('target', type=str, help='the name of the target')
    parser.add_argument('--end', dest='end', default="!",
                        help='the ending punctuation (default: "!")')
    args = parser.parse_args()
    some_function(args.target, end=args.end)
We can run the script with poetry, or install and run it directly:
# run with poetry
$ poetry run my-script
# install the project (this will create a virtualenv if you don't have one already)
$ poetry install
# activate the virtualenv
$ poetry shell
# run the script
$ my-script --help
usage: my-script [-h] [--end END] target
Say hi.
positional arguments:
  target      the name of the target

optional arguments:
  -h, --help  show this help message and exit
  --end END   the ending punctuation (default: "!")
$ my-script "spanish inquisition" --end "?"
hi spanish inquisition?
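The same pattern can be exercised without installing anything: parse_args accepts an explicit list of strings, so the start function can be called directly. A minimal, simplified sketch of the idea (not the exact code from the answer above):

```python
import argparse

def start(argv=None):
    # argv=None makes argparse fall back to sys.argv[1:];
    # passing a list lets you test the parser without installing the script
    parser = argparse.ArgumentParser(description='Say hi.')
    parser.add_argument('target', type=str, help='the name of the target')
    parser.add_argument('--end', default='!')
    args = parser.parse_args(argv)
    print('hi ' + args.target + args.end)

start(['spanish inquisition', '--end', '?'])  # prints: hi spanish inquisition?
```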

This question is really two separate questions:
How do I pass arguments into a script that is run using Poetry
How do I access and parse those arguments, in particular, using argparse
The initial answer (by Lucas) addresses parts of each, especially argparse, but I'm answering to fill in some additional details and explain how to access the args directly.
Access arguments directly in any function or script
As an alternative to argparse, arguments can be accessed directly in Python at any time through sys.argv, a list of strings with one entry per argument. The shell splits the command line on spaces, unless the spaces are enclosed in quotes (either single or double).
This method is more direct and lightweight than argparse, with a lot less functionality.
args.py setup as a main script file with a start() function:
import sys

def start(args=sys.argv):
    for i, arg in enumerate(args):
        print(f'Arg #{i}: {arg}')

if __name__ == '__main__':
    start()
Run it at the command-line with a variety of argument types:
$ python args.py "item 1" 'Hello Arguments!!' "i 3" 4 5 6
Arg #0: args.py
Arg #1: item 1
Arg #2: Hello Arguments!!
Arg #3: i 3
Arg #4: 4
Arg #5: 5
Arg #6: 6
The first argument is always the script that was called, in exactly the way it was called (i.e. relative or absolute path to the script file or other reference).
Adding arguments when calling with poetry run
While you can run scripts with Poetry by activating the virtual environment with poetry shell and then running the script as normal with python script.py arg1 arg2 arg3, you can also add arguments directly to the poetry run command:
At the command-line, directly running the script:
$ poetry run python args.py arg1 arg2 arg3
Arg #0: <some_path>/args.py
Arg #1: arg1
Arg #2: arg2
Arg #3: arg3
Running a python file as an installed Poetry script
Or, run it as a script, installed by Poetry. In this case the script name we assign is arg_script, and you just run it directly at a terminal prompt with the virtual environment activated (i.e. do not invoke with python):
In pyproject.toml:
[tool.poetry.scripts]
arg_script = 'args:start' # run start() function from ./args.py
After updating pyproject.toml, run poetry install at a terminal prompt to install the script, named arg_script, into the virtual environment.
With Poetry, you can run a command in the virtual environment by using poetry run:
$ poetry run arg_script arg1 arg2 arg3
Arg #0: arg_script
Arg #1: arg1
Arg #2: arg2
Arg #3: arg3
Any arguments added after poetry run act just as if you had typed them into a terminal with the virtual environment already activated, i.e. the equivalent is:
$ poetry shell
$ arg_script arg1 arg2 arg3

Run vulture on Python module with CLI arguments?

Context
Suppose one has a project structure with src.projectname.__main__.py which can be executed using the following command with accompanying arguments:
python -m src.projectname -e mdsa_size3_m1 -v -x
Question
How would one run vulture while passing CLI arguments to the script on which vulture runs?
Approach I
When I run vulture with the same arguments:
python -m vulture src.projectname -e mdsa_size3_m1 -v -x
it throws the following:
usage: vulture [options] [PATH ...]
vulture: error: unrecognized arguments: -e mdsa_size3_m1 -x
because vulture tries to parse the arguments intended for the script that is being run.
Notes
I am aware that normally one would run vulture on the script in its entirety, without narrowing down the scope with arguments. However, in this case the arguments are required to specify the number of runs/duration of the code execution.
One can hack around this issue by temporarily hardcoding the args, for example:
args = parse_cli_args()
args.experiment_settings_name = "mdsa_size3_m1"
args.export_images = True
process_args(args)
assuming one has such an args object. However, I thought perhaps this could be achieved from the CLI, without temporarily modifying the code.
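The hardcoding workaround above can also be done without touching the real parser: argparse.Namespace objects can be built by hand. A sketch, where the attribute names are taken from the question and process_args is the question's own (assumed) entry point:

```python
import argparse

# build the args object by hand instead of calling parse_cli_args(),
# so vulture never sees (or chokes on) the real CLI flags
args = argparse.Namespace(
    experiment_settings_name="mdsa_size3_m1",
    export_images=True,
)
# process_args(args)  # the question's entry point, assumed to exist
```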

Python Code Changes not Reflected on Script Run

$ python --version
Python 3.6.8
I've written a script which has some command-line arguments. Initially, these worked without issue:
arg_parser = argparse.ArgumentParser()
arg_parser.add_argument(
    '-log',
    '--loglevel',
    default='info'
)
arg_parser.add_argument(
    '-lf',
    '--logfile',
    default='./logs/populate.log'
)
...
cl_options = arg_parser.parse_args()
...
...
I then changed the name of the "-log" short flag, and added another flag:
arg_parser = argparse.ArgumentParser()
arg_parser.add_argument(
    '-ll',
    '--loglevel',
    default='info'
)
arg_parser.add_argument(
    '-lf',
    '--logfile',
    default='./logs/populate.log'
)
arg_parser.add_argument(
    '-d',
    '--daemon',
    action='store_true'
)
...
cl_options = arg_parser.parse_args()
...
...
When running the script now, the old set of arguments is still in effect: the name of the "-log" flag is unchanged, and the "-d/--daemon" flag is missing:
$ python3 populate.py --daemon
usage: populate.py [-h] [-log LOGLEVEL] [-lf LOGFILE]
populate.py: error: unrecognized arguments: --daemon
Things I have tried:
make sure I have checked out the proper git branch
delete the pycache folder
reboot the machine the script runs on
use the reload() option for argparse
If I look at the contents of the script I can see that the changes I've made are there, but they refuse to take effect.
I'm not a Python expert, I'm still learning, but I must be doing something wrong here. Can anyone point me in the right direction?
Thanks!
EDIT:
I have verified as well as I can that the script is using the most current files:
Remote System (where script is running):
$ pwd
/opt/ise-web-rpt
$ ls populate.py
populate.py
$ git branch
* develop
main
$ sha256sum populate.py
2601cbb49f6956611e2ff50a1b1b90ba61c9c0686ed199831d671e682492be4b populate.py
Local System (where development happens):
$ git branch
* develop
main
$ sha256sum populate.py
2601cbb49f6956611e2ff50a1b1b90ba61c9c0686ed199831d671e682492be4b populate.py
As far as I can tell the script is the correct file and I'm on the correct branch in Git.
Stepping through this in pdb, it appears this was caused by importing another Python file in the populate.py script.
Both files had argparse configured the exact same way, so initially there was no problem. When I added the new parameter to populate.py, the second, imported file didn't have this parameter, so it was "unrecognized" to the imported file's parser. That's also why the flag names didn't appear to change: the usage message was coming from the imported file, not the one I was trying to run. I added the new parameter to the args list in the second file and the scripts were able to run.
I now need to figure out how hierarchy works for argparse, but that's a separate issue. Thanks everyone for the input.
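One way to defuse this kind of clash between two parsers: a parser in an imported module can call parse_known_args, which collects flags it doesn't recognize instead of erroring out. A minimal sketch, reusing flag names from the question:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-ll', '--loglevel', default='info')

# parse_known_args() returns (namespace, leftover_args) instead of
# exiting with "unrecognized arguments" on flags meant for another parser
opts, unknown = parser.parse_known_args(['-ll', 'debug', '--daemon'])
print(opts.loglevel)  # debug
print(unknown)        # ['--daemon']
```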

How to require options for a CLI app based on Python and Click

I am building a CLI app with Python and the Click library.
How do I achieve the following use case:
First, I only want the subcommand to be followed by an argument; no options are required:
$ myapp subcommand argument
This is straight forward.
But how can I write the code so that if argument2 is set, some options are also required?
$ myapp subcommand argument2 -o1 abc -o2 def
For example:
no options are required:
$ ./myapp.py install basic
options are required:
$ ./myapp.py install custom -o1 abc -o2 def
Furthermore, I do not know how to make a choice for arguments, meaning the user must choose between "basic" and "custom". If he chooses "custom", he needs to add some options.
I have achieved this successfully by making your argument2 a click.Command.
Running through the code below, my main way of interacting with the CLI application is via the cli group. That group has another group, install, added as a command, so we have a CLI with nested groups.
install has 2 commands, basic and custom, as in your example.
basic takes no parameters, while custom takes 2 required Options.
Calls would look like this:
❯ myapp install custom -o1 arg1 -o2 def
This is a custom install with option1: arg1 and option2: def
❯ myapp install basic
Executing a basic install
You can see the nested group install listed as a command in the help message:
❯ myapp
Usage: myapp [OPTIONS] COMMAND [ARGS]...
Options:
--help Show this message and exit.
Commands:
install
And if you were to invoke install, this is the help output you'd get.
❯ myapp install
Usage: myapp install [OPTIONS] COMMAND [ARGS]...
Options:
--help Show this message and exit.
Commands:
basic
custom
This is the code:
import click

@click.group()
def cli():
    pass

@click.group()
def install():
    pass

@install.command()
def basic():
    print('Executing a basic install')

@install.command()
@click.option("-o1", "--option1", required=True)
@click.option("-o2", "--option2", required=True)
def custom(option1, option2):
    print(f'This is a custom install with option1: {option1} and option2: {option2}')

def main():
    cli.add_command(install)
    cli()

if __name__ == '__main__':
    main()
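For the "choice between basic and custom" part of the question, an alternative to nested groups is a single command with a click.Choice argument and a manual check. This is only a sketch of that alternative, not the answer's approach: the options stay optional at the Click level and are validated by hand.

```python
import click

@click.command()
@click.argument('mode', type=click.Choice(['basic', 'custom']))
@click.option('-o1', '--option1')
@click.option('-o2', '--option2')
def install(mode, option1, option2):
    # enforce the options only when mode is 'custom'
    if mode == 'custom' and not (option1 and option2):
        raise click.UsageError('custom install requires -o1 and -o2')
    click.echo(f'{mode} install: option1={option1} option2={option2}')
```

Invoked as myapp basic or myapp custom -o1 abc -o2 def; calling myapp custom without the options fails with a usage error.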

Activating a Python virtual environment and calling python script inside another python script

I am using pipenv to manage my packages. I want to write a Python script that calls another Python script that uses a different virtual environment (VE).
How can I run script 1, which uses VE1, and have it call script 2, which uses VE2?
I found this code for cases where there is no need to change the virtual environment:
import os
os.system("python myOtherScript.py arg1 arg2 arg3")
The only idea I had was to simply navigate to the target project and activate its shell:
os.system("cd /home/mmoradi2/pgrastertime/")
os.system("pipenv shell")
os.system("python test.py")
but it says:
Shell for /home/..........-GdKCBK2j already activated.
No action taken to avoid nested environments.
What should I do now? In fact, my own code needs VE1 and the subprocess (the second script) needs VE2. How can I call the second script inside my code?
In addition, the second script is used as a command-line tool that accepts its inputs with flags:
python3 pgrastertime.py -s ./sql/postprocess.sql -t brasdor_c_07_0150
-p xml -f -r ../data/brasdor_c_07_0150.object.xml
How can I call it using the solution from @tzaman?
Each virtualenv has its own python executable which you can use directly to execute the script.
Using subprocess (more versatile than os.system):
import subprocess
venv_python = '/path/to/other/venv/bin/python'
args = [venv_python, 'my_script.py', 'arg1', 'arg2', 'arg3']
subprocess.run(args)
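Following up on the flag question at the end: subprocess takes each flag and each value as its own list element, with no shell quoting needed. A sketch using sys.executable as a stand-in for the other venv's interpreter (the real path would be the other environment's bin/python3):

```python
import subprocess
import sys

# stand-in for the other virtualenv's python executable
venv_python = sys.executable

# every flag and value is its own list element; here a tiny inline
# script echoes the arguments it received, in place of pgrastertime.py
cmd = [venv_python, '-c', 'import sys; print(sys.argv[1:])',
       '-s', './sql/postprocess.sql', '-t', 'brasdor_c_07_0150']
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout.strip())
```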

Suppress output to show only Usage and Docstring

I would like to suppress the output when using python-fire, for a given command line option.
The fire trace and everything apart from the docstring and usage is essentially useless to me and clutters up the terminal. Any way I can get rid of it ?
I'm creating the cli using python-fire like this, where "command" is a function defined earlier :
if __name__ == "__main__":
    fire.Fire(
        {
            "command": command
        }
    )
$ python cli.py command
Fire trace:
1. Initial component
2. Accessed property "command"
3. ('The function received no value for the required argument:)
Type: function
String form: <function list_property_versions at 0x10de5d840>
File: ./cli.py
Line: 171
Docstring: Does something
Usage: cli.py command arg1
cli.py command --first-arg arg1
Expected Output:
$ python cli.py command1
Docstring: Does something
Usage: cli.py command1 arg1
cli.py command1 --first-arg arg1
You can achieve this by editing core.py in the python-fire library and commenting out or deleting the printing of the trace inside the following if condition:
if component_trace.HasError():
It's hacky, but it works for now.
The Fire trace is no longer shown by default as of Fire v0.2.0. I think you'll find the output is much cleaner than it was in earlier versions.
Run pip install -U fire to upgrade.
