Slow response of CLI written with Python Click

I'm experiencing slow responses of my CLI written with Click 7.0 on Python 3.6.6 (under conda environment).
It takes time to print the help message when calling the CLI when the package has been installed with pip (using setuptools):
$ time cli
Usage: cli [OPTIONS] COMMAND [ARGS]...
Welcome in the CLI!
Options:
--version Show the version and exit.
--help Show this message and exit.
real 0m0,523s
user 0m0,482s
sys 0m0,042s
However, I don't get this lag when calling the CLI directly from the source:
$ time python myproject/cli.py
Usage: cli.py [OPTIONS] COMMAND [ARGS]...
Welcome in the CLI!
Options:
--version Show the version and exit.
--help Show this message and exit.
real 0m0,088s
user 0m0,071s
sys 0m0,016s
Here is the content of myproject/cli.py:
import click

@click.group('cli', invoke_without_command=True)
@click.pass_context
@click.version_option(version='0.0.1', prog_name="test")
def cli(ctx):
    """
    Welcome in the CLI!
    """
    if ctx.invoked_subcommand is None:
        # show help if no option passed to cli
        if all(v == False for v in ctx.params.values()):
            click.echo(ctx.get_help())

if __name__ == '__main__':
    cli()
And setup.py is configured like this:
setup(
    name=name,
    version=__version__,
    packages=find_packages(),
    install_requires=install_requires,
    author=author,
    author_email=author_email,
    description=description,
    entry_points='''
        [console_scripts]
        cli=myproject.cli:cli
    ''',
    keywords=keywords,
    cmdclass=cmdclass,
    include_package_data=True,
)
Could someone help me with this? Such a lag is really inconvenient for a CLI.

For small Python CLIs this delay is very noticeable. It comes from the wrapper that setuptools creates around your CLI entry point.
The wrapper implements some auxiliary functionality around your entry point, like checking that your (virtual) Python environment has all required dependencies.
People have created tools such as fast-entry_points to bypass this auxiliary work. Check it out; it might suit your use case.
Note: this speed improvement is mostly noticeable for small CLIs. If you have a larger CLI/project, you will need to structure your imports as local imports so that they are not all loaded whenever you perform a specific action. Especially when using auto-completion on your CLI, it may be worth changing your imports as well.
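The local-import idea can be sketched with a stand-in module from the stdlib (`calendar` plays the role of a heavy dependency here, and `build_report` is a made-up command function):

```python
import sys

def build_report():
    # Local import: `calendar` stands in for a heavy dependency that
    # should only be loaded when this subcommand actually runs, keeping
    # `--help` and tab-completion fast for every other invocation.
    import calendar
    return calendar.month_name[1]

# The import cost is paid only on the first call, not at CLI startup.
print(build_report())  # January
```

The same pattern applies inside a Click command body: move the `import` statement from module level into the function that actually needs it.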

Python Code Changes not Reflected on Script Run

$ python --version
Python 3.6.8
I've written a script which has some command-line arguments. Initially, these worked without issue:
arg_parser = argparse.ArgumentParser()
arg_parser.add_argument(
    '-log',
    '--loglevel',
    default='info'
)
arg_parser.add_argument(
    '-lf',
    '--logfile',
    default='./logs/populate.log'
)
...
cl_options = arg_parser.parse_args()
...
I then changed the name of the "-log" short flag, and added another flag:
arg_parser = argparse.ArgumentParser()
arg_parser.add_argument(
    '-ll',
    '--loglevel',
    default='info'
)
arg_parser.add_argument(
    '-lf',
    '--logfile',
    default='./logs/populate.log'
)
arg_parser.add_argument(
    '-d',
    '--daemon',
    action='store_true'
)
...
cl_options = arg_parser.parse_args()
...
When running the script now, the initial set of arguments is still used - the "-log" flag keeps its old name and the "-d/--daemon" flag is missing:
$ python3 populate.py --daemon
usage: populate.py [-h] [-log LOGLEVEL] [-lf LOGFILE]
populate.py: error: unrecognized arguments: --daemon
Things I have tried:
make sure I have checked out the proper git branch
delete the pycache folder
reboot the machine the script runs on
use the reload() option for argparse
If I look at the contents of the script I can see that the changes I've made are there, but they refuse to take effect.
I'm not a Python expert, I'm still learning, but I must be doing something wrong here. Can anyone point me in the right direction?
Thanks!
EDIT:
I have verified as well as I can that the script is using the most current files:
Remote System (where script is running):
$ pwd
/opt/ise-web-rpt
$ ls populate.py
populate.py
$ git branch
* develop
main
$ sha256sum populate.py
2601cbb49f6956611e2ff50a1b1b90ba61c9c0686ed199831d671e682492be4b populate.py
Local System (where development happens):
$ git branch
* develop
main
$ sha256sum populate.py
2601cbb49f6956611e2ff50a1b1b90ba61c9c0686ed199831d671e682492be4b populate.py
As far as I can tell the script is the correct file and I'm on the correct branch in Git.
Stepping through this in pdb, it appears this was caused by importing another Python file in the populate.py script.
Both files had argparse configured the exact same way, so initially there was no problem. When I added the new parameter to populate.py, the imported file didn't have it, so it was "unrecognized" by the imported file's parser. That's also why the flag names didn't appear to change - the error came from the imported file's parser, not the one in the script I was running. I added the new parameter to the argument list in the second file and the scripts were able to run.
I now need to figure out how hierarchy works for argparse, but that's a separate issue. Thanks everyone for the input.
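One standard way to share argument definitions between files instead of duplicating them is argparse's parents mechanism. A minimal sketch (the flag names are taken from the question; where the shared parser lives is up to you):

```python
import argparse

# Shared parent parser, defined once (e.g. in a common module). It uses
# add_help=False so the child parsers can add their own -h/--help.
common = argparse.ArgumentParser(add_help=False)
common.add_argument('-ll', '--loglevel', default='info')
common.add_argument('-lf', '--logfile', default='./logs/populate.log')

# Each script builds its own parser on top of the shared one, so a new
# flag like -d/--daemon only has to be added in one place.
parser = argparse.ArgumentParser(parents=[common])
parser.add_argument('-d', '--daemon', action='store_true')

args = parser.parse_args(['--daemon', '-ll', 'debug'])
print(args.daemon, args.loglevel, args.logfile)
```

This keeps both files' parsers consistent by construction, which would have avoided the "unrecognized arguments" surprise above.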

Alias "python script" to a specified name so as to present a command for a python cli application

I'm currently writing a Python CLI application that takes a CSV file as an argument and various options from the command line.
python .\test.py data.csv
usage: test.py [-h] [-v | -q] [-o] filepath
I want to alias or replace the python ./test.py part of the command line with another word, so it looks like a single command in the style of the angular or git CLIs. For example: rexona data.csv -o.
I want this to work on both Windows and Linux so that I can publish it as a PyPI distribution.
Thank you
Aliasing is very OS- and environment-dependent and is not the right way to achieve what you are looking for.
Instead, you should use the tools offered by the packaging tool you are using to create the distributed package.
For example, if using setup.py, then add
entry_points={
    'console_scripts': ['rexona = path.to.module:function_name'],
},
to the call to setup(...).
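Put in context, a minimal setup.py might look like the sketch below (the distribution name and the `path.to.module:function_name` target are the hypothetical ones from this answer, not real names):

```python
# setup.py -- minimal sketch; names are placeholders
from setuptools import setup, find_packages

setup(
    name='rexona',            # distribution name on PyPI
    version='0.1.0',
    packages=find_packages(),
    entry_points={
        'console_scripts': [
            # "rexona" on the command line runs function_name()
            # from the module at path/to/module.py
            'rexona = path.to.module:function_name',
        ],
    },
)
```

After `pip install`, setuptools generates a `rexona` launcher script (an .exe shim on Windows, an executable script on Linux), which is what makes the command cross-platform.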

How to require options for a CLI app based on Python and Click

I am building a CLI app with Python and the Click library.
How do I achieve the following use case:
First, I only want the subcommand to be followed by an argument; no options are required:
$ myapp subcommand argument
This is straight forward.
But how can I write the code so that, if argument2 is set, some options are also required?
$ myapp subcommand argument2 -o1 abc -o2 def
For example:
no options are required:
$ ./myapp.py install basic
options are required:
$ ./myapp.py install custom -o1 abc -o2 def
Furthermore, I do not know how to offer a choice of arguments, meaning that the user must choose between "basic" and "custom". In case he chooses "custom", he needs to add some options.
I have achieved this successfully by making your argument2 a click.Command.
Running through the code below, the main way of interacting with the CLI application is via the cli group. That cli group has another group, install, added as a command, so we have a CLI with nested groups.
install has 2 commands, basic and custom, as in your example.
basic takes no parameters, while custom takes 2 required Options.
Calls would look like this:
❯ myapp install custom -o1 arg1 -o2 def
This is a custom install with option1: arg1 and option2: def
❯ myapp install basic
Executing a basic install
You can see the nested group install acts as a command inside the help message:
❯ myapp
Usage: myapp [OPTIONS] COMMAND [ARGS]...
Options:
--help Show this message and exit.
Commands:
install
And if you were to invoke install, this is the help output you'd get.
❯ myapp install
Usage: myapp install [OPTIONS] COMMAND [ARGS]...
Options:
--help Show this message and exit.
Commands:
basic
custom
This is the code:
import click

@click.group()
def cli():
    pass

@click.group()
def install():
    pass

@install.command()
def basic():
    print('Executing a basic install')

@install.command()
@click.option("-o1", "--option1", required=True)
@click.option("-o2", "--option2", required=True)
def custom(option1, option2):
    print(f'This is a custom install with option1: {option1} and option2: {option2}')

def main():
    cli.add_command(install)
    cli()

if __name__ == '__main__':
    main()

ipython not showing argparse help message

In my python script myscript.py I use argparse to pass command-line arguments. When I want to display the help information about the input arguments, I just do:
$ python myscript.py --help
If instead I want to use IPython to run my script, the help message won't be displayed. IPython will display its own help information instead:
$ ipython -- myscript.py -h
=========
IPython
=========
Tools for Interactive Computing in Python
=========================================
A Python shell with automatic history (input and output), dynamic object
introspection, easier configuration, command completion, access to the
system shell and more. IPython can also be embedded in running programs.
Usage
ipython [subcommand] [options] [files]
It's not so annoying, but is there a way around it?
You need to run your .py script inside IPython, something like this:
%run script.py -h
This is an IPython bug, corrected in https://github.com/ipython/ipython/pull/2663.
My 0.13 has this error; it is corrected in 0.13.2. The fix is in IPython/config/application.py, Application.parse_command_line. This function looks for help and version flags (-h, -V) in sys.argv before passing things on to parse_known_args (hence the custom help formatting). In the corrected release, it checks sys.argv only up to the first --; before, it looked in the whole array.
A fix for earlier releases is to define an alternate help flag in the script:
simple.py script:
import argparse, sys
print(sys.argv)
p = argparse.ArgumentParser(add_help=False) # turn off the regular -h
p.add_argument('-t')
p.add_argument('-a','--ayuda',action=argparse._HelpAction,help='alternate help')
print(p.parse_args())
Invoke with:
$ ./ipython3 -- simple.py -a
['/home/paul/mypy/argdev/simple.py', '-a']
usage: simple.py [-t T] [-a]
optional arguments:
-t T
-a, --ayuda alternate help
$ ./ipython3 -- simple.py -t test
['/home/paul/mypy/argdev/simple.py', '-t', 'test']
Namespace(t='test')

running system commands on linux using python?

I'm wondering if someone can either direct me to an example or help me with my code for running commands on Linux (CentOS). Basically, I am assuming I have a basic fresh server and want to configure it. I thought I could list the commands I need to run and it would work, but I'm getting errors. The errors say there is nothing to make (when making thrift).
I think this is because (I'm just assuming here) Python is just sending one command after another without waiting for each command to finish running (after the script fails, I checked and the thrift package was downloaded and successfully uncompressed).
Here's the code:
# python command list to set up a new server
import commands

commands_to_run = ['yum -y install pypy autocon automake libtool flex boost-devel gcc-c++ byacc svn openssl-devel make java-1.6.0-openjdk git wget',
                   'service mysqld start',
                   'wget http://www.quickprepaidcard.com/apache//thrift/0.8.0/thrift-0.8.0.tar.gz',
                   'tar zxvf thrift-0.8.0.tar.gz',
                   'cd thrift-0.8.0', './configure', 'make', 'make install']

for x in commands_to_run:
    print commands.getstatusoutput(x)
Any suggestions on how to get this to work? If my approach is totally wrong, then let me know (I know I could use a bash script, but I'm trying to improve my Python skills).
Since commands has been deprecated for a long time, you should really be using subprocess, specifically subprocess.check_output. Also, cd thrift-0.8.0 only affects the subprocess that runs it, not your Python process. You can either call os.chdir or pass the cwd argument to the subprocess functions:
import subprocess, os

commands_to_run = [['yum', '-y', 'install',
                    'pypy', 'python', 'MySQL-python', 'mysqld', 'mysql-server',
                    'autocon', 'automake', 'libtool', 'flex', 'boost-devel',
                    'gcc-c++', 'perl-ExtUtils-MakeMaker', 'byacc', 'svn',
                    'openssl-devel', 'make', 'java-1.6.0-openjdk', 'git', 'wget'],
                   ['service', 'mysqld', 'start'],
                   ['wget', 'http://www.quickprepaidcard.com/apache//thrift/0.8.0/thrift-0.8.0.tar.gz'],
                   ['tar', 'zxvf', 'thrift-0.8.0.tar.gz']]
install_commands = [['./configure'], ['make'], ['make', 'install']]

for x in commands_to_run:
    print subprocess.check_output(x)

os.chdir('thrift-0.8.0')
for cmd in install_commands:
    print subprocess.check_output(cmd)
Since CentOS maintains ancient versions of Python, you may want to use this backport instead.
Note that if you want to print the output anyway, you can just call the subprocess with check_call, since the subprocess inherits your stdout, stderr, and stdin by default.
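In Python 3 terms (the snippet above is Python 2), the difference between the two calls can be sketched like this, using `sys.executable` as a harmless stand-in for the real commands:

```python
import subprocess
import sys

# check_call lets the child write directly to our stdout/stderr (they are
# inherited by default) and raises CalledProcessError on a nonzero exit.
subprocess.check_call([sys.executable, '-c', 'print("configure step ok")'])

# check_output instead captures the child's stdout and returns it as bytes,
# so we decide what to do with it (here we print it ourselves).
out = subprocess.check_output([sys.executable, '-c', 'print("make step ok")'])
print(out.decode().strip())
```

So for a provisioning script whose output you only want scrolling past on the terminal, check_call is the simpler choice; use check_output when you need to inspect or parse what the command printed.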
