This happens to me almost every day:
pip isntall matplotlib --upgrade
ERROR: unknown command "isntall" - maybe you meant "install"
There is one obvious solution: learn how to type install correctly. So far, that hasn't gone well.
I've been wondering: is there a way to instead have pip run the install command when I type isntall? It already recommends the solution, so why not just run it instead of making me type it again?
I'm aware of how silly a question this is, but I honestly can't seem to type install correctly (I've had to correct it twice already in this question).
You can use thefuck. When you make a typo like this, just type fuck and it will show (and run) the probably correct command; you can also customize this later.
https://github.com/nvbn/thefuck
You can create a function to solve this (stick it in your .bashrc or what-have-you).
pip() {
    if [[ $1 == "isntall" ]]; then
        command pip install "${@:2}"
    else
        command pip "$@"
    fi
}
If I wrote that correctly, it will catch your misspelling and run the command properly.
Disclaimer
This is not a solution to be taken seriously, rather one of the "you can also do this" kind. The code relies on pip's internal implementation and may break on a new release of pip.
You can patch the related pip logic in a custom sitecustomize module. Create a file sitecustomize.py with the following contents:
import re

import pip._internal.cli.main_parser as pip_cli_parser
from pip._internal.exceptions import CommandError

error_pattern = re.compile(
    r'unknown command "(?P<cmd>.*?)"(?: - maybe you meant "(?P<guess>.*?)")?'
)
parse_command_orig = pip_cli_parser.parse_command


def parse_command(args):
    try:
        return parse_command_orig(args)
    except CommandError as err:
        msg = str(err)
        cmd_name, guess = error_pattern.search(msg).groups()
        if guess is not None:
            cmd_args = args[:]
            cmd_args.remove(cmd_name)
            return guess, cmd_args
        else:
            raise err


pip_cli_parser.parse_command = parse_command
Place the file in the site-packages directory where pip is installed. You can find it, e.g., by running pip -V:
$ pip -V
pip 21.2.3 from /tmp/tst/lib64/python3.10/site-packages/pip (python 3.10)
The target directory in this example is thus /tmp/tst/lib64/python3.10/site-packages.
Now pip's command parser will be patched each time a Python process starts:
$ pip lit # instead of 'list'
Package Version
---------- -------
pip 22.0.4
setuptools 57.4.0
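As a sanity check, the error-matching regex from the snippet can be exercised on its own against the message format shown in the question:

```python
import re

# The same pattern used in the sitecustomize patch above
error_pattern = re.compile(
    r'unknown command "(?P<cmd>.*?)"(?: - maybe you meant "(?P<guess>.*?)")?'
)

# Message format as it appears in pip's error output
msg = 'ERROR: unknown command "isntall" - maybe you meant "install"'
match = error_pattern.search(msg)
print(match.group('cmd'), match.group('guess'))  # isntall install
```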
I ended up editing a file in the pip folder to add an isntall command. In [path_to_python]\Lib\site-packages\pip\_internal\commands\__init__.py, there is a dictionary that looks like this...
commands_dict: Dict[str, CommandInfo] = {
    "install": CommandInfo(
        "pip._internal.commands.install",
        "InstallCommand",
        "Install packages.",
    ...
And I just added a dictionary entry for 'isntall':
    "isntall": CommandInfo(
        "pip._internal.commands.install",
        "InstallCommand",
        "Install packages.",
    ),
Definitely not an ideal solution, because any new env or update would require the same edit, but for now it's working well and doesn't require any new typing after the typo. Definitely still happy to hear a better answer.
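For what it's worth, the edit boils down to aliasing one dictionary key to another. A plain-dict sketch of the idea (CommandInfo and the real commands_dict live in pip's internals; the tuple here is just a stand-in, not pip's API):

```python
# Stand-in for pip's commands_dict; the tuple values are placeholders
commands_dict = {
    "install": ("pip._internal.commands.install", "InstallCommand", "Install packages."),
}

# Alias the typo to the exact same entry instead of duplicating it
commands_dict["isntall"] = commands_dict["install"]

print(commands_dict["isntall"] is commands_dict["install"])  # True
```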
Related
I would like to install the modules 'mutagen' and 'gTTS' for my code. I want them to be installed on every computer that doesn't have them, but not reinstalled if they're already present. I currently have:
import pip

def install(package):
    pip.main(['install', package])

install('mutagen')
install('gTTS')

from gtts import gTTS
from mutagen.mp3 import MP3
However, if you already have the modules, this will just add unnecessary clutter to the start of the program whenever you open it.
EDIT - 2020/02/03
The pip module has updated quite a lot since the time I posted this answer. I've updated the snippet with the proper way to install a missing dependency, which is to use subprocess and pkg_resources, and not pip.
To hide the output, you can redirect the subprocess output to devnull:
import sys
import subprocess
import pkg_resources
required = {'mutagen', 'gTTS'}
installed = {pkg.key for pkg in pkg_resources.working_set}
missing = required - installed
if missing:
    python = sys.executable
    subprocess.check_call([python, '-m', 'pip', 'install', *missing], stdout=subprocess.DEVNULL)
Like @zwer mentioned, the above works, although it is not considered a proper way of packaging your project. To look at this in more depth, read the page How to package a Python App.
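As a side note, pkg_resources has since been deprecated; on Python 3.8+ the stdlib importlib.metadata can do the same check without setuptools. A minimal sketch (the helper name is my own):

```python
import importlib.metadata

def is_installed(dist_name):
    """Return True if a distribution with this name is installed."""
    try:
        importlib.metadata.version(dist_name)
        return True
    except importlib.metadata.PackageNotFoundError:
        return False

print(is_installed("pip"))                     # True in most environments
print(is_installed("surely-not-a-real-dist"))  # False
```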
you can use simple try/except:
try:
    import mutagen
    print("module 'mutagen' is installed")
except ModuleNotFoundError:
    print("module 'mutagen' is not installed")
    # or
    install("mutagen")  # the install function from the question
If you want to know if a package is installed, you can check it in your terminal using the following command:
pip list | grep <module_name_you_want_to_check>
How this works:
pip list
lists all modules installed in your Python.
The vertical bar | is commonly referred to as a "pipe". It is used to pipe one command into another. That is, it directs the output from the first command into the input for the second command.
grep <module_name_you_want_to_check>
finds the keyword from the list.
Example:
pip list | grep quant
Lists all packages whose names contain "quant" (for example "quantstrats"). If there is no output, the library is not installed.
You can check if a package is installed using pkg_resources.get_distribution:
import pkg_resources

for package in ['mutagen', 'gTTS']:
    try:
        dist = pkg_resources.get_distribution(package)
        print('{} ({}) is installed'.format(dist.key, dist.version))
    except pkg_resources.DistributionNotFound:
        print('{} is NOT installed'.format(package))
Note: You should not be directly importing the pip module as it is an unsupported use-case of the pip command.
The recommended way of using pip from your program is to execute it using subprocess:
import subprocess
import sys

subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'my_package'])
Although @girrafish's answer might suffice, you can check package installation via importlib too:
import importlib.util
import subprocess

packages = ['mutagen', 'gTTS']
for pkg in packages:
    if not importlib.util.find_spec(pkg):
        subprocess.check_call(['pip', 'install', pkg])
You can use the command line:
python -m MyModule
It will say whether the module exists.
Otherwise you can simply follow the best practice:
pip freeze > requirements.txt
That will record the modules on your Python installation in a file, and:
pip install -r requirements.txt
will load them. That should serve your purposes automatically.
Have fun!
Another solution is to put an import statement for whatever you're trying to import into a try/except block: if it works, it's installed; if not, it'll throw the exception and you can run the command to install it.
You can run pip show package_name
or, for a broad view, use pip list
If you would like to check whether a specific package (or several) is installed, you can also use IDLE.
Specifically:
Open IDLE
Browse to File > Open Module > Some Module
IDLE will either display the module or prompt an error message.
The above was tested with Python 3.9.0.
According to this answer you can import pip from within a Python script and use it to install a module. Is it possible to do this with conda install?
The conda documentation only shows examples from the command line but I'm looking for code that can be executed from within a Python script.
Yes, I could execute shell commands from within the script, but I am trying to avoid this, as it basically assumes that conda cannot be imported and its functions called.
You can use conda.cli.main. For example, this installs numpy:
import conda.cli
conda.cli.main('conda', 'install', '-y', 'numpy')
Use the -y argument to avoid interactive questions:
-y, --yes Do not ask for confirmation.
I was looking at the latest Conda Python API and noticed that there are actually only 2 public modules with “very long-term stability”:
conda.cli.python_api
conda.api
For your question, I would work with the first:
NOTE: run_command() below will always add a -y/--yes option (i.e. it will not ask for confirmation)
import conda.cli.python_api as Conda
import sys
###################################################################################################
# The below is roughly equivalent to:
#   conda install -y 'args-go-here' 'no-whitespace-splitting-occurs' 'square-brackets-optional'
(stdout_str, stderr_str, return_code_int) = Conda.run_command(
    Conda.Commands.INSTALL,  # alternatively, you can just say "install"
                             # ...it's probably safer long-term to use the Commands class though.
                             # Commands include:
                             # CLEAN, CONFIG, CREATE, INFO, INSTALL, HELP, LIST, REMOVE, SEARCH, UPDATE, RUN
    ['args-go-here', 'no-whitespace-splitting-occurs', 'square-brackets-optional'],
    use_exception_handler=True,  # defaults to False; use that if you want to handle your own exceptions
    stdout=sys.stdout,  # defaults to being returned as a str (stdout_str)
    stderr=sys.stderr,  # also defaults to being returned as a str (stderr_str)
    search_path=Conda.SEARCH_PATH,  # this is the default; added only for illustrative purposes
)
###################################################################################################
The nice thing about using the above is that it solves a problem that occurs (mentioned in the comments above) when using conda.cli.main():
...conda tried to interpret the command line arguments instead of the arguments of conda.cli.main(), so using conda.cli.main() like this might not work for some things.
The other question in the comments above was:
How [to install a package] when the channel is not the default?
import conda.cli.python_api as Conda
import sys
###################################################################################################
# Either:
#   conda install -y -c <CHANNEL> <PACKAGE>
# Or (conda >= 4.6):
#   conda install -y <CHANNEL>::<PACKAGE>
(stdout_str, stderr_str, return_code_int) = Conda.run_command(
    Conda.Commands.INSTALL,
    '-c', '<CHANNEL>',
    '<PACKAGE>',
    use_exception_handler=True, stdout=sys.stdout, stderr=sys.stderr,
)
###################################################################################################
Having worked with conda from Python scripts for a while now, I think calling conda with the subprocess module works the best overall. In Python 3.7+, you could do something like this:
import json
from subprocess import run

def conda_list(environment):
    proc = run(["conda", "list", "--json", "--name", environment],
               text=True, capture_output=True)
    return json.loads(proc.stdout)

def conda_install(environment, *packages):
    # note: '--json' is needed here too, since the output is parsed as JSON
    proc = run(["conda", "install", "--json", "--quiet", "--name", environment, *packages],
               text=True, capture_output=True)
    return json.loads(proc.stdout)
As I pointed out in a comment, conda.cli.main() was not intended for external use. It parses sys.argv directly, so if you try to use it in your own script with your own command line arguments, they will get fed to conda.cli.main() as well.
#YenForYang's answer suggesting conda.cli.python_api is better because this is a publicly documented API for calling conda commands. However, I have found that it still has rough edges. conda builds up internal state as it executes a command (e.g. caches). The way conda is usually used and usually tested is as a command line program. In that case, this internal state is discarded at the end of the conda command. With conda.cli.python_api, you can execute several conda commands within a single process. In this case, the internal state carries over and can sometimes lead to unexpected results (e.g. the cache becomes outdated as commands are performed). Of course, it should be possible for conda to handle this internal state directly. My point is just that using conda this way is not the main focus of the developers. If you want the most reliable method, use conda the way the developers intend it to be used -- as its own process.
conda is a fairly slow command, so I don't think one should worry about the performance impact of calling a subprocess. As I noted in another comment, pip is a similar tool to conda and explicitly states in its documentation that it should be called as a subprocess, not imported into Python.
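The capture-and-parse pattern above doesn't depend on conda specifically; here it is demonstrated with the Python interpreter standing in for the conda executable, so the snippet runs even without conda installed:

```python
import json
import sys
from subprocess import run

# Same shape as conda_list above: run a subprocess that emits JSON,
# capture its stdout as text, then parse it.
proc = run(
    [sys.executable, "-c", "import json; print(json.dumps({'name': 'numpy'}))"],
    text=True, capture_output=True,
)
result = json.loads(proc.stdout)
print(result)  # {'name': 'numpy'}
```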
I found that conda.cli.python_api and conda.api are limited, in the sense that neither has an option to execute commands like this:
conda env export > requirements.txt
So instead I used subprocess with the flag shell=True to get the job done:
import subprocess

subprocess.run(f"conda env export --name {env} > {file_path_from_history}", shell=True)
where env is the name of the env to be saved to requirements.txt.
The simpler thing that I tried, and which worked for me, was:
import os

try:
    import graphviz
except ImportError:
    print("graphviz not found, installing graphviz")
    os.system("conda install -c anaconda graphviz")
    import graphviz
And make sure you run your script as admin.
Try this:
!conda install xyzpackage
Please remember this has to be done within an IPython/Jupyter session, not at the OS prompt (the ! prefix is IPython syntax).
Or else you could try the following:
import sys
from conda.cli import main
sys.exit(main())
import sys

try:
    import conda
    from conda.cli import main
except ImportError:
    raise SystemExit("conda is not importable from this interpreter")

sys.argv = ['conda'] + list(args)  # args: the conda arguments you want to run
main()
How can I set the installation path for pip using get-pip.py to /usr/local/bin/? I can't find any mention in the setup guide or in the command-line options.
To clarify: I don't mean the path where pip packages are installed, but the path where pip itself is installed (it should be /usr/local/bin/pip).
Edit
I do agree with many of the comments/answers that virtualenv would be a better idea in general. However, it simply isn't the best one for me at the moment, since it would be too disruptive; many of our users' scripts rely on a python2.7 being magically available. This is not convenient either and should change, but we have been using Python since before virtualenv was a thing.
Pip itself is a Python package, and the actual pip command just runs a small Python script which then imports and runs the pip package.
You can edit locations.py to change installation directories, however, as stated above, I highly recommend that you do not do this.
Pip Command
Pip accepts the --install-option="--install-scripts=..." flag, which can be used to change the script installation directory:
pip install somepackage --install-option="--install-scripts=/usr/local/bin"
Source method
On line 124 in pip/locations.py, we see the following:
site_packages = sysconfig.get_python_lib()
user_site = site.USER_SITE
You can technically edit these to change the default install path; however, using a virtual environment would be highly preferable. These values are then used to find the egg-link path, which in turn finds the dist path (code appended below, from pip/__init__.py).
def egg_link_path(dist):
    """
    Return the path for the .egg-link file if it exists, otherwise, None.

    There's 3 scenarios:
    1) not in a virtualenv
       try to find in site.USER_SITE, then site_packages
    2) in a no-global virtualenv
       try to find in site_packages
    3) in a yes-global virtualenv
       try to find in site_packages, then site.USER_SITE
       (don't look in global location)

    For #1 and #3, there could be odd cases, where there's an egg-link in 2
    locations.

    This method will just return the first one found.
    """
    sites = []
    if running_under_virtualenv():
        if virtualenv_no_global():
            sites.append(site_packages)
        else:
            sites.append(site_packages)
            if user_site:
                sites.append(user_site)
    else:
        if user_site:
            sites.append(user_site)
        sites.append(site_packages)

    for site in sites:
        egglink = os.path.join(site, dist.project_name) + '.egg-link'
        if os.path.isfile(egglink):
            return egglink
def dist_location(dist):
    """
    Get the site-packages location of this distribution. Generally
    this is dist.location, except in the case of develop-installed
    packages, where dist.location is the source code location, and we
    want to know where the egg-link file is.
    """
    egg_link = egg_link_path(dist)
    if egg_link:
        return egg_link
    return dist.location
However, once again, using a virtualenv is much more traceable, and any Pip updates will override these changes, unlike your own virtualenv.
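For orientation, the two values that locations.py reads can be inspected from a normal interpreter; sysconfig.get_path("purelib") is the stdlib spelling of the site-packages lookup used above:

```python
import site
import sysconfig

# site-packages for this interpreter (pip's site_packages value)
print(sysconfig.get_path("purelib"))

# per-user site directory (pip's user_site value)
print(site.USER_SITE)
```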
It seems the easiest workaround I found is:
install easy_install; this will go in /usr/local/bin/ as expected; the steps for doing this are listed here; I personally ended up running wget https://bootstrap.pypa.io/ez_setup.py -O - | python
install pip with /usr/local/bin/easy_install pip; this will make pip go in /usr/local/bin
I am using simple entry points to make a custom script, with this in setup.py:
entry_points = {
    'my_scripts': ['combine_stuff = mypackage.mymod.test:foo']
}
where mypackage/mymod/test.py contains:
import argh
from argh import arg

@arg("myarg", help="Test arg.")
def foo(myarg):
    print "Got: ", myarg
When I install my package using this (in the same directory as setup.py):
pip install --user -e .
The entry points do not get processed at all it seems. Why is that?
If I install with distribute's easy_install, like:
easy_install --user -U .
then the entry points get processed and it creates:
$ cat mypackage.egg-info/entry_points.txt
[my_scripts]
combine_stuff = mypackage.mymod.test:foo
but no actual script called combine_stuff gets placed anywhere in my bin dirs (like ~/.local/bin/). It just doesn't seem to get made. What is going wrong here? How can I get it to make an executable script, and ideally work with pip too?
The answer was to use console_scripts instead of my_scripts. It was not obvious that the entry point group name was anything other than an internal label for the programmer.
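For reference, a minimal sketch of the fixed declaration; only the group name changes, and this mapping is what gets passed as entry_points= in setup():

```python
# 'console_scripts' is the group setuptools generates executables for;
# the target on the right is the same function from the question.
entry_points = {
    "console_scripts": [
        "combine_stuff = mypackage.mymod.test:foo",
    ],
}

print(list(entry_points))  # ['console_scripts']
```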
When I create a virtualenv, it installs setuptools and pip. Is it possible to add new packages to this list?
Example use cases:
Following this solution to use ipython in virtualenv (from this question) requires installing ipython in every virtualenv (unless I allow system-site-packages).
Or if I'm doing only flask/pygame/framework development, I'd want it in every virtualenv.
I took a different approach from the one chosen as the correct answer.
I chose a directory, ~/.virtualenvs/deps, and installed packages there by doing:
pip install -U --target ~/.virtualenvs/deps ...
Next, in ~/.virtualenvs/postmkvirtualenv I put the following:
# find directory
SITEDIR=$(virtualenvwrapper_get_site_packages_dir)
PYVER=$(virtualenvwrapper_get_python_version)

# create a new .pth file with our path, depending on the python version
if [[ $PYVER == 3* ]]; then
    echo "$HOME/.virtualenvs/deps3/" > "$SITEDIR/extra.pth"
else
    echo "$HOME/.virtualenvs/deps/" > "$SITEDIR/extra.pth"
fi
A post that basically says the same thing.
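The reason the .pth trick works can be reproduced with the stdlib site module: site.addsitedir() processes the .pth files in a directory and appends each listed (existing) path to sys.path, which is the same mechanism the interpreter applies to the real site-packages at startup. A self-contained sketch using a temporary directory:

```python
import os
import site
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    deps = os.path.join(d, "deps")
    os.mkdir(deps)
    # extra.pth names the deps directory, one path per line
    with open(os.path.join(d, "extra.pth"), "w") as f:
        f.write(deps + "\n")
    site.addsitedir(d)       # processes extra.pth
    print(deps in sys.path)  # True
```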
You can write a Python script, say personalize_venv.py, that extends the EnvBuilder class and overrides its post_setup() method to install any default packages you need.
You can get the basic example from https://docs.python.org/3/library/venv.html#an-example-of-extending-envbuilder.
This doesn't need a hook. Run the script directly with command-line arguments pointing to your venv directory/directories. The hook is the post_setup() method of the EnvBuilder class itself.
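A minimal sketch along the lines of the docs example (the class name and the packages parameter are my own additions, not part of EnvBuilder; post_setup() receives a context whose env_exe attribute is the new environment's interpreter):

```python
import subprocess
import venv

class DepsEnvBuilder(venv.EnvBuilder):
    """venv.EnvBuilder that pip-installs default packages after creation."""

    def __init__(self, packages=(), **kwargs):
        self.packages = list(packages)
        kwargs.setdefault("with_pip", True)  # the new env needs pip
        super().__init__(**kwargs)

    def post_setup(self, context):
        # context.env_exe is the python executable inside the new env
        for pkg in self.packages:
            subprocess.check_call(
                [context.env_exe, "-m", "pip", "install", pkg]
            )

# usage (actually creates an environment, so it takes a moment):
# DepsEnvBuilder(packages=["flask"]).create("/path/to/new/venv")
```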