I installed the Python modules construct and statlib using setuptools:
sudo apt-get install python-setuptools
sudo easy_install statlib
sudo easy_install construct
How do I check their versions from the command line?
Use pip instead of easy_install.
With pip, list all installed packages and their versions via:
pip freeze
On most Linux systems, you can pipe this to grep (or findstr on Windows) to find the row for the particular package you're interested in.
Linux:
pip freeze | grep lxml
lxml==2.3
Windows:
pip freeze | findstr lxml
lxml==2.3
For an individual module, you can try the __version__ attribute. However, there are modules without it:
python -c "import requests; print(requests.__version__)"
2.14.2
python -c "import lxml; print(lxml.__version__)"
Traceback (most recent call last):
File "<string>", line 1, in <module>
AttributeError: 'module' object has no attribute '__version__'
Lastly, since the commands in your question are prefixed with sudo, it appears you're installing into the global Python environment. I strongly advise taking a look at Python virtual environment managers, for example virtualenvwrapper.
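A rough sketch with virtualenvwrapper, assuming it is already installed and configured (the environment name here is arbitrary):
mkvirtualenv myproject          # create an isolated environment
workon myproject                # activate it later on
pip install statlib construct   # installs into the environment, no sudo needed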
You can try
>>> import statlib
>>> print statlib.__version__
>>> import construct
>>> print construct.__version__
This is the approach recommended by PEP 396. But that PEP was never accepted and has been deferred. In fact, there appears to be increasing support amongst Python core developers to recommend not including a __version__ attribute; see, for example, the discussion in Remove importlib_metadata.version.
Python >= 3.8:
If you're on Python >= 3.8, you can use a module from the built-in library for that. To check a package's version (in this example construct) run:
>>> from importlib.metadata import version
>>> version('construct')
'4.3.1'
Python < 3.8:
Use the pkg_resources module distributed with the setuptools library. Note that the string you pass to the get_distribution method should correspond to the PyPI entry.
>>> import pkg_resources
>>> pkg_resources.get_distribution('construct').version
'2.5.2'
Side notes:
Note that the string that you pass to the get_distribution method should be the package name as registered in PyPI, not the module name that you are trying to import. Unfortunately, these aren't always the same (e.g. you do pip install memcached, but import memcache).
If you want to apply this solution from the command line you can do something like:
python -c \
"import pkg_resources; print(pkg_resources.get_distribution('construct').version)"
Use pip show to find the version!
# In order to get the package version, execute the below command
pip show YOUR_PACKAGE_NAME | grep Version
You can use pip show YOUR_PACKAGE_NAME, which gives you all the details of a package. This also works on Windows.
grep Version is used in Linux to filter out the version and show it.
The better way to do that is:
For the details of a specific package
pip show <package_name>
It lists the package name, version, author, location, etc.
$ pip show numpy
Name: numpy
Version: 1.13.3
Summary: NumPy: array processing for numbers, strings, records, and objects.
Home-page: http://www.numpy.org
Author: NumPy Developers
Author-email: numpy-discussion@python.org
License: BSD
Location: c:\users\prowinjvm\appdata\local\programs\python\python36\lib\site-packages
Requires:
For more details, run: pip help
pip itself may need to be updated for this to work:
pip install --upgrade pip
On Windows the recommended command is:
python -m pip install --upgrade pip
In Python 3, with parentheses around the print argument:
>>> import celery
>>> print(celery.__version__)
3.1.14
module.__version__ is a good first thing to try, but it doesn't always work.
If you don't want to shell out, and you're using pip 8 or 9, you can still use pip.get_installed_distributions() to get versions from within Python:
The solution here works in pip 8 and 9, but in pip 10 the function has been moved from pip.get_installed_distributions to pip._internal.utils.misc.get_installed_distributions to explicitly indicate that it's not for external use. It's not a good idea to rely on it if you're using pip 10+.
import pip
pip.get_installed_distributions() # -> [distribute 0.6.16 (...), ...]
[
pkg.key + ': ' + pkg.version
for pkg in pip.get_installed_distributions()
if pkg.key in ['setuptools', 'statlib', 'construct']
] # -> nicely filtered list of ['setuptools: 3.3', ...]
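If you are on Python 3.8+ and want to avoid pip internals entirely, importlib.metadata can produce a similar filtered listing. A minimal sketch (the package names are just examples):
import importlib.metadata

wanted = {'setuptools', 'statlib', 'construct'}
found = [
    f"{dist.metadata['Name']}: {dist.version}"
    for dist in importlib.metadata.distributions()
    if dist.metadata['Name'] and dist.metadata['Name'].lower() in wanted
]
print(found)  # -> e.g. ['setuptools: 3.3', ...]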
The previous answers did not solve my problem, but this code did:
import sys
for name, module in sorted(sys.modules.items()):
    if hasattr(module, '__version__'):
        print(name, module.__version__)
Use dir() to find out if the module has a __version__ attribute at all.
>>> import selenium
>>> dir(selenium)
['__builtins__', '__doc__', '__file__', '__name__',
'__package__', '__path__', '__version__']
>>> selenium.__version__
'3.141.0'
>>> selenium.__path__
['/venv/local/lib/python2.7/site-packages/selenium']
You can try this:
pip list
This will output all the packages with their versions.
In the Python 3.8 version, there is a new metadata module in the importlib package, which can do that as well.
Here is an example from the documentation:
>>> from importlib.metadata import version
>>> version('requests')
'2.22.0'
Some modules don't have a __version__ attribute, so the easiest way is to check in the terminal: pip list
If the methods in previous answers do not work, it is worth trying the following in Python:
import modulename
modulename.version
modulename.version_info
See Get the Python Tornado version
Note: .version worked for me on a few other modules besides Tornado as well.
First, add the python and pip executables to your environment variables so that you can run them from the command prompt. Then simply run the python command.
Then import the package:
import scrapy
Then print the version:
print(scrapy.__version__)
This will definitely work.
Assuming we are using Jupyter Notebook (if using Terminal, drop the exclamation marks):
if the package (e.g., xgboost) was installed with pip:
!pip show xgboost
!pip freeze | grep xgboost
!pip list | grep xgboost
if the package (e.g. caffe) was installed with Conda:
!conda list caffe
I suggest opening a Python shell in the terminal (in the Python version you are interested), importing the library, and getting its __version__ attribute.
>>> import statlib
>>> statlib.__version__
>>> import construct
>>> construct.__version__
Note 1: We must regard the Python version. If we have installed different versions of Python, we have to open the terminal in the Python version we are interested in. For example, opening the terminal with Python 3.8 can (surely will) give a different version of a library than opening with Python 3.5 or Python 2.7.
Note 2: We avoid using the print function, because its behavior depends on Python 2 or Python 3. We do not need it, and the terminal will show the value of the expression.
This answer is for Windows users. As suggested in all the other answers, you can use statements like:
import <module_name>
print(<module_name>.__version__)  # the module name, a dot, then the double-underscored version attribute
However, some modules don't report their version even after using the method above. In that case, you can simply do the following:
Open the command prompt.
Navigate to the file address/directory by using cd (file address) where you've kept your Python and all supporting modules installed. If you have only one Python interpreter on your system, the PyPI packages are normally visible in the directory/folder: Python → Lib → site-packages.
use the command "pip install [module name]" and hit Enter.
This will show you a message as "Requirement already satisfied: file address\folder name (with version)".
For example, I had to know the version of a pre-installed module named "Selenium-Screenshot". It correctly showed as 1.5.0.
Go to a terminal (such as the PyCharm terminal).
Now type py or python and hit Enter.
Now that you are inside Python in the terminal, you can try this way:
# import <name_of_the_library>
import kivy
# So if the library has __version__ magic method, so this way will help you
kivy.__version__ # then hit Enter to see the version
# Output >> '2.1.0'
But if the above way doesn't work for you, you can try this way to get information about a library, including its version:
pip show <HERE PUT THE NAME OF THE LIBRARY>
Example:
pip show pyperclip
Output:
Name: pyperclip
Version: 1.8.2
Summary: A cross-platform clipboard module for Python. (Only handles plain text for now.)
Home-page: https://github.com/asweigart/pyperclip
Author: Al Sweigart
Author-email: al@inventwithpython.com
License: BSD
Location: c:\c\kivymd\virt\lib\site-packages
Requires:
Required-by:
There is another way that could help you to show all the libraries and versions of them inside the project:
pip freeze
# I used the above command in a terminal inside my project this is the output
certifi==2021.10.8
charset-normalizer==2.0.12
docutils==0.18.1
idna==3.3
Kivy==2.1.0
kivy-deps.angle==0.3.2
kivy-deps.glew==0.3.1
kivy-deps.sdl2==0.4.5
Kivy-Garden==0.1.5
kivymd # file:///C:/c/kivymd/KivyMD
Pillow==9.1.0
Pygments==2.12.0
pyperclip==1.8.2
pypiwin32==223
pywin32==303
requests==2.27.1
urllib3==1.26.9
And of course, you can use the command below to show all libraries and their versions:
pip list
Hope this helps anyone. Greetings!
In summary:
conda list
(It will provide all the libraries along with version details.)
And:
pip show tensorflow
(It gives complete library details.)
After scouring the Internet, trying to figure out how to ensure the version of a module I’m running (apparently python_is_horrible.__version__ isn’t a thing in Python 2?) across operating systems and Python versions... literally none of these answers worked for my scenario...
Then I thought about it a minute and realized the basics... after ~30 minutes of fails...
assumes the module is already installed and can be imported
Python 3.7
>>> import sys,sqlite3
>>> sys.modules.get("sqlite3").version
'2.6.0'
>>> ".".join(str(x) for x in sys.version_info[:3])
'3.7.2'
Python 2.7
>>> import sys,sqlite3
>>> sys.modules.get("sqlite3").version
'2.6.0'
>>> ".".join(str(x) for x in sys.version_info[:3])
'2.7.11'
Literally that’s it...
(See also How do I get the version of an installed module in Python programmatically?)
I found it quite unreliable to use the various tools available (including the best one, pkg_resources, mentioned in Jakub Kukul's answer), as most of them do not cover all cases. For example:
built-in modules
modules not installed but just added to the python path (by your IDE for example)
two versions of the same module available (one in python path superseding the one installed)
Since we needed a reliable way to get the version of any package, module or submodule, I ended up writing getversion. It is quite simple to use:
from getversion import get_module_version
import foo
version, details = get_module_version(foo)
See the documentation for details.
This works in Jupyter Notebook on Windows, too! As long as Jupyter is launched from a Bash-compliant command line such as Git Bash (Mingw-w64), the solutions given in many of the answers can be used in Jupyter Notebook on Windows systems with one tiny tweak.
I'm running Windows 10 Pro with Python installed via Anaconda, and the following code works when I launch Jupyter via Git Bash (but does not when I launch from the Anaconda prompt).
The tweak: Add an exclamation mark (!) in front of pip to make it !pip.
>>>!pip show lxml | grep Version
Version: 4.1.0
>>>!pip freeze | grep lxml
lxml==4.1.0
>>>!pip list | grep lxml
lxml 4.1.0
>>>!pip show lxml
Name: lxml
Version: 4.1.0
Summary: Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API.
Home-page: http://lxml.de/
Author: lxml dev team
Author-email: lxml-dev@lxml.de
License: BSD
Location: c:\users\karls\anaconda2\lib\site-packages
Requires:
Required-by: jupyter-contrib-nbextensions
A Python program to list all packages (you can copy it to file requirements.txt):
from pip._internal.utils.misc import get_installed_distributions
print_log = ''
for module in sorted(get_installed_distributions(), key=lambda x: x.key):
    print_log += module.key + '~=' + module.version + '\n'
print(print_log)
The output would look like:
asn1crypto~=0.24.0
attrs~=18.2.0
automat~=0.7.0
beautifulsoup4~=4.7.1
botocore~=1.12.98
To get a list of non-standard (pip) modules imported in the current module:
[{pkg.key : pkg.version} for pkg in pip.get_installed_distributions()
if pkg.key in set(sys.modules) & set(globals())]
Result:
>>> import sys, pip, nltk, bs4
>>> [{pkg.key : pkg.version} for pkg in pip.get_installed_distributions() if pkg.key in set(sys.modules) & set(globals())]
[{'pip': '9.0.1'}, {'nltk': '3.2.1'}, {'bs4': '0.0.1'}]
Note:
This code was put together from solutions both on this page and from How to list imported modules?
For situations where the __version__ field is not defined:
try:
from importlib import metadata
except ImportError:
import importlib_metadata as metadata # python<=3.7
metadata.version("package")
Alternatively, and like it was already mentioned:
import pkg_resources
pkg_resources.get_distribution('package').version
Here's a small Bash program to get the version of any package in your Python environment. Just copy this to your /usr/bin and provide it with executable permissions:
#!/bin/bash
packageName=$1
python -c "import ${packageName} as package; print(package.__version__)"
Then you can just run it in the terminal, assuming you named the script py-check-version:
py-check-version whatever_package
And in case your production system is hardened beyond comprehension so it has neither pip nor conda, here is a Bash replacement for pip freeze:
ls /usr/local/lib/python3.8/dist-packages | grep info | awk -F "-" '{print $1"=="$2}' | sed 's/.dist//g'
(make sure you update your dist-packages folder to your current python version and ignore inconsistent names, e.g., underscores vs. dashes).
Sample printout:
Flask==1.1.2
Flask_Caching==1.10.1
gunicorn==20.1.0
[..]
I myself work in a heavily restricted server environment and unfortunately none of the solutions here are working for me. There may be no global solution that fits all, but I figured out a swift workaround by reading the terminal output of pip freeze within my script and storing the modules labels and versions in a dictionary.
import os
os.system('pip freeze > tmpoutput')
with open('tmpoutput', 'r') as f:
    modules_version = f.read()
module_dict = {item.split("==")[0]: item.split("==")[-1] for item in modules_version.split("\n")}
Retrieve your module's versions through passing the module label key, e.g.:
>>> module_dict["seaborn"]
'0.9.0'
Building on Jakub Kukul's answer I found a more reliable way to solve this problem.
The main problem of that approach is that it requires the packages to be installed "conventionally" (and that does not include using pip install --user), or to be in the system PATH at Python initialisation.
To get around that you can use pkg_resources.find_distributions(path_to_search). This basically searches for distributions that would be importable if path_to_search was in the system PATH.
We can iterate through this generator like this:
avail_modules = {}
distros = pkg_resources.find_distributions(path_to_search)
for d in distros:
avail_modules[d.key] = d.version
This will return a dictionary having modules as keys and their version as value. This approach can be extended to a lot more than version number.
Thanks to Jakub Kukul for pointing in the right direction.
You can first install some package like this and then check its version:
pip install package
import package
print(package.__version__)
It should give you the package version.
According to this answer you can import pip from within a Python script and use it to install a module. Is it possible to do this with conda install?
The conda documentation only shows examples from the command line but I'm looking for code that can be executed from within a Python script.
Yes, I could execute shell commands from within the script but I am trying to avoid this as it is basically assuming that conda cannot be imported and its functions called.
You can use conda.cli.main. For example, this installs numpy:
import conda.cli
conda.cli.main('conda', 'install', '-y', 'numpy')
Use the -y argument to avoid interactive questions:
-y, --yes Do not ask for confirmation.
I was looking at the latest Conda Python API and noticed that there are actually only 2 public modules with “very long-term stability”:
conda.cli.python_api
conda.api
For your question, I would work with the first:
NOTE: run_command() below will always add a -y/--yes option (i.e. it will not ask for confirmation)
import conda.cli.python_api as Conda
import sys
###################################################################################################
# The below is roughly equivalent to:
# conda install -y 'args-go-here' 'no-whitespace-splitting-occurs' 'square-brackets-optional'
(stdout_str, stderr_str, return_code_int) = Conda.run_command(
Conda.Commands.INSTALL, # alternatively, you can just say "install"
# ...it's probably safer long-term to use the Commands class though
# Commands include:
# CLEAN,CONFIG,CREATE,INFO,INSTALL,HELP,LIST,REMOVE,SEARCH,UPDATE,RUN
[ 'args-go-here', 'no-whitespace-splitting-occurs', 'square-brackets-optional' ],
use_exception_handler=True, # Defaults to False, use that if you want to handle your own exceptions
stdout=sys.stdout, # Defaults to being returned as a str (stdout_str)
stderr=sys.stderr, # Also defaults to being returned as str (stderr_str)
search_path=Conda.SEARCH_PATH # this is the default; adding only for illustrative purposes
)
###################################################################################################
The nice thing about using the above is that it solves a problem that occurs (mentioned in the comments above) when using conda.cli.main():
...conda tried to interpret the command line arguments instead of the arguments of conda.cli.main(), so using conda.cli.main() like this might not work for some things.
The other question in the comments above was:
How [to install a package] when the channel is not the default?
import conda.cli.python_api as Conda
import sys
###################################################################################################
# Either:
# conda install -y -c <CHANNEL> <PACKAGE>
# Or (>= conda 4.6)
# conda install -y <CHANNEL>::<PACKAGE>
(stdout_str, stderr_str, return_code_int) = Conda.run_command(
    Conda.Commands.INSTALL,
    '-c', '<CHANNEL>',
    '<PACKAGE>',
    use_exception_handler=True, stdout=sys.stdout, stderr=sys.stderr
)
###################################################################################################
Having worked with conda from Python scripts for a while now, I think calling conda with the subprocess module works the best overall. In Python 3.7+, you could do something like this:
import json
from subprocess import run
def conda_list(environment):
    proc = run(["conda", "list", "--json", "--name", environment],
               text=True, capture_output=True)
    return json.loads(proc.stdout)

def conda_install(environment, *packages):
    # --json makes the output parseable by json.loads below
    proc = run(["conda", "install", "--json", "--quiet", "--name", environment] + list(packages),
               text=True, capture_output=True)
    return json.loads(proc.stdout)
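A minimal usage sketch of the functions above (the environment name "base" and the numpy package are just examples):
for pkg in conda_list("base"):
    print(pkg["name"], pkg["version"])

result = conda_install("base", "numpy")  # the returned JSON describes the transaction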
As I pointed out in a comment, conda.cli.main() was not intended for external use. It parses sys.argv directly, so if you try to use it in your own script with your own command line arguments, they will get fed to conda.cli.main() as well.
YenForYang's answer suggesting conda.cli.python_api is better because this is a publicly documented API for calling conda commands. However, I have found that it still has rough edges. conda builds up internal state as it executes a command (e.g. caches). The way conda is usually used and usually tested is as a command line program. In that case, this internal state is discarded at the end of the conda command. With conda.cli.python_api, you can execute several conda commands within a single process. In this case, the internal state carries over and can sometimes lead to unexpected results (e.g. the cache becomes outdated as commands are performed). Of course, it should be possible for conda to handle this internal state directly. My point is just that using conda this way is not the main focus of the developers. If you want the most reliable method, use conda the way the developers intend it to be used -- as its own process.
conda is a fairly slow command, so I don't think one should worry about the performance impact of calling a subprocess. As I noted in another comment, pip is a similar tool to conda and explicitly states in its documentation that it should be called as a subprocess, not imported into Python.
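For comparison, a minimal sketch of driving pip the same way, as a subprocess (the package name is just an example):
import sys
import subprocess

def pip_install(*packages):
    # Run pip in its own process, as the pip documentation recommends
    subprocess.run([sys.executable, "-m", "pip", "install", *packages], check=True)

pip_install("requests")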
I found that conda.cli.python_api and conda.api are limited, in the sense that they both don't have the option to execute commands like this:
conda env export > requirements.txt
So instead I used subprocess with the flag shell=True to get the job done.
import subprocess

subprocess.run(f"conda env export --name {env} > {file_path_from_history}", shell=True)
where env is the name of the env to be saved to requirements.txt.
The simpler thing that I tried, and which worked for me, was:
import os

try:
    import graphviz
except ImportError:
    print("graphviz not found, installing graphviz")
    os.system("conda install -c anaconda graphviz")
    import graphviz
And make sure you run your script as admin.
Try this:
!conda install xyzpackage
Please remember that the ! prefix only works inside an IPython/Jupyter session, not in a plain Python script or at the OS prompt.
Or else you could try the following:
import sys
from conda.cli import main
sys.exit(main())
import sys

try:
    import conda
    from conda.cli import main
    sys.argv = ['conda'] + list(args)  # args: the conda arguments, e.g. ['install', 'xyzpackage']
    main()
except ImportError:
    print("conda could not be imported in this environment")
Is there any way to get pip to print the config it will attempt to use? For debugging purposes it would be very nice to know that:
The config files (pip.conf / pip.ini) are in the correct place and pip is finding them.
The precedence of the config settings is treated in the way one would expect from the docs.
For 10.0.x and higher
There is a new pip config command to list the current configuration values:
pip config list
(As pointed out by wmaddox in the comments) To get the list of places where pip looks for config files:
pip config list -v
Pre 10.0.x
You can start a Python console and do the following. (If you have a virtualenv, don't forget to activate it first.)
from pip import create_main_parser
parser = create_main_parser()
# print all config files that it will try to read
print(parser.files)
# reads parser files that are actually found and prints their names
print(parser.config.read(parser.files))
create_main_parser is the function that creates the parser which pip uses to read parameters from the command line (optparse) and to load configs (configparser).
Possible file names for configurations are generated in get_config_files, including the PIP_CONFIG_FILE environment variable if it is set.
parser.config is an instance of RawConfigParser, so all generated file names in get_config_files are passed to parser.config.read:
Attempt to read and parse a list of filenames, returning a list of filenames which were successfully parsed. If filenames is a string, it is treated as a single filename. If a file named in filenames cannot be opened, that file will be ignored. This is designed so that you can specify a list of potential configuration file locations (for example, the current directory, the user’s home directory, and some system-wide directory), and all existing configuration files in the list will be read. If none of the named files exist, the ConfigParser instance will contain an empty dataset. An application which requires initial values to be loaded from a file should load the required file or files using read_file() before calling read() for any optional files:
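That read() behaviour is easy to see with the configparser module directly; a minimal sketch (the file paths are made-up examples):
import configparser

parser = configparser.RawConfigParser()
# Missing files are silently skipped; the return value lists the files actually parsed
parsed = parser.read(["/etc/pip.conf", "/home/user/.config/pip/pip.conf"])
print(parsed)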
From how I see it, your question can be interpreted in three ways:
What is the configuration of the pip executable?
There is a quite extensive documentation for the configurations supported by pip, see here: https://pip.pypa.io/en/stable/user_guide/#configuration
What is the configuration that pip uses when configuring and subsequently building code required by a Python module?
This is specified by the package that is being installed. The package maintainer is responsible for producing a configuration script. For example, Numpy has a Configuration class (https://github.com/numpy/numpy/blob/master/numpy/distutils/misc_util.py) that they use to configure their Cython build.
What are the current modules installed with pip so I can reproduce a specific environment configuration?
This is easy, pip freeze > requirements.txt. This will produce a file of all currently installed pip modules along with their exact versions. You can then do pip install -r requirements.txt to reproduce that exact environment configuration on another machine.
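For example, the round trip looks like this (ideally run inside a virtual environment on the target machine):
pip freeze > requirements.txt
pip install -r requirements.txt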
I hope this helps.
You can run pip in pdb. Here's an example inside ipython:
>>> import pip
>>> import pdb
>>> pdb.run("pip.main()", globals())
(Pdb) s
--Call--
> /usr/lib/python3.5/site-packages/pip/__init__.py(197)main()
-> def main(args=None):
(Pdb) b /usr/lib/python3.5/site-packages/pip/baseparser.py:146
Breakpoint 1 at /usr/lib/python3.5/site-packages/pip/baseparser.py:146
(Pdb) c
> /usr/lib/python3.5/site-packages/pip/baseparser.py(146)__init__()
-> if self.files:
(Pdb) p self.files
['/etc/xdg/pip/pip.conf', '/etc/pip.conf', '/home/andre/.pip/pip.conf', '/home/andre/.config/pip/pip.conf']
The only trick here was looking up the path of the baseparser (and knowing that the files are in there). If you don't know this already you can simply step through the program or read the source. This type of exploration should work for most Python programs.
When I create a virtualenv, it installs setuptools and pip. Is it possible to add new packages to this list?
Example use cases:
Following this solution to use ipython in virtualenv (from this question) requires installing ipython in every virtualenv (unless I allow system-site-packages).
Or if I'm only doing flask/pygame/framework development, I'd want it in every virtualenv.
I took a different approach from what is chosen as the correct answer.
I chose a directory, like ~/.virtualenvs/deps, and installed packages in there by doing
pip install -U --target ~/.virtualenvs/deps ...
Next, in ~/.virtualenvs/postmkvirtualenv I put the following:
# find directory
SITEDIR=$(virtualenvwrapper_get_site_packages_dir)
PYVER=$(virtualenvwrapper_get_python_version)
# create new .pth file with our path depending of python version
if [[ $PYVER == 3* ]];
then
echo "$HOME/.virtualenvs/deps3/" > "$SITEDIR/extra.pth";
else
echo "$HOME/.virtualenvs/deps/" > "$SITEDIR/extra.pth";
fi
Post that basically says the same thing.
You can write a Python script, say personalize_venv.py, that extends the EnvBuilder class and overrides its post_setup() method to install any default packages that you need.
You can get the basic example from https://docs.python.org/3/library/venv.html#an-example-of-extending-envbuilder.
This doesn't need a hook. Directly run the script with the command-line argument dirs pointing to your venv directory/directories. The hook is the post_setup() method itself of the EnvBuilder class.
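A minimal sketch of that idea (the class name, package names and target directory are just examples, loosely adapted from the venv docs):
import subprocess
import venv

class DefaultPackagesEnvBuilder(venv.EnvBuilder):
    """EnvBuilder that pip-installs a default set of packages into every new venv."""
    def __init__(self, *args, packages=("ipython",), **kwargs):
        self.packages = packages
        super().__init__(*args, with_pip=True, **kwargs)

    def post_setup(self, context):
        # context.env_exe is the Python interpreter inside the freshly created venv
        subprocess.run([context.env_exe, "-m", "pip", "install", *self.packages], check=True)

DefaultPackagesEnvBuilder().create("my_venv")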
How do I learn where the source file for a given Python module is installed? Is the method different on Windows than on Linux?
I'm trying to look for the source of the datetime module in particular, but I'm interested in a more general answer as well.
For a pure Python module, you can find the source by looking at themodule.__file__.
The datetime module, however, is written in C, and therefore datetime.__file__ points to a .so file (there is no datetime.__file__ on Windows), and therefore, you can't see the source.
If you download a python source tarball and extract it, the modules' code can be found in the Modules subdirectory.
For example, if you want to find the datetime code for python 2.6, you can look at
Python-2.6/Modules/datetimemodule.c
You can also find the latest version of this file on github on the web at
https://github.com/python/cpython/blob/main/Modules/_datetimemodule.c
Running python -v from the command line should tell you what is being imported and from where. This works for me on Windows and Mac OS X.
C:\>python -v
# installing zipimport hook
import zipimport # builtin
# installed zipimport hook
# C:\Python24\lib\site.pyc has bad mtime
import site # from C:\Python24\lib\site.py
# wrote C:\Python24\lib\site.pyc
# C:\Python24\lib\os.pyc has bad mtime
import os # from C:\Python24\lib\os.py
# wrote C:\Python24\lib\os.pyc
import nt # builtin
# C:\Python24\lib\ntpath.pyc has bad mtime
...
I'm not sure what those bad mtime's are on my install!
I realize this answer is 4 years late, but the existing answers are misleading people.
The right way to do this is never __file__, or trying to walk through sys.path and search for yourself, etc. (unless you need to be backward compatible beyond 2.1).
It's the inspect module—in particular, getfile or getsourcefile.
Unless you want to learn and implement the rules (which are documented, but painful, for CPython 2.x, and not documented at all for other implementations, or 3.x) for mapping .pyc to .py files; dealing with .zip archives, eggs, and module packages; trying different ways to get the path to .so/.pyd files that don't support __file__; figuring out what Jython/IronPython/PyPy do; etc. In which case, go for it.
Meanwhile, every Python version's source from 2.0+ is available online at http://hg.python.org/cpython/file/X.Y/ (e.g., 2.7 or 3.3). So, once you discover that inspect.getfile(datetime) is a .so or .pyd file like /usr/local/lib/python2.7/lib-dynload/datetime.so, you can look it up inside the Modules directory. Strictly speaking, there's no way to be sure of which file defines which module, but nearly all of them are either foo.c or foomodule.c, so it shouldn't be hard to guess that datetimemodule.c is what you want.
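For example, a quick check in the interpreter (the exact path will vary by system and Python version):
>>> import inspect
>>> import datetime
>>> inspect.getfile(datetime)
'/usr/local/lib/python2.7/lib-dynload/datetime.so'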
If you're using pip to install your modules, just run pip show $module; the location is returned.
The sys.path list contains the list of directories which will be searched for modules at runtime:
python -v
>>> import sys
>>> sys.path
['', '/usr/local/lib/python25.zip', '/usr/local/lib/python2.5', ... ]
From the standard library, try imp.find_module:
>>> import imp
>>> imp.find_module('fontTools')
(None, 'C:\\Python27\\lib\\site-packages\\FontTools\\fontTools', ('', '', 5))
>>> imp.find_module('datetime')
(None, 'datetime', ('', '', 6))
datetime is a builtin module, so there is no (Python) source file.
For modules coming from .py (or .pyc) files, you can use mymodule.__file__, e.g.
> import random
> random.__file__
'C:\\Python25\\lib\\random.pyc'
Here's a one-liner to get the filename for a module, suitable for shell aliasing:
echo 'import sys; t=__import__(sys.argv[1],fromlist=[\".\"]); print(t.__file__)' | python -
Set up as an alias:
alias getpmpath="echo 'import sys; t=__import__(sys.argv[1],fromlist=[\".\"]); print(t.__file__)' | python - "
To use:
$ getpmpath twisted
/usr/lib64/python2.6/site-packages/twisted/__init__.pyc
$ getpmpath twisted.web
/usr/lib64/python2.6/site-packages/twisted/web/__init__.pyc
In the Python interpreter, you can import the particular module and then type help(module). This gives details such as Name, File, Module Docs, Description, et al.
Ex:
import os
help(os)
Help on module os:
NAME
os - OS routines for Mac, NT, or Posix depending on what system we're on.
FILE
/usr/lib/python2.6/os.py
MODULE DOCS
http://docs.python.org/library/os
DESCRIPTION
This exports:
- all functions from posix, nt, os2, or ce, e.g. unlink, stat, etc.
- os.path is one of the modules posixpath, or ntpath
- os.name is 'posix', 'nt', 'os2', 'ce' or 'riscos'
et al
On Windows, you can find the location of a Python module as shown below (e.g., finding the rest_framework module):
New in Python 3.2, you can now use e.g. code_info() from the dis module:
http://docs.python.org/dev/whatsnew/3.2.html#dis
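A minimal sketch (json.dumps is just an example of a pure-Python function; the output includes a Filename: line pointing at its source file):
>>> import dis, json
>>> print(dis.code_info(json.dumps))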
Check out this nifty "cdp" command to cd to the directory containing the source for the indicated Python module:
cdp () {
cd "$(python -c "import os.path as _, ${1}; \
print _.dirname(_.realpath(${1}.__file__[:-1]))"
)"
}
Just updating the answer in case anyone needs it now, I'm at Python 3.9 and using Pip to manage packages. Just use pip show, e.g.:
pip show numpy
It will give you all the details with the location of where pip is storing all your other packages.
On Ubuntu 12.04, for example, the numpy package for Python 2 can be found at:
/usr/lib/python2.7/dist-packages/numpy
Of course, this is not a generic answer.
Another way, if you have multiple Python versions installed, is to check from the terminal:
$ python3 -m pip show pyperclip
Location: /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-
$ python -m pip show pyperclip
Location: /Users/umeshvuyyuru/Library/Python/2.7/lib/python/site-packages
Not all Python modules are written in Python. datetime happens to be one that is not, and (on Linux) it is datetime.so.
You would have to download the source code to the python standard library to get at it.
For those who prefer a GUI solution: if you're using a GUI such as Spyder (part of the Anaconda installation), you can just right-click the module name (such as "csv" in "import csv") and select "go to definition". This will open the file, and at the top you can also see the exact file location ("C:....csv.py").
If you are not using the interpreter, then you can run the code below:
import site
print (site.getsitepackages())
Output:
['C:\\Users\\<your username>\\AppData\\Local\\Programs\\Python\\Python37', 'C:\\Users\\<your username>\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages']
The second element in Array will be your package location. In this case:
C:\Users\<your username>\AppData\Local\Programs\Python\Python37\lib\site-packages
In an IDE like Spyder, import the module and then run the module individually.
As written above, in Python just use help(module), i.e.:
import fractions
help(fractions)
If your module (in the example, fractions) is installed, it will tell you its location and info about it; if it's not installed, it says the module is not available.
If it's not available, it doesn't come by default with Python, in which case you can check where you found it for download info.