Get package's dependencies without installing them - python

From this answer, I am able to get the dependencies of a package. However, I am looking for a solution that does not require downloading and installing the package.
Is this possible?
Edit: I meant getting the package dependencies from PyPI. My only working solution requires me to install the package, which is not what I want. Is it possible to get a package's dependencies from PyPI directly?

As far as I know, in many cases PyPI cannot deliver reliable info about the dependencies of a distribution. For source distributions, for example, the list of dependencies is somewhat dynamic, and the only way to get a definitive static list is to build the distribution (run its setup.py). This means that in some cases there is no way around downloading and building the distributions locally (with the correct Python interpreter). There is no need to install them, though.
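That said, for releases that did upload static metadata (wheels in particular), PyPI's JSON API exposes the declared dependencies without downloading anything. A minimal sketch, assuming the requests library is available; note that requires_dist can be None for exactly the sdist-only cases described above:

import requests

def pypi_dependencies(package, version=None):
    # Query PyPI's JSON API; the package itself is never downloaded.
    url = f"https://pypi.org/pypi/{package}/json"
    if version is not None:
        url = f"https://pypi.org/pypi/{package}/{version}/json"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    # requires_dist may be None when no dependency metadata was uploaded.
    return response.json()["info"]["requires_dist"]

print(pypi_dependencies("requests"))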
Note:
I believe johnnydep is a tool that does exactly that. It may not fully answer your question, though, as you are probably looking for a library or an API rather than a command-line tool.


What is the pythonic way to install a single python script

I have a git repository with a single python script which I use for some task. I want to create a 'package' for this script and then install it. Previously I was using cmake to do this, but I'm wondering what the pythonic way of doing it is.
I tried using setuptools' console_scripts keyword argument, but that didn't work.
This script should get installed into some ./bin directory.
The pythonic way here would be to create an actual package (https://packaging.python.org/en/latest/tutorials/packaging-projects/) and then either build something like a .whl and install it using pip or conda (whichever you use for managing your virtual environment), or publish the package on PyPI for general use and easy installation on any machine with internet access.
Packaging a project is fairly standard, and the linked documentation is official. Getting a package onto PyPI is fairly straightforward as well, although there are many options you can add to make the installation and the accompanying page nicer. There are many guides online on how to publish a package on PyPI, but none of them is official documentation, so I suggest just searching for one.
The answer here was to use the scripts keyword instead of console_scripts. My setup.py was then able to properly install the script.
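For anyone landing here later, a minimal setup.py along those lines might look like the sketch below (mytask and mytask.py are placeholders for your project and script names):

# setup.py -- minimal sketch; "mytask.py" stands in for your actual script.
from setuptools import setup

setup(
    name="mytask",
    version="0.1.0",
    # Unlike console_scripts, the scripts keyword copies the file itself
    # into the environment's bin/ directory on install.
    scripts=["mytask.py"],
)

After pip install ., the script then lands next to python and pip in the environment's bin/ directory.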

pypi: Why don't all the packages use wheel?

This Python wheel website says that only 300 of the top 360 packages use wheels. I analysed the Python ecosystem further and found that about 2961 of the top 5000 packages use wheels, while the others don't.
My questions are:
If they don't use wheels, do they use eggs?
Why don't they use wheels? Is it just laziness on the authors' part, or is there something else stopping them?
I also found from this post that wheels prevent install-time scripts (correct me if I'm wrong here). So isn't it the case that some packages can't use wheels because they need functionality from the setup.py during installation, e.g. install-time scripts?
If they don't use wheels, do they use eggs?
They probably don't. Wheels are built distributions; the alternative is to provide a source distribution, so that is likely what these packages publish instead (source distributions have filenames ending in .zip or .tar.gz).
Why don't they use wheels? Is it just laziness on the authors' part, or is there something else stopping them?
Unless the project can be distributed as a pure-Python wheel, building wheels for a given platform requires access to a matching build environment. It's possible that the authors either don't have such a build environment or don't have enough users to justify the extra work. It's also possible that the package is trivial enough that installing from source vs. from a built distribution makes little difference.
I also found from this post that wheels prevent install-time scripts (correct me if I'm wrong here).
This is correct: wheels are built for a given platform, and thus don't do anything at install time beyond placing the package on the path.
So isn't it the case that some packages can't use wheels because they need functionality from the setup.py during installation, e.g. install-time scripts?
Not really: any package that can be installed can also produce a wheel. It is possible that a given package does more than just install files at install time (for example, it might also download some large files from an external source), but patterns like this are generally discouraged.
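If you want to check for yourself what a given package publishes, PyPI's JSON API lists the files of the latest release together with their packagetype ("sdist", "bdist_wheel", or "bdist_egg" for the legacy egg format). A small sketch, assuming the requests library:

import requests

def release_types(package):
    # "urls" holds one entry per file of the latest release.
    data = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10).json()
    return {f["packagetype"] for f in data["urls"]}

print(release_types("requests"))  # e.g. {'sdist', 'bdist_wheel'}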

Lambda package includes pip, setuptools

I followed the AWS guide to prepare a deployment package for my Lambda function. The generated zip file is around 9-10 MB and includes pip, setuptools, and pylint. Are they really required?
Here are the commands.
virtualenv v-env
source v-env/bin/activate
pip install xmltodict
pip install requests
deactivate
cd v-env/lib/python3.7/site-packages/
zip -r9 ../../../../function.zip .
Edit: removed installing boto, as it is already provided by AWS.
Well, as you can see, the guides provide standards and guidance for clean coding and project deployment.
Pylint has a lot of features that help you while writing Python, such as checking coding standards, detecting errors, and assisting with refactoring to prevent duplicated code, among other tools.
Setuptools is really useful too. It is a development library designed to facilitate packaging Python projects by enhancing the standard library's distribution utilities, and I encourage you to use it to wrap your processes and models into a strong, modular project.
And pip is a package manager for Python packages and modules. You can add, download, remove, and do a lot more with just a few words on a single line. It can fetch wheels, zips, and modules from the internet and install them easily by just using
pip install <module or library name>
So, to answer your question: if you downloaded and installed an AWS-supported Python package and it installed those libraries, I would assume they are being used by the modules you want to use.
You can always check the source code to be sure.
If the libraries aren't really being used, they aren't necessary, since there are several other libraries and packages that do what they do.
Hope it helps, happy coding.
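One more note: virtualenv installs pip and setuptools into every new environment by default, so zipping site-packages wholesale sweeps them in even if your own code never imports them. A small sketch to spot pruning candidates in the finished archive (the EXPECTED set simply mirrors the two packages installed in the question):

import zipfile

# Packages installed on purpose in the question's commands; their own
# dependencies would legitimately show up too, so extend the set as needed.
EXPECTED = {"xmltodict", "requests"}

with zipfile.ZipFile("function.zip") as zf:
    top_level = {name.split("/")[0] for name in zf.namelist()}

print("Candidates to prune:", sorted(top_level - EXPECTED))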

Install a new package from requirements.txt without upgrading the dependencies which are already satisfied

I am using a requirements.txt to specify the package dependencies used in my Python application. Everything seems to work fine for packages that either have no internal dependencies or whose dependencies are not already installed.
The issue occurs when I try to install a package that has a nested dependency on some other package, and an older version of that package is already installed.
I know I can avoid this when installing a package manually by using pip install -U --no-deps <package_name>. I want to understand how to do this using the requirements.txt, as deployment and requirements installation is an automated process.
Note:
The already installed package is not something I am directly using in my project; it is part of a different project on the same server.
Thanks in advance.
Dependency resolution is a fairly complicated problem. A requirements.txt just specifies your dependencies, with optional version ranges. If you want to "lock" your transitive dependencies (dependencies of dependencies) in place, you would have to produce a requirements.txt that contains exact versions of every package you install, with something like pip freeze. This doesn't solve the problem, but it would at least point out during an install which dependencies conflict, so that you can manually pick the right versions.
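To produce such a fully pinned file you would normally just redirect the output of pip freeze into requirements.txt; as a rough sketch of what that amounts to, using only the standard library (Python 3.8+):

# Rough sketch mimicking `pip freeze`: one exact pin per installed distribution.
from importlib.metadata import distributions

pins = sorted(f"{d.metadata['Name']}=={d.version}" for d in distributions())
with open("requirements.txt", "w") as f:
    f.write("\n".join(pins) + "\n")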
That being said, the new (as of writing) officially supported tool for managing application dependencies is Pipenv. This tool both manages the exact versions of transitive dependencies for you (so you won't have to maintain a requirements.txt manually) and isolates the packages your code requires from the rest of the system (it does this using the virtualenv tool under the hood). This isolation should fix your problem of breaking a colocated project, since your project can use different library versions than the rest of the system.
(TL;DR Try using Pipenv and see if your problem just disappears)

Automatically installing Python dependencies using CMake

I've had a quick look around, but because of terminology like dependencies and packages being used in different ways, it's quite tricky to pin down an answer.
I'm building a mixed-language source tree (Fortran, some C, and Python), and the Fortran calls a Python script which depends on the networkx Python package on PyPI. Normally I just have networkx installed anyway, so it isn't a problem for me when rebuilding.
However, for distribution, I want the best way to:
Install pip or equivalent, if it is not installed.
Possibly install virtualenv and create a virtual environment, if appropriate.
Download and install networkx using the --user option with pip.
Is there a standard way? Or should I just use CMake dependencies with custom commands that install pip etc.?
It depends. For a "manual" install, you should definitely detect whether all the tools required to build are installed, and issue an error if they aren't. Then use execute_process() to run pip and whatever else you want.
On the other hand, if you are going to produce a real package for some particular Linux distribution, you just pack your binaries and declare (via the corresponding syntax of the particular package format, like *.rpm or *.deb) that your package depends on some other packages. That way you can be sure they will be installed with (or even before) your package.
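Whichever route you take, it can also help to fail fast inside the Python script itself. This defensive import check is a complementary pattern rather than part of either approach above:

# Fail with an actionable message instead of a bare traceback if the
# networkx dependency was never installed.
try:
    import networkx  # imported here only to verify availability
except ImportError:
    raise SystemExit(
        "Missing dependency 'networkx'. Install it with:\n"
        "    python -m pip install --user networkx"
    )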
