I'm making a utility for the Python mobile app Pythonista. It is basically a version of pip, which is not supported by default (yes, I am aware one already exists; I am making mine differently for personal use).
Packages contain a setup.py, but not always a requirements.txt. How can I read that file and find the dependencies from it? Alternatively, how else could I fetch the dependencies? I know it should be possible, because pip itself finds the dependencies, and packages don't always have a requirements.txt.
So how would I get the dependencies of a package from its setup.py, or however pip does it?
A solution not using setup.py or pip:
You can try the pipreqs package.
pipreqs - Generate requirements.txt file for any project based on imports
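A minimal invocation might look like this (assuming your project lives at the path shown):
pip install pipreqs
pipreqs /path/to/your/project
This scans the project's import statements and writes a requirements.txt into the project directory.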
Another option is to use pip-tools.
The pip-compile command lets you compile a requirements.txt file from your dependencies, specified in either setup.py or requirements.in.
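A sketch of that workflow, assuming the dependencies are declared in setup.py:
pip install pip-tools
pip-compile setup.py
pip-compile resolves the declared dependencies and writes a pinned requirements.txt.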
My Python project has the following structure:
├───pyproject.toml
└───mypackage
    ├───lib
    │       localdep-0.2.0-py3-none-any.whl
    │       localdep-0.2.0.tar.gz
    └───service
            app.py
            home.py
            modules.py
I need to build mypackage using poetry, with the local dependency localdep taken from mypackage/lib/localdep-0.2.0..., so that mypackage can be installed with a simple pip install mypackage-0.1.0.tar.gz command and no additional files. I've tried using the path and file specifiers in pyproject.toml, but I keep getting the following error:
ERROR: Could not find a version that satisfies the requirement localdep (from mypackage==0.1.0) (from versions: none)
Current version of my pyproject.toml:
[build-system]
requires = [ "poetry>=0.12",]
build-backend = "poetry.masonry.api"
[tool.poetry]
name = "myproject"
version = "0.1.0"
description = "Simple demo project."
authors = ["Some Author"]
license = "MPL 2.0"
[tool.poetry.dependencies]
python = "3.7.3"
localdep = {file = "mypackage/lib/localdep-0.2.0-py3-none-any.whl"}
Does anyone know how to declare a local dependency in pyproject.toml so that poetry build packages it correctly?
The use case of poetry's local dependency syntax is different from what you need, so it can't solve your problem here.
From the usage examples in those docs you can see that the path points to packages that are not part of the package itself; they always leave the package root first, like this: ../some/different/location. The whole construct is only useful during development, and its logic is run by poetry install, not poetry build.
What you want is to bundle the local dependency together with your project, so that pip knows during a deployment of your project's .whl where to pull the local dependency from. But since pyproject.toml is not packaged into the wheel's metadata, the information about where to get dependencies from is no longer available after the build, so this can't work. A built package only knows what dependencies it has, not where to get them from. This philosophy might be unusual if you come from languages where bundling all dependencies together with your code is common.
So even if you manage to build your package so that it includes another wheel, which doesn't work out of the box because setuptools only includes .py files in the sdist/bdist by default, there is no way for pip to know that the bundled dependency is reachable.
I see four options to solve your problem in a way that Python supports.
Use a version of your localdep that is on PyPI, or upload it there. But if that were a possibility, you probably wouldn't have asked this question.
If localdep is under your control and is only used by mypackage, write it as a simple submodule of mypackage instead. Read: the initial decision to make localdep its own package was overengineering, which may or may not be true.
Use a way to vendor localdep that python understands. Take this guide for an in-depth explanation with all the considerations and pitfalls, or this hacky post if you want to get it to work without needing to really understand why or how.
Deploy your package as a "wheelhouse".
What is a wheelhouse?
The wheelhouse part needs a little bit more text, but it might be closest to what you initially had in mind. In this approach, you don't need to include localdep in your project, and the whole mypackage/lib folder should best be deleted. localdep only needs to be installed into your python interpreter, which you can ensure by running pip freeze and checking the output for localdep's name and version.
That same output will then be used to build said wheelhouse with pip wheel -w wheelhouse $(pip freeze). If you don't want to include dev dependencies, delete your current virtualenv, then recreate and enter it with poetry install; poetry shell before running the wheel command.
For completeness' sake, you can build dist/myproject.whl with poetry build and throw it in there as well; then you can use this wheelhouse to install your package wherever you like by running python -m pip install wheelhouse/* with whichever python you want.
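Putting the whole flow together, a minimal sketch (the wheelhouse directory name is arbitrary, and the last command runs on the target machine):
poetry install                          # set up a clean virtualenv
poetry shell                            # enter it
pip wheel -w wheelhouse $(pip freeze)   # build a wheel for every installed dependency
poetry build                            # build your own package's wheel
cp dist/*.whl wheelhouse/               # add it to the wheelhouse
python -m pip install wheelhouse/*      # install everything from local wheels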
Why use pip and not poetry for that?
Poetry is a dependency manager for developing packages; as such, it doesn't deal with deployment issues much and only targets them tangentially. This is usually fine, because pip, as we saw, is quite capable in that regard. pip in turn is not that convenient during development, which is why poetry is nice, but poetry does not completely replace pip.
I am making a python package that has some dependencies not hosted on PyPI or other indexes. I'd like to include those .whl dependencies in my repo/package, and install them from local files when the user pip installs my package.
I could add code to call pip directly on those files within my setup.py file, but I'm not sure of a reliable way to do that, especially if the user passed special arguments to the original pip invocation.
Is there a proper way to do this? Or is it not something well supported by pip/setuptools?
I have a Python project that uses tools/programs that are installed from the command line with pip.
How do I ensure that the user has all the tools I'm using on their computer?
Do I have to include a readme stating that you need the following tools in order to properly run the program, or is there some function or module that can automatically install the missing pieces?
Oh, and I'm fairly new to Python, so maybe I just do not understand how pip works. Can I just use os.system("pip install something")? And what if I want it to not be platform-specific?
The convention is to include a requirements.txt which contains information on which packages need to be installed. You can read more at the official documentation for pip.
However, since generating that file manually can be painful, there are tools like pipreqs, which examines your project and generates a requirements.txt file for you by comparing your imports against the packages available in the official pip repositories.
Once the requirements.txt file is generated, it can be installed this way: pip install -r requirements.txt.
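For reference, requirements.txt is just a plain text file with one requirement per line; the package names and version pins below are purely illustrative:
requests==2.31.0
numpy>=1.24
django
Each line uses the same version-specifier syntax that pip accepts on the command line.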
I'm wondering if there's a way to "install" single-file python modules using pip (i.e. just have pip download the specified version of the file and copy it to site-packages).
I have a Django project that uses several 3rd-party modules which aren't proper distributions (django-thumbs and a couple others) and I want to pip freeze everything so the project can be easily installed elsewhere. I've tried just doing
pip install git+https://github.com/path/to/file.git
(and tried with the -e flag too) but pip complains that there's no setup.py file.
Edit: I should have mentioned - the reason I want to do this is so I can include the required module in a requirements.txt file, to make setting up the project on a new machine or new virtualenv easier.
pip requires a valid setup.py to install a Python package. By definition, every Python package has a setup.py... What you are trying to install isn't a package but rather a single-file module... what's wrong with doing something like:
git clone https://github.com/path/to/file.git /path/to/python/install/lib
I don't quite understand the logic behind wanting to install something that isn't a package with a package manager...
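That said, if you control the repository (or a fork of it), the smallest fix is to add a trivial setup.py that declares the single file as a py_module, which makes it installable by pip and usable in a requirements.txt. A minimal sketch, with hypothetical names:
from setuptools import setup

setup(
    name="django-thumbs",    # hypothetical distribution name
    version="0.1",
    py_modules=["thumbs"],   # the single .py file, without the extension
)
With that committed, pip install git+https://github.com/path/to/file.git works as expected.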
This is somewhat related to this question. Let's say I have a package that I want to deploy via rpm because I need to do some file copying on post-install and I have some non-python dependencies I want to declare. But let's also say I have some python dependencies that are easily available in PyPI. It seems like if I just package as an egg, an unzip followed by python setup.py install will automatically take care of my python dependencies, at the expense of losing any post-install functionality and non-python dependencies.
Is there any recommended way of doing this? I suppose I could specify this in a pre-install script, but then I'm getting into information duplication and not really using setuptools for much of anything.
(My current setup involves passing install_requires = ['dependency_name'] to setup, which works for python setup.py bdist_egg and unzip my_package.egg; python my_package/setup.py install, but not for python setup.py bdist_rpm --post-install post-install.sh and rpm --install my_package.rpm.)
I think it would be best if your python dependencies were available as RPMs also, and declared as dependencies in the RPM. If they aren't available elsewhere, create them yourself, and put them in your yum repository.
Running PyPI installations as a side effect of RPM installation is evil, as it won't support proper uninstallation (i.e. uninstalling your RPM will remove your package, but leave the dependencies behind, with no proper removal procedure).
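If you do keep using bdist_rpm, you can also declare the RPM-level dependencies and the post-install script directly in setup.cfg; a sketch, with an illustrative, distro-specific package name:
[bdist_rpm]
requires = python-dependency_name
post_install = post-install.sh
python setup.py bdist_rpm then emits an RPM whose Requires: field names those packages, so rpm/yum handles installation and removal of the whole dependency chain consistently.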