Split requirements files in pip

To create Python virtual environments I use virtualenv and pip. The workflow is very simple:
$ virtualenv project
$ cd project
$ . bin/activate
$ pip install -r /path/to/requirements/req1.txt
$ pip install -r /path/to/requirements/req2.txt
The number of different requirements files can grow large enough that it would be handy to include them all at once, so I'd prefer to write:
$ pip install -r /path/to/requirements/req1_req2.txt
with req1_req2.txt containing something like:
include /path/to/requirements/req1.txt
include /path/to/requirements/req2.txt
or otherwise:
$ pip install -r /path/to/requirements/*.txt
Neither of these works, and however simple it might be, I can't figure out how to do what I want.
Any suggestion?

The -r flag isn't restricted to command-line use only; it can also be used inside requirements files. So running pip install -r req-1-and-2.txt when req-1-and-2.txt contains this:
-r req-1.txt
-r req-2.txt
will install everything specified in req-1.txt and req-2.txt.
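Note that a nested -r path is resolved relative to the file that contains it, not relative to your current working directory, so the included files can live alongside the master file. A hypothetical layout for illustration:
# requirements/base.txt
requests
# requirements/dev.txt -- can be installed from any directory
-r base.txt
pylint
Running pip install -r requirements/dev.txt then installs the entries from base.txt as well.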

Just as a note, you can also split the requirements based on your groupings and embed them in a single file (or, again, prepare multiple requirements files based on your environment) that you can then install from.
For example, the test requirements here:
requirements-test.txt
pylint==2.4.4
pytest==5.3.2
The dev requirements here:
requirements-dev.txt
boto3>=1.12.11
Master requirements file containing your other requirements:
requirements.txt
-r requirements-dev.txt
-r requirements-test.txt
Now, you can just install the single requirements file that embeds the others:
pip3 install -r requirements.txt


How do I uninstall all packages installed by pip from my currently activated virtual environment?
I've found this snippet as an alternative solution. It's a more graceful removal of libraries than remaking the virtualenv:
pip freeze | xargs pip uninstall -y
In case you have packages installed via VCS as editable installs, you need to exclude those lines and remove those packages manually (elevated from the comments below):
pip freeze | grep -v "^-e" | xargs pip uninstall -y
If you have packages installed directly from github/gitlab, those entries will contain @.
Like:
django @ git+https://github.com/django.git@<sha>
You can add cut -d "@" -f1 to get just the package name, which is all that's required to uninstall it.
pip freeze | cut -d "@" -f1 | xargs pip uninstall -y
This works on Mac, Windows, and Linux systems.
To get the list of all pip packages into a requirements.txt file (note: this will overwrite requirements.txt if it exists, otherwise it will create a new one; if you don't want to replace your old requirements.txt, give a different file name in place of requirements.txt in all the following commands):
pip freeze > requirements.txt
Now, to remove the packages one by one:
pip uninstall -r requirements.txt
Or, to remove them all at once:
pip uninstall -r requirements.txt -y
If you're working on an existing project that has a requirements.txt file and your environment has diverged, simply replace requirements.txt from the above examples with toberemoved.txt. Then, once you have gone through the steps above, you can use the requirements.txt to update your now clean environment.
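In script form, that diverged-environment cleanup might look like this (a sketch; toberemoved.txt is just a scratch file name):
# snapshot what is actually installed right now
pip freeze > toberemoved.txt
# wipe the diverged set
pip uninstall -y -r toberemoved.txt
# reinstall the intended set from version control
pip install -r requirements.txt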
And as a single command, without creating any file (as @joeb suggested):
pip uninstall -y -r <(pip freeze)
I wanted to elevate this answer out of a comment section because it's one of the most elegant solutions in the thread. Full credit for this answer goes to @joeb.
pip uninstall -y -r <(pip freeze)
This worked great for me for the use case of clearing my user packages folder outside the context of a virtualenv, which many of the above answers don't handle.
Edit: Anyone know how to make this command work in a Makefile?
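Regarding that edit: <(...) is a bash process substitution, and make runs recipes with /bin/sh by default, so one fix (a sketch, untested in your setup) is to tell make to use bash:
SHELL := /bin/bash

uninstall-all:
	pip uninstall -y -r <(pip freeze)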
Bonus: A bash alias
I add this to my bash profile for convenience:
alias pipuninstallall="pip uninstall -y -r <(pip freeze)"
Then run:
pipuninstallall
Alternative for Pipenv
If you are using pipenv, you can run:
pipenv uninstall --all
Alternative for Poetry
If you are using Poetry, run:
poetry env remove python3.9
(Note that you need to change the version number there to match whatever your Python version is.)
This works with the latest virtualenv. I think it's the shortest and most declarative way to do it.
virtualenv --clear MYENV
But why not just delete and recreate the virtualenv? Immutability rules. Besides, it's hard to remember all that piping and grepping the other solutions use.
Other answers that use pip list or pip freeze must include --local, otherwise they will also uninstall packages that are found in the common namespaces.
So here is the snippet I regularly use:
pip freeze --local | xargs pip uninstall -y
Ref: pip freeze --help
I managed it by doing the following:
Create a requirements file called reqs.txt listing the currently installed packages:
pip freeze > reqs.txt
Then uninstall all the packages listed in reqs.txt:
# -y means remove the package without prompting for confirmation
pip uninstall -y -r reqs.txt
I like this method, as you always have a pip requirements file to fall back on should you make a mistake. It's also repeatable and cross-platform (Windows, Linux, macOS).
Method 1 (with pip freeze)
pip freeze | xargs pip uninstall -y
Method 2 (with pip list)
pip list | awk 'NR>2 {print $1}' | xargs pip uninstall -y
(the NR>2 filter skips the two header rows that pip list prints)
Method 3 (with virtualenv)
virtualenv --clear MYENV
On Windows if your path is configured correctly, you can use:
pip freeze > unins && pip uninstall -y -r unins && del unins
It should be a similar case for Unix-like systems:
pip freeze > unins && pip uninstall -y -r unins && rm unins
Just a warning: this isn't completely solid, as you may run into issues such as 'File not found', but it may work in some cases nonetheless.
EDIT: For clarity: unins is an arbitrary file which has data written out to it when this command executes: pip freeze > unins
That file, in turn, is then used to uninstall the aforementioned packages with implied consent/prior approval via pip uninstall -y -r unins
The file is finally deleted upon completion.
I use the --user option to uninstall all the packages installed in the user site.
pip3 freeze --user | xargs pip3 uninstall -y
For Windows users, this is what I use in Windows PowerShell:
pip uninstall -y (pip freeze)
First, write all packages to requirements.txt:
pip freeze > requirements.txt
Then remove them all:
pip uninstall -y -r requirements.txt
The quickest way is to remake the virtualenv completely. I'm assuming you have a requirements.txt file that matches production; if not:
# On production:
pip freeze > reqs.txt
# On your machine:
rm -rf $VIRTUALENV_DIRECTORY
virtualenv $VIRTUALENV_DIRECTORY
. $VIRTUALENV_DIRECTORY/bin/activate
pip install -r reqs.txt
Using the virtualenvwrapper function:
wipeenv
See wipeenv documentation
It's an old question, I know, but I did stumble across it, so for future reference you can now do this:
pip uninstall [options] <package> ...
pip uninstall [options] -r <requirements file> ...
-r, --requirement file
Uninstall all the packages listed in the given requirements file. This option can be used multiple times.
from the pip documentation version 8.1
(adding this as an answer, because I do not have enough reputation to comment on @blueberryfields's answer)
@blueberryfields's answer works well, but fails if there is no package to uninstall (which can be a problem if this "uninstall all" is part of a script or Makefile). This can be solved with xargs -r when using GNU's version of xargs:
pip freeze --exclude-editable | xargs -r pip uninstall -y
from man xargs:
-r, --no-run-if-empty
If the standard input does not contain any nonblanks, do not run the command. Normally, the command is run once even if there is no input. This option is a GNU extension.
pip3 freeze --local | xargs pip3 uninstall -y
You might have to run this command several times before pip3 freeze --local comes back empty.
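To automate that, you could loop until nothing is left (a small sketch reusing the same commands):
# repeat until `pip3 freeze --local` reports nothing left
while [ -n "$(pip3 freeze --local)" ]; do
    pip3 freeze --local | xargs pip3 uninstall -y
done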
Best way to remove all packages from the virtual environment.
Windows PowerShell:
pip freeze > unins ; pip uninstall -y -r unins ; del unins
Windows Command Prompt:
pip freeze > unins && pip uninstall -y -r unins && del unins
Linux:
pip3 freeze > unins ; pip3 uninstall -y -r unins ; rm unins
This was the easiest way for me to uninstall all Python packages:
# note: pip.get_installed_distributions was removed in pip 10,
# so this only works with older pip versions
from pip import get_installed_distributions
from os import system

for i in get_installed_distributions():
    system("pip3 uninstall {} -y -q".format(i.key))
The easy, robust way that is cross-platform and works with pipenv as well:
pip freeze > requirement
pip uninstall -r requirement
With pipenv:
pipenv run pip freeze > requirement
pipenv run pip uninstall -r requirement
But be aware that this won't update the Pipfile or Pipfile.lock.
Cross-platform support by using only pip:
#!/usr/bin/env python
# Python 2 script; uses pre-pip-10 internals (pip.commands)
from sys import stderr

from pip import get_installed_distributions
from pip.commands.uninstall import UninstallCommand

pip_uninstall = UninstallCommand()
options, args = pip_uninstall.parse_args([
    package.project_name
    for package in get_installed_distributions()
    if not package.location.endswith('dist-packages')
])
options.yes = True  # Don't confirm before uninstall
# set `options.require_venv` to True for virtualenv restriction

try:
    print pip_uninstall.run(options, args)
except OSError as e:
    if e.errno != 13:
        raise e
    print >> stderr, "You lack permissions to uninstall this package. Perhaps run with sudo? Exiting."
    exit(13)
# Plenty of other exceptions can be thrown, e.g. `InstallationError`;
# handle them if you want to.
This works on my Windows system:
pip freeze > packages.txt && pip uninstall -y -r packages.txt && del packages.txt
The first part, pip freeze > packages.txt, creates a text file listing the packages installed via pip along with their version numbers.
The second part, pip uninstall -y -r packages.txt, removes all the installed packages without asking for a confirmation prompt.
The third part, del packages.txt, deletes the just-created packages.txt.
If you're running virtualenv:
virtualenv --clear </path/to/your/virtualenv>
for example, if your virtualenv is /Users/you/.virtualenvs/projectx, then you'd run:
virtualenv --clear /Users/you/.virtualenvs/projectx
If you don't know where your virtual env is located, you can run which python from within an activated virtual env to get the path.
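For example (an illustrative session, matching the paths above):
$ which python
/Users/you/.virtualenvs/projectx/bin/python
The directory to pass to virtualenv --clear is then /Users/you/.virtualenvs/projectx.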
In the Windows command shell, the command pip freeze | xargs pip uninstall -y won't work. So for those of you using Windows, I've figured out an alternative way to do it:
Copy all the names of the installed pip packages from the output of pip freeze into a .txt file.
Then go to the location of your .txt file and run the command pip uninstall -r textfile.txt
If you are using pew, you can use the wipeenv command:
pew wipeenv [env]
I simply wanted to remove the packages installed by the project, and not other packages I'd installed (things like neovim, mypy and pudb, which I use for local dev but which are not in the app requirements). So I did:
cat requirements.txt | sed 's/=.*//g' | xargs pip uninstall -y
which worked well for me.
Select the libraries to delete from this folder:
C:\Users\User\AppData\Local\Programs\Python\Python310\Lib\site-packages
Why not just rm -r .venv and start over?
pip uninstall `pip freeze --user`
The --user option prevents system-installed packages from being included in the listing, thereby avoiding /usr/lib and distutils permission errors.

Not able to install any python package in docker container

I am trying to create a Docker image with Ubuntu 16.04 as the base. I want to install a few Python packages like pandas, flask, etc. I have kept all the packages in "requirements.txt". But when I try to build the image, I get:
Could not find a version that satisfies the requirement requests (from -r requirements.txt (line 1)) (from versions: )
No matching distribution found for requests (from -r requirements.txt (line 1))
Basically, I have not pinned any version in "requirements.txt"; I'd expect it to take the latest available and compatible version of each package. But I am getting the same issue for every package.
My Dockerfile is as follows:
FROM ubuntu:16.04
RUN apt-get update -y && \
    apt-get install -y python3-pip python3-dev build-essential cmake pkg-config libx11-dev libatlas-base-dev
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /testing/requirements.txt
WORKDIR /testing
RUN pip3 install -r requirements.txt
and requirements.txt is:
pandas
requests
PyMySQL
Flask
Flask-Cors
Pillow
face-recognition
Flask-SocketIO
Where am I going wrong? Can anybody help?
I ran into the same situation. I observed that pip looks for the network from within Docker; it acts as if it were running standalone without a network, so it's not able to locate the packages. In these situations, either a
No matching distribution found
or sometimes a
Retrying ...
error may occur.
I used the --network option in the docker build command, as below, to overcome this error; it makes pip use the host network to download the required packages.
docker build --network=host -t tracker:latest .
Try using this:
RUN python3.6 -m pip install --upgrade pip \
&& python3.6 -m pip install -r requirements.txt
By using it this way, you specify the version of Python with which you want to search for and install those packages.
Change it to python3.7 if you wish to use version 3.7.
I suggest using the official Python image instead. Your Dockerfile then becomes:
FROM python:3
WORKDIR /testing
COPY ./requirements.txt /testing/requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
... etc ...
Now, re: Angular/Node. You have two options from here: 1) install Angular/Node on the Python image; or 2) use Docker's multi-stage build feature, building the Angular- and Python-specific images before merging them together. Option 2 is recommended, but it takes some work. It would probably look like this:
FROM node:8 as node
# Angular-specific build
FROM python:3 as python
# Python-specific build
# Then copy your data from the Angular image to the Python one:
COPY --from=node /usr/src/app/dist/angular-docker /usr/src/app
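Fleshed out, that skeleton might look like this (a sketch; the npm commands and output path are assumptions, not taken from your project):
# --- Stage 1: build the Angular app ---
FROM node:8 AS node
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# --- Stage 2: the Python image ---
FROM python:3 AS python
WORKDIR /testing
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# copy the built front-end out of the node stage
COPY --from=node /usr/src/app/dist/angular-docker /usr/src/app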

ERROR: Directory is not installable. Neither 'setup.py' nor 'pyproject.toml'

I've got the following error:
ERROR: Directory is not installable. Neither 'setup.py' nor 'pyproject.toml'
Background is that I'm following a guide online to expose an ML model via API Gateway on AWS that can be found here:
Hosting your ML model on AWS Lambdas + API Gateway
I'm trying to pull some python packages into a local folder using the following command:
pip install -r requirements.txt --no-deps --target python/lib/python3.6/site-packages/
I have also tried this:
pip install -r requirements.txt --no-deps -t python/lib/python3.6/site-packages/
and all I get is the above error.
Google is pretty bare when it comes to help with this issue. Any ideas?
thanks,
Does this work? You can create a new folder, e.g. lib, and run this command:
pip3 install <your_python_module_name> -t lib/
I'd suggest making the path to requirements.txt explicit, e.g. ./requirements.txt, if you're running the command in the same directory.
You may also need to add a basic setup.py to the folder where you're trying to install. The pip docs mention that this error occurs when there's no setup.py file:
When looking at the items to be installed, pip checks what type of item each is, in the following order:
1. Project or archive URL.
2. Local directory (which must contain a setup.py, or pip will report an error).
3. Local file (an sdist or wheel format archive, following the naming conventions for those formats).
4. A requirement, as specified in PEP 440.
https://pip.pypa.io/en/stable/cli/pip_install/#argument-handling
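If you go the setup.py route, a minimal file can be as small as this (the name and version are placeholders):
# setup.py -- minimal placeholder so pip treats the directory as installable
from setuptools import setup

setup(name="placeholder", version="0.0.0")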
Please try this:
ADD requirements.txt ./
RUN pip install -r requirements.txt --no-deps -t python/lib/python3.6/site-packages/
Syntax: ADD <source> <destination>
ADD requirements.txt ./ adds requirements.txt (assumed to be at the root of the build context) to the Docker image's ./ folder,
thus creating a layer from which the daemon has the context to the location of requirements.txt in the Docker image.
More about it in dockerfile-best-practices
You can change your directory as follows:
import os
os.chdir(path)
instead of:
cd path
Also try using:
!pip freeze > requirements.txt
instead of:
pip install -r requirements.txt
Then execute your code:
!pip install .
or
!pip install -e .
In conclusion, try this:
import os
os.chdir(path)
!pip freeze > requirements.txt
!pip install .

How do you make a .tar.gz file of all your pip packages used in a project?

I was wondering how to make a .tar.gz file of all the pip packages used in a project. The project will not have access to the internet when a user sets up the application. So I thought it would be easiest to create a .tar.gz file containing all the necessary packages, which the user would just extract and install with a setup.py file (example) or something along those lines. Thanks
Do:
pip freeze > requirements.txt
It will store all your requirements in the file requirements.txt.
pip wheel -r requirements.txt --wheel-dir="packages"
It will pre-package, or bundle, your dependencies into the directory packages.
Now you can turn off your Wi-Fi and install the dependencies whenever you want from the "packages" folder.
Just run this:
pip install --force-reinstall --ignore-installed --upgrade --no-index --no-deps packages/*
Thanks :)
I also had a similar use case and achieved it using pip wheel. With pip wheel, you can bundle up all of a project's dependencies, with any compilation done, into a single archive.
I found it an easier and more appropriate approach for my use case than the one mentioned above by @ajoseps.
$ tempdir=$(mktemp -d /tmp/wheelhouse-XXXXX)
$ pip wheel -r requirements.txt --wheel-dir=$tempdir
$ cwd=`pwd`
$ (cd "$tempdir"; tar -cjvf "$cwd/bundled.tar.bz2" *)
You can then install from the archive like this:
$ tempdir=$(mktemp -d /tmp/wheelhouse-XXXXX)
$ (cd $tempdir; tar -xvf /path/to/bundled.tar.bz2)
$ pip install --force-reinstall --ignore-installed --upgrade --no-index --no-deps ${tempdir}/*
Sources: https://pip.pypa.io/en/stable/user_guide/#installation-bundles

Install python requirements.txt with Makefile only requirements.txt is changed

How can I make the install target run only if requirements.txt has changed?
I don't want to upgrade packages every time I run make install.
I found a workaround by creating a fake marker file, _requirements.txt.pyc, but it is ugly and dirty. With it, make refuses to install the pip requirements a second time, because requirements.txt has not changed:
$ make install-pip-requirements
make: Nothing to be done for 'install-pip-requirements'.
But my goal is to do:
# first time,
$ make install # create virtual environment, install requirements
# second time
$ make install # detected and skipping creating virtual env,
# detect that requirements.txt have no changes
# and skipping installing again all python packages
make: Nothing to be done for 'install'.
The Python package looks like this:
.
├── Makefile
├── README.rst
├── lambda_handler.py
└── requirements.txt
I am using a Makefile for some automation in Python:
/opt/virtual_env:
	# create virtual env if folder does not exist
	python -m venv /opt/virtual_env

virtual: /opt/virtual_env

# if requirements.txt is modified then execute pip install
_requirements.txt.pyc: requirements.txt
	/opt/virtual_env/bin/pip install --upgrade -r requirements.txt
	echo > _requirements.txt.pyc

requirements: SOME MAGIC OR SOME make flags
	pip install -r requirements.txt

install-pip-requirements: _requirements.txt.pyc

install: virtual requirements
I am sure there must be a better way to do this ;)
Not sure this will answer your question at this point, but the better way is to use a fully fledged Python pip project template.
We use cookiecutter to create a particular pip package with this cookiecutter template.
It has a Makefile which does not constantly re-install all the dependencies, and it makes use of Python tox, which allows running a project's tests in different Python envs automatically. You can still develop in a dev virtualenv, but we update it only when a new package is added; everything else is handled by tox.
What you show so far is an attempt to write a Python build from scratch, which has been done in numerous project templates. If you really want to understand what is going on there, you can analyze these templates.
As a follow-up: because you expect it to work with a Makefile, I'd suggest removing the --upgrade flag from the pip command. I suspect your requirements do not pin the versions needed for the project to work. In our experience, not putting versions there can badly break things. Thus our requirements.txt looks like:
configure==0.5
falcon==0.3.0
futures==3.0.5
gevent==1.1.1
greenlet==0.4.9
gunicorn==19.4.5
hiredis==0.2.0
python-mimeparse==1.5.2
PyYAML==3.11
redis==2.10.5
six==1.10.0
eventlet==0.18.4
Using the requirements without --upgrade makes pip simply verify what is already in the virtualenv and what is not. Everything that satisfies the required version will be skipped (no download). You can also reference git versions in requirements like this:
-e git+http://some-url-here/path-to/repository.git#branch-name-OR-commit-id#egg=package-name-how-to-appear-in-pip-freeze
@Andrei.Danciuc, make just needs two files to compare; you can use any of the output files from running pip install.
For example, I usually use a "vendored" folder, so I can alias the path to the "vendored" folder instead of using a dummy file.
# Only run install if requirements.txt is newer than the vendored folder
vendored-folder := vendored

.PHONY: install
install: $(vendored-folder)

$(vendored-folder): requirements.txt
	rm -rf $(vendored-folder)
	pip install -r requirements.txt -t $(vendored-folder)
If you don't use a vendored folder, this code below should work for both virtualenv and global setups.
# Only run install if requirements.txt is newer than the SITE_PACKAGES location
.PHONY: install
SITE_PACKAGES := $(shell pip show pip | grep '^Location' | cut -f2 -d':')

install: $(SITE_PACKAGES)

$(SITE_PACKAGES): requirements.txt
	pip install -r requirements.txt
