I have a Dockerfile that installs a few packages via pip.
Some of them require grpcio, and it takes a few minutes just to build that part.
Does anyone have a tip to speed it up?
Installing collected packages: python-dateutil, azure-common, azure-nspkg, azure-storage, jmespath, docutils, botocore, s3transfer, boto3, smmap2, gitdb2, GitPython, grpcio, protobuf, googleapis-common-protos, grpc-google-iam-v1, pytz, google-api-core, google-cloud-pubsub
Found existing installation: python-dateutil 2.7.3
Uninstalling python-dateutil-2.7.3:
Successfully uninstalled python-dateutil-2.7.3
Running setup.py install for grpcio: started
Running setup.py install for grpcio: still running...
Running setup.py install for grpcio: still running...
Running setup.py install for grpcio: still running...
Thanks.
I had the same issue and it was solved by upgrading pip:
$ pip3 install --upgrade pip
Here's a word from one of the maintainers of the grpc project:
pip grpcio install is (still) very slow #22815
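In a Dockerfile, the fix looks something like this (a minimal sketch: upgrading pip first lets it recognize the newer manylinux wheel tags that grpcio publishes prebuilt wheels under, so it installs a binary wheel instead of compiling from source):
# upgrade pip before installing requirements so grpcio resolves to a prebuilt wheel
RUN python -m pip install --upgrade pip \
    && pip install -r requirements.txt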
Had the same issue; fixed it by using a virtualenv and a multi-stage Dockerfile:
FROM python:3.7-slim as base
# ---- compile image -----------------------------------------------
FROM base AS compile-image
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
    build-essential \
    gcc
RUN python -m venv /app/env
ENV PATH="/app/env/bin:$PATH"
COPY requirements.txt .
RUN pip install --upgrade pip
# pip install is fast here (while slow without the venv):
RUN pip install -r requirements.txt
# ---- build image -----------------------------------------------
FROM base AS build-image
COPY --from=compile-image /app/env /app/env
# Make sure we use the virtualenv:
ENV PATH="/app/env/bin:$PATH"
COPY . /app
WORKDIR /app
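(The point of the two stages: the final build-image stage never installs gcc or build-essential; it only copies the compiled virtualenv out of compile-image, so the resulting image stays small.)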
Here is my requirements.txt:
fastapi==0.27.*
grpcio-tools==1.21.*
uvicorn==0.7.*
Some Docker images (looking at you, Alpine) can't use prebuilt wheels, because most prebuilt wheels target glibc (the manylinux tags) while Alpine uses musl. Use a base image that can, like Debian.
Check out this nice write-up on why. I'll reproduce an especially apt quote from it:
But for Python, as Alpine doesn't use the standard tooling used for
building Python extensions, when installing packages, in many cases
Python (pip) won't find a precompiled installable package (a "wheel")
for Alpine. And after debugging lots of strange errors you will
realize that you have to install a lot of extra tooling and build a
lot of dependencies just to use some of these common Python packages.
😩
-- Sebastián Ramírez
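If you want to see which wheel tags your image can actually use, pip ships an (experimental) debug command; a quick diagnostic sketch:
$ pip debug --verbose
# prints a "Compatible tags" list; on Debian it includes manylinux tags such as
# cp39-cp39-manylinux2014_x86_64, while on a musl-based image like Alpine those
# tags are absent, so pip falls back to building packages from source.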
I'm trying to build a package with its dependencies and then install it in a separate step.
I'm not using a requirements file; I'm using setup.cfg and pyproject.toml.
pip download vendor --dest ./build/dependencies --no-cache-dir
python setup.py check
python setup.py bdist_wheel
python setup.py sdist
That seems to download the dependencies into the ./build/dependencies folder, but I can't figure out how to install the wheel while resolving its dependencies from that folder.
--find-links doesn't appear to work because I get "Could not find a version that satisfies the requirement.." errors when doing this:
python -m pip install --no-index $(ls dist/*.whl) --find-links="./build/dependencies"
It builds fine without --no-index fetching from the internet.
I also tried running pip install with --target, like this:
pip install -e . --target=./build/dependencies
but I get the same errors when trying to point --find-links at it.
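To summarize, the full flow I'm attempting, as one sequence (same paths as above):
pip download vendor --dest ./build/dependencies --no-cache-dir   # fetch dependencies
python setup.py bdist_wheel                                      # build the project wheel
python -m pip install --no-index $(ls dist/*.whl) --find-links="./build/dependencies"   # <-- fails here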
I'm trying to build an Ubuntu 18.04 Docker image running Python 3.7 for a machine learning project. When installing specific Python packages with pip from requirements.txt, I get the following error:
Collecting sklearn==0.0
Downloading sklearn-0.0.tar.gz (1.1 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error'
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [1 lines of output]
ERROR: Can not execute `setup.py` since setuptools is not available in the build environment.
[end of output]
Although the error arises here in the context of sklearn, the issue is not specific to one library; when I remove that library and try to rebuild the image, the error arises with other libraries.
Here is my Dockerfile:
FROM ubuntu:18.04
# install python
RUN apt-get update && \
    apt-get install --no-install-recommends -y \
    python3.7 python3-pip python3.7-dev
# copy requirements
WORKDIR /opt/program
COPY requirements.txt requirements.txt
# install requirements
RUN python3.7 -m pip install --upgrade pip && \
    python3.7 -m pip install -r requirements.txt
# set up program in image
COPY . /opt/program
What I've tried:
installing python-devtools, both instead of and alongside python3.7-dev, before installing requirements with pip;
listing setuptools in requirements.txt before the affected libraries.
In both cases the same error arose.
Do you know how I can ensure setuptools is available in my environment when installing libraries like sklearn?
As mentioned in the comments, install setuptools with pip before running pip install -r requirements.txt.
This is different from putting setuptools higher up in requirements.txt: a separate pip install forces the order, whereas pip first collects all the packages from a requirements file and installs them afterwards, so you don't control the order.
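Applied to the Dockerfile above, that would look something like this (a sketch of the relevant RUN step only):
RUN python3.7 -m pip install --upgrade pip setuptools wheel && \
    python3.7 -m pip install -r requirements.txt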
I'm installing numpy in an Alpine-based Python Docker image, but building the wheel takes a really long time at this point:
Building wheel for numpy (PEP 517) ... |
(the same happens for pandas, for example)
What does this mean and why is it so slow?
I never faced such a slow install on Ubuntu, so I guess it may be related to the Alpine Linux environment.
Here is the Dockerfile:
FROM python:3.9.1-alpine3.12
WORKDIR /app
RUN python -m pip install --upgrade pip \
    && pip install -U setuptools wheel \
    && pip install -U numpy
Host machine is an Ubuntu 18.04 mid-range laptop.
Not all Docker images are born equal: each image packs a different set of packages.
That implies it takes a different amount of effort to install the packages required to run whatever you need (such as numpy).
You might like to read this.
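As a concrete check, the same steps on a Debian-based image should finish in seconds, because pip can download a prebuilt manylinux wheel for numpy instead of compiling the C extensions against musl (a sketch; only the base image tag changes):
FROM python:3.9.1-slim
WORKDIR /app
RUN python -m pip install --upgrade pip \
    && pip install -U setuptools wheel \
    && pip install -U numpy  # resolves to a prebuilt manylinux wheel here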
I need to update offline a library in Python.
I have downloaded the library with pip download and then I try to update the library with the command:
pip install --no-index --user --find-links /tmp/pip/ --upgrade Werkzeug==0.15.5
which gives:
Ignoring indexes: https://...
Collecting Werkzeug==0.15.5
Installing collected packages: Werkzeug
Successfully installed Werkzeug-0.11.15
and then the library stays in the same version!
pip freeze | grep Wer
Werkzeug==0.11.15
Any ideas why this happens?
UPDATE: After the comment from @hoefling I reran with the -vvv option and this is what I got:
pip install --no-index --user --find-links /tmp/pip2/ -vvv Werkzeug==0.15.5
Ignoring indexes: https://pypi:pypi#..../simple/
Collecting Werkzeug==0.15.5
0 location(s) to search for versions of Werkzeug:
Skipping link /tmp/pip2/werk/ (from -f); not a file
Found link file:///tmp/pip2/werk/Werkzeug-0.15.5-py2.py3-none-any.whl, version: 0.15.5
Local files found: /tmp/pip2/werk/Werkzeug-0.15.5-py2.py3-none-any.whl
Using version 0.15.5 (newest of versions: 0.15.5)
Installing collected packages: Werkzeug
Successfully installed Werkzeug-0.11.15
Cleaning up...
Try this command:
pip install Werkzeug-0.15.5.tar.gz
and the result should look like this:
Processing ./Werkzeug-0.15.5.tar.gz
Installing collected packages: Werkzeug
Running setup.py install for Werkzeug ... done
Successfully installed Werkzeug-0.15.5
This behaviour can happen because pip by default works with the system Python, which lives in /usr/bin/ on Linux. When you install a package with the --user flag, it goes into your user's site-packages instead, probably somewhere under ~/.local/.
To solve the problem you could install the package into the system Python by dropping the --user flag, which is generally not recommended. A better option is to use a virtual environment, a distribution made specifically for your project; the currently recommended way is venv:
$ python -m venv env
$ source env/bin/activate
(env) $ pip install ... (packages you need to install without --user flag)
(env) $ pip freeze
# should give you the packages you installed
This helps not only in this example: it keeps your system Python installation clean, and if you mess something up, you only mess up the environment of that specific project.
I'm currently using a build host for third-party packages, running
$ pip3 wheel --wheel-dir=/root/wheelhouse -r /requirements.txt
After a successful build I copy the directory /root/wheelhouse onto a new machine and install the compiled packages by running
$ pip3 install -r /requirements.txt --no-index --find-links=/root/wheelhouse
Is there something similar in pipenv?
Everything I found about pipenv in combination with wheels was bug reports on GitHub.
Note that copying the venv directory is not an option: I'm using a Docker container and want to install the packages system-wide.
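One workaround that maps onto the flow above (a sketch, not a built-in pipenv wheel command; pipenv lock -r is the older spelling, newer pipenv versions use pipenv requirements): export the lockfile to requirements format and reuse pip wheel:
$ pipenv lock -r > requirements.txt
$ pip3 wheel --wheel-dir=/root/wheelhouse -r requirements.txt
$ pip3 install --no-index --find-links=/root/wheelhouse -r requirements.txt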