After running the 'sudo docker-compose up' command, the terminal freezes at 'Building wheels for collected packages ...':
Building wheels for collected packages: cryptography, freeze, ipaddress, psutil, pycrypto, pygobject, pykerberos, pyxdg, PyYAML, scandir, SecretStorage, Twisted, pycairo, pycparser
Building wheel for cryptography (setup.py): started
I've tried adding these commands to the Dockerfile:
pip install --upgrade pip
pip install --upgrade setuptools
This is my Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /todo_service && cd /todo_service/ && mkdir requirements
COPY /requirements.txt /todo_service/requirements
RUN pip install --upgrade pip
RUN pip install -r /todo_service/requirements/requirements.txt
COPY . /todo_service
WORKDIR /todo_service
I want to know the reason for this. Is it a connection or DNS issue?
Building wheels for those modules can take quite a while; the terminal usually isn't frozen, the build is just slow.
If you want to skip the wheel-building step, you can tell pip to install cryptography directly from source instead; note that this still compiles the package, so it changes how pip installs it rather than how long it takes:
pip install cryptography --no-binary cryptography
Otherwise, refer to the installation docs:
https://cryptography.io/en/latest/installation/
I'm trying to build an Ubuntu 18.04 Docker image running Python 3.7 for a machine learning project. When installing specific Python packages with pip from requirements.txt, I get the following error:
Collecting sklearn==0.0
Downloading sklearn-0.0.tar.gz (1.1 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error'
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [1 lines of output]
ERROR: Can not execute `setup.py` since setuptools is not available in the build environment.
[end of output]
Although the error here arises in the context of sklearn, the issue is not specific to one library; when I remove that library and try to rebuild the image, the error arises with other libraries.
Here is my Dockerfile:
FROM ubuntu:18.04
# install python
RUN apt-get update && \
apt-get install --no-install-recommends -y \
python3.7 python3-pip python3.7-dev
# copy requirements
WORKDIR /opt/program
COPY requirements.txt requirements.txt
# install requirements
RUN python3.7 -m pip install --upgrade pip && \
python3.7 -m pip install -r requirements.txt
# set up program in image
COPY . /opt/program
What I've tried:
installing python-devtools, both instead of and alongside python3.7-dev, before installing requirements with pip;
installing setuptools in requirements.txt before the affected libraries are installed.
In both cases the same error arose.
Do you know how I can ensure setuptools is available in my environment when installing libraries like sklearn?
As mentioned in the comments, install setuptools with pip before running pip install -r requirements.txt.
This is different from putting setuptools higher up in requirements.txt: a separate pip command forces the install order, whereas pip first collects all the packages in a requirements file and installs them afterwards, so you don't control the order there.
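Applied to the Dockerfile above, that is one extra pip command before the requirements are installed; a minimal sketch:
# Install setuptools (and wheel) in a separate command first, so they are
# already available when pip builds the packages from requirements.txt:
RUN python3.7 -m pip install --upgrade pip setuptools wheel && \
    python3.7 -m pip install -r requirements.txt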
I have a Dockerfile which installs a few packages via pip.
Some of them require grpcio, and it takes a few minutes just to build this part.
Does anyone have a tip to speed up this part?
Installing collected packages: python-dateutil, azure-common, azure-nspkg, azure-storage, jmespath, docutils, botocore, s3transfer, boto3, smmap2, gitdb2, GitPython, grpcio, protobuf, googleapis-common-protos, grpc-google-iam-v1, pytz, google-api-core, google-cloud-pubsub
Found existing installation: python-dateutil 2.7.3
Uninstalling python-dateutil-2.7.3:
Successfully uninstalled python-dateutil-2.7.3
Running setup.py install for grpcio: started
Running setup.py install for grpcio: still running...
Running setup.py install for grpcio: still running...
Running setup.py install for grpcio: still running...
Thanks.
I had the same issue and it was solved by upgrading pip:
$ pip3 install --upgrade pip
This helps because older pip releases don't recognize the newer manylinux wheel tags that grpcio publishes, so they fall back to compiling the source distribution; once upgraded, pip can download a prebuilt wheel instead. Here's a word from one of the maintainers of the grpc project:
pip grpcio install is (still) very slow #22815
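In a Dockerfile, that means upgrading pip in the same build, before installing the requirements; a minimal sketch (the base image is an example):
FROM python:3.7-slim
COPY requirements.txt .
# Upgrade pip first so it can select grpcio's prebuilt wheel
# instead of compiling the package from source:
RUN pip3 install --upgrade pip && \
    pip3 install -r requirements.txt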
Had the same issue; fixed it by using a virtualenv and a multi-stage Dockerfile:
FROM python:3.7-slim as base
# ---- compile image -----------------------------------------------
FROM base AS compile-image
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
build-essential \
gcc
RUN python -m venv /app/env
ENV PATH="/app/env/bin:$PATH"
COPY requirements.txt .
RUN pip install --upgrade pip
# pip install is fast here (while slow without the venv):
RUN pip install -r requirements.txt
# ---- build image -----------------------------------------------
FROM base AS build-image
COPY --from=compile-image /app/env /app/env
# Make sure we use the virtualenv:
ENV PATH="/app/env/bin:$PATH"
COPY . /app
WORKDIR /app
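Note that both stages start from the same base image, so the virtualenv's interpreter paths stay valid after the copy; everything pip compiled lives inside the self-contained /app/env directory, and the final stage never needs gcc or build-essential.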
Here is my requirements.txt:
fastapi==0.27.*
grpcio-tools==1.21.*
uvicorn==0.7.*
Some Docker images (looking at you, Alpine) can't use prebuilt manylinux wheels. Use a base image that can, like a Debian-based one.
Check out this nice write-up on why. I'll reproduce a quote from it that's especially apt:
But for Python, as Alpine doesn't use the standard tooling used for
building Python extensions, when installing packages, in many cases
Python (pip) won't find a precompiled installable package (a "wheel")
for Alpine. And after debugging lots of strange errors you will
realize that you have to install a lot of extra tooling and build a
lot of dependencies just to use some of these common Python packages.
😩
-- Sebastián Ramírez
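A minimal illustration of the suggested switch (the image tags are examples):
# musl-based: pip can't use manylinux wheels, so packages compile from source
# FROM python:3.7-alpine
# Debian-based: pip downloads prebuilt wheels and no compiler is needed
FROM python:3.7-slim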
I'm currently using a build host for third-party packages, running
$ pip3 wheel --wheel-dir=/root/wheelhouse -r /requirements.txt
After a successful build I copy the directory /root/wheelhouse onto a new machine and install the compiled packages by running
$ pip3 install -r /requirements.txt --no-index --find-links=/root/wheelhouse
Is there something similar in pipenv?
Everything I found in combination with wheels was bug reports on GitHub.
Note that copying the venv directory is not an option. I'm using a docker container and want to install the packages system wide.
I have a Python project that runs in a Docker container, and I am trying to convert it to a multi-stage Docker build. My project depends on the cryptography package. My Dockerfile consists of:
# Base
FROM python:3.6 AS base
RUN pip install cryptography
# Production
FROM python:3.6-alpine
COPY --from=base /root/.cache /root/.cache
RUN pip install cryptography \
&& rm -rf /root/.cache
CMD python
Which I try to build with e.g:
docker build -t my-python-app .
This process works for a number of other Python requirements I have tested, such as pycrypto and psutil, but throws the following error for cryptography:
Step 5/6 : RUN pip install cryptography && rm -rf /root/.cache
---> Running in ebc15bd61d43
Collecting cryptography
Downloading cryptography-2.1.4.tar.gz (441kB)
Collecting idna>=2.1 (from cryptography)
Using cached idna-2.6-py2.py3-none-any.whl
Collecting asn1crypto>=0.21.0 (from cryptography)
Using cached asn1crypto-0.24.0-py2.py3-none-any.whl
Collecting six>=1.4.1 (from cryptography)
Using cached six-1.11.0-py2.py3-none-any.whl
Collecting cffi>=1.7 (from cryptography)
Downloading cffi-1.11.5.tar.gz (438kB)
Complete output from command python setup.py egg_info:
No working compiler found, or bogus compiler options passed to
the compiler from Python's standard "distutils" module. See
the error messages above. Likely, the problem is not related
to CFFI but generic to the setup.py of any Python package that
tries to compile C code. (Hints: on OS/X 10.8, for errors about
-mno-fused-madd see http://stackoverflow.com/questions/22313407/
Otherwise, see https://wiki.python.org/moin/CompLangPython or
the IRC channel #python on irc.freenode.net.)
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-uyh9_v63/cffi/
Obviously I was hoping not to have to install any compiler on my production image. Do I need to copy across another directory other than /root/.cache?
There is no manylinux wheel for Alpine, so you need to compile cryptography yourself. The text below is pasted from the installation documentation. Install and remove the build dependencies in the same RUN command, so that only the installed package, and not the toolchain, is saved in the Docker image layer.
If you are on Alpine or just want to compile it yourself then
cryptography requires a compiler, headers for Python (if you’re not
using pypy), and headers for the OpenSSL and libffi libraries
available on your system.
Alpine: Replace python3-dev with python-dev if you're using Python 2.
$ sudo apk add gcc musl-dev python3-dev libffi-dev openssl-dev
If you get an error with openssl-dev you may have to use libressl-dev.
The docs can be found at https://cryptography.io/en/latest/installation/
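A minimal sketch of that single-command pattern, using apk's --virtual flag to group the build dependencies under one name so they can be removed in the same layer:
FROM python:3.6-alpine
# Install the compiler and headers, build cryptography, then remove the
# toolchain again, all in one RUN so it never persists in an image layer:
RUN apk add --no-cache --virtual .build-deps \
        gcc musl-dev python3-dev libffi-dev openssl-dev \
    && pip install cryptography \
    && apk del .build-deps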
I hope my answer will be useful.
Use the --user option when installing cryptography via pip in the base stage, for example: RUN pip install --user cryptography. This option means that all files will be installed into the .local directory of the current user's home directory.
Then COPY --from=base /root/.local /root/.local, because cryptography was installed into /root/.local.
That's all. Full multi-stage Docker example:
# Base
FROM python:3.6 AS base
RUN pip install --user cryptography
# Production
FROM python:3.6-alpine
COPY --from=base /root/.local /root/.local
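# /root/.local is on the user site path, so the pip install below should
# find cryptography (and its dependencies) already satisfied and skip
# compiling anything: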
RUN pip install cryptography \
&& rm -rf /root/.cache
CMD python
I'm trying to create an "offline package" for Python code.
I'm running
pip install -d <dest dir> -r requirements.txt
The thing is that cffi==1.6.0 (inside requirements.txt) doesn't get built into a wheel.
Is there a way I can make it build one? (I'm trying to avoid a dependency on gcc on the target machine.)
pip install -d just downloads the packages; it doesn't build them. To force everything to be built into wheels, use the pip wheel command instead:
pip wheel -w <dir> -r requirements.txt
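On the target machine the wheels can then be installed offline, with no gcc required, the same way as in the wheelhouse workflow shown earlier:
pip install --no-index --find-links=<dir> -r requirements.txt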