I'm trying to create an "offline package" for Python code.
I'm running
pip install -d <dest dir> -r requirements.txt
The thing is that cffi==1.6.0 (inside requirements.txt) doesn't get built into a wheel.
Is there a way I can make that happen? (I'm trying to avoid a dependency on gcc on the target machine.)
pip install -d just downloads the packages; it doesn't build them. To force everything to be built into wheels, use the pip wheel command instead:
pip wheel -w <dir> -r requirements.txt
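On the target machine you can then install entirely from that directory, with no network access and no compiler needed. A sketch, assuming the wheel directory is copied over as ./wheelhouse:

```shell
# On the build machine: build wheels for every requirement
pip wheel -w ./wheelhouse -r requirements.txt

# On the target machine: install from the local directory only,
# never contacting PyPI and never invoking gcc
pip install --no-index --find-links=./wheelhouse -r requirements.txt
```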
I'm trying to build a package with its dependencies and then install it in a separate step.
I'm not using a requirements file; I'm using setup.cfg and pyproject.toml.
pip download vendor --dest ./build/dependencies --no-cache-dir
python setup.py check
python setup.py bdist_wheel
python setup.py sdist
That seems to download the dependencies into the ./build/dependencies folder, but I can't figure out how to install the wheel while resolving its dependencies from that folder.
--find-links doesn't appear to work because I get "Could not find a version that satisfies the requirement.." errors when doing this:
python -m pip install --no-index $(ls dist/*.whl) --find-links="./build/dependencies"
It installs fine without --no-index, fetching from the internet.
I also tried running pip install with --target like this,
pip install -e . --target=./build/dependencies
But get the same errors when trying to point to it with --find-links.
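For reference, a sketch of the combination I'd expect to work here (assuming the dependencies of the project itself, rather than of a package named vendor, are what should land in ./build/dependencies): point pip download at the project directory, then install the built wheel with --no-index:

```shell
# Download the project's own runtime dependencies (wheels or sdists)
pip download . --dest ./build/dependencies --no-cache-dir

# Build the project wheel as before
python setup.py bdist_wheel

# Install the wheel, resolving its dependencies only from the local folder
python -m pip install --no-index --find-links=./build/dependencies dist/*.whl
```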
After running 'sudo docker-compose up', the terminal freezes at 'Building wheels for collected packages ...':
Building wheels for collected packages: cryptography, freeze, ipaddress, psutil, pycrypto, pygobject, pykerberos, pyxdg, PyYAML, scandir, SecretStorage, Twisted, pycairo, pycparser
Building wheel for cryptography (setup.py): started
I've tried adding these commands to the Dockerfile:
pip install --upgrade pip
pip install --upgrade setuptools
This is my Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /todo_service && cd /todo_service/ && mkdir requirements
COPY /requirements.txt /todo_service/requirements
RUN pip install --upgrade pip
RUN pip install -r /todo_service/requirements/requirements.txt
COPY . /todo_service
WORKDIR /todo_service
I want to know the reason for this: is it a connection or DNS issue?
Building wheels for those modules can take quite some time; the build is usually still working rather than frozen.
If you want to skip the wheel building, use this command if you are on Ubuntu:
pip install cryptography --no-binary cryptography
Otherwise, refer to this link:
https://cryptography.io/en/latest/installation/
I'm currently using a build host for third-party packages, running:
$ pip3 wheel --wheel-dir=/root/wheelhouse -r /requirements.txt
After a successful build I copy the directory /root/wheelhouse onto a new machine and install the compiled packages by running:
$ pip3 install -r /requirements.txt --no-index --find-links=/root/wheelhouse
Is there something similar in pipenv?
Everything I found about pipenv in combination with wheels was bug reports on GitHub.
Note that copying the venv directory is not an option: I'm using a Docker container and want to install the packages system-wide.
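One option (a sketch, assuming a reasonably recent pipenv; `pipenv requirements` replaced the older `pipenv lock -r` for exporting the lock file): export Pipfile.lock to a plain requirements.txt and reuse the pip3 wheel workflow above:

```shell
# On the build host: export the pinned dependencies from Pipfile.lock
pipenv requirements > requirements.txt
pip3 wheel --wheel-dir=/root/wheelhouse -r requirements.txt

# On the target machine / in the container: install system-wide
pip3 install -r requirements.txt --no-index --find-links=/root/wheelhouse
```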
I have a Python project that runs in a docker container and I am trying to convert to a multistage docker build process. My project depends on the cryptography package. My Dockerfile consists of:
# Base
FROM python:3.6 AS base
RUN pip install cryptography
# Production
FROM python:3.6-alpine
COPY --from=base /root/.cache /root/.cache
RUN pip install cryptography \
&& rm -rf /root/.cache
CMD python
Which I try to build with e.g:
docker build -t my-python-app .
This process works for a number of other Python requirements I have tested, such as pycrypto and psutil, but throws the following error for cryptography:
Step 5/6 : RUN pip install cryptography && rm -rf /root/.cache
---> Running in ebc15bd61d43
Collecting cryptography
Downloading cryptography-2.1.4.tar.gz (441kB)
Collecting idna>=2.1 (from cryptography)
Using cached idna-2.6-py2.py3-none-any.whl
Collecting asn1crypto>=0.21.0 (from cryptography)
Using cached asn1crypto-0.24.0-py2.py3-none-any.whl
Collecting six>=1.4.1 (from cryptography)
Using cached six-1.11.0-py2.py3-none-any.whl
Collecting cffi>=1.7 (from cryptography)
Downloading cffi-1.11.5.tar.gz (438kB)
Complete output from command python setup.py egg_info:
No working compiler found, or bogus compiler options passed to
the compiler from Python's standard "distutils" module. See
the error messages above. Likely, the problem is not related
to CFFI but generic to the setup.py of any Python package that
tries to compile C code. (Hints: on OS/X 10.8, for errors about
-mno-fused-madd see http://stackoverflow.com/questions/22313407/
Otherwise, see https://wiki.python.org/moin/CompLangPython or
the IRC channel #python on irc.freenode.net.)
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-uyh9_v63/cffi/
Obviously I was hoping not to have to install any compiler on my production image. Do I need to copy across another directory other than /root/.cache?
There is no manylinux wheel that works on Alpine, so you need to compile it yourself. The following is pasted from the installation documentation. Install and remove the build dependencies in the same command, so that only the package itself is saved to the Docker image layer.
If you are on Alpine or just want to compile it yourself then
cryptography requires a compiler, headers for Python (if you’re not
using pypy), and headers for the OpenSSL and libffi libraries
available on your system.
Alpine: replace python3-dev with python-dev if you're using Python 2.
$ sudo apk add gcc musl-dev python3-dev libffi-dev openssl-dev
If you get an error with openssl-dev you may have to use libressl-dev.
Docs can be found here
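Installing and removing the build dependencies in a single RUN, as described above, keeps the compiler out of the final image. A sketch using apk's virtual package feature (package names taken from the docs quoted above):

```dockerfile
FROM python:3.6-alpine
# Install build deps, compile cryptography, then drop the build deps again,
# all in one layer so the compiler never ends up in the image
RUN apk add --no-cache --virtual .build-deps \
        gcc musl-dev python3-dev libffi-dev openssl-dev \
    && pip install cryptography \
    && apk del .build-deps
```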
I hope my answer will be useful.
You should use the --user option when installing cryptography via pip in the base stage, e.g. RUN pip install --user cryptography. This option means that all files will be installed into the .local directory of the current user's home directory.
Then COPY --from=base /root/.local /root/.local, because cryptography was installed into /root/.local.
That's all. Full multistage Docker example:
# Base
FROM python:3.6 AS base
RUN pip install --user cryptography
# Production
FROM python:3.6-alpine
COPY --from=base /root/.local /root/.local
CMD python
I'm trying to download the dependencies of paramiko from a Linux host for a Windows target which has no internet access.
After reading the example in pip's documentation, I used the following command to download the dependencies recursively for a 64-bit Windows platform:
pip3 download --only-binary=:all: --platform win_amd64 --implementation cp paramiko
I was able to recursively download the dependencies until reaching pycparser. That is not surprising, since I used the --only-binary=:all: flag. The thing is, pip forces the usage of this flag when the --platform flag is passed:
ERROR: --only-binary=:all: must be set and --no-binary must not be set (or must be set to :none:) when restricting platform and interpreter constraints using --python-version, --platform, --abi, or --implementation.
Terminal produced the following output:
Collecting paramiko
Downloading paramiko-2.3.0-py2.py3-none-any.whl (182kB)
100% |████████████████████████████████| 184kB 340kB/s
Saved ./paramiko-2.3.0-py2.py3-none-any.whl
Collecting pynacl>=1.0.1 (from paramiko)
Using cached PyNaCl-1.1.2-cp35-cp35m-win_amd64.whl
Saved ./PyNaCl-1.1.2-cp35-cp35m-win_amd64.whl
Collecting cryptography>=1.5 (from paramiko)
Using cached cryptography-2.0.3-cp35-cp35m-win_amd64.whl
Saved ./cryptography-2.0.3-cp35-cp35m-win_amd64.whl
Collecting pyasn1>=0.1.7 (from paramiko)
Using cached pyasn1-0.3.5-py2.py3-none-any.whl
Saved ./pyasn1-0.3.5-py2.py3-none-any.whl
Collecting bcrypt>=3.1.3 (from paramiko)
Using cached bcrypt-3.1.3-cp35-cp35m-win_amd64.whl
Saved ./bcrypt-3.1.3-cp35-cp35m-win_amd64.whl
Collecting cffi>=1.4.1 (from pynacl>=1.0.1->paramiko)
Using cached cffi-1.11.0-cp35-cp35m-win_amd64.whl
Saved ./cffi-1.11.0-cp35-cp35m-win_amd64.whl
Collecting six (from pynacl>=1.0.1->paramiko)
Using cached six-1.11.0-py2.py3-none-any.whl
Saved ./six-1.11.0-py2.py3-none-any.whl
Collecting asn1crypto>=0.21.0 (from cryptography>=1.5->paramiko)
Using cached asn1crypto-0.22.0-py2.py3-none-any.whl
Saved ./asn1crypto-0.22.0-py2.py3-none-any.whl
Collecting idna>=2.1 (from cryptography>=1.5->paramiko)
Using cached idna-2.6-py2.py3-none-any.whl
Saved ./idna-2.6-py2.py3-none-any.whl
Collecting pycparser (from cffi>=1.4.1->pynacl>=1.0.1->paramiko)
Could not find a version that satisfies the requirement pycparser (from cffi>=1.4.1->pynacl>=1.0.1->paramiko) (from versions: )
No matching distribution found for pycparser (from cffi>=1.4.1->pynacl>=1.0.1->paramiko)
Is there a way of overcoming this issue? Will I have to manually install non-binary packages (and their dependencies)?
Thanks,
Joey.
You have two options:
run the download operation on the same platform (be careful that it really is the same)
fix the internet access on your host
Don't try other fancy methods or you will shoot yourself in the foot: some dependencies will need to compile!
You can use the --prefer-binary option in pip. That makes pip treat wheels as more important than sdists (sdist is short for source distribution), even if the wheel is an older version than an existing sdist. An sdist is selected only when no compatible wheel is found.
This option was released in pip 18.0 (2018; pip uses CalVer now).
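Combined with pip download it looks like this (a sketch; the -d directory name is arbitrary):

```shell
# Prefer a compatible wheel where one exists; fall back to the sdist
# only for packages that ship no wheel at all
pip download --prefer-binary -r requirements.txt -d deps
```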
@sorin is right: your only real option is to use the exact same environment for downloading the dependencies as the one you'll be installing them on.
My solution is to use Docker to build wheels that match the target platform. In my case that is Debian 10, but it will work just the same for any operating system and version as long as there is a Docker image available.
Example Dockerfile to build wheels for the dependencies in requirements.txt on Debian 10 with CPython 3.9:
FROM python:3.9-slim-buster
COPY requirements.txt requirements.txt
RUN set -eux; \
apt-get update && \
apt-get install -y build-essential curl && \
python3 -m venv .venv --without-pip
ENV VIRTUAL_ENV=.venv
ENV PATH="${VIRTUAL_ENV}/bin:${PATH}"
RUN set -eux; \
curl --silent https://bootstrap.pypa.io/get-pip.py | python && \
pip download --prefer-binary --upgrade setuptools wheel setuptools-rust -d deps && \
pip download --prefer-binary -r requirements.txt -d deps && \
mkdir -p main-wheel && \
pip wheel --wheel-dir=main-wheel -r requirements.txt
Build the image and extract the wheel:
docker build -t buildwheel -f Dockerfile .
mkdir -p artifacts
CONTAINER=$(docker create buildwheel) || exit 1
docker cp "${CONTAINER}":main-wheel artifacts/. || exit 1
docker rm "${CONTAINER}"
docker image rm buildwheel
Congrats, you now have wheels built specifically for Debian 10 with CPython 3.9 inside the directory artifacts/main-wheel. Copy the directory to the target machine, run pip install --no-index --find-links=artifacts/main-wheel -r requirements.txt, and everything should work.
PS: You might need to add build-time dependencies to the apt-get install inside the Dockerfile.
In Python 3 you can download the dependencies as shown below.
Run this from inside the folder where you want to save them:
pip download -r requirements.txt
Once the files are downloaded, move them to the target machine, then run this command:
pip install -r requirements.txt --no-index --find-links="/path/to/downloaded/files"