I ran into a libxml dependency issue when creating a Docker container with Python and installing dependency libraries from an Ubuntu image:
# pull official base image
FROM python:3.8.0-alpine
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
FROM ubuntu:16.04
RUN apt-get update -y
RUN apt-get install g++ gcc libxml2 libxslt-dev -y
# install dependencies
FROM python:3.8.0-alpine
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
# copy project
COPY . /usr/src/app/
I am getting this compilation error:
Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?
You are installing those packages in the ubuntu stage, but not in the alpine one. With the builder pattern you would need to copy the files from the builder layer into the runtime layer. However, ubuntu != alpine, so compiled binaries will not work anyway.
You will need to leverage the apk installer to add those packages to the alpine layer:
...
RUN apk update && apk add g++ gcc libxml2 libxslt-dev
RUN python -m pip install --upgrade pip
...
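For reference, the whole thing as a single stage might look like this sketch; it is just the question's Dockerfile with the ubuntu stage dropped and the apk line above merged in:
FROM python:3.8.0-alpine
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# build tools and libxml2/libxslt headers needed to compile lxml on alpine
RUN apk update && apk add g++ gcc libxml2 libxslt-dev
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . /usr/src/app/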
I'm currently trying to install Python packages from a private GitLab repo. Unfortunately, I'm running into problems with the credentials. Is there any way to install this package without writing my credentials into the Dockerfile or adding my personal SSH key to it?
Dockerfile:
FROM python:3.9.12-buster AS production
RUN apt-get update && apt-get install -y git
COPY ./requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
requirements.txt:
fastapi
uvicorn
cycler~=0.10.0
networkx
python-multipart
git+https://gitlab.private.net/group/private-repo.git@commit_hash#egg=foo
Error message:
#10 3.760 Cloning https://gitlab.private.net/group/private-repo.git (to revision commit_hash) to /tmp/pip-install-q9wtmf_q/foo_commit_hash
#10 3.769 Running command git clone --filter=blob:none --quiet https://gitlab.private.net/group/private-repo.git /tmp/pip-install-q9wtmf_q/foo_commit_hash
#10 4.039 fatal: could not read Username for 'https://gitlab.private.net/group/private-repo.git': No such device or address
#10 4.060 error: subprocess-exited-with-error
Generally speaking, you can use multi-stage docker builds to make sure your credentials don't stay in the image.
In your case, you might do something like this:
FROM python:3.9.12-buster as download
RUN apt-get update && apt-get install -y git
RUN pip install --upgrade pip wheel
ARG GIT_USERNAME
ARG GIT_PASSWORD
WORKDIR /build
COPY requirements.txt .
# add password to requirements file
RUN sed -i -E "s|gitlab.private.net|$GIT_USERNAME:$GIT_PASSWORD@gitlab.private.net|" requirements.txt
# download dependencies and build wheels to /build/dist
RUN python -m pip wheel -w /build/dist -r requirements.txt
FROM python:3.9.12-buster as production
WORKDIR /app
COPY --from=download /build/dist /wheelhouse
# install dependencies from the wheels created in previous build stage
RUN pip install --no-index /wheelhouse/*.whl
COPY . .
# ... the rest of your dockerfile
In GitLab CI, you might use the build command like this:
script:
# ...
- docker build --build-arg GIT_USERNAME=gitlab-ci-token --build-arg GIT_PASSWORD=$CI_JOB_TOKEN -t $CI_REGISTRY_IMAGE .
Then your image will be built and the final image won't contain your credentials. It will also be smaller since you don't have to install git :)
As a side note, you can simplify this somewhat by using the GitLab PyPI package registry.
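For illustration, a hedged sketch of what using the package registry could look like in requirements.txt; the project ID, token placeholders, package name, and version are illustrative, not taken from the original answer:
# pull foo from the GitLab PyPI registry instead of a git URL
--extra-index-url https://<token_name>:<token_value>@gitlab.private.net/api/v4/projects/<project_id>/packages/pypi/simple
foo==1.0.0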
I also had to install my dependencies from a private package repository for my Python project.
This was the Dockerfile I used for building my project.
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
RUN apt-get update &&\
apt-get install -y binutils libproj-dev gettext gcc libpq-dev python3-dev build-essential python3-pip python3-setuptools python3-wheel python3-cffi libcairo2 libpango-1.0-0 libpangocairo-1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info
RUN pip config set global.extra-index-url https://<personal_access_token_name>:<personal_access_token>@gitlab.com/simple/
# you need to configure pip to pull packages from the remote private repository.
# for GitLab you need a personal access token with read permissions to access them
COPY . /code/
RUN --mount=type=cache,target=/root/.cache pip install -r requirements.txt
RUN --mount=type=cache,target=/root/.cache pip install -r /code/webapi/requirements.txt
WORKDIR /code/webapi
ENTRYPOINT /code/webapi/entrypoint.sh
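If you would rather not hard-code the token in the Dockerfile, a hedged variant (the ARG names here are assumptions, not from the original answer) passes it in at build time; the value still ends up in the image's pip config and build history, so this keeps it out of source control rather than out of the image:
ARG TOKEN_NAME
ARG TOKEN_VALUE
# same pip configuration as above, but with credentials supplied via --build-arg
RUN pip config set global.extra-index-url https://${TOKEN_NAME}:${TOKEN_VALUE}@gitlab.com/simple/
You would then build with something like docker build --build-arg TOKEN_NAME=... --build-arg TOKEN_VALUE=... .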
I would like to build packages in a slim image and then copy the built packages to an alpine one. For that I created this Dockerfile:
FROM python:3.8.7-slim AS builder
ENV POETRY_VIRTUALENVS_CREATE=false
WORKDIR /app
RUN apt-get update
RUN apt-get install -y build-essential
RUN apt-get install -y libldap2-dev # for python-ldap
RUN apt-get install -y libsasl2-dev # for python-ldap
COPY poetry.lock pyproject.toml ./
RUN python -m pip install --upgrade pip && pip install poetry && poetry install --no-dev
FROM python:3.8.7-alpine3.13 AS runtime
COPY --from=builder /root/* /root/
WORKDIR /app
COPY pythonapline .
#RUN python manage.py makemigrations && python manage.py migrate
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
By default poetry creates the virtual environment in ~/.cache/pypoetry/virtualenvs (on Linux).
When running the runtime image I get import errors. It seems that the copied virtual env should be activated, or something like that?
The problem is that you are not copying the installed packages correctly to the runtime stage. Do note that ENV POETRY_VIRTUALENVS_CREATE=false makes poetry install the dependencies without using a virtual environment.
Try to change this
COPY --from=builder /root/* /root/
to
COPY --from=builder /usr/local/lib/python3.8/site-packages /usr/local/lib/python3.8/site-packages
Also note that you can make better use of the build cache by installing poetry itself before copying the lock files and running poetry install --no-dev, so that you don't have to reinstall poetry after updating the dependencies.
RUN python -m pip install --upgrade pip && pip install poetry
COPY poetry.lock pyproject.toml ./
RUN poetry install --no-dev
However, there is no guarantee that binaries that work on slim will also be compatible with alpine, which uses musl libc. Using Alpine Linux for Python applications is generally discouraged.
I have encountered a problem while trying to run my Django project in a new Docker container.
It is my first time using Docker and I can't seem to find a good way to run a Django project in it. Having tried multiple tutorials, I always get an error about psycopg2 not being installed.
requirements.txt:
-i https://pypi.org/simple
asgiref==3.2.7
django-cors-headers==3.3.0
django==3.0.7
djangorestframework==3.11.0
gunicorn==20.0.4
psycopg2-binary==2.8.5
pytz==2020.1
sqlparse==0.3.1
Dockerfile:
# pull official base image
FROM python:3.8.3-alpine
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy project
COPY . .
# set project environment variables
# grab these via Python's os.environ
# these are 100% optional here
ENV PORT=8000
ENV SECRET_KEY_TWITTER = "***"
While running docker-compose build, I get the following error:
Error: pg_config executable not found.
pg_config is required to build psycopg2 from source. Please add the directory
containing pg_config to the $PATH or specify the full executable path with the
option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.
I will gladly answer any questions that might lead to the solution.
Also, maybe someone can recommend me a good tutorial on dockerizing django apps?
I made it work. This is the code:
# python:3.9.5-slim and python:3.9.5-slim-buster also work
FROM python:3.8.3-slim
RUN apt-get update \
&& apt-get -y install libpq-dev gcc \
&& pip install psycopg2
On Alpine Linux, you will need to compile all packages, even if a pre-compiled binary wheel is available on PyPI. On standard Linux-based images, you won't (https://pythonspeed.com/articles/alpine-docker-python/ - there are also other articles I've written there that might be helpful, e.g. on security).
So change your base image to python:3.8.3-slim-buster or python:3.8-slim-buster and it should work.
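Applied to the Dockerfile from the question, the change is essentially just the base image; a minimal sketch:
# slim-buster is Debian-based, so the pre-built psycopg2-binary wheel from PyPI is used as-is
FROM python:3.8-slim-buster
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .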
These scripts work on a MacBook Air M1.
Dockerfile
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
# python3-pip is needed because ubuntu:20.04 does not ship pip
RUN apt-get update && apt-get -y install python3-pip libpq-dev gcc && pip3 install psycopg2
COPY requirements.txt /cs_account/
RUN pip3 install -r /cs_account/requirements.txt
requirements.txt
psycopg2-binary~=2.8.6
Updated answer, based on the answer from Zoltán Buzás.
This worked for me. Try slim-buster image.
In your Dockerfile
FROM python:3.8.7-slim-buster
and in your requirements.txt file
psycopg2-binary~=<version_number>
I added this to the top answer because I was getting other errors like below:
gcc: error trying to exec 'cc1plus': execvp: No such file or directory
error: command 'gcc' failed with exit status 1
and
src/pyodbc.h:56:10: fatal error: sql.h: No such file or directory
#include <sql.h>
This is what I did to fix it, so I am not sure how others were getting that to work; maybe it was some of the other things I was doing?
My solution, found in other posts while googling those two errors:
FROM python:3.8.3-slim
RUN apt-get update \
&& apt-get -y install g++ libpq-dev gcc unixodbc unixodbc-dev
I've made a custom image with
FROM python:alpine
ADD requirements.txt /
RUN apk update --no-cache \
&& apk add build-base postgresql-dev libpq --no-cache --virtual .build-deps \
&& pip install --no-cache-dir --upgrade pip \
&& pip install --no-cache-dir -r /requirements.txt \
&& apk del .build-deps
RUN apk add postgresql-libs libpq --no-cache
and requirements.txt
django
djangorestframework
psycopg2-binary
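A quick way to verify the build and that the compiled psycopg2 actually imports (the image tag is just an example):
docker build -t django-app .
docker run --rm django-app python -c "import psycopg2; print(psycopg2.__version__)"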
Currently I am creating a virtual environment in the first stage.
Running the command pip install -r requirements.txt installs executables into the /venv/bin dir.
In the second stage I am copying the /venv/bin dir, but on running the Python app I get a module-not-found error, i.e. I need to run pip install -r requirements.txt again to run the app.
The application runs on Python 2.7 and some of the dependencies require a compiler to build. Those dependencies also fail with the Alpine image's compiler, and only work with the Ubuntu compiler or the official python:2.7 image (which in turn uses Debian).
Am I missing some command in the second stage that would let me use the copied dependencies instead of installing them again?
FROM python:2.7-slim AS build
RUN apt-get update &&apt-get install -y --no-install-recommends build-essential gcc
RUN pip install --upgrade pip
RUN python3 -m venv /venv
COPY ./requirements.txt /project/requirements/
RUN /venv/bin/pip install -r /project/requirements/requirements.txt
COPY . /venv/bin
FROM python:2.7-slim AS release
COPY --from=build /venv /venv
WORKDIR /venv/bin
RUN apt-get update && apt-get install -y --no-install-recommends build-essential gcc
#RUN pip install -r requirements.txt //
RUN cp settings.py.sample settings.py
CMD ["/venv/bin/python3", "-m", "main.py"]
I am trying to avoid running pip install -r requirements.txt in the second stage to reduce the image size, which is not happening currently.
Only copying the bin dir isn't enough; for example, packages are installed in lib/pythonX.X/site-packages and headers under include. I'd just copy the whole venv directory. You can also run pip install with --no-cache-dir to avoid saving the wheel archives.
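A minimal sketch of what that could look like (base images, paths, and the entrypoint name are assumptions; a Python 3 base is used here since the original mixes python:2.7-slim with python3 -m venv):
FROM python:3.8-slim AS build
RUN apt-get update && apt-get install -y --no-install-recommends build-essential gcc
RUN python3 -m venv /venv
COPY requirements.txt /project/requirements/
RUN /venv/bin/pip install --no-cache-dir -r /project/requirements/requirements.txt

FROM python:3.8-slim AS release
# copy the whole virtual environment, not just /venv/bin
COPY --from=build /venv /venv
WORKDIR /app
COPY . .
CMD ["/venv/bin/python", "main.py"]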
Insert this before everything else:
FROM yourimage:tag AS build
There is a Python project in which I have dependencies defined with the help of a "requirement.txt" file. One of the dependencies is gmpy2. When I run the docker build -t myimage . command, it gives me the following error at the step where setup.py install is executed.
In file included from src/gmpy2.c:426:0:
src/gmpy.h:252:20: fatal error: mpfr.h: No such file or directory
include "mpfr.h"
Similarly, the other two errors are:
In file included from appscript_3x/ext/ae.c:14:0:
appscript_3x/ext/ae.h:26:27: fatal error: Carbon/Carbon.h: No such file
or directory
#include <Carbon/Carbon.h>
In file included from src/buffer.cpp:12:0:
src/pyodbc.h:56:17: fatal error: sql.h: No such file or directory
#include <sql.h>
Now the question is how I can define or install these internal dependencies required for a successful build of the image. As per my understanding, gmpy2 is written in C and depends on three other C libraries: GMP, MPFR, and MPC, and it is unable to find them.
Following is my Dockerfile:
FROM python:3
COPY . .
RUN pip install -r requirement.txt
CMD [ "python", "./mike/main.py" ]
Install these extra dependencies with apt install libgmp-dev libmpfr-dev libmpc-dev and then RUN pip install -r requirement.txt.
I think it will work and you will be able to install all the dependencies and build the Docker image.
FROM python:3
COPY . .
RUN apt-get update -qq && \
apt-get install -y --no-install-recommends \
libmpc-dev \
libgmp-dev \
libmpfr-dev
RUN pip install -r requirement.txt
CMD [ "python", "./mike/main.py" ]
If apt is not available, you can use a different Linux distribution as the base image.
You will need to modify your Dockerfile to install the additional C libraries using apt-get install. (The default Python 3 image is based on a Debian image).
sudo apt-get install libgmp3-dev
sudo apt-get install libmpfr-dev
It looks like you can install the dependencies for pyodbc using
sudo apt-get install unixodbc-dev
However, I'm really unsure about the requirement for Carbon.h as that's an OS X specific header file. You may have an OS X specific dependency in your requirements file that won't work on a Linux based image.
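Inside a Dockerfile (where sudo is not needed), those suggestions combined with the libmpc-dev package from the previous answer might look like this sketch; note that the Carbon/Carbon.h error cannot be fixed this way, since that header only exists on macOS:
FROM python:3
RUN apt-get update && apt-get install -y --no-install-recommends \
        libgmp3-dev libmpfr-dev libmpc-dev unixodbc-dev \
    && rm -rf /var/lib/apt/lists/*
COPY . .
RUN pip install -r requirement.txt
CMD [ "python", "./mike/main.py" ]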