I have encountered a problem while trying to run my Django project in a new Docker container.
It is my first time using Docker and I can't seem to find a good way to run a Django project with it. Having tried multiple tutorials, I always get an error about psycopg2 not being installed.
requirements.txt:
-i https://pypi.org/simple
asgiref==3.2.7
django-cors-headers==3.3.0
django==3.0.7
djangorestframework==3.11.0
gunicorn==20.0.4
psycopg2-binary==2.8.5
pytz==2020.1
sqlparse==0.3.1
Dockerfile:
# pull official base image
FROM python:3.8.3-alpine
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy project
COPY . .
# set project environment variables
# grab these via Python's os.environ
# these are 100% optional here
ENV PORT=8000
ENV SECRET_KEY_TWITTER="***"
While running docker-compose build, I get the following error:
Error: pg_config executable not found.
pg_config is required to build psycopg2 from source. Please add the directory
containing pg_config to the $PATH or specify the full executable path with the
option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.
I will gladly answer any questions that might lead to the solution.
Also, maybe someone can recommend a good tutorial on dockerizing Django apps?
I made it work. This is the code:
FROM python:3.8.3-slim
# Image python:3.9.5-slim also works
# Image python:3.9.5-slim-buster also works
RUN apt-get update \
&& apt-get -y install libpq-dev gcc \
&& pip install psycopg2
On Alpine Linux you will need to compile all packages, even if a pre-compiled binary wheel is available on PyPI: the manylinux wheels on PyPI are built against glibc, and Alpine uses musl, so pip falls back to building from source. On standard Linux-based images, you won't (https://pythonspeed.com/articles/alpine-docker-python/ - there are also other articles I've written there that might be helpful, e.g. on security).
So change your base image to python:3.8.3-slim-buster or python:3.8-slim-buster and it should work.
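For reference, the question's Dockerfile with only the base image swapped might look like the sketch below (assuming the requirements.txt from the question, whose psycopg2-binary ships a pre-built wheel on Debian-based images):
FROM python:3.8-slim-buster
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# psycopg2-binary installs from a wheel here, so no libpq-dev/gcc is needed;
# those are only required if you switch to the source package psycopg2
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .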
This script works on a MacBook Air M1.
Dockerfile
FROM ubuntu:20.04
# the bare ubuntu image has no pip, so install python3-pip as well;
# DEBIAN_FRONTEND=noninteractive avoids the interactive tzdata prompt during the build
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install libpq-dev gcc python3-pip && pip3 install psycopg2
COPY requirements.txt /cs_account/
RUN pip3 install -r /cs_account/requirements.txt
requirements.txt
psycopg2-binary~=2.8.6
Updated version of Zoltán Buzás' answer.
This worked for me. Try a slim-buster image.
In your Dockerfile
FROM python:3.8.7-slim-buster
and in your requirements.txt file
psycopg2-binary~=<<version_number>>
I added this to the top answer's Dockerfile because I was getting other errors, like the ones below:
gcc: error trying to exec 'cc1plus': execvp: No such file or directory
error: command 'gcc' failed with exit status 1
and
src/pyodbc.h:56:10: fatal error: sql.h: No such file or directory
#include <sql.h>
This is what I did to fix it; I am not sure how others got the accepted answer to work without it, but maybe it was something else in my setup.
My solution, found in other posts while googling those two errors:
FROM python:3.8.3-slim
RUN apt-get update \
&& apt-get -y install g++ libpq-dev gcc unixodbc unixodbc-dev
I've made a custom image with
FROM python:alpine
ADD requirements.txt /
RUN apk update --no-cache \
&& apk add build-base postgresql-dev libpq --no-cache --virtual .build-deps \
&& pip install --no-cache-dir --upgrade pip \
&& pip install --no-cache-dir -r /requirements.txt \
&& apk del .build-deps
RUN apk add postgresql-libs libpq --no-cache
and requirements.txt
django
djangorestframework
psycopg2-binary
Related
I'm currently trying to install Python packages from a private GitLab repo. Unfortunately, I run into problems with the credentials. Is there any way to install this package without writing my credentials into the Dockerfile or adding my personal SSH key to it?
Dockerfile:
FROM python:3.9.12-buster AS production
RUN apt-get update && apt-get install -y git
COPY ./requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
requirements.txt:
fastapi
uvicorn
cycler~=0.10.0
networkx
python-multipart
git+https://gitlab.private.net/group/private-repo.git@commit_hash#egg=foo
Error message:
#10 3.760 Cloning https://gitlab.private.net/group/private-repo.git (to revision commit_hash) to /tmp/pip-install-q9wtmf_q/foo_commit_hash
#10 3.769 Running command git clone --filter=blob:none --quiet https://gitlab.private.net/group/private-repo.git /tmp/pip-install-q9wtmf_q/foo_commit_hash
#10 4.039 fatal: could not read Username for 'https://gitlab.private.net/group/private-repo.git': No such device or address
#10 4.060 error: subprocess-exited-with-error
Generally speaking, you can use multi-stage docker builds to make sure your credentials don't stay in the image.
In your case, you might do something like this:
FROM python:3.9.12-buster as download
RUN apt-get update && apt-get install -y git
RUN pip install --upgrade pip wheel
ARG GIT_USERNAME
ARG GIT_PASSWORD
WORKDIR /build
COPY requirements.txt .
# add password to requirements file
RUN sed -i -E "s|gitlab.private.net|$GIT_USERNAME:$GIT_PASSWORD@gitlab.private.net|" requirements.txt
# download dependencies and build wheels to /build/dist
RUN python -m pip wheel -w /build/dist -r requirements.txt
FROM python:3.9.12-buster as production
WORKDIR /app
COPY --from=download /build/dist /wheelhouse
# install dependencies from the wheels created in previous build stage
RUN pip install --no-index /wheelhouse/*.whl
COPY . .
# ... the rest of your dockerfile
In GitLab CI, you might use the build command like this:
script:
# ...
- docker build --build-arg GIT_USERNAME=gitlab-ci-token --build-arg GIT_PASSWORD=$CI_JOB_TOKEN -t $CI_REGISTRY_IMAGE .
Then your image will be built and the final image won't contain your credentials. It will also be smaller since you don't have to install git :)
As a side note, you can simplify this somewhat by using the GitLab PyPI package registry.
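If the package is published there, a sketch of that approach (the project ID is a placeholder, not a value from the question) is a single pip command against GitLab's PyPI-compatible index instead of a git clone, e.g. in CI:
pip install --extra-index-url "https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.private.net/api/v4/projects/<project_id>/packages/pypi/simple" foo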
I also had to install dependencies from a private package repository for my Python project.
This is the Dockerfile I used to build my project.
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
RUN apt-get update &&\
apt-get install -y binutils libproj-dev gettext gcc libpq-dev python3-dev build-essential python3-pip python3-setuptools python3-wheel python3-cffi libcairo2 libpango-1.0-0 libpangocairo-1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info
RUN pip config set global.extra-index-url https://<personal_access_token_name>:<personal_access_token>@gitlab.com/simple/
# you need to configure pip to pull packages from remote private repository.
# for gitlab you require personal access token to access them with read permissions
COPY . /code/
RUN --mount=type=cache,target=/root/.cache pip install -r requirements.txt
RUN --mount=type=cache,target=/root/.cache pip install -r /code/webapi/requirements.txt
WORKDIR /code/webapi
ENTRYPOINT /code/webapi/entrypoint.sh
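Note that the RUN --mount=type=cache lines require BuildKit; a usage sketch for building this image (the image name is arbitrary):
DOCKER_BUILDKIT=1 docker build -t myimage .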
I ran into a libxml dependency issue when creating a Docker container with Python, installing the dependency libraries from an Ubuntu image:
# pull official base image
FROM python:3.8.0-alpine
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
FROM ubuntu:16.04
RUN apt-get update -y
RUN apt-get install g++ gcc libxml2 libxslt-dev -y
# install dependencies
FROM python:3.8.0-alpine
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
# copy project
COPY . /usr/src/app/
and I get this compilation error:
Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?
You are installing those packages in the Ubuntu stage, but not in the Alpine one. In the builder pattern you would need to copy the files from the builder layer into the runtime layer. However, Ubuntu != Alpine, so the compiled binaries will not work.
You will need to use the apk installer to add those packages to the Alpine layer:
...
RUN apk update && apk add g++ gcc libxml2 libxslt-dev
RUN python -m pip install --upgrade pip
...
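Put together, a single-stage version of the question's Dockerfile might look like the sketch below (keeping the packages named above; the exact list depends on what requirements.txt actually builds):
FROM python:3.8.0-alpine
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# compile-time dependencies installed in the same Alpine image that runs pip
RUN apk update && apk add g++ gcc libxml2 libxslt-dev
RUN python -m pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . /usr/src/app/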
Currently I am creating a virtual environment in the first stage.
Running pip install -r requirements.txt installs the executables into the /venv/bin dir.
In the second stage I copy the /venv/bin dir, but when running the Python app I get a "module not found" error, i.e. I need to run pip install -r requirements.txt again to run the app.
The application runs on Python 2.7 and some of the dependencies require a compiler to build. Those dependencies also fail with the Alpine image's compiler, and only work with the Ubuntu compiler or the official python:2.7 image (which in turn uses Debian).
Am I missing some command in the second stage that would let me use the copied dependencies instead of installing them again?
FROM python:2.7-slim AS build
RUN apt-get update &&apt-get install -y --no-install-recommends build-essential gcc
RUN pip install --upgrade pip
RUN python3 -m venv /venv
COPY ./requirements.txt /project/requirements/
RUN /venv/bin/pip install -r /project/requirements/requirements.txt
COPY . /venv/bin
FROM python:2.7-slim AS release
COPY --from=build /venv /venv
WORKDIR /venv/bin
RUN apt-get update && apt-get install -y --no-install-recommends build-essential gcc
#RUN pip install -r requirements.txt //
RUN cp settings.py.sample settings.py
CMD ["/venv/bin/python3", "-m", "main.py"]
I am trying to avoid pip install -r requirements.txt in second stage to reduce the image size which is not happening currently.
Only copying the bin dir isn't enough; for example, packages are installed in lib/pythonX.X/site-packages and headers under include. I'd just copy the whole venv directory. You can also run it with --no-cache-dir to avoid saving the wheel archives.
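For illustration, a sketch of the two stages with the whole /venv copied across (it assumes the project layout from the question; since Python 2.7 has no built-in venv module, the sketch uses virtualenv instead, which is an adjustment on my part):
FROM python:2.7-slim AS build
RUN apt-get update && apt-get install -y --no-install-recommends build-essential gcc
# Python 2.7 has no venv module, so create the environment with virtualenv
RUN pip install --no-cache-dir virtualenv && virtualenv /venv
COPY requirements.txt /project/requirements/requirements.txt
RUN /venv/bin/pip install --no-cache-dir -r /project/requirements/requirements.txt
FROM python:2.7-slim AS release
# copy the entire /venv tree: bin/, plus lib/python2.7/site-packages where the modules live
COPY --from=build /venv /venv
WORKDIR /app
COPY . .
RUN cp settings.py.sample settings.py
CMD ["/venv/bin/python", "main.py"]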
Insert this before everything else:
FROM yourimage:tag AS build
I would like to dockerize a Python program with this Dockerfile:
FROM python:3.7-alpine
COPY requirements.pip ./requirements.pip
RUN python3 -m pip install --upgrade pip
RUN pip install -U setuptools
RUN apk update
RUN apk add --no-cache --virtual .build-deps gcc python3-dev musl-dev openssl-dev libffi-dev g++ && \
python3 -m pip install -r requirements.pip --no-cache-dir && \
apk --purge del .build-deps
ARG APP_DIR=/app
RUN mkdir -p ${APP_DIR}
WORKDIR ${APP_DIR}
COPY app .
ENTRYPOINT [ "python3", "run.py" ]
and this is my requirements.pip file:
pysher~=0.5.0
redis~=2.10.6
flake8~=3.5.0
pandas==0.23.4
Because of pandas, the Docker image is 461 MB; without pandas it is 131 MB.
I was thinking about how to make it smaller, so I built a binary from my application using:
pyinstaller run.py --onefile
It builds a 38 MB binary. When I run it, it works fine. So I built a Docker image from this Dockerfile:
FROM alpine:3.4
ARG APP_DIR=/app
RUN mkdir -p ${APP_DIR}
WORKDIR ${APP_DIR}
COPY app/dist/run run
ENTRYPOINT [ "/bin/sh", "/app/run" ]
Basically, I just copied my run binary into the /app directory. It looks fine; the image is just 48.8 MB. But when I run the container, I get an error:
$ docker run --rm --name myapp myminimalimage:latest
/app/run: line 1: syntax error: unexpected "("
Then I thought maybe there was a problem with sh, so I installed bash by adding three lines to the Dockerfile:
RUN apk update
RUN apk upgrade
RUN apk add bash
The image built, but when I run it there is an error again:
$ docker run --rm --name myapp myminimalimage:latest
/app/run: /app/run: cannot execute binary file
My questions:
1. Why is the image in the first step so big? Can I minimize the size somehow, e.g. choose what to install from the pandas package?
2. Why does my binary file work fine on my system (Kubuntu 18.10), but I can't run it from alpine:3.4? Should I use another image or install something to run it?
3. What is the best way to build a minimalistic image with my app? One of the approaches mentioned above, or is there another way?
On sizes: make sure you always pass --no-cache-dir when using pip (you use it once, but not in the other cases). Similarly, combine uses of apk and make sure the last step clears the apk cache so it never gets frozen into an image layer. For example, replace your three separate RUNs with RUN apk update && apk upgrade && apk add bash && rm -rf /var/cache/apk/*; this achieves the same effect in a single layer that doesn't keep the apk cache around.
Example:
FROM python:3.7-alpine
COPY requirements.pip ./requirements.pip
# Avoid pip cache, use consistent command line with other uses, and merge simple layers
RUN python3 -m pip install --upgrade --no-cache-dir pip && \
python3 -m pip install --upgrade --no-cache-dir setuptools
# Combine update and add into same layer, clear cache explicitly at end
RUN apk update && apk add --no-cache --virtual .build-deps gcc python3-dev musl-dev openssl-dev libffi-dev g++ && \
python3 -m pip install -r requirements.pip --no-cache-dir && \
apk --purge del .build-deps && rm -rf /var/cache/apk/*
Don't expect it to do much (you already used --no-cache-dir on the big pip operation), but it's something. pandas is a huge monolithic package, dependent on other huge monolithic packages; there is a limit to what you can accomplish here.
Keep in mind that if you don't use Alpine, you won't need a compiler, since you can just use wheels. This makes everything simpler... e.g. you don't need to install and then uninstall compilers. Slightly bigger, but only slightly.
(See here for more about why I'm not a fan of Alpine Linux: https://pythonspeed.com/articles/base-image-python-docker-images/)
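For example, a rough non-Alpine variant of the first Dockerfile might look like this sketch (python:3.7-slim is just one possible Debian-based tag):
FROM python:3.7-slim
ARG APP_DIR=/app
WORKDIR ${APP_DIR}
COPY requirements.pip ./requirements.pip
# on a Debian-based image pandas installs from a pre-built wheel, so no gcc/g++ is needed
RUN python3 -m pip install --no-cache-dir --upgrade pip setuptools && \
    python3 -m pip install --no-cache-dir -r requirements.pip
COPY app .
ENTRYPOINT [ "python3", "run.py" ]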
There is a Python project in which I have dependencies defined in a requirement.txt file. One of the dependencies is gmpy2. When I run the docker build -t myimage . command, it gives me the following error at the step where setup.py install is executed.
In file included from src/gmpy2.c:426:0:
src/gmpy.h:252:20: fatal error: mpfr.h: No such file or directory
#include "mpfr.h"
Similarly, the other two errors are:
In file included from appscript_3x/ext/ae.c:14:0:
appscript_3x/ext/ae.h:26:27: fatal error: Carbon/Carbon.h: No such file
or directory
#include <Carbon/Carbon.h>
In file included from src/buffer.cpp:12:0:
src/pyodbc.h:56:17: fatal error: sql.h: No such file or directory
#include <sql.h>
Now the question is how I can define or install these internal dependencies required for a successful build of the image. As I understand it, gmpy2 is written in C and depends on three other C libraries: GMP, MPFR, and MPC, and the build is unable to find them.
This is my Dockerfile:
FROM python:3
COPY . .
RUN pip install -r requirement.txt
CMD [ "python", "./mike/main.py" ]
Install the extra dependencies with apt install libgmp-dev libmpfr-dev libmpc-dev, and then RUN pip install -r requirement.txt.
I think it will work and you will be able to install all the dependencies and build the Docker image.
FROM python:3
COPY . .
RUN apt-get update -qq && \
apt-get install -y --no-install-recommends \
libmpc-dev \
libgmp-dev \
libmpfr-dev
RUN pip install -r requirement.txt
CMD [ "python", "./mike/main.py" ]
If apt doesn't work, you can use a different Linux distribution as the base image.
You will need to modify your Dockerfile to install the additional C libraries using apt-get install. (The default Python 3 image is based on a Debian image).
sudo apt-get install libgmp3-dev
sudo apt-get install libmpfr-dev
It looks like you can install the dependencies for pyodbc using
sudo apt-get install unixodbc-dev
However, I'm really unsure about the requirement for Carbon.h as that's an OS X specific header file. You may have an OS X specific dependency in your requirements file that won't work on a Linux based image.
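Put into the question's Dockerfile, that might look like the sketch below (unixodbc-dev addresses the sql.h error; the Carbon/Carbon.h one comes from a macOS-only dependency, which would need to be removed or marked as macOS-only in requirement.txt):
FROM python:3
# C headers for gmpy2 (GMP, MPFR, MPC) and for pyodbc (sql.h)
RUN apt-get update && \
    apt-get install -y --no-install-recommends libgmp-dev libmpfr-dev libmpc-dev unixodbc-dev && \
    rm -rf /var/lib/apt/lists/*
COPY . .
RUN pip install -r requirement.txt
CMD [ "python", "./mike/main.py" ]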