Here is my Dockerfile:
FROM python:3.7-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN apk --update add gcc build-base freetype-dev libpng-dev openblas-dev musl-dev
RUN apk update
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 5000
CMD ["uwsgi", "app.ini"]
While building the uwsgi wheel I got an error:
In file included from core/utils.c:1:
./uwsgi.h:238:10: fatal error: linux/limits.h: No such file or directory
238 | #include <linux/limits.h>
| ^~~~~~~~~~~~~~~~
What package do I need to add to the Dockerfile?
Try adding apk add linux-headers; it looks like uwsgi is missing some kernel headers during the build, likely because Alpine is very bare-bones.
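For reference, a sketch of the question's Dockerfile with that one package added (the redundant second apk update is also dropped; the other package names are unchanged):

```dockerfile
FROM python:3.7-alpine
WORKDIR /usr/src/app
COPY . /usr/src/app
# linux-headers provides linux/limits.h, which uwsgi.h includes during the build
RUN apk add --update --no-cache \
    gcc build-base linux-headers \
    freetype-dev libpng-dev openblas-dev musl-dev
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 5000
CMD ["uwsgi", "app.ini"]
```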
I also ran into the situation that the container was not building uwsgi.
After changing
"FROM python:3.8-alpine" to "FROM python:3.8"
everything worked fine.
With that base image I think you also don't need to install the linux-headers package.
I am learning about Docker and I have a Dockerfile with a simple app such as this:
FROM python:3.8-alpine
WORKDIR /code
ENV FLASK_APP App.py
ENV FLASK_RUN_HOST 0.0.0.0
ENV FLASK_RUN_PORT :3001
RUN apk update \
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add --no-cache mariadb-dev
COPY ./myapp/requirements.txt requirements.txt
RUN pip install --no-cache-dir -vv -r requirements.txt
ADD ./myapp .
EXPOSE 3001
CMD ["flask", "run"]
I want to use multi-stage builds to have a smaller image, so following this https://pythonspeed.com/articles/multi-stage-docker-python/ I have changed my Dockerfile to this:
FROM python:3.8-alpine as builder
COPY ./myapp/requirements.txt requirements.txt
RUN apk update \
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add --no-cache mariadb-dev
RUN pip install --user -r requirements.txt
FROM python:3.8-alpine
ADD ./myapp .
COPY --from=builder /root/.local /root/.local
ENV PATH=/root/.local:$PATH
ENV FLASK_APP App.py
ENV FLASK_RUN_HOST 0.0.0.0
ENV FLASK_RUN_PORT 3000
CMD ["python", "-m", "flask", "run"]
But when running the container I get an error telling me the MySQLdb dependency is not installed, even though it is in the requirements.txt file and the first Dockerfile works. If I understand it right, the COPY step in the second stage should copy the dependencies installed in the first stage, right? This is the output I get when trying to spin up the container:
Traceback (most recent call last):
File "/root/.local/lib/python3.8/site-packages/MySQLdb/__init__.py", line 18, in <module>
from . import _mysql
ImportError: Error loading shared library libmariadb.so.3: No such file or directory (needed by /root/.local/lib/python3.8/site-packages/MySQLdb/_mysql.cpython-38-x86_64-linux-gnu.so)
apk add --no-cache mariadb-dev also installs the MariaDB libraries, which you don't install in the final image. Their absence is the cause of the error you get.
Is MySQL getting installed from requirements.txt, or by apk via mariadb-dev? If the latter, that's what is missing in the second image: it's not pip-installed with --user under .local, it's installed system-wide in the first image but not in the second.
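A sketch of a fixed final stage along those lines; it assumes reinstalling mariadb-dev in the runtime image is acceptable (heavier than the bare client library, but known to provide libmariadb.so.3, since the single-stage build worked with it):

```dockerfile
FROM python:3.8-alpine
# the runtime image needs the MariaDB client library as well,
# not only the build stage
RUN apk add --no-cache mariadb-dev
WORKDIR /app
COPY ./myapp .
COPY --from=builder /root/.local /root/.local
# pip install --user puts console scripts under ~/.local/bin
ENV PATH=/root/.local/bin:$PATH
ENV FLASK_APP App.py
ENV FLASK_RUN_HOST 0.0.0.0
ENV FLASK_RUN_PORT 3000
CMD ["python", "-m", "flask", "run"]
```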
In my Docker Django project I need, for read/write purposes, to create a volume in my Dockerfile and install/run the app on it.
I found a Dockerfile article on Stack Overflow, but sincerely I don't understand much of it.
Here my Dockerfile:
FROM python:3.6-alpine
EXPOSE 8000
RUN apk update
RUN apk add --no-cache make linux-headers libffi-dev jpeg-dev zlib-dev
RUN apk add postgresql-dev gcc python3-dev musl-dev
RUN mkdir /Code
VOLUME /var/lib/cathstudio/data
WORKDIR /Code
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
ENV PYTHONUNBUFFERED 1
COPY . /Code/
ENTRYPOINT python /Code/core/manage.py runserver 0.0.0.0:8000
To my original file I added the VOLUME /var/lib/cathstudio/data instruction, but after that, how can I tell the rest of my code to use that volume for WORKDIR, installing requirements.txt, copying code, and running the app?
I don't want to specify it with the -v flag at run time after the build; I would like to integrate the volume creation and management directly in the Dockerfile.
So many thanks in advance
For anything except pip you can just set the workdir once:
WORKDIR /var/lib/cathstudio/data
For pip, use -t or --target:
pip install -t /var/lib/cathstudio/data -r requirements.txt
-t, --target
Install packages into <dir>. By default this will not replace existing files/folders in <dir>. Use --upgrade to replace existing
packages in <dir> with new versions.
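Putting that together, a hedged sketch of the relevant part of the question's Dockerfile (paths are the ones from the question; note that the Dockerfile reference warns that build-step changes made to a path after its VOLUME declaration are discarded, so the VOLUME line goes last):

```dockerfile
WORKDIR /var/lib/cathstudio/data
COPY ./requirements.txt .
RUN pip install --no-cache-dir -t /var/lib/cathstudio/data -r requirements.txt
# packages installed with -t are only importable if the target directory
# is on sys.path, so export it explicitly
ENV PYTHONPATH /var/lib/cathstudio/data
COPY . /var/lib/cathstudio/data/
VOLUME /var/lib/cathstudio/data
ENTRYPOINT python /var/lib/cathstudio/data/core/manage.py runserver 0.0.0.0:8000
```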
I would like to create a Docker image from my Django app, but I don't want to ship my .py files.
To do this I need a way to compile my Python files in Docker, then remove all *.py and leave only the *.pyc files.
If I simply write **/*.py in the .dockerignore file, I get the error
ImportError: bad magic number in 'ajaxfuncs': b'\x03\xf3\r\n'
because my original *.pyc files were compiled with a different Python version (local Python 3.6.0, Docker python3.6 for Alpine).
So as a workaround I would build my Python files in the Dockerfile and then remove all the .py files.
First, in my .dockerignore file I put: **/*.pyc
Then in my Dockerfile I thought to use python -m py_compile to generate the new *.pyc files:
FROM python:3.6-alpine
EXPOSE 8000
RUN apk update
RUN apk add --no-cache make linux-headers libffi-dev jpeg-dev zlib-dev
RUN apk add postgresql-dev gcc python3-dev musl-dev
RUN mkdir /Code
WORKDIR /Code
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
ENV PYTHONUNBUFFERED 1
COPY . /Code/
RUN python -m py_compile /Code/ajaxfuncs/ajax.py /Code/ajaxfuncs/asktempl.py /Code/ajaxfuncs/group.py /Code/ajaxfuncs/history.py /Code/ajaxfuncs/template_import.py
RUN rm -rf /Code/ajaxfuncs/*.py
ENTRYPOINT python /Code/core/manage.py runserver 0.0.0.0:8000
but when I run my application it seems the compile step did not compile my files, because no .pyc was found.
If I remove the .pyc entry from .dockerignore I again get the error:
ImportError: bad magic number in 'ajaxfuncs': b'\x03\xf3\r\n'
Does someone know how I can compile Python files during Docker image creation, or another method to avoid *.py files in the container?
So many thanks in advance
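One detail that may explain the missing files: in Python 3, python -m py_compile writes its output to __pycache__/<name>.cpython-36.pyc rather than next to the source, so after rm -rf /Code/ajaxfuncs/*.py there are no importable .pyc files under the old names. A sketch using compileall -b instead, which writes legacy-location .pyc files beside the sources:

```dockerfile
# -b writes ajax.pyc next to ajax.py (legacy layout) instead of
# __pycache__/ajax.cpython-36.pyc, so imports keep working once *.py is removed
RUN python -m compileall -b /Code/ajaxfuncs && \
    find /Code/ajaxfuncs -name '*.py' -delete
```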
I would like to dockerize python program with this Dockerfile:
FROM python:3.7-alpine
COPY requirements.pip ./requirements.pip
RUN python3 -m pip install --upgrade pip
RUN pip install -U setuptools
RUN apk update
RUN apk add --no-cache --virtual .build-deps gcc python3-dev musl-dev openssl-dev libffi-dev g++ && \
python3 -m pip install -r requirements.pip --no-cache-dir && \
apk --purge del .build-deps
ARG APP_DIR=/app
RUN mkdir -p ${APP_DIR}
WORKDIR ${APP_DIR}
COPY app .
ENTRYPOINT [ "python3", "run.py" ]
and this is my requirements.pip file:
pysher~=0.5.0
redis~=2.10.6
flake8~=3.5.0
pandas==0.23.4
Because of pandas, the Docker image is 461MB; without pandas it is 131MB.
I was thinking about how to make it smaller, so I built a binary file from my application using:
pyinstaller run.py --onefile
It builds a 38MB binary file. When I run it, it works fine. So I built a Docker image from this Dockerfile:
FROM alpine:3.4
ARG APP_DIR=/app
RUN mkdir -p ${APP_DIR}
WORKDIR ${APP_DIR}
COPY app/dist/run run
ENTRYPOINT [ "/bin/sh", "/app/run" ]
Basically, I just copied my run binary file into the /app directory. It looks fine; the image is just 48.8MB. When I run the container, I receive an error:
$ docker run --rm --name myapp myminimalimage:latest
/app/run: line 1: syntax error: unexpected "("
Then I was thinking maybe there is a problem with sh, so I installed bash by adding 3 lines to the Dockerfile:
RUN apk update
RUN apk upgrade
RUN apk add bash
The image was built, but when I run it there is an error again:
$ docker run --rm --name myapp myminimalimage:latest
/app/run: /app/run: cannot execute binary file
My questions:
1. Why is the image in the first step so big? Can I minimize the size somehow, e.g. choose what to install from the pandas package?
2. Why does my binary file work fine on my system (Kubuntu 18.10) but I can't run it from alpine:3.4? Should I use another image or install something to run it?
3. What is the best way to build a minimalistic image with my app? One of the ways mentioned above, or is there another way?
On sizes: make sure you always pass --no-cache-dir when using pip (you use it once, but not in the other cases). Similarly, combine your uses of apk and make sure the last step clears the apk cache so it never gets frozen into an image layer. For example, replace your three separate RUNs with RUN apk update && apk upgrade && apk add bash && rm -rf /var/cache/apk/*; this achieves the same effect in a single layer that doesn't keep the apk cache around.
Example:
FROM python:3.7-alpine
COPY requirements.pip ./requirements.pip
# Avoid pip cache, use consistent command line with other uses, and merge simple layers
RUN python3 -m pip install --upgrade --no-cache-dir pip && \
python3 -m pip install --upgrade --no-cache-dir setuptools
# Combine update and add into same layer, clear cache explicitly at end
RUN apk update && apk add --no-cache --virtual .build-deps gcc python3-dev musl-dev openssl-dev libffi-dev g++ && \
python3 -m pip install -r requirements.pip --no-cache-dir && \
apk --purge del .build-deps && rm -rf /var/cache/apk/*
Don't expect it to do much (you already used --no-cache-dir on the big pip operation), but it's something. pandas is a huge monolithic package, dependent on other huge monolithic packages; there is a limit to what you can accomplish here.
Keep in mind that if you don't use Alpine, you won't need a compiler, since you can just use prebuilt wheels. That makes everything simpler: for example, you don't need to install and then uninstall compilers. The image is slightly bigger, but only slightly. Switching bases also addresses your binary problem: a PyInstaller binary built on Kubuntu is linked against glibc, while Alpine ships musl, so Alpine cannot execute it (and feeding the ELF file to /bin/sh is what produced the syntax error: unexpected "(" message).
(See here for more about why I'm not a fan of Alpine Linux: https://pythonspeed.com/articles/base-image-python-docker-images/)
I'm writing a python application to push some results onto elasticsearch.
I've written a Dockerfile to build it & am deploying it over Kubernetes.
Things seem to be working without any problem on my local machine when I execute docker run.
The application is running and it is pushing data onto ElasticSearch.
But when I run it on K8s, I get the error below:
Traceback (most recent call last):
File "application.py", line 2, in <module>
from elasticsearch import Elasticsearch
ModuleNotFoundError: No module named 'elasticsearch'
I'm installing elasticsearch, using pip.
Dockerfile:
FROM python:3.7.3-alpine
RUN apk update && apk upgrade && apk add gcc libc-dev g++ libffi-dev libxml2 unixodbc-dev mariadb-dev postgresql-dev \
python-dev vim
RUN addgroup -S -g 1000 docker \
&& adduser -D -S -h /var/cache/docker -s /sbin/nologin -G docker -u 1000 docker \
&& chown docker:docker -R /usr/local/lib/python3.7/site-packages/
WORKDIR /app/
COPY application.py /app/
COPY lib.txt /app/
RUN chown docker:docker -R /app/
USER docker
# Install the dependencies
RUN ["pip", "install", "-r", "lib.txt", "--user"]
ENV PYTHONPATH=/usr/local/lib/python2.7/site-packages
RUN echo $PYTHONPATH
CMD [ "python", "application.py"]
lib.txt
Flask==1.0.2
prometheus_client>=0.6.0
requests>=2.21.0
six>=1.12.0
# Elasticsearch 7.x
elasticsearch>=7.0.0,<8.0.0
pyodbc
As suggested in one answer, I'm also setting PYTHONPATH in the Dockerfile.
Any suggestions on what I'm missing?
Example code here.
Thanks
Try with this Dockerfile:
FROM python:3.7.3-alpine
RUN apk update && apk upgrade && apk add gcc libc-dev g++ libffi-dev libxml2 unixodbc-dev mariadb-dev postgresql-dev \
python-dev vim
# Install the dependencies
RUN pip install --upgrade pip
RUN mkdir /app
COPY lib.txt /app/lib.txt
RUN pip install -r lib.txt
RUN addgroup -S -g 1000 docker \
&& adduser -D -S -h /var/cache/docker -s /sbin/nologin -G docker -u 1000 docker
WORKDIR /app
COPY application.py /app/
RUN chown docker:docker -R /app/
USER docker
CMD [ "python", "application.py"]
Changes:
Updated pip before installing the dependencies. This removes some warnings in my containers and keeps the pip package at its latest version when building the image.
Installed the pip packages system-wide, while the root user is still executing.
Removed PYTHONPATH, which was pointing to the wrong place (a python2.7 path on a Python 3.7 image).
Removed the unnecessary ownership change of site-packages.
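The --user point can be illustrated with a small stdlib-only snippet (run it inside the image; the exact paths depend on the interpreter):

```python
import site
import sys

# pip install --user drops packages into the per-user site directory,
# e.g. /root/.local/lib/python3.7/site-packages for root
print("user site:", site.getusersitepackages())

# a plain pip install as root targets the system site-packages instead,
# which is already on sys.path for every user
for p in sys.path:
    if p.endswith("site-packages"):
        print("on sys.path:", p)
```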