I would like to create a Docker image from my Django app, but I don't want to ship my .py files.
To do this I need a way to compile my Python files inside Docker, then remove all the *.py files and leave only the *.pyc files.
If I simply write **/*.py in the .dockerignore file, I get the error
ImportError: bad magic number in 'ajaxfuncs': b'\x03\xf3\r\n'
because my original *.pyc files were compiled with a different Python version (local Python 3.6.0 vs. python:3.6-alpine in Docker).
So as a workaround I would compile my Python files in the Dockerfile and then remove all the .py files.
First, in my .dockerignore file I put: **/*.pyc
Then in my Dockerfile I thought to use python -m py_compile to generate my new *.pyc files:
FROM python:3.6-alpine
EXPOSE 8000
RUN apk update
RUN apk add --no-cache make linux-headers libffi-dev jpeg-dev zlib-dev
RUN apk add postgresql-dev gcc python3-dev musl-dev
RUN mkdir /Code
WORKDIR /Code
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
ENV PYTHONUNBUFFERED 1
COPY . /Code/
RUN python -m py_compile /Code/ajaxfuncs/ajax.py /Code/ajaxfuncs/asktempl.py /Code/ajaxfuncs/group.py /Code/ajaxfuncs/history.py /Code/ajaxfuncs/template_import.py
RUN rm -rf /Code/ajaxfuncs/*.py
ENTRYPOINT python /Code/core/manage.py runserver 0.0.0.0:8000
but when I run my application it seems that the compiler did not compile my files, because no .pyc files are found.
If I remove the *.pyc entry from .dockerignore, I again get the error:
ImportError: bad magic number in 'ajaxfuncs': b'\x03\xf3\r\n'
Does someone know how I can compile Python files during Docker image creation, or another method to avoid having *.py files in the container?
Many thanks in advance.
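For reference, python -m py_compile normally writes its output into __pycache__/ with interpreter-tagged filenames (PEP 3147), and Python will not import those files once the source .py is gone, which would explain why no usable .pyc is found after the rm step. python -m compileall with the -b flag writes legacy-style .pyc files next to the sources instead, and those can be imported without the .py files. A minimal sketch of how the last lines of the Dockerfile above could look under that approach:
# compile the whole package with legacy (.py-adjacent) .pyc names, then drop the sources
RUN python -m compileall -b /Code/ajaxfuncs
RUN rm -f /Code/ajaxfuncs/*.py
ENTRYPOINT python /Code/core/manage.py runserver 0.0.0.0:8000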
Related
I am learning about Docker and I have a Dockerfile with a simple app such as this:
FROM python:3.8-alpine
WORKDIR /code
ENV FLASK_APP App.py
ENV FLASK_RUN_HOST 0.0.0.0
ENV FLASK_RUN_PORT 3001
RUN apk update \
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add --no-cache mariadb-dev
COPY ./myapp/requirements.txt requirements.txt
RUN pip install --no-cache-dir -vv -r requirements.txt
ADD ./myapp .
EXPOSE 3001
CMD ["flask", "run"]
I want to use multi-stage builds to have a smaller image, so following this https://pythonspeed.com/articles/multi-stage-docker-python/ I have changed my Dockerfile to this:
FROM python:3.8-alpine as builder
COPY ./myapp/requirements.txt requirements.txt
RUN apk update \
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add --no-cache mariadb-dev
RUN pip install --user -r requirements.txt
FROM python:3.8-alpine
ADD ./myapp .
COPY --from=builder /root/.local /root/.local
ENV PATH=/root/.local:$PATH
ENV FLASK_APP App.py
ENV FLASK_RUN_HOST 0.0.0.0
ENV FLASK_RUN_PORT 3000
CMD ["python", "-m", "flask", "run"]
But when running the container I get an error telling me the MySQL (MySQLdb) dependency is not installed, even though it is in the requirements.txt file and the first Dockerfile works. If I understand it right, the COPY step in the second stage should copy the dependencies installed in the first stage, right? So I do not know what I am missing. This is the output I get when trying to spin up the container:
Traceback (most recent call last):
File "/root/.local/lib/python3.8/site-packages/MySQLdb/__init__.py", line 18, in <module>
from . import _mysql
ImportError: Error loading shared library libmariadb.so.3: No such file or directory (needed by /root/.local/lib/python3.8/site-packages/MySQLdb/_mysql.cpython-38-x86_64-linux-gnu.so)
apk add --no-cache mariadb-dev also installs the MariaDB client libraries, which you don't install in the final image. Their absence is the cause of the error you get.
Is MySQL getting installed from requirements.txt, or is it installed by apk (MariaDB)? If the latter, then that's what is missing in the second image: it isn't pip-installed with --user under .local, it's installed system-wide in the first image but not in the second.
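In other words, the final stage needs the MariaDB client shared library (libmariadb.so.3) at runtime, not only at build time. Below is a minimal sketch of the second stage with that fixed, keeping the rest of the question's layout; mariadb-connector-c is assumed to be the runtime-only Alpine package (installing mariadb-dev again also works, it is just larger):
FROM python:3.8-alpine
# the compiled MySQLdb extension needs libmariadb.so.3 at runtime
RUN apk add --no-cache mariadb-connector-c
ADD ./myapp .
COPY --from=builder /root/.local /root/.local
# pip install --user places console scripts under ~/.local/bin
ENV PATH=/root/.local/bin:$PATH
ENV FLASK_APP App.py
ENV FLASK_RUN_HOST 0.0.0.0
ENV FLASK_RUN_PORT 3000
CMD ["python", "-m", "flask", "run"]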
Here is my Dockerfile:
FROM python:3.7-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN apk --update add gcc build-base freetype-dev libpng-dev openblas-dev musl-dev
RUN apk update
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 5000
CMD ["uwsgi", "app.ini"]
When building the uwsgi wheel I got an error:
In file included from core/utils.c:1:
./uwsgi.h:238:10: fatal error: linux/limits.h: No such file or directory
238 | #include <linux/limits.h>
| ^~~~~~~~~~~~~~~~
What package do I need to add to the Dockerfile?
Try adding apk add linux-headers; it looks like uwsgi is missing some kernel headers during the build, likely because Alpine is very bare-bones.
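For example, the apk line in the Dockerfile above could simply gain that package (a sketch, with everything else unchanged):
RUN apk --update add gcc build-base linux-headers freetype-dev libpng-dev openblas-dev musl-dev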
I also ran into the situation where the container was not building uwsgi.
After changing
"FROM python:3.8-alpine" to "FROM python:3.8"
everything worked fine.
Then I think you also don't need to install the Linux header packages.
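For comparison, a minimal sketch of the Debian-based variant of the Dockerfile from the question, under the assumption that the full (non-slim) Python image already ships the compilers and kernel headers uwsgi needs:
FROM python:3.7
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 5000
CMD ["uwsgi", "app.ini"]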
I'm trying to create a Python webapp Docker image using a multi-stage build to shrink the image size; right now it's around 300 MB. It's also using a virtual environment.
The Docker image builds and runs fine up until the point where I add the multi-stage part, so I know something is going wrong after that. Could you help me identify what's wrong?
FROM python:3.8.3-alpine AS origin
RUN apk update && apk add git
RUN apk --no-cache add py3-pip build-base
RUN pip install -U pip
RUN pip install virtualenv
RUN virtualenv venv
RUN source venv/bin/activate
WORKDIR /opt/app
COPY . .
RUN pip install -r requirements.txt
## Works fine until this point
FROM alpine:latest
WORKDIR /opt/app
COPY --from=origin /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH" VIRTUAL_ENV="/opt/venv"
COPY . /opt/app/
CMD [ "file.py" ]
ENTRYPOINT ["python"]
Without the venv it looks something like this (still throwing the error "sh: python: not found"):
FROM python:3.8.3-alpine AS origin
WORKDIR /opt/app
RUN apk update && apk add git
RUN apk --no-cache add py3-pip build-base
RUN pip install -U pip
COPY . .
RUN pip install -r requirements.txt
FROM alpine:latest
WORKDIR /home
COPY --from=origin /opt/app .
CMD sh -c 'python file.py'
You still need Python in your runtime container; since you changed your last image to plain alpine, it won't work. Just a tip: combine your CMD and ENTRYPOINT under one of them, as there is generally no need to have both. Try to use only ENTRYPOINT, since you can still pass extra arguments (a CMD) easily at run time, for example to activate debug mode.
EDIT: Please stay away from Alpine for Python apps, as you can run into some weird issues with it. You can use the "python_version-slim-buster" images; they are small enough.
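A minimal sketch of how the multi-stage build could look with that advice applied, assuming the app entry point really is file.py as above. Note that RUN source venv/bin/activate in the original only affects that single RUN step (each RUN starts a fresh shell), and the venv is created at /venv rather than /opt/venv; putting the venv's bin directory on PATH with ENV avoids both problems:
FROM python:3.8-slim-buster AS origin
WORKDIR /opt/app
# the original installed git and build tools; on Debian that would be e.g.:
# RUN apt-get update && apt-get install -y --no-install-recommends git build-essential
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH" VIRTUAL_ENV="/opt/venv"
COPY requirements.txt .
RUN pip install -U pip && pip install -r requirements.txt
FROM python:3.8-slim-buster
WORKDIR /opt/app
COPY --from=origin /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH" VIRTUAL_ENV="/opt/venv"
COPY . .
ENTRYPOINT ["python", "file.py"]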
In my Docker Django project I need, for read/write purposes, to create a volume in my Dockerfile and install/run the app on it.
I found this article: DockerFile on StackOverflow, but honestly I don't understand much of it.
Here my Dockerfile:
FROM python:3.6-alpine
EXPOSE 8000
RUN apk update
RUN apk add --no-cache make linux-headers libffi-dev jpeg-dev zlib-dev
RUN apk add postgresql-dev gcc python3-dev musl-dev
RUN mkdir /Code
VOLUME /var/lib/cathstudio/data
WORKDIR /Code
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
ENV PYTHONUNBUFFERED 1
COPY . /Code/
ENTRYPOINT python /Code/core/manage.py runserver 0.0.0.0:8000
To my original file I added the VOLUME /var/lib/cathstudio/data instruction, but after that, how can I tell the rest of my code to use that volume for WORKDIR, installing requirements.txt, copying the code and running the app?
I don't want to specify it with a -v option at run time after the build; I would like to integrate the volume creation and management directly in the Dockerfile.
Many thanks in advance.
For anything except pip you may specify the workdir once:
WORKDIR /var/lib/cathstudio/data
For pip, use -t or --target:
pip install -t /var/lib/cathstudio/data
-t, --target <dir>
Install packages into <dir>. By default this will not replace existing files/folders in <dir>. Use --upgrade to replace existing packages in <dir> with new versions.
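A minimal sketch of the question's Dockerfile adapted along those lines, assuming the app should live on the volume path. One caveat from the Dockerfile reference: build steps that change data inside a directory after it has been declared as a VOLUME are discarded, so the VOLUME declaration is placed last here:
FROM python:3.6-alpine
EXPOSE 8000
RUN apk update
RUN apk add --no-cache make linux-headers libffi-dev jpeg-dev zlib-dev
RUN apk add postgresql-dev gcc python3-dev musl-dev
WORKDIR /var/lib/cathstudio/data
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# or, if the packages themselves must live on the volume path:
# RUN pip install -t /var/lib/cathstudio/data -r requirements.txt
ENV PYTHONUNBUFFERED 1
COPY . /var/lib/cathstudio/data/
VOLUME /var/lib/cathstudio/data
ENTRYPOINT python /var/lib/cathstudio/data/core/manage.py runserver 0.0.0.0:8000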
I have a requirements.txt file which contains the following package:
git+https://username:password@gitlab.mycompany.com/mypackage.git@master#egg=mypackage
I am able to build my docker image using a basic dockerfile.
However, I'm trying to use a more complex Dockerfile to get my Docker image to be as slim as possible:
FROM python:3.7-alpine as base
COPY . /app
WORKDIR /app
FROM base AS dependencies
COPY requirements.txt ./
RUN apk add --no-cache make automake gcc g++ git && \
pip install -r requirements.txt
FROM base
WORKDIR /app
COPY . /app
COPY --from=dependencies /root/.cache /root/.cache
COPY requirements.txt ./
RUN pip install -r requirements.txt && rm -rf /root/.cache
EXPOSE 8000
CMD python main.py
The problem is that during the last stage of the build I get an error that 'git' cannot be found, i.e. the build tries to pull 'mypackage' again instead of taking it from the "dependencies" stage. Any idea how to fix this?
The error:
Error [Errno 2] No such file or directory: 'git': 'git' while executing command git clone -q Cannot find command 'git' - do you have 'git' installed and in your PATH?
You don't have git in your last (3rd) stage, because you only install git in dependencies, while the last stage derives from base, which is plain Alpine Python.
So when you try to RUN pip install -r requirements.txt && rm -rf /root/.cache, you fail on the requirement that uses the git protocol.
If you need your final image to be slim, there are a few options for fixing it:
use a venv (Python virtual environment): create it in the 2nd stage and COPY it to the last one; then there is no need to install the requirements again (see the sketch below).
download the requirements from the repository to local disk in the 2nd stage (for example with pip download or pip wheel), then COPY them into the 3rd stage and install from there (you may need gcc in the 3rd stage, but not git).
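A minimal sketch of the venv option, reusing the names from the question (python:3.7-alpine, requirements.txt, main.py); note that any requirement compiling native extensions against apk-installed libraries may still need those runtime libraries in the final stage:
FROM python:3.7-alpine AS base
WORKDIR /app
FROM base AS dependencies
RUN apk add --no-cache make automake gcc g++ git
# install everything into a self-contained venv that can be copied wholesale
RUN python -m venv /venv
ENV PATH="/venv/bin:$PATH"
COPY requirements.txt ./
RUN pip install -r requirements.txt
FROM base
COPY --from=dependencies /venv /venv
ENV PATH="/venv/bin:$PATH"
COPY . /app
EXPOSE 8000
CMD python main.py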