Specified Python version in Dockerfile not reflected in the container - python

I have a Python/Django web app that I want to run in a Docker container. I am using MySQL for my database, so I need mysqlclient, which in my case does not install with pip on Python 3.10 and above; that is why I am using Python 3.9. The following is my Dockerfile:
FROM python:3.9.13
ENV PYTHONUNBUFFERED 1
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
COPY . /app
And the docker-compose.yaml file looks like this:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'python manage.py runserver 0.0.0.0:8000'
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33066:3306
When I run docker-compose up, everything gets created as expected, except for the Python version: when I check from the container's terminal, it tells me it is running 3.10.8. I tried other Python 3.9 images from https://hub.docker.com/_/python, but I still get the same result. Consequently, I cannot run my Django project there, because mysqlclient cannot be installed with pip on 3.10 and above.
The interesting thing is that I have the exact same Dockerfile for a Flask application, and that container works as I expect it to.
Is something missing here?
EDIT:
For clarification, the Dockerfile and docker-compose.yaml are located in the Django project's root directory, if that matters.

Using FROM python:3.9 instead of FROM python:3.9.13 seems to solve the issue. I am still not sure why it would pick up Python 3.10+; my guess is that there was some problem pulling the pinned image and it fell back to something else.
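If it helps to confirm, one thing worth ruling out is a stale locally cached image or layer: force a clean rebuild and then print the interpreter version from the freshly built image. A minimal sketch, assuming the backend service name from the compose file above:
# Rebuild without reusing cached layers, pulling the base image again
docker-compose build --no-cache --pull backend
# Start a throwaway container from the new image and check the interpreter
docker-compose run --rm backend python --version
If this still reports 3.10.x, the running container is most likely using an older image rather than the one just built.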

Related

standard_init_linux.go:211: exec user process caused "operation not permitted"

I was trying to run a django-rest-framework app on Docker using a Python 2.7 / Django 1.11 image and a Postgres image. Here is my docker-compose.yml file.
I am running Docker on Windows 10 Enterprise, build 1909.
version: '3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: xxxxxxx
      POSTGRES_PASSWORD: xxxxxx
      POSTGRES_DB: xxxxxx
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - pgdata_v:/var/lib/postgresql/data/pgdata:Z
    ports:
      - "5433:5432"
  web:
    build: .
    command: /app/scripts/runserver.sh
    environment:
      ENV: local
      WERKZEUG_DEBUG_PIN: 'off'
      DB_USER: xxxxxx
      DB_PASSWORD: xxxxxx
      DB_NAME: xxxxxx
      DB_PORT: 5432
    volumes:
      - ./config/:/app/config/
      - ./v1/:/app/v1/
      - ./scripts/:/app/scripts/
    ports:
      - "8005:8000"
    depends_on:
      - db
    links:
      - db:db
volumes:
  pgdata_v:
    external: true
And here is my Dockerfile
FROM python:2.7
ENV PYTHONUNBUFFERED 1
ENV ENV local
RUN mkdir -p /app/scripts/
WORKDIR /app
ADD ./requirements /app/requirements/
RUN pip install -U setuptools
RUN pip install distribute==0.7.3
RUN pip install urllib3==1.21.1 --force-reinstall
RUN pip install -r /app/requirements/base.txt
RUN mkdir -p /app/static/
ADD ./manage.py /app/
ADD ./config/ /app/config/
ADD ./scripts/ /app/scripts/
ADD ./cert/ /app/cert/
ADD ./v1/ /app/v1/
RUN chmod 755 /app/scripts/runserver.sh
EXPOSE 8000
CMD ["/app/scripts/server.sh"]
While running it I get the error standard_init_linux.go:211: exec user process caused "operation not permitted".
I have looked at some answers on Stack Overflow and GitHub but could not fix it.
I tried many fixes but none of them worked for me, so I moved to WSL (Windows Subsystem for Linux). I set up my environment there, cloned my repository, and it is working now. To use Docker on WSL I followed this post.
This might help others facing a similar issue.

Docker so slow while installing pip requirements

I am trying to set up Docker for a dummy local Django project. I am using docker-compose as a tool for defining and running multiple containers. Here I tried to containerize two services: the Django web app and PostgreSQL.
Configuration used in Dockerfile and docker-compose.yml
Dockerfile
# Pull base image
FROM python:3.7-alpine
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY requirements.txt /code/
RUN pip install -r requirements.txt
# Copy project
COPY . /code/
docker-compose.yml
version: '3.7'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:11
    volumes:
      - postgres_data:/var/lib/postgresql/data/
volumes:
  postgres_data:
All seems okay: the paths, the Postgres integration, everything except one thing, the pip install -r requirements.txt step, which takes far too long to install from requirements. Last time I almost gave up on it; the installation did eventually complete, but it takes a very long time.
In my scenario, the only issue is why pip install is so slow. Is there anything that I am missing? I am new to Docker and any help on this topic would be highly appreciated. Thank you.
I was following this link.
This is probably because PyPI wheels don't work on Alpine. Instead of using precompiled wheel files, pip on Alpine downloads the source code and compiles it. Try the python:3.7-slim image instead:
# Pull base image
FROM python:3.7-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY requirements.txt /code/
RUN pip install -r requirements.txt
# Copy project
COPY . /code/
Check this article for more details: Alpine makes Python Docker builds 50× slower.
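If you have a reason to stay on Alpine anyway, the usual workaround is to install a compiler toolchain and the headers your dependencies need, so the source builds can succeed. A rough sketch; the exact apk packages are an assumption and depend on what is in your requirements.txt (postgresql-dev, for instance, only matters if something like psycopg2 is compiled from source):
# Pull base image
FROM python:3.7-alpine
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Build tools and headers for compiling packages from source on musl
# (package list is an assumption; adjust it to your requirements)
RUN apk add --no-cache gcc musl-dev libffi-dev postgresql-dev
# Set work directory
WORKDIR /code
# Install dependencies
COPY requirements.txt /code/
RUN pip install -r requirements.txt
# Copy project
COPY . /code/
Even with this, builds stay slower than with the slim image, because anything without a compatible wheel is still compiled from source; that is why switching base images is the recommendation above.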

Docker compose errors using volumes

Here's the deal:
I want to create an image based on python:latest, for Django development.
I want to create the Django project INSIDE THE CONTAINER and make it reflect on a host folder, via docker volumes.
I want to use the python interpreter from the container for development.
This way, I can have only my Dockerfile, docker-compose.yml, and requirements.txt in my project folder, without depending on Python, virtualenvs, or anything like that on my host.
Here's my Dockerfile:
FROM python:latest
ARG requirements=requirements/production.txt
COPY ./app /app
WORKDIR /app
RUN pip install --upgrade pip && \
    pip install --no-cache-dir -r $requirements && \
    django-admin startproject myproject .
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
And here's my docker-compose.yml:
version: '3'
services:
  web:
    build:
      context: .
      args:
        requirements: requirements/development.txt
    networks:
      - main
    depends_on:
      - postgres
    environment:
      - PYTHONUNBUFFERED=1
    ports:
      - "8000:8000"
    volumes:
      - "./app:/app:rw"
  postgres:
    image: postgres:latest
    networks:
      - main
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=123
    volumes:
      - ./data:/var/lib/postgresql/data
networks:
  main:
The main issue is the volumes in web. If I build the image via docker build -t somename:sometag . the build works fine. If I run docker run -it my_image bash, it shows me all the files created inside /app.
But if I try docker-compose up, the web part fails, saying that it could not find manage.py, and exits with code 2. Only Postgres is running after that.
So, finally, my questions are:
Is this kind of workflow possible? Is it the best option, given that it does not depend on Python on the host?
Thanks a lot.
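For what it's worth, the symptom described above is consistent with the bind mount hiding the build output: the volume "./app:/app:rw" is mounted over the image's /app when the container starts, so the project that django-admin startproject generated during the build (including manage.py) is covered by the, most likely empty, host folder. A minimal way to reconcile the two, assuming Django is installed by the requirements file and using the web service from the compose file above, is to generate the project into the mounted folder once and then start normally:
# One-off: create the project inside the container, writing into the bind-mounted ./app
docker-compose run --rm web django-admin startproject myproject .
# Subsequent runs then find manage.py in the mounted folder
docker-compose up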

Docker compose installing requirements.txt

In my Docker image I am cloning the git master branch to retrieve the code. I am using docker-compose for the development environment and running my containers with volumes. I ran across an issue when installing new project requirements from my Python requirements.txt file: in the development environment, newly added requirements never get installed, because requirements.txt is only pulled from GitHub (and installed) when the image is rebuilt.
Below is an example of my Dockerfile:
FROM base
# Clone application
RUN git clone repo-url
# Install application requirements
RUN pip3 install -r app/requirements.txt
# ....
Here is my compose file:
myapp:
  image: development
  env_file: .env
  ports:
    - "8000:80"
  volumes:
    - .:/home/app
  command: python3 manage.py runserver 0.0.0.0:8000
Is there any way to install newly added requirements after the build, in the development environment?
There are two ways you can do this.
By hand
You can enter the container and do it yourself. Downside: not automated.
$ docker-compose exec myapp bash
2912d2cd9eab# pip3 install -r /home/app/requirements.txt
Using an entrypoint script
You can use an entrypoint script that runs prep work, then runs the command.
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
# ... probably other stuff in here ...
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
cd /home/app
pip3 install -r requirements.txt
# May as well do this too, while we're here.
python3 manage.py migrate
exec "$@"
The entrypoint is run like this at container startup:
/entrypoint.sh $CMD
Which expands to:
/entrypoint.sh python3 manage.py runserver 0.0.0.0:8000
The prep work is run first, then at the end of the entrypoint script, the passed-in argument(s) are exec'd. That's your command, so entrypoint.sh exits and is replaced by your Django app server.
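Because the entrypoint wraps whatever command is passed, the same prep work also applies to one-off commands you run against the service, for example (assuming the myapp service name from the compose file above):
docker-compose run --rm myapp python3 manage.py shell
which installs the requirements and runs the migrations before dropping you into the Django shell.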
UPDATE:
After taking comments to chat, it came up that it is important to use exec to run the command, instead of running it at the end of the entrypoint script like this:
python3 manage.py runserver 0.0.0.0:8000
I can't exactly recall the details, but I ran into this previously as well: without exec, the shell stays around as the container's main process and your command runs as a child of it, so signals such as the SIGTERM sent by docker stop never reach your server. You need to exec the command or the container will not behave properly.
The way I solved this is by running two services:
server: runs the app server and depends on requirements
requirements: installs the requirements before the server runs
And this is what the docker-compose.yml file would look like:
version: '3'
services:
  django:
    image: python:3.7-alpine
    volumes:
      - pip37:/usr/local/lib/python3.7/site-packages
      - .:/project
    ports:
      - 8000:8000
    working_dir: /project
    command: python manage.py runserver
    depends_on:
      - requirements
  requirements:
    image: python:3.7-alpine
    volumes:
      - pip37:/usr/local/lib/python3.7/site-packages
      - .:/project
    working_dir: /project
    command: pip install -r requirements.txt
volumes:
  pip37:
    external: true
PS: I created a named volume for the pip packages so I can preserve them across different projects. Because the volume is declared as external, it has to exist before you run docker-compose up; you can create it yourself by running:
docker volume create pip37

Docker for windows10 run django fail: Can't open file 'manage.py': [Errno 2] No such file or directory

I just started a sample Django app and want to use Docker to run it. My Docker image is defined like this:
FROM python:3.5
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
My docker-compose.yml file:
version: '2'
services:
  django:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
When I run the docker-compose up command, it builds successfully but fails when running the command python manage.py runserver 0.0.0.0:8000, complaining python: can't open file 'manage.py': [Errno 2] No such file or directory.
Is this a bug in Docker for Windows? I just followed the Docker docs Quickstart: Docker Compose and Django.
Thanks for your help!
I think you either missed this step: docker-compose run web django-admin.py startproject composeexample . or you're using a directory that isn't available to the virtual machine that is running Docker.
If it works when you remove volumes: .:/code from the Compose file, then you know the issue is the volumes.
I believe that by default only the user's home directory is shared with the VM, so if you create a project outside of that tree, you won't have access to the files via volumes.
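If you want to verify that quickly, a rough check, run from the project directory in a shell where $(pwd) expands to the project path (for example Git Bash or WSL), is to mount the directory into a throwaway container and list it; the python:3.5 tag simply mirrors the Dockerfile above:
# An empty listing here means the host path is not being shared with the Docker VM
docker run --rm -v "$(pwd):/code" python:3.5 ls -la /code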
