Missing Environment Vars in docker python:3 with docker-compose

Although my configuration looks good, my python:3 image does not seem to have the expected DJANGO_SECRET_KEY set, at least at the point where the Dockerfile attempts to run migrations:
$ docker-compose config
services:
  api:
    build:
      context: /Users/ben/Projects/falcon/falcon-backend
      dockerfile: Dockerfile
    depends_on:
      - db
      - redis
    environment:
      DJANGO_SECRET_KEY: 'some-secret-that-works-elsewhere'
$
$ docker-compose up --build api
[...]
Step 6/7 : RUN echo `$DJANGO_SECRET_KEY`
---> Running in fbfb569c0191
[...]
django.core.exceptions.ImproperlyConfigured: Set the DJANGO_SECRET_KEY env variable
ERROR: Service 'api' failed to build: The command '/bin/sh -c python manage.py migrate' returned a non-zero code: 1
However, the final line,
CMD python manage.py runserver 0.0.0.0:8001 --settings=falcon.settings.dev-microservice
does start up as desired, with the necessary env vars set.
# Dockerfile -- api
FROM python:3
RUN pip3 -q install -r requirements.txt
RUN echo `$DJANGO_SECRET_KEY`
RUN python manage.py migrate --settings=falcon.settings.dev-microservice # <-- why does this not work
CMD python manage.py runserver 0.0.0.0:8001 --settings=falcon.settings.dev-microservice
Why does the penultimate line of the Dockerfile fail due to an unset environment variable while the final one works as expected?

Environment variables that are not declared inside the Dockerfile are not visible while the image is being built; they are only passed to the container at runtime. Since a RUN instruction executes at build time, the DJANGO_SECRET_KEY variable declared outside the Dockerfile (in the compose file's environment section) is not visible to the RUN command.
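A quick way to see the difference, using the api service from the compose file above (a sketch; the exact build output depends on your compose version):
# Build time: the compose 'environment:' value is not injected, so the echo prints an empty value
docker-compose build api
# Run time: the same variable is set inside the container
docker-compose run --rm api sh -c 'echo $DJANGO_SECRET_KEY'
# -> some-secret-that-works-elsewhere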
To solve the issue, you can declare the env variable inside the Dockerfile and set it via a build argument:
FROM python:3
RUN pip3 -q install -r requirements.txt
ARG key
ENV DJANGO_SECRET_KEY=$key
RUN echo `$DJANGO_SECRET_KEY`
RUN python manage.py migrate --settings=falcon.settings.dev-microservice
CMD python manage.py runserver 0.0.0.0:8001 --settings=falcon.settings.dev-microservice
Then, modify the compose file accordingly:
build:
  context: /Users/ben/Projects/falcon/falcon-backend
  dockerfile: Dockerfile
  args:
    - key='secret-key'
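For reference, the equivalent plain docker build invocation (the api tag here is just an example name):
# --build-arg feeds the ARG named 'key' declared in the Dockerfile
docker build --build-arg key='secret-key' -t api .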

RUN is used only when building the image. CMD is the command that is started when you run a container from your image. Running migrate while building the image is wrong: migrate builds up your database, and you want it to run each time the container starts, before runserver.
# Dockerfile -- api
FROM python:3
RUN pip3 -q install -r requirements.txt
RUN echo `$DJANGO_SECRET_KEY`
CMD /bin/bash -c "python manage.py migrate --settings=falcon.settings.dev-microservice && python manage.py runserver 0.0.0.0:8001 --settings=falcon.settings.dev-microservice"
This is the proper way to start Django in Docker, because you want to run the migrations in production when the server starts, not on your PC when building the image...
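The same idea also fits in an entrypoint script instead of a long CMD (a sketch, assuming the script is COPY'd into the image, marked executable, and set as ENTRYPOINT):
#!/bin/sh
# entrypoint.sh -- apply pending migrations, then hand off to the server
python manage.py migrate --settings=falcon.settings.dev-microservice
# exec replaces the shell so the server becomes PID 1 and receives signals
exec python manage.py runserver 0.0.0.0:8001 --settings=falcon.settings.dev-microservice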


Can't open localhost in the browser on port given in docker-compose

I am trying to build and run a django application with docker and docker-compose.
docker-compose build example_app and docker-compose run example_app run without errors, but when I go to http://127.0.0.1:8000/ the page doesn't open; I just get a "page is unavailable" error in the browser.
Here are my Dockerfile, docker-compose.yml and project structure.
Dockerfile
FROM python:3.9-buster
RUN mkdir app
WORKDIR /app
COPY ./requirements.txt /app/requirements.txt
COPY ./requirements_dev.txt /app/requirements_dev.txt
RUN pip install --upgrade pip
RUN pip install -r /app/requirements.txt
docker-compose.yml
version: '3'
services:
  example_app:
    image: example_app
    build:
      context: ../
      dockerfile: ./docker/Dockerfile
    command: bash -c "cd app_examples/drf_example && python manage.py runserver"
    volumes:
      - ..:/app
    ports:
      - 8000:8000
project structure:
app
app_examples/drf_example/
    manage.py
    api
    drf_example
requirements.txt
requirements_dev.txt
docker/
    docker-compose.yml
    Dockerfile
By default, Django apps bind to 127.0.0.1 meaning that they'll only accept connections from the local machine. In a container context, the local machine is the container, so your app won't accept connections from outside the container.
To get it to accept connections from anywhere, you add the bind address to the runserver command. In your case, you'd change the command in your docker-compose.yml file to
command: bash -c "cd app_examples/drf_example && python manage.py runserver 0.0.0.0:8000"
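Once the container is restarted with that change, a quick check from the host (using the 8000:8000 mapping above):
# Should return an HTTP status line instead of a connection error
curl -I http://127.0.0.1:8000/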
You can also declare the port in your Dockerfile with EXPOSE. Note that EXPOSE is essentially documentation; it is the ports: mapping in docker-compose.yml that actually publishes the port.
FROM python:3.9-buster
EXPOSE 8000
RUN mkdir app
WORKDIR /app
COPY ./requirements.txt /app/requirements.txt
COPY ./requirements_dev.txt /app/requirements_dev.txt
RUN pip install --upgrade pip
RUN pip install -r /app/requirements.txt

Run Django test in a docker container on Travis CI

[Updated]
It seems that pytest now runs after updating the docker-compose.yaml as mentioned in the comments, but the build still didn't finish successfully. What's wrong with it?
WARNING: Image for service app was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
============================= test session starts ==============================
platform linux -- Python 3.7.5, pytest-5.2.2, py-1.8.0, pluggy-0.13.0
rootdir: /api
plugins: django-3.6.0
collected 0 items
============================ no tests ran in 0.01s =============================
The command "docker-compose run --rm app sh -c "pytest && flake8"" exited with 5.
Done. Your build exited with 1.
I'd like to run a Django test in a docker container on Travis. However, I saw the error
python: can't open file 'manage.py': [Errno 2] No such file or directory
The command "docker-compose run --rm app sh -c "python manage.py test && flake8"" exited with 2.
I find it a bit weird because I didn't see any error when I tried docker-compose up -d with the docker-compose.yml below. Could anyone let me know what is wrong, please?
Dockerfile
FROM python:3.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /api
WORKDIR /api
ADD . /api/
RUN pip3 install --upgrade pip
RUN pip3 install pipenv
RUN pipenv install --skip-lock --system --dev
docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
    ports:
      - "8000:8000"
    volumes:
      - ./app:/api
    command: >
      sh -c "python manage.py runserver 0.0.0.0:8000"
.travis.yml
language: python
python:
  - "3.7"
services:
  - docker
before_script: pip install docker-compose
script:
  - docker-compose run --rm app sh -c "python manage.py test && flake8"
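A useful first check for the manage.py error is to see what the container actually has in its working directory, since the volume ./app:/api mounts over whatever was ADDed at build time:
# List /api as the container sees it
docker-compose run --rm app sh -c "pwd && ls -la"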

Django with Docker - Server not starting

I have followed the steps in the official docker tutorial for getting up and running with django: https://docs.docker.com/compose/django/
It works fine until I have to run docker-compose up
It doesn't directly give me an error, but it won't run the server either, stopping at this point:
(Screenshot of the Docker Quickstart Terminal)
docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: >
      bash -c
      "python3 manage.py migrate
      python3 manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
I am on Windows and have therefore used docker-toolbox.
Thanks for your suggestions!
Start docker-compose in detached mode:
docker-compose up -d
Check your Django container ID:
docker ps
Then log into the container:
docker exec -it yourDjangoContainerID bash
Then go to the directory where the manage.py file is, and run:
python manage.py migrate
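The same thing works as a one-liner, without an interactive shell (assuming the service is named web, as in the compose file above):
# Run migrate inside the running web service container
docker-compose exec web python3 manage.py migrate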
You can put the migration command into your docker-compose.yml file. Something like
web:
  command: >
    bash -c
    "python3 manage.py migrate
    python3 manage.py runserver 0.0.0.0:8000"
replacing
web:
  command: python3 manage.py runserver 0.0.0.0:8000
This will apply migrations every time you do docker-compose up.

Docker compose installing requirements.txt

In my docker image I am cloning the git master branch to retrieve the code. I am using docker-compose for the development environment and running my containers with volumes. I ran into an issue when installing new project requirements from my python requirements.txt file: in the development environment, new requirements never get installed, because requirements are only installed at build time, and re-building the image pulls the latest code from github rather than using my local changes.
Below is an example of my dockerfile:
FROM base
# Clone application
RUN git clone repo-url
# Install application requirements
RUN pip3 install -r app/requirements.txt
# ....
Here is my compose file:
myapp:
  image: development
  env_file: .env
  ports:
    - "8000:80"
  volumes:
    - .:/home/app
  command: python3 manage.py runserver 0.0.0.0:8000
Is there any way to install newly added requirements after build on development?
There are two ways you can do this.
By hand
You can enter the container and do it yourself. Downside: not automated.
$ docker-compose exec myapp bash
2912d2cd9eab# pip3 install -r /home/app/requirements.txt
Using an entrypoint script
You can use an entrypoint script that runs prep work, then runs the command.
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
# ... probably other stuff in here ...
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
cd /home/app
pip3 install -r requirements.txt
# May as well do this too, while we're here.
python3 manage.py migrate
exec "$#"
The entrypoint is run like this at container startup:
/entrypoint.sh $CMD
Which expands to:
/entrypoint.sh python3 manage.py runserver 0.0.0.0:8000
The prep work is run first, then at the end of the entrypoint script, the passed-in argument(s) are exec'd. That's your command, so entrypoint.sh exits and is replaced by your Django app server.
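Because the command is passed to the entrypoint as arguments, any override still gets the prep work first. For example (using the myapp service from the compose file above):
# entrypoint.sh runs pip install and migrate, then exec's the Django shell
docker-compose run --rm myapp python3 manage.py shell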
UPDATE:
After taking comments to chat, it came up that it is important to use exec to run the command, instead of running it at the end of the entrypoint script like this:
python3 manage.py runserver 0.0.0.0:8000
The reason it matters: exec replaces the shell, so your app server runs as PID 1 and receives signals such as the SIGTERM sent by docker stop. Without exec, the shell stays as PID 1, the signal never reaches your app, and the container only dies when Docker gives up and sends SIGKILL.
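A rough way to observe the difference (the container name is a placeholder):
# Without exec, docker stop waits out its grace period (10s by default) before SIGKILL;
# with exec, the app receives SIGTERM directly and exits almost immediately.
time docker stop <container>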
The way I solved this is by running two services:
server: runs the server; depends on requirements
requirements: installs the requirements prior to running the server
And this is how the docker-compose.yml file would look:
version: '3'
services:
  django:
    image: python:3.7-alpine
    volumes:
      - pip37:/usr/local/lib/python3.7/site-packages
      - .:/project
    ports:
      - 8000:8000
    working_dir: /project
    command: python manage.py runserver
    depends_on:
      - requirements
  requirements:
    image: python:3.7-alpine
    volumes:
      - pip37:/usr/local/lib/python3.7/site-packages
      - .:/project
    working_dir: /project
    command: pip install -r requirements.txt
volumes:
  pip37:
    external: true
PS: I created a named volume for the pip modules so I can preserve them across different projects. Since the volume is declared external: true, you have to create it yourself before the first run:
docker volume create pip37

Django app server hangs / won't start in Docker Compose

I am trying to launch a straightforward Django app server in Docker Compose, paired with a Postgres container. It goes through as I would expect, launching the entrypoint script, but it never seems to actually run the Django app server (which should be the last step, and remain running).
I know it runs the entrypoint script, because the migrate step is run. The app server never outputs any of the expected output, and port 8000 never responds.
I am using Docker for Mac (stable), if it matters.
Dockerfile for my Django app container:
FROM ubuntu:16.04
COPY my_app /my_app
RUN apt-get update \
&& apt-get install -y python3 python3-psycopg2 python3-pip
RUN apt-get install -y nodejs npm
WORKDIR /my_app
RUN pip3 install -r requirements.txt
RUN npm install bower
RUN python3 manage.py bower install
RUN python3 manage.py collectstatic --no-input
EXPOSE 8000
COPY entrypoint.sh /
RUN chmod 755 /entrypoint.sh
CMD python3 manage.py runserver 0.0.0.0:8000
ENTRYPOINT ["/entrypoint.sh"]
Django entrypoint script:
#!/bin/sh
# Allow database container to start up or recover from a crash
sleep 10
cd /my_app
# Run any pending migrations
python3 manage.py migrate
exec $@
docker-compose.yml:
version: '2'
services:
  db:
    image: postgres:9.6
    volumes:
      - ./db/pgdata:/pgdata
    environment:
      - POSTGRES_USER=my_user
      - POSTGRES_PASSWORD=my_password
      - PGDATA=/pgdata
      - POSTGRES_DB=my_database
  appserver:
    image: my-image
    command: python3 manage.py runserver 0.0.0.0:8000
    ports:
      - '8000:8000'
    environment:
      - POSTGRES_USER=my_user
      - POSTGRES_PASSWORD=my_password
      - POSTGRES_DB=my_database
    links:
      - db
    depends_on:
      - db
Use the exec form for CMD in your Dockerfile:
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
The entrypoint.sh script's exec is currently trying to run:
/bin/sh -c python3 manage.py runserver 0.0.0.0:8000
which doesn't work: sh -c takes only the first following word as its command string, so this just runs python3 with no arguments. You should also quote the positional parameters variable so the shell maintains each parameter, even if there are spaces:
exec "$@"
But it's best not to have sh in between docker and your app, so always use the exec form for a CMD.
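You can check which form an image was built with (my-image is the image name from the compose file above):
# Shell form is recorded with a /bin/sh -c wrapper; exec form is not
docker inspect --format '{{json .Config.Cmd}}' my-image
# shell form -> ["/bin/sh","-c","python3 manage.py runserver 0.0.0.0:8000"]
# exec form  -> ["python3","manage.py","runserver","0.0.0.0:8000"]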
