I have followed the steps in the official Docker tutorial for getting up and running with Django: https://docs.docker.com/compose/django/
It works fine until I have to run docker-compose up. It doesn't directly give me an error, but it won't run the server either, stopping at this point:
(Screenshot of the Docker Quickstart Terminal)
docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: >
      bash -c "python3 manage.py migrate &&
      python3 manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
I am on Windows and have therefore used Docker Toolbox.
Thanks for your suggestions!
Start docker-compose in detached mode:
docker-compose up -d
Check your Django container ID:
docker ps
Then log into the container:
docker exec -it yourDjangoContainerID bash
Then go to the directory where the manage.py file is, and type:
python manage.py migrate
You can put the migration command into your docker-compose.yml file, something like:
web:
  command: >
    bash -c "python3 manage.py migrate &&
    python3 manage.py runserver 0.0.0.0:8000"
replacing
web:
  command: python3 manage.py runserver 0.0.0.0:8000
This will apply migrations every time you do docker-compose up.
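One caveat: migrate can still start before Postgres is ready to accept connections. A hedged variation (service names taken from the compose file above; pg_isready ships in the postgres image; this requires a Compose version that supports depends_on conditions) waits for the database with a healthcheck:

```yaml
services:
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 2s
      retries: 15
  web:
    build: .
    command: >
      bash -c "python3 manage.py migrate &&
      python3 manage.py runserver 0.0.0.0:8000"
    depends_on:
      db:
        condition: service_healthy
```

With this in place, Compose only starts the web service once pg_isready reports the database as accepting connections.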
Related
We've been using Python 3 and Docker as our framework. Our main issue is that when we run the Docker container, it redirects us to the browser but the website cannot be reached. It does work, however, when we run python manage.py runserver manually from the VS Code terminal.
Here is the docker-compose.yml file:
version: "2.12.2"
services:
  web:
    tty: true
    build:
      dockerfile: Dockerfile
      context: .
    command: bash -c "cd happy_traveller && python manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    restart: always
The Dockerfile:
FROM python:3.10
EXPOSE 8000
WORKDIR /
COPY happy_traveller .
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
And the app structure:
App_Folder
|_ happy_traveller
|  |_ API
|  |_ paycache
|  |_ core
|  |_ settings
|  |_ templates
|  |_ folder
|  |_ folder
|  |_ folder
|  |_ manage.py
|_ dockerfile
|_ docker-compose.yml
|_ requirements.txt
|_ readmme.md
|_ get-pip.py
We would really appreciate the help. Thank you for your time.
Since you copied the source folder (happy_traveller) in your Dockerfile, you don't need to run the cd command again, so the docker-compose file would look like this:
version: "2.12.2"
services:
  web:
    tty: true
    build:
      dockerfile: Dockerfile
      context: .
    command: bash -c "python manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    restart: always
I have created a simple Dockerfile for a Django project, and when I issue docker run, I am able to access it through the browser:
docker run -p 8000:8000 s3bucket-ms:1
Here is the Dockerfile:
FROM python:3.6.7-alpine
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN pip install --upgrade pip
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
RUN pip install -r requirements.txt
RUN mkdir /app
WORKDIR /app
ADD ./s3bucket /app/
EXPOSE 8000
CMD ["python", "./manage.py", "runserver", "0.0.0.0:8000"]
However, when I am using Docker Compose, I can't access the project through the browser.
Here is the docker-compose.yml:
version: '3'
services:
  web:
    build: .
    command: python ./manage.py runserver 8000
    ports:
      - "8000:8000"
With Docker Compose, I also commented out the CMD in the Dockerfile.
Output from docker-compose up:
web_1 | Run 'python manage.py migrate' to apply them.
web_1 | February 17, 2020 - 14:29:22
web_1 | Django version 3.0.3, using settings 's3bucket.settings'
web_1 | Starting development server at http://127.0.0.1:8000/
web_1 | Quit the server with CONTROL-C.
Am I missing something? Any help is appreciated.
Thanks,
Add EXPOSE 8000 to your Dockerfile.
Update1:
With docker run (use an absolute path for the volume source):
docker run \
  -v /path/to/s3bucket:/app \
  -p 8000:8000 \
  s3bucket-ms:1
With docker-compose:
version: '3'
services:
  web:
    build: .
    command: python ./manage.py runserver 8000
    volumes:
      - /path/to/s3bucket:/app  # absolute path
    ports:
      - "8000:8000"
More info at https://docs.docker.com/storage/volumes/
My Dockerfile is:
FROM python:3.5
RUN apt-get update
USER root
WORKDIR /app
ADD . /app
RUN pip install -r requirements.txt
WORKDIR /app/etalentNET
EXPOSE 8000
CMD ["python", "manage.py", "makemigrations"]
CMD ["python", "manage.py", "migrate"]
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
My docker-compose.yaml is:
version: '3'
services:
  db:
    image: sqlite3
  web:
    build:
      image: demo:latest
      dockerfile: Dockerfile
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
When I run docker run -p 8000:8000 demo it doesn't do anything.
CONTAINER ID IMAGE COMMAND CREATED STATUS
4829d420c560 demo "python manage.py ru…" 9 minutes ago Exited (0)
But when I run docker run -p 8000:8000 -it demo bash and then run python manage.py runserver 0.0.0.0:8000 inside, the server starts running (but I can't access it via <host_ip>:8000; I don't know why).
I'm running in a Google Cloud Compute Engine virtual machine with Ubuntu 16.04, and Django-2.0.6.
Put ALLOWED_HOSTS = ['*'] in settings.py.
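For reference, the setting is ALLOWED_HOSTS (a list). A minimal, development-only sketch of the relevant settings.py line:

```python
# settings.py (development only): Django rejects requests whose Host
# header is not matched by ALLOWED_HOSTS, returning HTTP 400, which is
# why the server was unreachable via <host_ip>:8000.
ALLOWED_HOSTS = ['*']  # in production, list the real hostnames instead
```

The wildcard disables host-header validation entirely, so keep it out of production settings.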
In my Docker image I clone the git master branch to retrieve the code. I am using docker-compose for the development environment and running my containers with volumes. I ran across an issue when installing new project requirements from my Python requirements.txt file: new requirements are never installed in the development environment, because they are only installed when the image is rebuilt, at which point the latest code is pulled from GitHub.
Below is an example of my dockerfile:
FROM base
# Clone application
RUN git clone repo-url
# Install application requirements
RUN pip3 install -r app/requirements.txt
# ....
Here is my compose file:
myapp:
  image: development
  env_file: .env
  ports:
    - "8000:80"
  volumes:
    - .:/home/app
  command: python3 manage.py runserver 0.0.0.0:8000
Is there any way to install newly added requirements after build on development?
There are two ways you can do this.
By hand
You can enter the container and do it yourself. Downside: not automated.
$ docker-compose exec myapp bash
2912d2cd9eab# pip3 install -r /home/app/requirements.txt
Using an entrypoint script
You can use an entrypoint script that runs prep work, then runs the command.
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
# ... probably other stuff in here ...
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
cd /home/app
pip3 install -r requirements.txt
# May as well do this too, while we're here.
python3 manage.py migrate
exec "$@"
The entrypoint is run like this at container startup:
/entrypoint.sh $CMD
Which expands to:
/entrypoint.sh python3 manage.py runserver 0.0.0.0:8000
The prep work is run first, then at the end of the entrypoint script, the passed-in argument(s) are exec'd. That's your command, so entrypoint.sh exits and is replaced by your Django app server.
UPDATE:
After taking comments to chat, it came up that it is important to use exec to run the command, instead of running it at the end of the entrypoint script like this:
python3 manage.py runserver 0.0.0.0:8000
The reason it matters: exec replaces the entrypoint shell with your command, so the app server becomes PID 1 and receives signals (such as the SIGTERM sent by docker stop) directly. Without exec, the shell remains PID 1 and does not forward signals, so the container will not shut down cleanly. You need to exec the command or it will not work properly.
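The effect of exec can be seen outside Docker with a small sketch (plain sh and echo stand in for the entrypoint and the Django command; the names are illustrative):

```python
import subprocess

# A stand-in entrypoint: do prep work, then exec the passed-in command.
# exec replaces the shell, so the command inherits the shell's PID --
# inside a container that means it becomes PID 1 and receives signals.
entrypoint = 'echo prep-done; exec "$@"'

result = subprocess.run(
    ["sh", "-c", entrypoint, "entrypoint.sh", "echo", "hello-from-cmd"],
    capture_output=True,
    text=True,
)
print(result.stdout)  # prep output first, then the exec'd command's output
```

The prep line prints, then the exec'd command replaces the shell and produces the remaining output, exactly as the entrypoint pattern above does with the Django server.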
The way I solved this is by running two services:
server: runs the server; depends on requirements
requirements: installs the requirements prior to running the server
And this is what the docker-compose.yml file would look like:
version: '3'
services:
  django:
    image: python:3.7-alpine
    volumes:
      - pip37:/usr/local/lib/python3.7/site-packages
      - .:/project
    ports:
      - 8000:8000
    working_dir: /project
    command: python manage.py runserver
    depends_on:
      - requirements
  requirements:
    image: python:3.7-alpine
    volumes:
      - pip37:/usr/local/lib/python3.7/site-packages
      - .:/project
    working_dir: /project
    command: pip install -r requirements.txt
volumes:
  pip37:
    external: true
PS: I created a named volume for the pip modules so I can preserve them across different projects. You can create one yourself by running:
docker volume create mypipivolume
I am trying to launch a straightforward Django app server in Docker Compose, paired with a Postgres container. It goes through as I would expect, launching the entrypoint script, but it never seems to actually run the Django app server (which should be the last step, and remain running).
I know it runs the entrypoint script, because the migrate step is run. The app server never outputs any of the expected output, and port 8000 never responds.
I am using Docker for Mac (stable), if it matters.
Dockerfile for my Django app container:
FROM ubuntu:16.04
COPY my_app /my_app
RUN apt-get update \
&& apt-get install -y python3 python3-psycopg2 python3-pip
RUN apt-get install -y nodejs npm
WORKDIR /my_app
RUN pip3 install -r requirements.txt
RUN npm install bower
RUN python3 manage.py bower install
RUN python3 manage.py collectstatic --no-input
EXPOSE 8000
COPY entrypoint.sh /
RUN chmod 755 /entrypoint.sh
CMD python3 manage.py runserver 0.0.0.0:8000
ENTRYPOINT ["/entrypoint.sh"]
Django entrypoint script:
#!/bin/sh
# Allow database container to start up or recover from a crash
sleep 10
cd /my_app
# Run any pending migrations
python3 manage.py migrate
exec $@
docker-compose.yml:
version: '2'
services:
  db:
    image: postgres:9.6
    volumes:
      - ./db/pgdata:/pgdata
    environment:
      - POSTGRES_USER=my_user
      - POSTGRES_PASSWORD=my_password
      - PGDATA=/pgdata
      - POSTGRES_DB=my_database
  appserver:
    image: my-image
    command: python3 manage.py runserver 0.0.0.0:8000
    ports:
      - '8000:8000'
    environment:
      - POSTGRES_USER=my_user
      - POSTGRES_PASSWORD=my_password
      - POSTGRES_DB=my_database
    links:
      - db
    depends_on:
      - db
Use the exec form for CMD in your Dockerfile:
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
The entrypoint.sh script's exec is currently trying to run:
/bin/sh -c python3 manage.py runserver 0.0.0.0:8000
which doesn't work as intended; it ends up running just python3.
You should also quote the positional parameters variable so the shell maintains each parameter, even if it contains spaces:
exec "$@"
But it's best not to have sh in between docker and your app, so always use the exec form for CMD.
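The sh -c behavior described above can be reproduced outside Docker (a minimal sketch; echo stands in for python3):

```python
import subprocess

# With `sh -c`, only the first argument after -c is the command string;
# any further arguments become the positional parameters $0, $1, ...
# So `sh -c python3 manage.py runserver` runs plain `python3`, and the
# remaining words are silently absorbed as parameters.
result = subprocess.run(
    ["sh", "-c", "echo running: $0 $1", "manage.py", "runserver"],
    capture_output=True,
    text=True,
)
print(result.stdout)
```

Here echo runs as the command while "manage.py" and "runserver" arrive only as $0 and $1, mirroring how the shell-form CMD starved python3 of its arguments.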