My Dockerfile is:
FROM python:3.5
RUN apt-get update
USER root
WORKDIR /app
ADD . /app
RUN pip install -r requirements.txt
WORKDIR /app/etalentNET
EXPOSE 8000
CMD ["python", "manage.py", "makemigrations"]
CMD ["python", "manage.py", "migrate"]
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
My docker-compose.yaml is:
version: '3'
services:
  db:
    image: sqlite3
  web:
    build:
      image: demo:latest
      dockerfile: Dockerfile
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
When I run docker run -p 8000:8000 demo, it doesn't do anything; the container just exits:
CONTAINER ID IMAGE COMMAND CREATED STATUS
4829d420c560 demo "python manage.py ru…" 9 minutes ago Exited (0)
But when I run docker run -p 8000:8000 -it demo bash and then run python manage.py runserver 0.0.0.0:8000 inside the container, the server starts running (but I can't access it via <host_ip>:8000, and I don't know why).
I'm running on a Google Cloud Compute Engine virtual machine with Ubuntu 16.04 and Django 2.0.6.
Put ALLOWED_HOSTS = ['*'] in settings.py.
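Spelled out as the actual Django setting, that is (a development-only sketch; a wildcard host list should not ship to production):

# settings.py
ALLOWED_HOSTS = ['*']  # accept any Host header, so <host_ip>:8000 is allowed (development only)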
Related
We've been using Python 3 and Docker for our project. Our main issue is that when we try to run the Docker container, it redirects us to the browser but the website cannot be reached. It does work when we run python manage.py runserver manually from the VS Code terminal.
Here is the docker-compose.yml file:
version: "2.12.2"
services:
web:
tty: true
build:
dockerfile: Dockerfile
context: .
command: bash -c "cd happy_traveller && python manage.py runserver 0.0.0.0:8000 "
ports:
\- 8000:8000
restart: always
And here is the Dockerfile:
FROM python:3.10
EXPOSE 8000
WORKDIR /
COPY happy_traveller .
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
And here is the app structure:
|_App_Folder
  |_happy_traveller
    |_API
    |_paycache
    |_core
    |_settings
    |_templates
    |_folder
    |_folder
    |_folder
    |_manage.py
  |_dockerfile
  |_docker-compose.yml
  |_requirements.txt
  |_readme.md
  |_get-pip.py
We would really appreciate the help. Thank you for your time.
Since you already copied the source folder (happy_traveller) in your Dockerfile, you don't need to run the cd command again, so the docker-compose file would look like this:
version: "2.12.2"
services:
  web:
    tty: true
    build:
      dockerfile: Dockerfile
      context: .
    command: bash -c "python manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    restart: always
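Equivalently, you could set a working directory in the Dockerfile and keep the compose command short. A minimal sketch under the same layout (the /app path and the reordered COPY lines are my own choices here, not taken from the original files):

FROM python:3.10
EXPOSE 8000
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY happy_traveller .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

With a CMD like this, the command: line in docker-compose.yml can be dropped entirely.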
I have created a simple Dockerfile for a Django project, and when I issue docker run I am able to access it through the browser.
docker run -p 8000:8000 s3bucket-ms:1
Here is the Dockerfile:
FROM python:3.6.7-alpine
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN pip install --upgrade pip
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
RUN pip install -r requirements.txt
RUN mkdir /app
WORKDIR /app
ADD ./s3bucket /app/
EXPOSE 8000
CMD ["python", "./manage.py", "runserver", "0.0.0.0:8000"]
However, when I am using Docker Compose, I can't access the project through the browser.
Here is the docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: python ./manage.py runserver 8000
    ports:
      - "8000:8000"
With Docker Compose, I also commented out the CMD in the Dockerfile.
Output from docker-compose up:
web_1 | Run 'python manage.py migrate' to apply them.
web_1 | February 17, 2020 - 14:29:22
web_1 | Django version 3.0.3, using settings 's3bucket.settings'
web_1 | Starting development server at http://127.0.0.1:8000/
web_1 | Quit the server with CONTROL-C.
Am I missing something? Any help is appreciated.
Thanks,
Add EXPOSE 8000 to your Dockerfile.
Update 1:
With docker run:
# use an absolute path for the bind mount
docker run \
    -v /path/to/s3bucket:/app \
    -p 8000:8000 \
    s3bucket-ms:1
With docker-compose:
version: '3'
services:
  web:
    build: .
    command: python ./manage.py runserver 8000
    volumes:
      - /path/to/s3bucket:/app  # absolute path
    ports:
      - "8000:8000"
More info at https://docs.docker.com/storage/volumes/
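Going by the docker-compose up output above, there is likely a second issue: the server reports Starting development server at http://127.0.0.1:8000/, and a process bound to 127.0.0.1 inside a container is not reachable through the published port. Mirroring the Dockerfile's CMD and binding to 0.0.0.0 in the compose command should address that part:

    command: python ./manage.py runserver 0.0.0.0:8000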
Here's the deal:
I want to create an image based on python:latest, for Django development.
I want to create the Django project INSIDE THE CONTAINER and make it reflect on a host folder, via docker volumes.
I want to use the python interpreter from the container for development.
This way, I can have only my Dockerfile, docker-compose.yml and requirements.txt in my project folder, without depending on Python, virtualenvs or anything like that on my host.
Here's my Dockerfile:
FROM python:latest
ARG requirements=requirements/production.txt
COPY ./app /app
WORKDIR /app
RUN pip install --upgrade pip && \
pip install --no-cache-dir -r $requirements && \
django-admin startproject myproject .
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
And here's my docker-compose.yml:
version: '3'
services:
  web:
    build:
      context: .
      args:
        requirements: requirements/development.txt
    networks:
      - main
    depends_on:
      - postgres
    environment:
      - PYTHONUNBUFFERED=1
    ports:
      - "8000:8000"
    volumes:
      - "./app:/app:rw"
  postgres:
    image: postgres:latest
    networks:
      - main
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=123
    volumes:
      - ./data:/var/lib/postgresql/data
networks:
  main:
The main issue is the volumes entry in web. If I build the image via docker build -t somename:sometag . the build works fine, and if I run docker run -it my_image bash it shows me all the files created inside /app.
But if I try docker-compose up, the web service fails, saying that it could not find manage.py, and exits with code 2. Only Postgres is running after that.
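A quick way to compare what the container sees with and without the bind mount (a sketch; my_image stands for whatever tag the image was built with):

# without the mount: the project generated at build time is in the image
docker run --rm my_image ls /app
# with the still-empty host folder mounted over /app, those build-time files are hidden
docker run --rm -v "$PWD/app:/app" my_image ls /app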
So, finally, my questions are: is this kind of workflow possible? Is it the best option, given that it does not depend on Python on the host?
Thanks a lot.
I have followed the steps in the official docker tutorial for getting up and running with django: https://docs.docker.com/compose/django/
It works fine until I have to run docker-compose up
It doesn't directly give me an error, but it won't run the server either, stopping at this point:
(Screenshot of the Docker Quickstart Terminal)
docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: >
      bash -c
      "python3 manage.py migrate
      python3 manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
I am on Windows and have therefore used docker-toolbox.
Thanks for your suggestions!
Start docker-compose in detached mode:
docker-compose up -d
Check your Django container ID:
docker ps
Then log into the container:
docker exec -it yourDjangoContainerID bash
Then go to the directory where the manage.py file is and run:
python manage.py migrate
You can put the migration command into your docker-compose.yml file. Something like:
web:
  command: >
    bash -c
    "python3 manage.py migrate
    && python3 manage.py runserver 0.0.0.0:8000"
replacing
web:
  command: python3 manage.py runserver 0.0.0.0:8000
This will apply migrations every time you do docker-compose up.
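Because the folded > scalar joins those lines with spaces before handing the string to bash, the && is what keeps migrate and runserver as two separate commands. An equivalent single-line form, if you find it easier to read:

web:
  command: bash -c "python3 manage.py migrate && python3 manage.py runserver 0.0.0.0:8000"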
I am trying to launch a straightforward Django app server in Docker Compose, paired with a Postgres container. It goes through as I would expect, launching the entrypoint script, but it never seems to actually run the Django app server (which should be the last step, and remain running).
I know it runs the entrypoint script, because the migrate step is run. The app server never outputs any of the expected output, and port 8000 never responds.
I am using Docker for Mac (stable), if it matters.
Dockerfile for my Django app container:
FROM ubuntu:16.04
COPY my_app /my_app
RUN apt-get update \
&& apt-get install -y python3 python3-psycopg2 python3-pip
RUN apt-get install -y nodejs npm
WORKDIR /my_app
RUN pip3 install -r requirements.txt
RUN npm install bower
RUN python3 manage.py bower install
RUN python3 manage.py collectstatic --no-input
EXPOSE 8000
COPY entrypoint.sh /
RUN chmod 755 /entrypoint.sh
CMD python3 manage.py runserver 0.0.0.0:8000
ENTRYPOINT ["/entrypoint.sh"]
Django entrypoint script:
#!/bin/sh
# Allow database container to start up or recover from a crash
sleep 10
cd /my_app
# Run any pending migrations
python3 manage.py migrate
exec $@
docker-compose.yml:
version: '2'
services:
  db:
    image: postgres:9.6
    volumes:
      - ./db/pgdata:/pgdata
    environment:
      - POSTGRES_USER=my_user
      - POSTGRES_PASSWORD=my_password
      - PGDATA=/pgdata
      - POSTGRES_DB=my_database
  appserver:
    image: my-image
    command: python3 manage.py runserver 0.0.0.0:8000
    ports:
      - '8000:8000'
    environment:
      - POSTGRES_USER=my_user
      - POSTGRES_PASSWORD=my_password
      - POSTGRES_DB=my_database
    links:
      - db
    depends_on:
      - db
Use the exec form for CMD in your Dockerfile:
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
The entrypoint.sh script's exec is currently trying to run:
/bin/sh -c python3 manage.py runserver 0.0.0.0:8000
which doesn't seem to work; I think it ends up just running python3.
You should quote the positional parameters variable so the shell maintains each parameter, even if there are spaces.
exec "$#"
But it's best not to have sh sitting in between Docker and your app, so always use the exec form for CMD.
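Putting both fixes together, the tail of the Dockerfile and the entrypoint script would look roughly like this (a sketch based on the files above, not a tested drop-in):

# Dockerfile (tail): exec-form CMD, so Docker doesn't wrap it in /bin/sh -c
COPY entrypoint.sh /
RUN chmod 755 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]

#!/bin/sh
# entrypoint.sh
sleep 10                    # crude wait for the database container to come up
cd /my_app
python3 manage.py migrate   # apply any pending migrations
exec "$@"                   # hand off to the CMD, quoted so each argument is preserved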