Docker Compose installing requirements.txt - Python

In my Docker image I clone the git master branch to retrieve the code. I use docker-compose for the development environment and run my containers with volumes. I ran into an issue when installing new project requirements from my Python requirements.txt file: in development, newly added requirements are never installed, because the code (including requirements.txt) is only pulled from GitHub when the image is rebuilt.
Below is an example of my Dockerfile:
FROM base
# Clone application
RUN git clone repo-url
# Install application requirements
RUN pip3 install -r app/requirements.txt
# ....
Here is my compose file:
myapp:
  image: development
  env_file: .env
  ports:
    - "8000:80"
  volumes:
    - .:/home/app
  command: python3 manage.py runserver 0.0.0.0:8000
Is there any way to install newly added requirements after the build, in the development environment?

There are two ways you can do this.
By hand
You can enter the container and do it yourself. Downside: not automated.
$ docker-compose exec myapp bash
2912d2cd9eab# pip3 install -r /home/app/requirements.txt
Using an entrypoint script
You can use an entrypoint script that runs prep work, then runs the command.
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
# ... probably other stuff in here ...
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
cd /home/app
pip3 install -r requirements.txt
# May as well do this too, while we're here.
python3 manage.py migrate
exec "$#"
The entrypoint is run like this at container startup:
/entrypoint.sh $CMD
Which expands to:
/entrypoint.sh python3 manage.py runserver 0.0.0.0:8000
The prep work is run first; then, at the end of the entrypoint script, the passed-in argument(s) are exec'd. That's your command, so the shell running entrypoint.sh is replaced by your Django app server.
UPDATE:
After taking comments to chat, it came up that it is important to use exec to run the command, instead of running it at the end of the entrypoint script like this:
python3 manage.py runserver 0.0.0.0:8000
The reason is that exec replaces the shell with your command, so the app server becomes PID 1 and receives signals (such as the SIGTERM sent by docker stop) directly. Without exec, the shell remains PID 1, the signal is not forwarded to the server, and Docker eventually kills the container with SIGKILL after its timeout.
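For illustration, here is the same entrypoint with both variants spelled out; only the exec form hands PID 1 to the server (a minimal sketch, reusing the paths from the example above):
#!/bin/sh
# entrypoint.sh -- prep work, then hand off to the CMD.
cd /home/app
pip3 install -r requirements.txt
python3 manage.py migrate

# Without exec, this shell stays alive as PID 1 and the server runs as a
# child process, so "docker stop" signals the shell instead of the server:
#   "$@"
# With exec, the shell is replaced and the server receives SIGTERM directly:
exec "$@"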

The way I solved this is by running two services:
server: runs the app server; depends on requirements
requirements: installs requirements before the server runs
And this is what the docker-compose.yml file would look like:
version: '3'
services:
  django:
    image: python:3.7-alpine
    volumes:
      - pip37:/usr/local/lib/python3.7/site-packages
      - .:/project
    ports:
      - 8000:8000
    working_dir: /project
    command: python manage.py runserver
    depends_on:
      - requirements
  requirements:
    image: python:3.7-alpine
    volumes:
      - pip37:/usr/local/lib/python3.7/site-packages
      - .:/project
    working_dir: /project
    command: pip install -r requirements.txt
volumes:
  pip37:
    external: true
PS: I created a named volume for the pip modules so I can preserve them across different projects. Because the volume is declared external, you have to create it yourself (with the same name as in the compose file) by running:
docker volume create pip37
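One caveat: depends_on only orders container startup; it does not wait for the requirements service to finish, so on a cold start the install can race the server. A hedged workaround is to run the install to completion first:
# Run the one-off install to completion...
docker-compose run --rm requirements
# ...then start the app server:
docker-compose up django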

Related

Can't open localhost in the browser on port given in docker-compose

I am trying to build and run a Django application with docker and docker-compose.
docker-compose build example_app and docker-compose run example_app run without errors, but when I go to http://127.0.0.1:8000/ the page doesn't open; I just get a "page is unavailable" error in the browser.
Here are my Dockerfile, docker-compose.yml and project structure.
Dockerfile
FROM python:3.9-buster
RUN mkdir app
WORKDIR /app
COPY ./requirements.txt /app/requirements.txt
COPY ./requirements_dev.txt /app/requirements_dev.txt
RUN pip install --upgrade pip
RUN pip install -r /app/requirements.txt
docker-compose.yml
version: '3'
services:
  example_app:
    image: example_app
    build:
      context: ../
      dockerfile: ./docker/Dockerfile
    command: bash -c "cd app_examples/drf_example && python manage.py runserver"
    volumes:
      - ..:/app
    ports:
      - 8000:8000
project structure:
── app
── app_examples/drf_example/
    ── manage.py
    ── api
    ── drf_example
── requirements.txt
── requirements_dev.txt
── docker/
    ── docker-compose.yml
    ── Dockerfile
By default, Django apps bind to 127.0.0.1 meaning that they'll only accept connections from the local machine. In a container context, the local machine is the container, so your app won't accept connections from outside the container.
To get it to accept connections from anywhere, you add the bind address to the runserver command. In your case, you'd change the command in your docker-compose.yml file to
command: bash -c "cd app_examples/drf_example && python manage.py runserver 0.0.0.0:8000"
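After applying the change and starting the service with docker-compose up, a quick way to confirm the server is reachable from the host (given the 8000:8000 mapping above):
# From the host, outside the container:
curl -I http://127.0.0.1:8000/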
You need to expose port 8000 in your Dockerfile:
FROM python:3.9-buster
EXPOSE 8000
RUN mkdir app
WORKDIR /app
COPY ./requirements.txt /app/requirements.txt
COPY ./requirements_dev.txt /app/requirements_dev.txt
RUN pip install --upgrade pip
RUN pip install -r /app/requirements.txt

Inside Docker Container - python: can't open file './services/web/manage.py': [Errno 2] No such file or directory

I am trying to create two Docker containers:
one for my web API
one for the PostgreSQL DB
I am using docker-compose to build these containers. Even though I can build them successfully with the docker-compose build command, when I inspect the logs with the docker-compose logs -f command, I get the following error message:
...
db_1 | 2020-08-19 12:39:07.681 UTC [45] LOG: database system was shut down at 2020-08-19 12:39:07 UTC
db_1 | 2020-08-19 12:39:07.686 UTC [1] LOG: database system is ready to accept connections
web_1 | python: can't open file 'manage.py': [Errno 2] No such file or directory
nlp-influencertextanalysis_web_1 exited with code 2
Everything seems fine with the db container, but for some reason, inside the web container, Python cannot locate the manage.py file. Here is my file structure:
And here is the code for my docker-compose.yml:
version: '3.7'
services:
  web:
    build: ./services/web
    command: python manage.py run -h 0.0.0.0
    volumes:
      - ./services/web/:/usr/src/app/
    ports:
      - 5000:5000
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:12-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=user1
      - POSTGRES_PASSWORD=test123
      - POSTGRES_DB=influencer_analysis
volumes:
  postgres_data:
And here is my code for Dockerfile:
FROM python:3.8.1-slim-buster AS training
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install system dependencies
RUN apt-get update && apt-get install -y netcat
RUN pip install --upgrade pip
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# install NLTK dependencies
RUN python -c "import nltk; nltk.download('punkt')"
# copy project
COPY . /usr/src/app/
WORKDIR /usr/src/app/experiments
RUN python train.py --data data/HaInstagramPostDetails.xlsx --update 1
I should note that I've printed out all the files located in /usr/src/app when train.py is executed by the RUN command in the Dockerfile, and manage.py is there.
I believe the problem is that you have changed the working directory at the end of your Dockerfile.
You can either give the exact path to your manage.py file, or change the working directory at the end of the Dockerfile back to the directory that contains it.
I think there is a problem with how the working directory is changed. Based on the folder structure, it should have been WORKDIR /usr/src/app/web/experiments.
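A quick, hedged way to debug this kind of mismatch is to look around from inside the container (service name web as in the compose file above):
# Print the directory the command runs in, then look for manage.py:
docker-compose run --rm web sh -c 'pwd; find /usr/src/app -maxdepth 2 -name manage.py'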

Missing Environment Vars in docker python:3 with docker-compose

Though my configuration looks good, my python:3 image does not seem to have the expected DJANGO_SECRET_KEY set, at least at the point where the Dockerfile attempts to run migrations:
$ docker-compose config
services:
  api:
    build:
      context: /Users/ben/Projects/falcon/falcon-backend
      dockerfile: Dockerfile
    depends_on:
      - db
      - redis
    environment:
      DJANGO_SECRET_KEY: 'some-secret-that-works-elsewhere'
$
$ docker-compose up --build api
[...]
Step 6/7 : RUN echo `$DJANGO_SECRET_KEY`
---> Running in fbfb569c0191
[...]
django.core.exceptions.ImproperlyConfigured: Set the DJANGO_SECRET_KEY env variable
ERROR: Service 'api' failed to build: The command '/bin/sh -c python manage.py migrate' returned a non-zero code: 1
However, the final line,
CMD python manage.py runserver 0.0.0.0:8001 --settings=falcon.settings.dev-microservice
does start up as desired, with the necessary env vars set.
# Dockerfile -- api
FROM python:3
RUN pip3 -q install -r requirements.txt
RUN echo `$DJANGO_SECRET_KEY`
RUN python manage.py migrate --settings=falcon.settings.dev-microservice # <-- why does this not work
CMD python manage.py runserver 0.0.0.0:8001 --settings=falcon.settings.dev-microservice
Why does the penultimate line of the Dockerfile fail due to an unset environment variable while the final one works as expected?
Environment variables that are not declared inside the Dockerfile are not visible while the image is being built; they are only passed to the container at runtime. Since the RUN instruction executes at build time, the DJANGO_SECRET_KEY variable declared outside the Dockerfile won't be visible to the RUN command.
To solve the issue you can declare the env variable inside the Dockerfile and set it via a build argument:
FROM python:3
RUN pip3 -q install -r requirements.txt
ARG key
ENV DJANGO_SECRET_KEY=$key
RUN echo `$DJANGO_SECRET_KEY`
RUN python manage.py migrate --settings=falcon.settings.dev-microservice
CMD python manage.py runserver 0.0.0.0:8001 --settings=falcon.settings.dev-microservice
Then, modify the compose file accordingly:
build:
  context: /Users/ben/Projects/falcon/falcon-backend
  dockerfile: Dockerfile
  args:
    - key='secret-key'
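The same build argument can also be passed on the plain docker CLI, a hedged equivalent of the compose args mapping above:
# Pass the secret as a build argument (the value is a placeholder):
docker build --build-arg key='secret-key' /Users/ben/Projects/falcon/falcon-backend
Note that build arguments are recorded in the image history (visible via docker history), so this approach is not a good fit for real secrets.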
RUN is used only when building the image. CMD is the command that is started when you start a container from your image. Running migrate while building the image is wrong: migrate builds your database, and you want to run it each time before runserver.
# Dockerfile -- api
FROM python:3
RUN pip3 -q install -r requirements.txt
RUN echo `$DJANGO_SECRET_KEY`
CMD /bin/bash -c "python manage.py migrate --settings=falcon.settings.dev-microservice && python manage.py runserver 0.0.0.0:8001 --settings=falcon.settings.dev-microservice"
This is the proper way to start Django in Docker, because you want to run the migrations in production when starting the server, not on your PC when building the image.
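If you'd rather not chain commands in CMD, a hedged alternative is to run migrations as a one-off task against the same image (service name api as in the compose output above):
# Run migrations in a throwaway container, then start the service:
docker-compose run --rm api python manage.py migrate --settings=falcon.settings.dev-microservice
docker-compose up api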

Docker for windows10 run django fail: Can't open file 'manage.py': [Errno 2] No such file or directory

I just started a sample Django app and am using Docker to run it. My Dockerfile looks like this:
FROM python:3.5
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
My docker-compose.yml file:
version: '2'
services:
  django:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
When I run the docker-compose up command, it builds successfully but fails to run the command python manage.py runserver 0.0.0.0:8000, complaining: python: can't open file 'manage.py': [Errno 2] No such file or directory.
Is this a bug in Docker for Windows? I just followed the Docker docs, Quickstart: Docker Compose and Django.
Thanks for your help!
I think you either missed this step: docker-compose run web django-admin.py startproject composeexample . or you're using a directory that isn't available to the Virtual Machine that is running docker.
If it works when you remove volumes: .:/code from the Compose file, then you know the issue is the volumes.
I believe that, by default, only the user's home directory is shared with the VM, so if you create a project outside of that tree, the files won't be accessible through volumes.
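A hedged way to check whether the bind mount is actually visible (service name django as in the compose file above):
# List what the container sees at /code; an empty listing points at the mount:
docker-compose run --rm django ls -la /code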

Django app server hangs / won't start in Docker Compose

I am trying to launch a straightforward Django app server in Docker Compose, paired with a Postgres container. It goes through as I would expect, launching the entrypoint script, but it never seems to actually run the Django app server (which should be the last step, and remain running).
I know it runs the entrypoint script, because the migrate step is run. The app server never outputs any of the expected output, and port 8000 never responds.
I am using Docker for Mac (stable), if it matters.
Dockerfile for my Django app container:
FROM ubuntu:16.04
COPY my_app /my_app
RUN apt-get update \
&& apt-get install -y python3 python3-psycopg2 python3-pip
RUN apt-get install -y nodejs npm
WORKDIR /my_app
RUN pip3 install -r requirements.txt
RUN npm install bower
RUN python3 manage.py bower install
RUN python3 manage.py collectstatic --no-input
EXPOSE 8000
COPY entrypoint.sh /
RUN chmod 755 /entrypoint.sh
CMD python3 manage.py runserver 0.0.0.0:8000
ENTRYPOINT ["/entrypoint.sh"]
Django entrypoint script:
#!/bin/sh
# Allow database container to start up or recover from a crash
sleep 10
cd /my_app
# Run any pending migrations
python3 manage.py migrate
exec $@
docker-compose.yml:
version: '2'
services:
  db:
    image: postgres:9.6
    volumes:
      - ./db/pgdata:/pgdata
    environment:
      - POSTGRES_USER=my_user
      - POSTGRES_PASSWORD=my_password
      - PGDATA=/pgdata
      - POSTGRES_DB=my_database
  appserver:
    image: my-image
    command: python3 manage.py runserver 0.0.0.0:8000
    ports:
      - '8000:8000'
    environment:
      - POSTGRES_USER=my_user
      - POSTGRES_PASSWORD=my_password
      - POSTGRES_DB=my_database
    links:
      - db
    depends_on:
      - db
Use the exec form for CMD in your Dockerfile
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
The entrypoint.sh script's exec is currently trying to run:
/bin/sh -c python3 manage.py runserver 0.0.0.0:8000
This doesn't work because sh -c takes only the word immediately after -c (python3) as the command string, so it ends up running a bare python3 with no arguments.
You should quote the positional parameters variable so the shell maintains each parameter, even if there are spaces.
exec "$#"
But it's best not to have sh in between docker and your app, so always use the exec form for a CMD.
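For illustration, a small shell sketch of what the entrypoint's positional parameters look like with a shell-form CMD, and why quoting "$@" preserves them (values taken from the example above):
#!/bin/sh
# Simulate the arguments an ENTRYPOINT script receives from a shell-form CMD:
set -- /bin/sh -c "python3 manage.py runserver 0.0.0.0:8000"

echo "number of arguments: $#"   # 3
printf 'arg: [%s]\n' "$@"        # quoted: the command string stays intact
printf 'arg: [%s]\n' $@          # unquoted: it is split on whitespace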
