I am deploying a Django project on AWS, running Postgres, Redis, Nginx and the project itself in Docker there.
Everything works fine, but when I change something on my local machine, push the changes to git, and then pull them on the AWS instance, the code changes and the files are updated on disk, yet the changes do not show up on the website. Only the static files update automatically (I guess because of Nginx). Here is my docker-compose config:
version: '3.9'
services:
  redis:
    image: redis
    command: redis-server
    ports:
      - "6379:6379"
  postgres:
    image: postgres
    environment:
      - POSTGRES_USER=
      - POSTGRES_PASSWORD=
      - POSTGRES_DB=
    ports:
      - "5432:5432"
  web:
    image: image_name
    build: .
    restart: always
    command: gunicorn project.wsgi:application --bind 0.0.0.0:8000
    env_file:
      - envs/.env.prod
    ports:
      - "8000:8000"
    volumes:
      - ./staticfiles/:/tmp/project/staticfiles
    depends_on:
      - postgres
      - redis
  nginx:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./staticfiles:/home/app/web/staticfiles
      - ./nginx/conf.d:/etc/nginx/conf.d
      - ./nginx/logs:/var/log/nginx
      - ./certbot/www:/var/www/certbot/:ro
      - ./certbot/conf/:/etc/nginx/ssl/:ro
    depends_on:
      - web
Can you please tell me what to do?
I tried deleting everything from Docker and running compose up again, but nothing changed.
I have looked all over here but I still don't understand what is wrong. Restarting the instance does not help either. I also tried clearing the Redis cache, since I have template caching, and still nothing.
After updating the code on the EC2 instance, you need to build a new web docker image from that new code. If you are just restarting things then docker-compose is going to continue to pick up the last docker image you built.
You need to run the following sequence of commands (on the EC2 instance):
docker-compose build web
docker-compose up -d
You are seeing the static files change immediately, without rebuilding the Docker image, because you are mapping those files into the container via a Docker volume.
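If your Compose version supports it, the two steps above can usually be combined into a single command (just a convenience; the two-step sequence does the same thing):
docker-compose up -d --build web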
I found the issue... it was because I had template caching.
If I remove the cache and do what @MarkB suggested, everything updates.
I don't understand why this happens, since I tried flushing the entire Redis cache after the changes, but it solves my issue.
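One possible explanation (purely an assumption on my part, since the settings are not shown): if the template caching is Django's cached template loader rather than fragment caching stored in Redis, the compiled templates are held in the Gunicorn workers' memory, so flushing Redis would not clear them; only restarting or rebuilding the web container would. That loader typically looks something like this in settings.py:
TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [],
        "OPTIONS": {
            "loaders": [
                # compiled templates are cached in each worker's memory, not in Redis
                ("django.template.loaders.cached.Loader", [
                    "django.template.loaders.filesystem.Loader",
                    "django.template.loaders.app_directories.Loader",
                ]),
            ],
        },
    },
]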
I have a Django project with a structure like this:
myapp/
    manage.py
    Dockerfile
    docker-compose.yml
    my-database1.sql
    my-database2.sql
    requirements.txt
    pgadmin/
    pgadmin-data/
    myapp/
        __init__.py
        settings.py
        urls.py
        wsgi.py
This is my docker-compose.yml file:
version: "3.9"
services:
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
      - ./my-database1.sql:/docker-entrypoint-initdb.d/my-database1.sql
      - ./my-database2.sql:/docker-entrypoint-initdb.d/my-database2.sql
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - PGDATA=/var/lib/postgresql/data
  pgadmin:
    image: dpage/pgadmin4:4.18
    restart: unless-stopped
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin@domain.com
      - PGADMIN_DEFAULT_PASSWORD=admin
      - PGADMIN_LISTEN_PORT=80
    ports:
      - "8090:80"
    volumes:
      - ./pgadmin-data:/var/lib/pgadmin
    links:
      - "db:pgsql-server"
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  db-data:
  pgadmin-data:
I have three problems with my app:
1 - How can I import my my-database1.sql and my-database2.sql databases into PostgreSQL? The approach in my code (mapping ./my-database1.sql:/docker-entrypoint-initdb.d/my-database1.sql) doesn't work.
2 - After successfully importing the databases from the previous step, how can I see them inside pgAdmin?
3 - My code should write to tables from my-database1.sql. How should I connect to that database after it has been imported into PostgreSQL?
The postgres image will only attempt to run the files provided inside the /docker-entrypoint-initdb.d directory when it starts with an empty data directory. In your docker-compose.yml configuration you have a persistent volume for the database data, which means Postgres will not take updates to the SQL files into account on later deployments. Something similar happens when one of the scripts fails. Here is the excerpt from the documentation:
Warning: scripts in /docker-entrypoint-initdb.d are only run if you start the container with an empty data directory; any pre-existing database will be left untouched on container startup. One common problem is that if one of your /docker-entrypoint-initdb.d scripts fails (which will cause the entrypoint script to exit) and your orchestrator restarts the container with the already initialized data directory, it will not continue with your scripts.
Check the site documentation to see how you can make your initialization scripts more robust so they can handle failures.
To solve your issue, try deleting the volume manually or by using the -v flag while running docker-compose down, and then redeploy your application:
-v, --volumes Remove named volumes declared in the `volumes`
section of the Compose file and anonymous volumes
attached to containers.
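Concretely, something like the following should force the init scripts to run again. Note that in the compose file above the database data is a bind mount to ./data/db rather than a named volume, so that directory has to be removed by hand as well:
docker-compose down -v        # stop containers, remove named and anonymous volumes
sudo rm -rf ./data/db         # the db service bind-mounts this host folder, so clear it too
docker-compose up -d --build  # redeploy; postgres now starts on an empty data directory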
I have two services, on two different GitLab repositories, deployed to the same host. I am currently using supervisord to run all of the services. The CI/CD for each repository pushes the code to the host.
I am trying to replace supervisord with Docker. What I did was the following:
Set up a Dockerfile for each service.
Created a third repository with only a docker-compose.yml, that runs docker-compose up in its CI to build and run the two services. I expect this repository to only be deployed once.
I am looking for a way to have the docker-compose automatically update when I deploy one of the two services.
Edit: Essentially, I am trying to figure out the best way to use docker-compose with a multi repository setup and one host.
My docker-compose:
version: "3.4"
services:
  redis:
    image: "redis:alpine"
  api:
    build: .
    command: gunicorn -c gunicorn_conf.py --bind 0.0.0.0:5000 --chdir server "app:app" --timeout 120
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - redis
  celery-worker:
    build: .
    command: celery worker -A server.celery_config:celery
    volumes:
      - .:/app
    depends_on:
      - redis
  celery-beat:
    build: .
    command: celery beat -A server.celery_config:celery --loglevel=INFO
    volumes:
      - .:/app
    depends_on:
      - redis
  other-service:
    build: .
    command: python other-service.py
    volumes:
      - .:/other-service
    depends_on:
      - redis
If you're setting this up in the context of a CI system, the docker-compose.yml file should just run the images; it shouldn't also take responsibility for building them.
Do not overwrite the code in a container using volumes:.
You mention each service's repository has a Dockerfile, which is a normal setup. Your CI system should run docker build there (and typically docker push). Then your docker-compose.yml file just needs to mention the image: that the CI system builds:
version: "3.4"
services:
  redis:
    image: "redis:alpine"
  api:
    image: "me/django:${DJANGO_VERSION:-latest}"
    ports:
      - "8000:8000"
    depends_on:
      - redis
  celery-worker:
    image: "me/django:${DJANGO_VERSION:-latest}"
    command: celery worker -A server.celery_config:celery
    depends_on:
      - redis
I hint at docker push above. If you're using Docker Hub, or a cloud-hosted Docker image repository, or are running a private repository, the CI system should run docker push after it builds each image, and (if it's not Docker Hub) the image: lines need to include the repository address.
The other important question here is what to do on rebuilds. I'd recommend giving each build a unique Docker image tag; a timestamp or a source-control commit ID both work well. In the docker-compose.yml file I show above, I use an environment variable to specify the actual image tag, so your CI system can run
DJANGO_VERSION=20200113.1114 docker-compose up -d
Then Compose will know about the changed image tag, and will be able to recreate the containers based on the new images.
(This approach is highly relevant in the context of cluster systems like Kubernetes. Pushing images to a registry is all but required there. In Kubernetes changing the name of an image: triggers a redeployment, so it's also all but required to use a unique image tag per build. Except that there are multiple and more complex YAML files, the overall approach in Kubernetes would be very similar to what I've laid out here.)
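As a rough sketch of what each service's CI job might run (the image name and tag scheme here are placeholders; use whatever your docker-compose.yml references):
TAG=$(git rev-parse --short HEAD)        # or a timestamp
docker build -t me/django:"$TAG" .
docker push me/django:"$TAG"
# then, on the deployment host, with the same tag:
DJANGO_VERSION="$TAG" docker-compose up -d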
I'm running a Django application in Docker with NGINX and Gunicorn on Ubuntu Server. It is a fairly large 'legacy' project, so I am not the one who configured it. Why do I have to run docker-compose down before running docker-compose up -d to reflect the changes I have made to Django's static files? The Dockerfile runs Django's management command to collect the static files across the project directories.
In both cases I ran docker-compose build before putting up or taking down.
Stopping the NGINX container first and then bringing it back up seemed to allow NGINX to find the static files, but why was NGINX unable to serve files added to the other container?
The docker compose file being run:
version: '2'
services:
  nginx:
    image: nginx:latest
    container_name: smi-nginx
    ports:
      - "8080:8080"
    volumes:
      - ./src:/src
      - ./config/nginx:/etc/nginx/conf.d
      - /static:/static
    depends_on:
      - web
  web:
    build: .
    container_name: smi-App
    volumes:
      - /static:/static
    command: bash -c "gunicorn -w 3 -t 14400 --max-requests 75 -b 0.0.0.0:5000 project.wsgi:application"
    environment:
      - APPLEVEL=DEVELOPMENT
      - MachineID=99
      - DbUser=djangouser
      - Password=secret
    expose:
      - "5000"
Thanks!
I have previously been working with Docker, using services to run a website made with Django.
Now I would like to know how I should create a Docker setup to just run Python scripts, without a web server or any website-related services.
An example of the kind of docker-compose file I am used to working with is:
version: '2'
services:
  nginx:
    image: nginx:latest
    container_name: nz01
    ports:
      - "8001:8000"
    volumes:
      - ./src:/src
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - web
  web:
    build: .
    container_name: dz01
    depends_on:
      - db
    volumes:
      - ./src:/src
    expose:
      - "8000"
  db:
    image: postgres:latest
    container_name: pz01
    ports:
      - "5433:5432"
    volumes:
      - postgres_database:/var/lib/postgresql/data:Z
volumes:
  postgres_database:
    external: true
How should the docker-compose.yml file look?
Simply remove everything from your Dockerfile that has nothing to do with your script and start with something simple, like
FROM python:3
ADD my_script.py /
CMD [ "python", "./my_script.py" ]
You do not need Docker Compose for containerizing a single Python script.
The example is taken from this simple tutorial about containerizing Python applications: https://runnable.com/docker/python/dockerize-your-python-application
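To build and run it, something along these lines should work (my-script is just an arbitrary image tag):
docker build -t my-script .
docker run --rm my-script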
You can easily overwrite the command specified in the Dockerfile (via CMD) when starting a container from the image. Just append the desired command to your docker run command, e.g:
docker run IMAGE /path/to/script.py
You can easily run Python interactively without even having to build an image:
docker run -it python
If you want to have access to some code you have written within the container, simply change that to:
docker run -it -v /path/to/code:/app python
Making a Dockerfile is unnecessary for this simple application.
Most Linux distributions come with Python preinstalled. Using Docker here adds significant complexity and I'd pretty strongly advise against Docker just to run a simple script. You can use a virtual environment to isolate a particular Python package's dependencies from the rest of the system.
(There is a pretty consistent stream of SO questions around getting filesystem permissions and user IDs right for scripts that principally want to interact with the host system. Also remember that running docker anything implies root-equivalent permissions. If you don't want Docker's filesystem and user namespace isolation, IMHO it's easier to just not use Docker where it doesn't make sense.)
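For comparison, the no-Docker version of the same workflow is only a few commands (assuming a requirements.txt and a my_script.py; the names are placeholders):
python3 -m venv venv              # create an isolated environment
. venv/bin/activate               # activate it
pip install -r requirements.txt   # install the script's dependencies
python my_script.py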
I am running a Python application as a Docker container, and in my application I use Python's logging module to log execution steps with logger.info, logger.debug and logger.error. The problem is that the log file is only persistent within the Docker container: if the container goes away, the log file is lost, and every time I want to view the log file I have to manually copy it from the container to the local system. What I want is for whatever is written to the container's log file to also be persistent on the local system - either written to a local log file or with the container's log file auto-mounted to the local system.
A few things about my host machine:
I run multiple docker containers on the machine.
My sample docker-core file is:
FROM server-base-v1
ADD . /app
WORKDIR /app
ENV PATH /app:$PATH
CMD ["python","-u","app.py"]
My sample docker-base file is:
FROM python:3
ADD ./setup /app/setup
WORKDIR /app
RUN pip install -r setup/requirements.txt
A sample of my docker-compose.yml file is:
version: "2"
networks:
  server-net:
services:
  mongo:
    container_name: mongodb
    image: mongodb
    hostname: mongodb
    networks:
      - server-net
    volumes:
      - /dockerdata/mongodb:/data/db
    ports:
      - "27017:27017"
      - "28017:28017"
  server-core-v1:
    container_name: server-core-v1
    image: server-core-v1:latest
    depends_on:
      - mongo
    networks:
      - server-net
    ports:
      - "8000:8000"
    volumes:
      - /etc/localtime:/etc/localtime:ro
The yml sample above is just part of my actual yml file. I have multiple server-core-v1 containers (with different names) running in parallel, each with its own log file.
I would also appreciate it if there are better strategies for doing logging in Python with Docker and making it persistent. I read a few articles that mentioned using sys.stderr.write() and sys.stdout.write(), but I am not sure how to use that, especially with multiple containers running and logging.
Volumes are what you need.
You can create volumes that bind an internal container folder to a local system folder, so you can have your application write its logs to a logs folder inside the container and map that folder as a volume to any folder on your local system.
You can specify a volume in the docker-compose.yml file for each service you are creating. See the docs.
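For example, if the application inside server-core-v1 writes its log file somewhere like /app/logs (the container path here is an assumption; use whatever path your logger actually writes to), the service definition could gain one more volume entry:
  server-core-v1:
    ...
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /dockerdata/server-core-v1/logs:/app/logs   # host folder : container log folder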
Bind-mounts are what you need.
Bind mounts are accessible from your host file system. They are very similar to shared folders in a VM architecture.
You can achieve that simply by mounting a host path directly into the container.
In your case:
version: "2"
networks:
  server-net:
services:
  mongo:
    container_name: mongodb
    image: mongodb
    hostname: mongodb
    networks:
      - server-net
    volumes:
      - /dockerdata/mongodb:/data/db
    ports:
      - "27017:27017"
      - "28017:28017"
  server-core-v1:
    container_name: server-core-v1
    image: server-core-v1:latest
    depends_on:
      - mongo
    networks:
      - server-net
    ports:
      - "8000:8000"
    volumes:
      - ./yours/example/host/path:/etc/localtime:ro
Just replace ./yours/example/host/path with the target directory on your host.
In this scenario, I believe the logger is on the server side.
If you are working on Windows, remember to bind a path inside the current user's directory!