I am building a Python app and I am trying to run it inside Docker. (My app works completely in a virtualenv, with no Docker.)
My app consists of a config.py file and other files that make everything work but are unimportant here.
My config file looks like this:
SQLALCHEMY_DATABASE_URI = "postgresql+psycopg2://gimtest:gimtest@localhost:5432/gimtest-users"
The app requires PostgreSQL to be running before it starts, so I tried using Docker Compose. My docker-compose.yml looks like this:
version: '2'
services:
  web:
    build: ./web
    ports:
      - "5000:5000"
    depends_on:
      - postgres
    links:
      - postgres
      - elastic
    expose:
      - 5000
    command: "python run.py"
  postgres:
    image: postgres:9.5
    expose:
      - 5432
    environment:
      POSTGRES_USER: "gimtest"
      POSTGRES_PASSWORD: "gimtest"
      POSTGRES_DB: "gimtest-users"
  elastic:
    image: elasticsearch:2.3
Now the problem is that I can't connect to the PostgreSQL database. At least, that's what Compose reports: the Python app throws an error saying no PostgreSQL instance is running at the specified URL (in config.py above).
How can I fix the URL so that it works?
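A likely fix, assuming standard Compose networking (this is not confirmed by the original poster): inside the web container, localhost refers to that container itself, not the host, so the URI should use the Compose service name postgres as the hostname:

# config.py: a sketch of the corrected URI, using the service name
# "postgres" from docker-compose.yml as the hostname
SQLALCHEMY_DATABASE_URI = "postgresql+psycopg2://gimtest:gimtest@postgres:5432/gimtest-users"

Compose puts the services on a shared network where each service name resolves to its container's address, so the web container can reach the database at postgres:5432.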
Related
I am deploying a Django project on AWS. I am running Postgres, Redis, and Nginx there in Docker, alongside my project.
Everything works fine, but when I change something on my local machine, push the changes to git, and then pull them on the AWS instance, the files on disk are updated but the changes don't show up on the website. Only the static files update automatically (I guess because of Nginx). Here is my docker-compose config:
version: '3.9'
services:
  redis:
    image: redis
    command: redis-server
    ports:
      - "6379:6379"
  postgres:
    image: postgres
    environment:
      - POSTGRES_USER=
      - POSTGRES_PASSWORD=
      - POSTGRES_DB=
    ports:
      - "5432:5432"
  web:
    image: image_name
    build: .
    restart: always
    command: gunicorn project.wsgi:application --bind 0.0.0.0:8000
    env_file:
      - envs/.env.prod
    ports:
      - "8000:8000"
    volumes:
      - ./staticfiles/:/tmp/project/staticfiles
    depends_on:
      - postgres
      - redis
  nginx:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./staticfiles:/home/app/web/staticfiles
      - ./nginx/conf.d:/etc/nginx/conf.d
      - ./nginx/logs:/var/log/nginx
      - ./certbot/www:/var/www/certbot/:ro
      - ./certbot/conf/:/etc/nginx/ssl/:ro
    depends_on:
      - web
Can you please tell me what to do?
I tried deleting everything from Docker and running compose up again, but nothing changed.
I have looked all over here, but I still don't understand; restarting the instance doesn't work either. I tried clearing the Redis cache, because I have template caching, and still nothing.
After updating the code on the EC2 instance, you need to build a new web docker image from that new code. If you are just restarting things then docker-compose is going to continue to pick up the last docker image you built.
You need to run the following sequence of commands (on the EC2 instance):
docker-compose build web
docker-compose up -d
You are seeing the static files change immediately, without rebuilding the docker image, because you are mapping those files in via a Docker volume.
I found the issue: it was my template caching.
If I remove the cache and do what @MarkB suggested, everything updates.
I don't understand why this happens, since I tried flushing the entire Redis cache after my changes, but it solved my issue.
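For reference, the Redis-backed cache can be flushed from the host with a one-liner (a sketch; it assumes the cache lives in the redis service above and that nothing else in that Redis instance needs to be kept):

docker-compose exec redis redis-cli FLUSHALL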
I have a Django project hosted on an IIS server with a PostgreSQL database that I am migrating to a Docker/Heroku project. I have found a few good resources online, but no complete success yet. I have tried the dumpdata/loaddata approach but always run into constraint errors, missing relations, or content-type errors. I would rather just dump the whole database and then restore the whole thing into Docker. Here is my docker-compose:
version: "3.7"
services:
db:
image: postgres
volumes:
- 'postgres:/var/lib/postgresql/data'
ports:
- "5432:5432"
environment:
- POSTGRES_NAME=${DATABASE_NAME}
- POSTGRES_USER=${DATABASE_USER}
- POSTGRES_PASSWORD=${DATABASE_PASSWORD}
- POSTGRES_DB=${DATABASE_NAME}
networks:
- hello-world
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- '.:/code'
ports:
- "8000:8000"
env_file:
- .env
depends_on:
- db
networks:
- hello-world
networks:
hello-world:
driver: bridge
volumes:
postgres:
driver: local
I was actually able to resolve this, I believe, with the following command (note that the -d flag needs the target database name, left here as a placeholder):
docker exec -i postgres pg_restore --verbose --clean --no-acl --no-owner -h localhost -U postgres -d <database-name> < ./latest.dump
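For completeness: pg_restore only accepts archive-format dumps, so on the source server the dump would have been produced with something like the following (a sketch; the host, user, and database name are placeholders, not taken from the question):

pg_dump -Fc --no-acl --no-owner -h localhost -U postgres -d <database-name> > latest.dump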
This is a question about the best practice to follow for connecting the dots here. I am currently developing a Dockerized Django website. One of its apps will be named 'dashboards', where I want to publish data that is currently stored locally in .csv files (updated every day through scheduled tasks).
Now I am trying to understand the next steps for connecting this data to the Dockerized Django website. My first guess would be to schedule local .sql scripts that append the new data to a database I create locally, and then connect that database to the Dockerized website through volumes on the PostgreSQL service. That is just a guess I still need to test. But is there a way to skip the local steps and do all the work inside my Docker container?
You can find the Github repository here. Many thanks!
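One way to do the work entirely inside the containers (a sketch; the table name dashboards_data, the CSV filename, and the ./data mount are illustrative assumptions, not taken from the repository) is to mount the CSV directory into the postgres service defined in the compose file below and load it with psql's \copy:

# assumes the postgres service's volumes also include ./data:/import
docker-compose exec postgres psql -U postgres -d postgres \
  -c "\copy dashboards_data FROM '/import/latest.csv' CSV HEADER"

The same command could then be run on a schedule from a small cron container instead of the host.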
docker-compose.yml:
version: '3.8'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - postgres:postgres
      - redis:redis
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000
  nginx:
    restart: always
    build: ./nginx/
    ports:
      - "80:80"
    volumes:
      - web-static:/www/static
    links:
      - web:web
  postgres:
    restart: always
    image: postgres:latest
    hostname: postgres
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data/
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
  pgadmin:
    image: dpage/pgadmin4
    depends_on:
      - postgres
    ports:
      - "5050:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin4@pgadmin.org
      PGADMIN_DEFAULT_PASSWORD: admin
    restart: unless-stopped
  redis:
    restart: always
    image: redis:latest
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data
volumes:
  web-django:
  web-static:
  pgdata:
  redisdata:
Dockerfile:
FROM python:3.7-slim
RUN python -m pip install --upgrade pip
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . .
I have my Django project with structure like this:
myapp/
    manage.py
    Dockerfile
    docker-compose.yml
    my-database1.sql
    my-database2.sql
    requirements.txt
    pgadmin/
        pgadmin-data/
    myapp/
        __init__.py
        settings.py
        urls.py
        wsgi.py
This is my docker-compose.yml file:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
- ./my-database1.sql:/docker-entrypoint-initdb.d/my-database1.sql
- ./my-database2.sql:/docker-entrypoint-initdb.d/my-database2.sql
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- PGDATA=/var/lib/postgresql/data
pgadmin:
image: dpage/pgadmin4:4.18
restart: unless-stopped
environment:
- PGADMIN_DEFAULT_EMAIL=admin#domain.com
- PGADMIN_DEFAULT_PASSWORD=admin
- PGADMIN_LISTEN_PORT=80
ports:
- "8090:80"
volumes:
- ./pgadmin-data:/var/lib/pgadmin
links:
- "db:pgsql-server"
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
volumes:
db-data:
pgadmin-data:
I have three problems with my app:
1 - How can I import my my-database1.sql and my-database2.sql databases into PostgreSQL? The approach in my compose file (the ./my-database1.sql:/docker-entrypoint-initdb.d/my-database1.sql volume) doesn't work.
2 - After successfully importing the databases in the previous step, how can I see them inside pgAdmin?
3 - My code needs to write into the tables from my-database1.sql. How do I connect to that database after importing it into PostgreSQL?
The postgres image will only attempt to run the files provided inside the /docker-entrypoint-initdb.d directory when it starts with an empty data directory. In your docker-compose.yml configuration you have a persistent volume for the database data, which means Postgres will not take updates to the SQL files into account on later deployments. Something similar happens when one of the scripts fails. Here is the excerpt from the documentation:
Warning: scripts in /docker-entrypoint-initdb.d are only run if you start the container with an empty data directory; any pre-existing database will be left untouched on container startup. One common problem is that if one of your /docker-entrypoint-initdb.d scripts fails (which will cause the entrypoint script to exit) and your orchestrator restarts the container with the already initialized data directory, it will not continue with your scripts.
Check the site documentation to see how you can make your initialization scripts more robust so they can handle failures.
To solve your issue, try deleting the volume, either manually or by passing the -v flag to docker-compose down, and then redeploy your application:
  -v, --volumes    Remove named volumes declared in the `volumes`
                   section of the Compose file and anonymous volumes
                   attached to containers.
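Concretely, with the compose file above, the sequence would look like this. Note that the db service actually uses a bind mount (./data/db) rather than a named volume, and down -v only removes named and anonymous volumes, so the host directory has to be deleted by hand:

docker-compose down
sudo rm -rf ./data/db    # the bind-mounted Postgres data directory
docker-compose up -d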
My docker-compose file is as follows:
version: '3'
services:
  db:
    image: mongo:4.2
    container_name: mongo-db
    restart: always
    environment:
      MONGO_INITDB_DATABASE: VMcluster
    ports:
      - "16006:27017"
    volumes:
      - ./initdb.js:/docker-entrypoint-initdb.d/initdb.js
  web:
    build:
      context: .
      dockerfile: Dockerfile_Web
    command: python manage.py runserver 0.0.0.0:8000
    container_name: cluster-monitor-web
    volumes:
      - .:/vmCluster_service
    ports:
      - "9900:8000"
    depends_on:
      - db
  cronjobs:
    build:
      context: .
      dockerfile: Dockerfile_Cron
    command: ["cron", "-f"]
    container_name: cluster-monitor-cron
I want to implement a feature where the user can update the crontab from the Django web app. I'm done with the Django part, i.e. the backend Python code, but the Django web container is not able to access the crontab container.
How can I make the Django web container access the crontab container and update the crontab?
I'm using the python-crontab module in Django, which throws an error:
"[Errno 2] No such file or directory: '/usr/bin/crontab':
'/usr/bin/crontab'"