Properly migrating a Postgres database to Docker/Django/Heroku/Postgres - python

I have a Django project hosted on an IIS server, with a PostgreSQL database, that I am migrating to a Docker/Heroku project. I have found a few good resources online, but no complete success yet. I have tried Django's dumpdata/loaddata commands, but I always run into constraint errors, missing relations, or content-type errors. I would like to just dump the whole database and then restore the whole thing to Docker. Here is my docker-compose:
version: "3.7"
services:
db:
image: postgres
volumes:
- 'postgres:/var/lib/postgresql/data'
ports:
- "5432:5432"
environment:
- POSTGRES_NAME=${DATABASE_NAME}
- POSTGRES_USER=${DATABASE_USER}
- POSTGRES_PASSWORD=${DATABASE_PASSWORD}
- POSTGRES_DB=${DATABASE_NAME}
networks:
- hello-world
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- '.:/code'
ports:
- "8000:8000"
env_file:
- .env
depends_on:
- db
networks:
- hello-world
networks:
hello-world:
driver: bridge
volumes:
postgres:
driver: local

I was actually able to resolve this with the following command (note that -d needs the name of the target database; here I assume the default postgres database):

docker exec -i postgres pg_restore --verbose --clean --no-acl --no-owner -h localhost -U postgres -d postgres < ./latest.dump
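For completeness, here is a sketch of the full round trip; -Fc produces the custom-format archive that pg_restore expects, and db, postgres, source_db, and latest.dump are stand-ins for your own service, database, and file names:

# On the source (IIS) machine: dump the database as a custom-format archive
pg_dump -Fc --no-acl --no-owner -h localhost -U postgres source_db > latest.dump

# Restore into the compose db service (-T keeps stdin usable for the redirect)
docker-compose exec -T db pg_restore --verbose --clean --no-acl --no-owner -U postgres -d postgres < latest.dump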

Related

Connecting local data to Docker Django project

This is a question about the best practice to follow in order to connect the dots on this topic. Currently I am developing a Dockerized Django website. In this website, one of the apps will be named 'dashboards', where I wish to publish data which is currently stored locally in .csv files (updated every day through scheduled tasks).
Now, I am trying to understand the next steps to follow in order to connect these data to the Dockerized Django website. My first guess would be to locally schedule .sql scripts that 'append' the new data to a db that I create locally, and then connect that db to the Dockerized Django website through volumes belonging to the postgreSQL service. That is just a guess that I need to test. But is there a way to skip everything local and just do the work inside my Docker container? (A sketch follows the Dockerfile below.)
You can find the Github repository here. Many thanks!
docker-compose.yml:
version: '3.8'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - postgres:postgres
      - redis:redis
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000
  nginx:
    restart: always
    build: ./nginx/
    ports:
      - "80:80"
    volumes:
      - web-static:/www/static
    links:
      - web:web
  postgres:
    restart: always
    image: postgres:latest
    hostname: postgres
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data/
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
  pgadmin:
    image: dpage/pgadmin4
    depends_on:
      - postgres
    ports:
      - "5050:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin4@pgadmin.org
      PGADMIN_DEFAULT_PASSWORD: admin
    restart: unless-stopped
  redis:
    restart: always
    image: redis:latest
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data
volumes:
  web-django:
  web-static:
  pgdata:
  redisdata:
Dockerfile:
FROM python:3.7-slim
RUN python -m pip install --upgrade pip
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . .
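One way to skip the local step entirely is to make the CSVs visible to the postgres service (for example by adding ./data:/data to its volumes) and append them with psql's \copy from inside the container. A minimal sketch, where dashboards_data and /data/latest.csv are placeholder names rather than anything taken from the repository:

# Append today's rows to the target table; assumes the CSV header matches the column order
docker-compose exec -T postgres psql -U postgres -d postgres -c "\copy dashboards_data FROM '/data/latest.csv' WITH (FORMAT csv, HEADER true)"

The same line can run from any scheduler (cron on the host, or a small loop in another service), so nothing has to touch a local database first.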

Django docker migrations not working after renaming model

I have a Django Docker setup using postgresql in RDS.
I managed to run the project successfully once and edited some model names. After that I built and launched a new container.
I noticed that instead of getting the typical:
"We have detected changes in your database. Did you renamed XXX to YYY?"
I got all my models migrating for the first time and everything seemed to work until I got to the Django admin.
ProgrammingError at /admin/upload/earnings/
relation "upload_earnings" does not exist
LINE 1: SELECT COUNT(*) AS "__count" FROM "upload_earnings"
This is my docker-compose file:
version: '3.8'
services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    env_file:
      - ./.env.prod
  nginx-proxy:
    container_name: nginx-proxy
    build: nginx
    restart: always
    ports:
      - 443:443
      - 80:80
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - web
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    env_file:
      - ./.env.prod.proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - acme:/etc/acme.sh
    depends_on:
      - nginx-proxy
volumes:
  postgres_data:
  static_volume:
  media_volume:
  certs:
  html:
  vhost:
  acme:
So to reproduce I first created the container.
docker-compose -f docker-compose.yml build
docker-compose -f docker-compose.yml up -d
docker exec -it container_id sh
python manage.py makemigrations
python manage.py migrate
-Created Model1
-Created XXXX
Then I changed the model names.
docker-compose -f docker-compose.yml build
docker-compose -f docker-compose.yml up -d
docker exec -it container_id sh
python manage.py makemigrations
python manage.py migrate
-Created Model1
-Created ZZZ
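The usual culprit behind this symptom is that the migration files were generated inside a container whose filesystem was discarded on rebuild, so the rename was never recorded, and the next makemigrations produced a fresh initial migration that no longer matches the existing RDS schema. A sketch of a workflow that avoids this, assuming the app is named upload as in the error above:

# Generate the migration where the source tree persists (host checkout or bind mount),
# and answer "y" to the rename prompt so a RenameModel operation is recorded
python manage.py makemigrations
git add upload/migrations/
git commit -m "record model rename migration"

# Rebuild so the committed migration ships inside the image, then apply it
docker-compose -f docker-compose.yml build
docker-compose -f docker-compose.yml up -d
docker-compose exec web python manage.py migrate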

Django app exited with code 252 on Docker

I am new to Docker. Currently I am trying to deploy my app in a container. I have made two containers, one for the DB and one for the app, but when I try to run my docker-compose file the app container exits with code 252. Here are the logs -
web_1 | Watching for file changes with StatReloader
web_1 | Performing system checks...
web_1 |
mushroomxpert_web_1 exited with code 252
This is my docker-compose file
version: '3.7'
services:
  web:
    image: mushroomxpert
    build:
      context: ./web
    # command: 'gunicorn MushroomXpert.wsgi --log-file -'
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - '8000:8000'
    environment:
      - ALLOWED_HOSTS=localhost
      - DEBUG=False
      - DB_NAME=mushroomxpert_db
      - DB_USER=mushroom_admin
      - DB_PASSWORD=chikchik1
      - DB_HOST=db
      - DB_PORT=5432
    depends_on:
      - db
  db:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=chikchik1
      - POSTGRES_USER=mushroom_admin
      - POSTGRES_DB=mushroomxpert_db
EDIT 1 - the problem seems to be coming from TensorFlow, so I downgraded its version to 2.2, after which the app worked. I am marking this as solved.
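If you hit the same crash, pinning the known-good version keeps a rebuild from silently upgrading it again; a sketch assuming requirements.txt lives in the ./web build context:

echo "tensorflow==2.2.0" >> web/requirements.txt
docker-compose build --no-cache web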
Please use something like this:
version: "3.7"
services:
  db:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=chikchik1
      - POSTGRES_USER=mushroom_admin
      - POSTGRES_DB=mushroomxpert_db
    expose:
      - "5432"
  web:
    build:
      context: ./web
      dockerfile: YOUR DOCKERFILE
    ports:
      - "0.0.0.0:8000:8000"
    volumes:
      - "./backend/:/app/"
    environment:
      ALLOWED_HOSTS: "localhost"
      DEBUG: "False"
      DB_NAME: "mushroomxpert_db"
      DB_USER: "mushroom_admin"
      DB_PASSWORD: "chikchik1"
      DB_HOST: "db"
      DB_PORT: "5432"
    env_file:
      - config.env
    depends_on:
      - db
    command: >-
      sh -c "
      pip install -r requirements.txt &&
      python manage.py runserver 0.0.0.0:8000
      "
The above configuration contains everything you need, including the connection between the db and your Django application.
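If the web container still cannot reach the database after this, a quick sanity check is to query the db service directly with the same credentials:

docker-compose exec db psql -U mushroom_admin -d mushroomxpert_db -c "SELECT 1;"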

Celery not running at times in docker

I have been going bonkers over this one: the celery service in my docker-compose.yml sometimes just does not pick up tasks, though it works at other times.
Dockerfile:
FROM python:3.6
RUN apt-get update
RUN mkdir /web_back
WORKDIR /web_back
COPY web/requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY web/ .
docker-compose.yml
(I have taken out a few services for the sake of clarity.)
version: '3'
services:
  web_serv:
    restart: always
    build: .
    container_name: web_back_01
    env_file:
      - ./envs/web_back_01.env
    volumes:
      - ./web/:/web_back
    depends_on:
      - web_postgres
    expose:
      - 8282
    extra_hosts:
      - "dockerhost:104.10.4.11"
    command: bash -c "./initiate.sh"
  service_A:
    restart: always
    build: ../../web-service-A/A/
    container_name: web_back_service_a_01
    volumes:
      - ../../web-service-A/A/:/web-service-A
    depends_on:
      - web_serv
    ports:
      - '5100:5100'
    command: bash -c "python server.py"
  service_B:
    restart: always
    build: ../../web-service-B/B/
    container_name: web_back_service_b_01
    volumes:
      - ../../web-service-B/B/:/web-service-B
    depends_on:
      - web_serv
    ports:
      - '5200:5200'
    command: bash -c "python server.py"
  web_postgres:
    restart: always
    build: ./postgres
    container_name: web_postgres_01
    # restart: unless-stopped
    ports:
      - "5433:5432"
    environment: # will be used by the init script
      LC_ALL: C.UTF-8
      POSTGRES_USER: web
      POSTGRES_PASSWORD: web
      POSTGRES_DB: web
    volumes:
      - pgdata:/var/lib/postgresql/data/
  nginx:
    restart: always
    build: ./nginx/
    container_name: web_nginx_01
    volumes:
      - ./nginx/:/etc/nginx/conf.d
      - ./logs/:/code/logs
      - ./web/static/:/static_cdn/
      - ./web/media/:/media_cdn/
    ports:
      - "80:80"
    links:
      - web_serv
  redis:
    restart: always
    container_name: web_redis_01
    ports:
      - "6379:6379"
    links:
      - web_serv
    image: redis
  celery:
    build: .
    volumes:
      - ./web/:/web_back
    container_name: web_celery_01
    command: celery -A web worker -l info
    links:
      - redis
    depends_on:
      - redis
volumes:
  pgdata:
  media:
  static:
settings.py
CELERY_BROKER_URL = 'redis://redis:6379'
CELERY_RESULT_BACKEND = 'redis://redis:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
Notice service_A and service_B: those are the two services that at times do not get fired up.
Any help in understanding the odd behavior would be very helpful! Thanks
So, I think I ran into a similar problem. I was pulling my hair out because I was updating my worker.py, and not only would the autoreload not pick up any changes, but even when I'd rerun docker-compose up my changes would still not be reflected.
Sometimes when I'd run docker-compose up --build --force-recreate my changes would be reflected, but not reliably.
I was able to resolve this problem by doing two things:
1. Removing the __pycache__ in my worker's directory.
2. Running $ find . -name "*.pyc" -exec rm {} \; before doing docker-compose up --build --force-recreate when the caching behavior persists.
I'm not 100% sure what's going on myself, but it's clear that with Celery + Docker and no autoreload, Docker has a tendency to use a cached version of the compiled task. I see a bit of chatter about ways to set up autoreload with Celery + Docker using things like watchdog or modd, but I have yet to set that up for my project.
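A small guard that sidesteps stale bytecode entirely is to stop Python writing .pyc files inside the container at all; these two lines in the worker's Dockerfile are standard CPython environment switches, not project-specific settings:

ENV PYTHONDONTWRITEBYTECODE=1  # never write .pyc files, so no stale bytecode can be picked up
ENV PYTHONUNBUFFERED=1         # flush output immediately, which makes worker logs easier to follow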

docker-compose postgresql implementation in python app

I am building a Python app and I am trying to run it inside Docker. (My app works completely in a virtualenv with no Docker.)
So, my app consists of a config.py file and other files which make everything work, but are in this case unimportant.
My config file looks like this:
SQLALCHEMY_DATABASE_URI = "postgresql+psycopg2://gimtest:gimtest@localhost:5432/gimtest-users"
The app requires postgresql to be running before it starts, so I tried using Docker Compose. My docker-compose.yml looks like this:
version: '2'
services:
  web:
    build: ./web
    ports:
      - "5000:5000"
    depends_on:
      - postgres
    links:
      - postgres
      - elastic
    expose:
      - 5000
    command: "python run.py"
  postgres:
    image: postgres:9.5
    expose:
      - 5432
    environment:
      POSTGRES_USER: "gimtest"
      POSTGRES_PASSWORD: "gimtest"
      POSTGRES_DB: "gimtest-users"
  elastic:
    image: elasticsearch:2.3
Now the problem is that I can't connect to the postgresql database. At least, that's what it looks like: the Python app throws an error saying that no postgresql server is running on the specified url (set in config.py above).
How can I fix the url so that it will work?
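Inside the Compose network, each container reaches the others by service name; localhost inside the web container refers to the web container itself. A minimal sketch of the fix, pointing config.py at the postgres service defined above:

SQLALCHEMY_DATABASE_URI = "postgresql+psycopg2://gimtest:gimtest@postgres:5432/gimtest-users"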
