How can I save Docker's database data locally on my server?

I'm running an app inside a Docker container. That app uses the Postgres Docker image to store data in a database. I need to keep a local copy of this database's data to avoid losing it if the container is removed or purged somehow, so I am using volumes inside my docker-compose.yaml file. But the local DB folder is always empty, so whenever I move the container or purge it, the data is lost.
docker-compose.yaml
version: "2"
services:
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
ports:
- '5433:5432'
restart: always
command: -p 5433
environment:
- POSTGRES_DB=mydata
- POSTGRES_USER=mydata
- POSTGRES_PASSWORD=mydata#
- PGDATA=/tmp
django-apache2:
build: .
container_name: rolla_django
restart: always
environment:
- POSTGRES_DB=mydata
- POSTGRES_USER=mydata
- POSTGRES_PASSWORD=mydata#
- PGDATA=/tmp
ports:
- '4002:80'
- '4003:443'
volumes:
- ./www/:/var/www/html
- ./www/demo_app/static_files:/var/www/html/demo_app/static_files
- ./www/demo_app/media:/var/www/html/demo_app/media
# command: sh -c 'python manage.py migrate && python manage.py loaddata db_backkup.json && apache2ctl -D FOREGROUND'
command: sh -c 'wait-for-it db:5433 -- python manage.py migrate && apache2ctl -D FOREGROUND'
depends_on:
- db
As you can see, I used ./data/db:/var/lib/postgresql/data, but locally the ./data/db directory is always empty!
NOTE: when I run docker volume ls, it shows no volumes at all.

According to your setup, the data is written to /tmp because of PGDATA=/tmp. Remove that variable and your volume mapping should work.
Also, command: -p 5433 makes Postgres listen on port 5433 inside the container, but your mapping '5433:5432' still points at container port 5432. So if you can't reach the database, that is likely why.
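For clarity, a minimal corrected db service might look like this (a sketch based on the compose file above; it drops PGDATA and the custom port so Postgres uses its defaults):
version: "2"
services:
  db:
    image: postgres
    volumes:
      # with PGDATA unset, the image writes to /var/lib/postgresql/data,
      # so this bind mount lands the data files in ./data/db on the host
      - ./data/db:/var/lib/postgresql/data
    ports:
      - '5433:5432'   # host 5433 -> container 5432 (the Postgres default)
    restart: always
    environment:
      - POSTGRES_DB=mydata
      - POSTGRES_USER=mydata
      - POSTGRES_PASSWORD=mydata#
With the server back on its default port, the wait-for-it call in the django-apache2 service would target db:5432 instead of db:5433. Also note that bind mounts like ./data/db never show up in docker volume ls; that command only lists named volumes, which explains your empty volume list.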

Related

Properly migrating Postgres database to Docker/Django/Heroku/Postgres

I have a Django project hosted on an IIS server with a PostgreSQL database that I am migrating to a Docker/Heroku project. I have found a few good resources online, but no complete success yet. I have tried the dumpdata/loaddata approach but always run into constraint errors, missing relations, or content-type errors. I would like to just dump the whole database and then restore the whole thing into Docker. Here is my docker-compose:
version: "3.7"
services:
db:
image: postgres
volumes:
- 'postgres:/var/lib/postgresql/data'
ports:
- "5432:5432"
environment:
- POSTGRES_NAME=${DATABASE_NAME}
- POSTGRES_USER=${DATABASE_USER}
- POSTGRES_PASSWORD=${DATABASE_PASSWORD}
- POSTGRES_DB=${DATABASE_NAME}
networks:
- hello-world
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- '.:/code'
ports:
- "8000:8000"
env_file:
- .env
depends_on:
- db
networks:
- hello-world
networks:
hello-world:
driver: bridge
volumes:
postgres:
driver: local
I was actually able to resolve this, I believe, with the following command: "docker exec -i postgres pg_restore --verbose --clean --no-acl --no-owner -h localhost -U postgres -d < ./latest.dump"
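For reference, pg_restore's -d flag expects the target database name, which the quoted command omits. A hedged sketch of the full dump-and-restore flow (the dump filename, container name, and database names are placeholders, not from the original post):
# dump the source database in pg_restore's custom format
pg_dump -Fc --no-acl --no-owner -h <source-host> -U <source-user> <source-db> > latest.dump

# feed the dump into pg_restore inside the Postgres container via stdin
docker exec -i <postgres-container> pg_restore --verbose --clean --no-acl --no-owner \
  -U postgres -d <target-db> < ./latest.dump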

Django docker migrations not working after renaming model

I have a Django Docker setup using PostgreSQL on RDS.
I managed to run the project successfully once, then edited some model names. After that I built and launched a new container.
I noticed that instead of getting the typical:
"We have detected changes in your database. Did you rename XXX to YYY?"
I got all my models migrating for the first time, and everything seemed to work until I got to the Django admin:
ProgrammingError at /admin/upload/earnings/
relation "upload_earnings" does not exist
LINE 1: SELECT COUNT(*) AS "__count" FROM "upload_earnings"
This is my docker-compose file.
version: '3.8'
services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    env_file:
      - ./.env.prod
  nginx-proxy:
    container_name: nginx-proxy
    build: nginx
    restart: always
    ports:
      - 443:443
      - 80:80
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - web
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    env_file:
      - ./.env.prod.proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - acme:/etc/acme.sh
    depends_on:
      - nginx-proxy
volumes:
  postgres_data:
  static_volume:
  media_volume:
  certs:
  html:
  vhost:
  acme:
So to reproduce, I first created the container:
docker-compose -f docker-compose.yml build
docker-compose -f docker-compose.yml up -d
docker exec -it container_id sh
python manage.py makemigrations
python manage.py migrate
-Created Model1
-Created XXXX
Then I changed the model names and repeated the same steps:
docker-compose -f docker-compose.yml build
docker-compose -f docker-compose.yml up -d
docker exec -it container_id sh
python manage.py makemigrations
python manage.py migrate
-Created Model1
-Created ZZZ

Docker Compose & Postgres Connection Refused

I know this question has been asked a million times, and I've read as many of the answers as I can find. They all seem to come to one conclusion (the db hostname is the container's service name).
I got it to work in my actual code base, but it started failing when I added an ffmpeg install to the Dockerfile. Nothing else changed; just installing FFmpeg via apt-get install -y ffmpeg would make my Python code get the connection refused message. If I removed the ffmpeg install line, my code would connect to the db just fine, although re-running the container would trigger the dreaded connection refused error again.
So I created a quick sample app so I could post here and get some thoughts on what's going on. But now this sample code won't connect to the db no matter what I do.
So here goes, and thanks in advance for any help:
myapp.py
# import ffmpeg
import psycopg2

if __name__ == "__main__":
    print("Starting app...")
    # probe = ffmpeg.probe("131698249.mp4")
    # print(probe)
    try:
        connection = psycopg2.connect(
            user="docker", password="docker", host="db", port="5432", database="docker")
        cursor = connection.cursor()
        postgreSQL_select_Query = "select * from test_table"
        cursor.execute(postgreSQL_select_Query)
        print("Selecting rows from table using cursor.fetchall")
        records = cursor.fetchall()
        print("Print each row and it's columns values")
        for row in records:
            print(row)
        cursor.close()
        connection.close()
    except (Exception, psycopg2.Error) as error:
        print("Error while fetching data from PostgreSQL", error)
Dockerfile
# NOTE: the original post omits a FROM line; a Python base image like this is assumed
FROM python:3.10
WORKDIR /usr/src/app
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
COPY . .
CMD ["python", "myapp.py"]
docker-compose.yml
version: '3.8'
services:
  db:
    container_name: pg_container
    image: postgres:14.1
    restart: always
    environment:
      POSTGRES_USER: docker
      POSTGRES_PASSWORD: docker
      POSTGRES_DB: docker
    ports:
      - "8000:5432"
    expose:
      - "5432"
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      - pg_data:/var/lib/postgresql/data
  myapp:
    container_name: myapp
    build:
      context: .
      dockerfile: ./Dockerfile
    restart: "no"
    depends_on:
      - db
volumes:
  pg_data:
If I build and run the code with docker compose up --detach, everything gets built and started. The database starts up and gets populated with the table/data from init.sql (not included here).
The app container starts and the code executes, but it immediately fails with the Connection refused error.
However, if from my computer I run psql -U docker -h localhost -p 8000 -d docker, it connects without any error and I can query the database as expected.
But the app in the container won't connect, and if I run the container with docker run -it myapp /bin/bash and then from inside it run psql -U docker -h db -p 5432 -d docker, I get the Connection refused error.
If anyone has any thoughts or ideas I would be so grateful. I've been wrestling with this for three days now.
Looks like I've resolved it. I was sure I'd tried this before, but regardless, adding a networks section to the docker-compose.yml seems to have fixed the issue.
I also had to do the second docker-compose up -d as suggested by David Maze's comment. The combination of the two seems to have fixed my issue.
Here's my updated docker-compose.yml for complete clarity:
version: '3.8'
services:
  postgres-db:
    container_name: pg_container
    image: postgres:14.1
    restart: always
    environment:
      POSTGRES_USER: docker
      POSTGRES_PASSWORD: docker
      POSTGRES_DB: docker
    ports:
      - "5500:5432"
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - dock-db-test
  myapp:
    container_name: myapp
    build:
      context: .
      dockerfile: ./Dockerfile
    restart: "no"
    depends_on:
      - db
    networks:
      - dock-db-test
networks:
  dock-db-test:
    external: false
    name: dock-db-test
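Two caveats on this updated file, offered as hedged suggestions rather than fixes from the original thread: depends_on still references db even though the service is now named postgres-db (and the Python code connects with host="db", so the names must agree), and depends_on by itself only waits for the container to start, not for Postgres to accept connections, which is the usual cause of a one-time connection refused at startup. With a recent Docker Compose, a healthcheck closes that gap. A sketch:
services:
  db:                 # keeping the name "db" so host="db" in myapp.py still resolves
    image: postgres:14.1
    environment:
      POSTGRES_USER: docker
      POSTGRES_PASSWORD: docker
      POSTGRES_DB: docker
    healthcheck:
      # pg_isready exits 0 once the server is ready to accept connections
      test: ["CMD-SHELL", "pg_isready -U docker -d docker"]
      interval: 2s
      timeout: 2s
      retries: 15
  myapp:
    build: .
    depends_on:
      db:
        condition: service_healthy   # wait for the healthcheck, not just container start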

Setting up a dockerized python server on local machine gives Session data corrupted

I'm trying to set up a dockerized Python server named Bullet Train on my local machine:
It has 3 components:
A Postgres database
A Python server
A React frontend
All three of these need to work together to get the server up and running, so this is the docker-compose file, which sits at the top level, above both the frontend and the api-server:
version: '3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: bullettrain
    ports:
      - "5432:5432"
  api:
    build:
      context: ./bullet-train-api
      dockerfile: docker/Dockerfile
    command: bash -c "pipenv run python manage.py migrate --noinput
      && pipenv run python manage.py collectstatic --noinput
      && pipenv run gunicorn --bind 0.0.0.0:8000 -w 3 app.wsgi
      && pipenv run python src/manage.py createsuperuser"
    environment:
      DJANGO_DB_NAME: bullettrain
      DJANGO_DB_USER: postgres
      DJANGO_DB_PASSWORD: password
      DJANGO_DB_PORT: 5432
      DJANGO_ALLOWED_HOSTS: localhost
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db:db
  frontend:
    build:
      context: ./bullet-train-frontend
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
This way, all 3 components run in parallel. So far so good! Now, to initialize it, I run createsuperuser as stated in the docs, by following these steps:
docker exec -it research_api_1 bash ## open a shell in the API server container
python manage.py createsuperuser ## run the createsuperuser command
The command runs successfully.
To confirm, I went to the database:
docker exec -it research_db_1 bash ## open a shell in the database container
psql bullettrain postgres ## connect to the bullettrain database
select * from public.users_ffadminuser; ## check if the superuser was created
The results show that the user was indeed created.
Now, if I go to the admin panel as per the docs, nothing happens and the server logs keep throwing Session data corrupted.
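One detail worth noting in the api service's command chain, separate from the session error itself: gunicorn runs in the foreground, so the final pipenv run python src/manage.py createsuperuser step after it never executes. If an automatic superuser is the goal, a non-interactive variant placed before gunicorn is one option in Django 3.0+ (a sketch; the DJANGO_SUPERUSER_* values are assumed placeholders):
api:
  # ...same build/environment as above...
  # the || true keeps an already-existing superuser from aborting startup
  command: bash -c "pipenv run python manage.py migrate --noinput
    && pipenv run python manage.py collectstatic --noinput
    && (DJANGO_SUPERUSER_USERNAME=admin DJANGO_SUPERUSER_EMAIL=admin@example.com DJANGO_SUPERUSER_PASSWORD=changeme pipenv run python manage.py createsuperuser --noinput || true)
    && pipenv run gunicorn --bind 0.0.0.0:8000 -w 3 app.wsgi"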

Django docker-compose deleting data from Mongo database when doing "docker-compose down" and "up" again

Dockerfile:
FROM python:3.6
WORKDIR /usr/src/jobsterapi
COPY ./ ./
RUN pip install -r requirements.txt
CMD ["/bin/bash"]
docker-compose.yml
version: "3"
services:
jobster_api:
container_name: jobster
build: ./
# command: python manage.py runserver 0.0.0.0:8000
command: "bash -c 'python src/manage.py makemigrations --no-input && python src/manage.py migrate --no-input && python src/manage.py runserver 0.0.0.0:8000'"
working_dir: /usr/src/jobster_api
environment:
REDIS_URI: redis://redis:6379
MONGO_URI: mongodb://jobster:27017
ports:
- "8000:8000"
volumes:
- ./:/usr/src/jobster_api
links:
- redis
- elasticsearch
- mongo
#redis
redis:
image: redis
environment:
- ALLOW_EMPTY_PASSWORD=yes
ports:
- "6379:6379"
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.5.0
ports:
- "9200:9200"
- "9300:9300"
mongo:
image: mongo
ports:
- "27017:27017"
I set up Django with MongoDB inside Docker using the docker-compose file above, and everything works fine. When I add records by entering the container with "docker exec -it <container id> /bin/bash" (for example, creating a superuser for the Django admin panel), the data is inserted. But when I run "docker-compose up" again after "docker-compose down", all data is deleted from the database and it shows empty records, so I can't access the admin panel the next time either.
Please have a look.
Add a volume to the mongo service so its data survives docker-compose down/up. The official mongo image stores its data in /data/db, so mount a named volume there (mongo_data is just an example name):
mongo:
  image: mongo
  ports:
    - "27017:27017"
  volumes:
    - mongo_data:/data/db
and declare it at the top level of the compose file:
volumes:
  mongo_data:
https://docs.docker.com/storage/volumes/
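A quick way to verify the persistence once the named volume is in place (generic commands, not from the original answer):
docker compose up -d    # start the stack; the mongo_data volume is created
docker volume ls        # the named volume should now be listed
docker compose down     # removes the containers but keeps named volumes (unless -v is passed)
docker compose up -d    # data inserted earlier is still in /data/db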
