Django app exited with code 252 on Docker - python

I am new to Docker, and I am currently trying to deploy my app in a container. I have made two containers, one for the DB and one for the app, but when I run my docker-compose file the app container exits with exit code 252. Here are the logs:
web_1 | Watching for file changes with StatReloader
web_1 | Performing system checks...
web_1 |
mushroomxpert_web_1 exited with code 252
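Since the container dies before any traceback is printed, it helps to inspect the dead container directly. A quick debugging sketch (the container name is taken from the log output above; `manage.py check` re-runs the system checks in the foreground so the real error becomes visible):
docker logs mushroomxpert_web_1
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' mushroomxpert_web_1
docker-compose run --rm web python manage.py check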
This is my docker-compose file
version: '3.7'
services:
  web:
    image: mushroomxpert
    build:
      context: ./web
    # command: 'gunicorn MushroomXpert.wsgi --log-file -'
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - '8000:8000'
    environment:
      - ALLOWED_HOSTS=localhost
      - DEBUG=False
      - DB_NAME=mushroomxpert_db
      - DB_USER=mushroom_admin
      - DB_PASSWORD=chikchik1
      - DB_HOST=db
      - DB_PORT=5432
    depends_on:
      - db
  db:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=chikchik1
      - POSTGRES_USER=mushroom_admin
      - POSTGRES_DB=mushroomxpert_db
EDIT 1: The problem turned out to be coming from TensorFlow. I downgraded its version to 2.2, and after that the app worked, so I am marking this as solved.
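To keep that fix reproducible for anyone else building the image, pin the known-good version rather than downgrading by hand (a sketch; the exact patch release may differ):
# Pin the TensorFlow version that is known to work and record it
pip install 'tensorflow==2.2.0'
echo 'tensorflow==2.2.0' >> requirements.txt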

Please use something like this:
version: "3.7"
services:
  db:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=chikchik1
      - POSTGRES_USER=mushroom_admin
      - POSTGRES_DB=mushroomxpert_db
    expose:
      - "5432"
  web:
    build:
      context: ./web
      dockerfile: YOUR DOCKERFILE
    ports:
      - "0.0.0.0:8000:8000"
    volumes:
      - "./backend/:/app/"
    environment:
      ALLOWED_HOSTS: "localhost"
      DEBUG: "False"
      DB_NAME: "mushroomxpert_db"
      DB_USER: "mushroom_admin"
      DB_PASSWORD: "chikchik1"
      DB_HOST: "db"
      DB_PORT: "5432"
    env_file:
      - config.env
    depends_on:
      - db
    command: >-
      sh -c "
      pip install -r requirements.txt &&
      python manage.py runserver 0.0.0.0:8000
      "
The above configuration contains everything you need, including the connection between the db service and your Django application.
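For the environment variables above to reach Django, settings.py has to read them. A minimal sketch of the matching database settings (assuming psycopg2 is installed and the variable names from the compose file):
# settings.py -- read the connection parameters injected by docker-compose
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DB_NAME'),
        'USER': os.environ.get('DB_USER'),
        'PASSWORD': os.environ.get('DB_PASSWORD'),
        'HOST': os.environ.get('DB_HOST', 'db'),
        'PORT': os.environ.get('DB_PORT', '5432'),
    }
}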

Related

Django crontab can't connect to database (PostgreSQL) with Docker; no such table error

I am using Django and PostgreSQL, with django-crontab to update the data.
It runs well in the local environment, but we deploy with Docker, and I confirmed that when cron runs it refers to sqlite3 instead.
I also made a separate cron container in docker-compose and ran it, but I am probably using it incorrectly because I am a beginner. Please help.
# goods/cron.py
from goods.models import Goods

def test():
    print(Goods.objects.all())
./docker-compose.yml
version: '3.8'
volumes:
  postgres: {}
  django_media: {}
  django_static: {}
  static_volume: {}
services:
  postgres:
    container_name: postgres
    image: postgres:14.5
    volumes:
      - postgres:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB
    restart: always
  nginx:
    container_name: nginx
    image: nginx:1.23.2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
      - django_media:/media/
      - django_static:/static/
    depends_on:
      - asgiserver
      - backend
    restart: always
  backend:  # volume mappings below are host:container
    container_name: django_backend
    build: .
    entrypoint: sh -c "python manage.py migrate && gunicorn handsup.wsgi --workers=5 -b 0.0.0.0:8000"
    restart: always
    volumes:
      - ./:/app/
      - /etc/localtime:/etc/localtime:ro
      - django_media:/app/media/
      - django_static:/app/static/
    environment:
      - DEBUG
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_HOST
      - POSTGRES_PORT
    depends_on:
      - postgres
  redis:
    image: redis:5
  asgiserver:
    build: .
    command: daphne -b 0.0.0.0 -p 8080 handsup.asgi:application
    volumes:
      - ./:/app/
    restart: always
    environment:
      - DEBUG
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_HOST
      - POSTGRES_PORT
    depends_on:
      - redis
      - postgres
  cron:
    build: .
    restart: always
    volumes:
      - ./:/app/
    depends_on:
      - postgres
      - backend
    environment:
      - DEBUG
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_HOST
      - POSTGRES_PORT
    command: cron -f  # as a long-running foreground process
./Dockerfile
FROM python:3.10.8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /app/
WORKDIR /app/
RUN apt-get update -y
RUN apt-get install -y cron
COPY ./requirements.txt .
COPY ./ /app/
RUN pip install --no-cache-dir -r requirements.txt
# RUN service cron start
ENTRYPOINT ["./docker-entrypoint.sh"]
RUN pip install gunicorn psycopg2
./docker-entrypoint.sh
#!/bin/sh
# If this is going to be a cron container, set up the crontab.
if [ "$1" = cron ]; then
    ./manage.py crontab add
fi
# Launch the main container command passed as arguments.
exec "$@"
I referred to the contents below.
How to make django-crontab execute commands in Docker container?
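The usual cause of the sqlite3 fallback is that cron strips the environment, so the POSTGRES_* variables never reach the Django settings when a job runs. One common workaround (a sketch, assuming a Debian-based image where cron jobs read /etc/environment via PAM) is to persist the variables in the entrypoint before registering the crontab:
#!/bin/sh
# docker-entrypoint.sh (cron branch) -- persist the container environment
# where cron jobs can see it, then register the crontab entries.
if [ "$1" = cron ]; then
    printenv | grep -E '^(DEBUG|POSTGRES_)' >> /etc/environment
    ./manage.py crontab add
fi
exec "$@"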

IntelliJ IDEA not loading dependencies from docker compose - python django

I have set up a remote interpreter from the docker-compose option for a Django project. Still, it is showing me red squiggly lines under the package imports. How can I fix this?
docker-compose.yml
services:
  app:
    build:
      context: .
      args:
        - DEV=true
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    command: >
      sh -c "python manage.py wait_for_db &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    environment:
      - DB_HOST=db
      - DB_NAME=devdb
      - DB_USER=devuser
      - DB_PASS=changeme
    depends_on:
      - db
  db:
    image: postgres:14.5-alpine3.16
    volumes:
      - dev-db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=devdb
      - POSTGRES_USER=devuser
      - POSTGRES_PASSWORD=changeme
volumes:
  dev-db-data:
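The compose file invokes a custom wait_for_db management command. For reference, a minimal sketch of such a command (the core/management/commands path is an assumption about the project layout):
# app/core/management/commands/wait_for_db.py -- hypothetical location
import time

from django.core.management.base import BaseCommand
from django.db import connections
from django.db.utils import OperationalError

class Command(BaseCommand):
    """Block until the default database accepts connections."""

    def handle(self, *args, **options):
        self.stdout.write('Waiting for database...')
        while True:
            try:
                connections['default'].cursor()
                break
            except OperationalError:
                self.stdout.write('Database unavailable, retrying in 1s...')
                time.sleep(1)
        self.stdout.write(self.style.SUCCESS('Database available!'))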

Flask, Neo4j and Docker: Unable to retrieve routing information

I am trying to develop a mindmap API with Flask and Neo4j, and I would like to dockerize the whole project.
All services start, but the backend doesn't want to communicate with Neo4j. I get this error:
neo4j.exceptions.ServiceUnavailable: Unable to retrieve routing information
Here is my code: https://github.com/lquastana/mindmaps
To reproduce the error, just run a docker compose command and reach this endpoint: http://localhost:5000/mindmaps
On my web service declaration I changed NEO4J_URL from localhost to neo4j (the name of my service):
version: '3'
services:
  web:
    build: ./backend
    command: flask run --host=0.0.0.0  # gunicorn --bind 0.0.0.0:5000 mindmap_api:app
    ports:
      - 5000:5000
    environment:
      - FLASK_APP=mindmap_api
      - FLASK_ENV=development
      - NEO4J_USERNAME=neo4j
      - NEO4J_PASSWORD=airline-mexico-archer-ecology-bahama-7381
      - NEO4J_URL=neo4j://neo4j:7687 # HERE
      - NEO4J_DATABASE=neo4j
    depends_on:
      - neo4j
    volumes:
      - ./backend:/usr/src/app
  neo4j:
    image: neo4j
    restart: unless-stopped
    ports:
      - 7474:7474
      - 7687:7687
    volumes:
      - ./neo4j/conf:/neo4j/conf
      - ./neo4j/data:/neo4j/data
      - ./neo4j/import:/neo4j/import
      - ./neo4j/logs:/neo4j/logs
      - ./neo4j/plugins:/neo4j/plugins
    environment:
      # Raise memory limits
      - NEO4J_dbms_memory_pagecache_size=1G
      - NEO4J_dbms_memory_heap_initial__size=1G
      - NEO4J_dbms_memory_heap_max__size=1G
      - NEO4J_AUTH=neo4j/airline-mexico-archer-ecology-bahama-7381
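The neo4j:// scheme asks the server for cluster routing information, which a single standalone container does not serve, so this error often disappears when you switch to the direct bolt:// scheme. A minimal connectivity check with a recent official Python driver (credentials copied from the compose file above):
# check_neo4j.py -- verify the driver can reach the container directly
from neo4j import GraphDatabase

driver = GraphDatabase.driver(
    'bolt://neo4j:7687',  # bolt:// skips routing discovery
    auth=('neo4j', 'airline-mexico-archer-ecology-bahama-7381'),
)
driver.verify_connectivity()
driver.close()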

Properly migrating Postgres database to Docker/Django/Heroku/Postgres

I have a Django project hosted on an IIS server with a PostgreSQL database that I am migrating to a Docker/Heroku project. I have found a few good resources online, but no complete success yet. I have tried the dumpdata/loaddata approach but always run into constraint errors, missing relations, or content type errors. I would like to just dump the whole database and then restore the whole thing to Docker. Here is my docker-compose:
version: "3.7"
services:
db:
image: postgres
volumes:
- 'postgres:/var/lib/postgresql/data'
ports:
- "5432:5432"
environment:
- POSTGRES_NAME=${DATABASE_NAME}
- POSTGRES_USER=${DATABASE_USER}
- POSTGRES_PASSWORD=${DATABASE_PASSWORD}
- POSTGRES_DB=${DATABASE_NAME}
networks:
- hello-world
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- '.:/code'
ports:
- "8000:8000"
env_file:
- .env
depends_on:
- db
networks:
- hello-world
networks:
hello-world:
driver: bridge
volumes:
postgres:
driver: local
I was actually able to resolve this, I believe, with the following command:
docker exec -i postgres pg_restore --verbose --clean --no-acl --no-owner -h localhost -U postgres -d < ./latest.dump
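Note that as written the command is missing the target database name after -d. A fuller sketch of the dump-and-restore round trip (host, container, database, and file names are placeholders):
# On the old server: take a compressed custom-format dump
pg_dump -Fc -h old-server -U postgres -d source_db > latest.dump
# Restore it into the running Postgres container
docker exec -i postgres pg_restore --verbose --clean --no-acl --no-owner -U postgres -d target_db < ./latest.dump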

Django docker migrations not working after renaming model

I have a Django Docker setup using PostgreSQL on RDS.
I managed to run the project successfully once, then edited some model names and built and launched a new container.
I noticed that instead of getting the typical
"We have detected changes in your database. Did you rename XXX to YYY?"
prompt, all my models migrated as if for the first time, and everything seemed to work until I got to the Django admin:
ProgrammingError at /admin/upload/earnings/
relation "upload_earnings" does not exist
LINE 1: SELECT COUNT(*) AS "__count" FROM "upload_earnings"
This is my docker-compose file.
version: '3.8'
services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    env_file:
      - ./.env.prod
  nginx-proxy:
    container_name: nginx-proxy
    build: nginx
    restart: always
    ports:
      - 443:443
      - 80:80
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - web
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    env_file:
      - ./.env.prod.proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - acme:/etc/acme.sh
    depends_on:
      - nginx-proxy
volumes:
  postgres_data:
  static_volume:
  media_volume:
  certs:
  html:
  vhost:
  acme:
So to reproduce, I first created the container:
docker-compose -f docker-compose.yml build
docker-compose -f docker-compose.yml up -d
docker exec -it container_id sh
python manage.py makemigrations
python manage.py migrate
-Created Model1
-Created XXXX
Then I changed the model names and repeated the steps:
docker-compose -f docker-compose.yml build
docker-compose -f docker-compose.yml up -d
docker exec -it container_id sh
python manage.py makemigrations
python manage.py migrate
-Created Model1
-Created ZZZ
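When makemigrations cannot detect a rename (for instance because the migration history in the container diverged from the database), it emits delete-and-create operations instead, and the old table's data is lost. You can state the rename explicitly in a hand-written migration instead (a sketch; the dependency and model names are placeholders):
# upload/migrations/000X_rename_earnings.py -- hypothetical migration
from django.db import migrations

class Migration(migrations.Migration):

    dependencies = [
        ('upload', '000X_previous_migration'),
    ]

    operations = [
        # Renames the model (and its table) without dropping any data
        migrations.RenameModel(old_name='OldEarnings', new_name='Earnings'),
    ]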
