I have set up a remote interpreter using the Docker Compose option for a Django project, but the IDE is still showing red squiggly lines under the package imports. How can I fix this?
docker-compose.yml
services:
  app:
    build:
      context: .
      args:
        - DEV=true
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    command: >
      sh -c "python manage.py wait_for_db &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    environment:
      - DB_HOST=db
      - DB_NAME=devdb
      - DB_USER=devuser
      - DB_PASS=changeme
    depends_on:
      - db
  db:
    image: postgres:14.5-alpine3.16
    volumes:
      - dev-db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=devdb
      - POSTGRES_USER=devuser
      - POSTGRES_PASSWORD=changeme

volumes:
  dev-db-data:
Related
I am using Django and PostgreSQL, with django-crontab to update the data. It runs well in the local environment, but we use Docker to deploy, and I confirmed that when cron runs it refers to sqlite3 instead. I also made a separate cron container in docker-compose and ran it, but I am probably using it incorrectly because I am a beginner. Help me.
# goods/cron.py
from goods.models import Goods

def test():
    print(Goods.objects.all())
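For django-crontab to run `goods.cron.test` at all, the job has to be registered in `settings.py`; `manage.py crontab add` writes one system crontab line per entry. A minimal sketch, assuming the module above (the five-minute schedule is illustrative):

```python
# 'django_crontab' must also appear in INSTALLED_APPS for this to work.
CRONJOBS = [
    # (crontab schedule, dotted path to the function to run)
    ("*/5 * * * *", "goods.cron.test"),
]
```

If `CRONJOBS` is empty or missing, `crontab add` registers nothing and the cron container idles.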
./docker-compose.yml
version: '3.8'

volumes:
  postgres: {}
  django_media: {}
  django_static: {}
  static_volume: {}

services:
  postgres:
    container_name: postgres
    image: postgres:14.5
    volumes:
      - postgres:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB
    restart: always
  nginx:
    container_name: nginx
    image: nginx:1.23.2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
      - django_media:/media/
      - django_static:/static/
    depends_on:
      - asgiserver
      - backend
    restart: always
  backend:
    container_name: django_backend
    build: .
    entrypoint: sh -c "python manage.py migrate && gunicorn handsup.wsgi --workers=5 -b 0.0.0.0:8000"
    restart: always
    volumes:
      - ./:/app/
      - /etc/localtime:/etc/localtime:ro
      - django_media:/app/media/
      - django_static:/app/static/
    environment:
      - DEBUG
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_HOST
      - POSTGRES_PORT
    depends_on:
      - postgres
  redis:
    image: redis:5
  asgiserver:
    build: .
    command: daphne -b 0.0.0.0 -p 8080 handsup.asgi:application
    volumes:
      - ./:/app/
    restart: always
    environment:
      - DEBUG
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_HOST
      - POSTGRES_PORT
    depends_on:
      - redis
      - postgres
  cron:
    build: .
    restart: always
    volumes:
      - ./:/app/
    depends_on:
      - postgres
      - backend
    environment:
      - DEBUG
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_HOST
      - POSTGRES_PORT
    command: cron -f  # run cron as a long-running foreground process
./Dockerfile
FROM python:3.10.8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /app/
WORKDIR /app/
RUN apt-get update -y
RUN apt-get install -y cron
COPY ./requirements.txt .
COPY ./ /app/
RUN pip install --no-cache-dir -r requirements.txt
# RUN service cron start
ENTRYPOINT ["./docker-entrypoint.sh"]
RUN pip install gunicorn psycopg2
./docker-entrypoint.sh
#!/bin/sh
# If this is going to be a cron container, set up the crontab.
if [ "$1" = cron ]; then
    ./manage.py crontab add
fi

# Launch the main container command passed as arguments.
exec "$@"
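The pass-through pattern in the entrypoint can be sketched without Docker. This is a toy stand-in, not the real script: the cron binary is replaced with `echo`, and the key detail is `exec "$@"` (all arguments), since `exec "$#"` would expand to the argument *count* and fail:

```shell
entrypoint() {
    if [ "$1" = cron ]; then
        echo "registering crontab (./manage.py crontab add)"
        shift
        set -- echo "cron -f would start here"  # stand-in for the real cron binary
    fi
    exec "$@"  # replace the shell with the container command so it runs as PID 1
}

# Run in a subshell so exec replaces the subshell, not this script:
( entrypoint cron -f )
```

Because `exec` replaces the shell process, signals from `docker stop` reach the real command directly instead of a wrapper shell.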
I referred to the contents below.
How to make django-crontab execute commands in Docker container?
I have a Django project hosted on an IIS server with a PostgreSQL database that I am migrating to a Docker/Heroku project. I have found a few good resources online, but no complete luck yet. I have tried to use the dumpdata/loaddata functions but always run into constraint errors, missing relations, or content type errors. I would like to just dump the whole database and then restore the whole thing to Docker. Here is my docker-compose:
version: "3.7"
services:
  db:
    image: postgres
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_NAME=${DATABASE_NAME}
      - POSTGRES_USER=${DATABASE_USER}
      - POSTGRES_PASSWORD=${DATABASE_PASSWORD}
      - POSTGRES_DB=${DATABASE_NAME}
    networks:
      - hello-world
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - '.:/code'
    ports:
      - "8000:8000"
    env_file:
      - .env
    depends_on:
      - db
    networks:
      - hello-world
networks:
  hello-world:
    driver: bridge
volumes:
  postgres:
    driver: local
I was actually able to resolve this, I believe, with the following command: "docker exec -i postgres pg_restore --verbose --clean --no-acl --no-owner -h localhost -U postgres -d < ./latest.dump"
I have a Django Docker setup using postgresql in RDS.
I managed to run the project successfully once and edited some model names. After that I built and launched a new container.
I noticed that instead of getting the typical:
"We have detected changes in your database. Did you rename XXX to YYY?"
I got all my models migrating for the first time and everything seemed to work until I got to the Django admin.
ProgrammingError at /admin/upload/earnings/
relation "upload_earnings" does not exist
LINE 1: SELECT COUNT(*) AS "__count" FROM "upload_earnings"
This is my docker-compose file.
version: '3.8'
services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    env_file:
      - ./.env.prod
  nginx-proxy:
    container_name: nginx-proxy
    build: nginx
    restart: always
    ports:
      - 443:443
      - 80:80
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - web
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    env_file:
      - ./.env.prod.proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - acme:/etc/acme.sh
    depends_on:
      - nginx-proxy
volumes:
  postgres_data:
  static_volume:
  media_volume:
  certs:
  html:
  vhost:
  acme:
So to reproduce I first created the container.
docker-compose -f docker-compose.yml build
docker-compose -f docker-compose.yml up -d
docker exec -it container_id sh
python manage.py makemigrations
python manage.py migrate
-Created Model1
-Created XXXX
then
I changed the model names.
docker-compose -f docker-compose.yml build
docker-compose -f docker-compose.yml up -d
docker exec -it container_id sh
python manage.py makemigrations
python manage.py migrate
-Created Model1
-Created ZZZ
I am new to Docker. Currently I am trying to deploy my app in a container. I have made two containers, one for the DB and one for the app, but when I try to run my docker-compose file the app container exits with exit code 252. Here are the logs:
web_1 | Watching for file changes with StatReloader
web_1 | Performing system checks...
web_1 |
mushroomxpert_web_1 exited with code 252
This is my docker-compose file
version: '3.7'
services:
  web:
    image: mushroomxpert
    build:
      context: ./web
    # command: 'gunicorn MushroomXpert.wsgi --log-file -'
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - '8000:8000'
    environment:
      - ALLOWED_HOSTS=localhost
      - DEBUG=False
      - DB_NAME=mushroomxpert_db
      - DB_USER=mushroom_admin
      - DB_PASSWORD=chikchik1
      - DB_HOST=db
      - DB_PORT=5432
    depends_on:
      - db
  db:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=chikchik1
      - POSTGRES_USER=mushroom_admin
      - POSTGRES_DB=mushroomxpert_db
EDIT 1: The problem seems to be occurring from Tensorflow, so I downgraded its version to 2.2, after which the app worked. I am marking this as solved.
Please use something like this (note that services must be nested under a services: key, and list-style environment entries use KEY=value, not KEY: "value"):
version: "3.7"
services:
  db:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=chikchik1
      - POSTGRES_USER=mushroom_admin
      - POSTGRES_DB=mushroomxpert_db
    expose:
      - "5432"
  web:
    build:
      context: ./web
      dockerfile: YOUR DOCKERFILE
    ports:
      - "0.0.0.0:8000:8000"
    volumes:
      - "./backend/:/app/"
    environment:
      - ALLOWED_HOSTS=localhost
      - DEBUG=False
      - DB_NAME=mushroomxpert_db
      - DB_USER=mushroom_admin
      - DB_PASSWORD=chikchik1
      - DB_HOST=db
      - DB_PORT=5432
    env_file:
      - config.env
    depends_on:
      - db
    command: >-
      sh -c "
      pip install -r requirements.txt &&
      python manage.py runserver 0.0.0.0:8000
      "
The configuration above contains everything you need, including the connection between the db and your Django application.
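On the Django side, the variables injected by the compose file are typically consumed in settings.py. A minimal sketch, assuming the DB_* names used above; the defaults are illustrative fallbacks, and the service name db doubles as the hostname on the compose network:

```python
import os

# Read the connection settings the compose file injects via `environment:`.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "mushroomxpert_db"),
        "USER": os.environ.get("DB_USER", "mushroom_admin"),
        "PASSWORD": os.environ.get("DB_PASSWORD", ""),
        "HOST": os.environ.get("DB_HOST", "db"),  # compose service name = hostname
        "PORT": os.environ.get("DB_PORT", "5432"),
    }
}
```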
Dockerfile:
FROM python:3.6
WORKDIR /usr/src/jobsterapi
COPY ./ ./
RUN pip install -r requirements.txt
CMD ["/bin/bash"]
docker-compose.yml
version: "3"
services:
  jobster_api:
    container_name: jobster
    build: ./
    # command: python manage.py runserver 0.0.0.0:8000
    command: "bash -c 'python src/manage.py makemigrations --no-input && python src/manage.py migrate --no-input && python src/manage.py runserver 0.0.0.0:8000'"
    working_dir: /usr/src/jobster_api
    environment:
      REDIS_URI: redis://redis:6379
      MONGO_URI: mongodb://jobster:27017
    ports:
      - "8000:8000"
    volumes:
      - ./:/usr/src/jobster_api
    links:
      - redis
      - elasticsearch
      - mongo
  # redis
  redis:
    image: redis
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - "6379:6379"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.0
    ports:
      - "9200:9200"
      - "9300:9300"
  mongo:
    image: mongo
    ports:
      - "27017:27017"
I have set up Django with MongoDB inside Docker using the docker-compose setup shown above, and everything works. When I add records using "docker exec -it 'img id' /bin/bash" the data is inserted (I tried creating a superuser for the Django admin panel). But when I run "docker-compose up" again after "docker-compose down", all the data is deleted from the database and it shows empty records, so I can no longer access the admin panel either. Please have a look.
Add a named volume to the mongo service so its data directory (/data/db) survives docker-compose down:
mongo:
  image: mongo
  ports:
    - "27017:27017"
  volumes:
    - mongo-data:/data/db

and declare the volume at the top level of the file:
volumes:
  mongo-data:
https://docs.docker.com/storage/volumes/