Handle Django Migrations - python

I'm working on a Django project with a Postgres database, using Docker. We are facing some issues with our migrations. I did not add Django migrations to .gitignore because I want everyone to have the same database fields and the same migration files. But every time someone changes a model, or adds a new one, and pushes the code along with its migrations, the migrations are not applied to our database as they should be; we keep running into errors like "column ABC does not exist" or "relation ABC does not exist". How can I overcome this?
Dockerfile:
EXPOSE 8000
COPY ./core/ /app/
COPY ./scripts /scripts
RUN pip install --upgrade pip
COPY requirements.txt /app/
RUN pip install -r requirements.txt && \
    adduser --disabled-password --no-create-home app && \
    mkdir -p /vol/web/static && \
    mkdir -p /vol/web/media && \
    chown -R app:app /vol && \
    chmod -R 755 /vol && \
    chmod -R +x /scripts
USER app
CMD ["/scripts/run.sh"]
run.sh
#!/bin/sh
set -e
ls -la /vol/
ls -la /vol/web
whoami
python manage.py collectstatic --noinput
python manage.py makemigrations
python manage.py migrate
uwsgi --socket :9000 --workers 4 --master --enable-threads --module myApp.wsgi
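One common way to avoid the "column/table does not exist" errors is to generate migrations only on developers' machines, commit them to git, and have the container run nothing but migrate. Running makemigrations inside the container can create migration files that never reach version control, so other environments end up with a different migration history. A minimal sketch of run.sh under that assumption:

```shell
#!/bin/sh
set -e

# Migrations are generated locally with `python manage.py makemigrations`
# and committed to git; the container only applies what is in the repo.
# Note: makemigrations is deliberately NOT run here.
python manage.py collectstatic --noinput
python manage.py migrate --noinput

uwsgi --socket :9000 --workers 4 --master --enable-threads --module myApp.wsgi
```

With this split, every machine applies the exact same committed migration files, and `python manage.py makemigrations --check` can be added to CI to catch model changes that were pushed without their migrations.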
docker-compose.yml
version: "3.8"
services:
  db:
    container_name: db
    image: "postgres"
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - dev.env
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=POSTGRES_DB
      - POSTGRES_USER=POSTGRES_USER
      - POSTGRES_PASSWORD=POSTGRES_PASSWORD
  app:
    container_name: app
    build:
      context: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./core:/app
      - ./data/web:/vol/web
    env_file:
      - dev.env
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  postgres_data:

Related

Docker django app running but cannot access the webpage

I am trying to run two separate Django apps using Docker (building on a Linux server). The first application runs smoothly (using the default ports). The second one apparently runs too (it says "Starting development server at http://0.0.0.0:5000"), and I see no issues looking inside Portainer: everything is running. But when I try to connect to the page, it fails.
docker-compose:
version: '3'
services:
  vrt:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
    volumes:
      - ./nuovoProgetto:/VehicleRammingTool
    command: >
      sh -c "python3 manage.py wait_for_db &&
             python3 manage.py migrate &&
             python3 manage.py runserver 0.0.0.0:5000"
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:14.1-alpine
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - db:/var/lib/postgresql/data
  redis:
    image: redis:alpine
  celery:
    restart: always
    build:
      context: .
    command: celery -A nuovoProgetto worker --pool=solo --loglevel=info
    volumes:
      - ./nuovoProgetto:/VehicleRammingTool
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    depends_on:
      - vrt
      - redis
volumes:
  db:
    driver: local
Dockerfile:
FROM ubuntu:18.04
ENV http_proxy=http://++++++++++proxyhere
ENV https_proxy=http://+++++++++proxyhere
ENV PATH="/root/miniconda3/bin:${PATH}"
ARG PATH="/root/miniconda3/bin:${PATH}"
RUN apt-get update
RUN apt-get install -y wget && rm -rf /var/lib/apt/lists/*
RUN wget \
https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir /root/.conda \
&& bash Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
RUN python --version
RUN conda install -c conda-forge django psycopg2 celery redis-py django-leaflet django-celery-beat django-celery-results django-crispy-forms osmnx geopy geocoder pathos
RUN mkdir /VehicleRammingTool
COPY ./nuovoProgetto /VehicleRammingTool
WORKDIR /VehicleRammingTool
EXPOSE 5000
EDIT
I can cURL the page from the command line using the proxy option, but I still can't reach it via a browser.

Django Gitlab CI/CD Problem with WEB_IMAGE=$IMAGE:web

I am new to GitLab CI/CD. I have a Django project running in Docker on my local machine, and I want to configure GitLab CI/CD for it (the database is Postgres, the proxy server is nginx).
Here are my config files.
.env
DEBUG=1
SECRET_KEY=foo
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
DATABASE=postgres
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=foo
SQL_USER=foo
SQL_PASSWORD=foo
SQL_HOST=db
SQL_PORT=5432
POSTGRES_USER=pos
POSTGRES_PASSWORD=123456
POSTGRES_DB=foo
Dockerfile:
FROM python:3.9.6-alpine
ENV HOME=/web
ENV APP_HOME=/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
WORKDIR $APP_HOME
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /web/entrypoint.sh
RUN chmod +x /web/entrypoint.sh
COPY . /web/
RUN python manage.py collectstatic --no-input --clear
ENTRYPOINT ["/web/entrypoint.sh"]
docker-compose.yml
version: '3.8'
services:
  web:
    build: .
    command: gunicorn pos.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - .:/web/
      - static_volume:/web/staticfiles
    ports:
      - 8000:8000
    env_file:
      - ./.env
  db:
    image: postgres:13.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env
  nginx:
    build: ./nginx
    ports:
      - 1337:80
    volumes:
      - static_volume:/web/staticfiles
    depends_on:
      - web
volumes:
  postgres_data:
  static_volume:
entrypoint.sh
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi
python manage.py flush --no-input
python manage.py migrate
exec "$@"
.gitlab-ci.yml
image:
  name: docker/compose:1.29.1
  entrypoint: [""]
services:
  - docker:dind
stages:
  - build
  - deploy
variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
before_script:
  - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
  - export WEB_IMAGE=$IMAGE:web
  - export NGINX_IMAGE=$IMAGE:nginx
  - apk add --no-cache openssh-client bash
  - chmod +x ./setup_env.sh
  - bash ./setup_env.sh
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
build:
  stage: build
  script:
    - docker pull $IMAGE:web || true
    - docker pull $IMAGE:nginx || true
    - docker-compose -f docker-compose.yml build
    - docker push $IMAGE:web
    - docker push $IMAGE:nginx
deploy:
  stage: deploy
  script:
    - mkdir -p ~/.ssh
    - echo "$PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - cat ~/.ssh/id_rsa
    - chmod 700 ~/.ssh/id_rsa
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H 'gitlab.com' >> ~/.ssh/known_hosts
    - chmod +x ./deploy.sh
    - scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.yml user@$VPS_IP_ADDRESS:/web
    - bash ./deploy.sh
setup_env.sh
echo DEBUG=$DEBUG >> .env
echo SQL_ENGINE=django.db.backends.postgresql >> .env
echo DATABASE=postgres >> .env
echo SECRET_KEY=$SECRET_KEY >> .env
echo SQL_DATABASE=$SQL_DATABASE >> .env
echo SQL_USER=$SQL_USER >> .env
echo SQL_PASSWORD=$SQL_PASSWORD >> .env
echo SQL_HOST=$SQL_HOST >> .env
echo SQL_PORT=$SQL_PORT >> .env
echo WEB_IMAGE=$IMAGE:web >> .env
echo NGINX_IMAGE=$IMAGE:nginx >> .env
echo CI_REGISTRY_USER=$CI_REGISTRY_USER >> .env
echo CI_JOB_TOKEN=$CI_JOB_TOKEN >> .env
echo CI_REGISTRY=$CI_REGISTRY >> .env
echo IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME >> .env
deploy.sh
#!/bin/sh
ssh -o StrictHostKeyChecking=no user@$VPS_IP_ADDRESS << 'ENDSSH'
cd /web
export $(cat .env | xargs)
docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
docker pull $IMAGE:web
docker pull $IMAGE:nginx
docker-compose -f docker-compose.yml up -d
ENDSSH
This is all the information I can provide. Please help me, guys!
Thanks!
If I understand your question, the problem is in
- export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
- export WEB_IMAGE=$IMAGE:web
- export NGINX_IMAGE=$IMAGE:nginx
This gives you a registry path like .../PROJECTNAME:web, and that image does not exist.
You should create an image in the registry and then tag it latest (or whatever you want), so that your image path becomes, for example, .../PROJECTNAME/web:latest.
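To illustrate the tagging convention the answer describes, here is a sketch with a hypothetical registry path (substitute your own group and project; the paths below are assumptions, not taken from the question):

```shell
# Hypothetical registry/project path.
REGISTRY=registry.gitlab.com/mygroup/myproject

# Build and tag each service as its own image under the project,
# e.g. .../myproject/web:latest, then push so later pulls succeed.
docker build -t "$REGISTRY/web:latest" .
docker build -t "$REGISTRY/nginx:latest" ./nginx
docker push "$REGISTRY/web:latest"
docker push "$REGISTRY/nginx:latest"
```

After pushing once, `docker pull $REGISTRY/web:latest` in the deploy job has an image to find, which is exactly what the `$IMAGE:web` variant was missing.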

How to deploy django app using bitbucket pipelines

I want to deploy my Django app, which is dockerized, to an AWS EC2 instance using Bitbucket Pipelines. How can I do that?
docker-compose.yml
version: "3.8"
services:
  db:
    container_name: db
    image: "postgres"
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - dev.env
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=POSTGRES_DB
      - POSTGRES_USER=POSTGRES_USER
      - POSTGRES_PASSWORD=POSTGRES_PASSWORD
  app:
    container_name: app
    build:
      context: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./core:/app
      - ./data/web:/vol/web
    env_file:
      - dev.env
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  postgres_data:
Dockerfile
FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# COPY ./core /app
WORKDIR /app
EXPOSE 8000
COPY ./core/ /app/
COPY ./scripts /scripts
RUN pip install --upgrade pip
COPY requirements.txt /app/
RUN pip install -r requirements.txt && \
    adduser --disabled-password --no-create-home app && \
    mkdir -p /vol/web/static && \
    mkdir -p /vol/web/media && \
    chown -R app:app /vol && \
    chmod -R 755 /vol && \
    chmod -R +x /scripts
USER app
CMD ["/scripts/run.sh"]
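One common pattern (a sketch of one option, not the only way) is to have the pipeline copy the compose file to the instance over SSH and restart Compose there. The deploy step of a bitbucket-pipelines.yml then boils down to shell commands like the following; EC2_USER, EC2_HOST, the SSH key, and the remote path are all assumed to be configured as repository variables and are hypothetical names:

```shell
# Hypothetical deploy commands run by a Bitbucket Pipelines step.
# Assumes an SSH key for $EC2_USER@$EC2_HOST is registered in the
# repository's SSH settings, and /home/$EC2_USER/app exists remotely.
scp -o StrictHostKeyChecking=no docker-compose.yml "$EC2_USER@$EC2_HOST:/home/$EC2_USER/app/"
ssh -o StrictHostKeyChecking=no "$EC2_USER@$EC2_HOST" \
    "cd /home/$EC2_USER/app && docker-compose pull && docker-compose up -d --build"
```

The same commands work whether they live in a pipeline step's script section or in a standalone deploy script invoked by it.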

how to install postgres extension with docker on django project

I want to add full-text search to my Django project. I use PostgreSQL and Docker, so I want to add the pg_trgm extension to PostgreSQL for trigram similarity search. How should I install this extension with my Dockerfile?
I have shared my repository link.
FROM python:3.8.10-alpine
WORKDIR /Blog/
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
COPY . .
ENTRYPOINT ["./entrypoint.sh"]
docker-compose
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/Blog
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=helo
      - POSTGRES_PASSWORD=helo
      - POSTGRES_DB=helo
volumes:
  postgres_data:
You can do this the hard way!
$ sudo docker-compose exec db bash
$ psql -U username -d database
$ create extension pg_trgm;
This is not a good method, because you have to remember to re-create the extension every time the database volume is recreated.
or
use default django solution:
https://docs.djangoproject.com/en/4.0/ref/contrib/postgres/operations/#trigramextension
from django.contrib.postgres.operations import TrigramExtension
from django.db import migrations

class Migration(migrations.Migration):
    ...
    operations = [
        TrigramExtension(),
        ...
    ]
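A third option, if you want the extension to exist before Django ever connects, is the official postgres image's init hook: any .sql file mounted into /docker-entrypoint-initdb.d/ is executed when the data volume is first initialized. A sketch, assuming a file named init.sql in the project root (the filename is an assumption):

```shell
# Create an init script that the official postgres image runs on the
# very first start of an empty data volume.
cat > init.sql <<'SQL'
CREATE EXTENSION IF NOT EXISTS pg_trgm;
SQL

# Then mount it in docker-compose under the db service, e.g.:
#   volumes:
#     - ./init.sql:/docker-entrypoint-initdb.d/init.sql
# Note: init scripts run only while the postgres data volume is empty;
# they are skipped on subsequent starts.
```

Because CREATE EXTENSION needs superuser rights, running it as the image's default superuser at init time sidesteps permission issues the app user might hit later.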

Django with Docker - Server not starting

I have followed the steps in the official Docker tutorial for getting up and running with Django: https://docs.docker.com/compose/django/
It works fine until I run docker-compose up.
It doesn't directly give me an error, but it won't run the server either; it stops at this point:
(Screenshot of the Docker Quickstart Terminal)
docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: >
      bash -c
      "python3 manage.py migrate
      python3 manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
I am on Windows and have therefore used Docker Toolbox.
Thanks for your suggestions!
Start docker-compose in detached mode:
docker-compose up -d
Check your Django container ID:
docker ps
Then log into the container:
docker exec -it yourDjangoContainerID bash
Then go to the directory where the manage.py file is, and type:
python manage.py migrate
You can put the migration command into your docker-compose.yml file, something like
web:
  command: >
    bash -c
    "python3 manage.py migrate
    python3 manage.py runserver 0.0.0.0:8000"
replacing
web:
  command: python3 manage.py runserver 0.0.0.0:8000
This will apply migrations every time you run docker-compose up.

Categories

Resources