I am new to GitLab CI/CD. I have a Django project running in Docker on my local machine, and I want to configure GitLab CI/CD for it (the database is Postgres, the proxy server is nginx).
Here are my config files.
.env
DEBUG=1
SECRET_KEY=foo
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
DATABASE=postgres
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=foo
SQL_USER=foo
SQL_PASSWORD=foo
SQL_HOST=db
SQL_PORT=5432
POSTGRES_USER=pos
POSTGRES_PASSWORD=123456
POSTGRES_DB=foo
Dockerfile:
FROM python:3.9.6-alpine
ENV HOME=/web
ENV APP_HOME=/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
WORKDIR $APP_HOME
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /web/entrypoint.sh
RUN chmod +x /web/entrypoint.sh
COPY . /web/
RUN python manage.py collectstatic --no-input --clear
ENTRYPOINT ["/web/entrypoint.sh"]
docker-compose.yml
version: '3.8'
services:
  web:
    build: .
    command: gunicorn pos.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - .:/web/
      - static_volume:/web/staticfiles
    ports:
      - 8000:8000
    env_file:
      - ./.env
  db:
    image: postgres:13.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env
  nginx:
    build: ./nginx
    ports:
      - 1337:80
    volumes:
      - static_volume:/web/staticfiles
    depends_on:
      - web
volumes:
  postgres_data:
  static_volume:
entrypoint.sh
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
      sleep 0.1
    done
    echo "PostgreSQL started"
fi

python manage.py flush --no-input
python manage.py migrate

exec "$@"
.gitlab-ci.yml
image:
  name: docker/compose:1.29.1
  entrypoint: [""]

services:
  - docker:dind

stages:
  - build
  - deploy

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2

before_script:
  - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
  - export WEB_IMAGE=$IMAGE:web
  - export NGINX_IMAGE=$IMAGE:nginx
  - apk add --no-cache openssh-client bash
  - chmod +x ./setup_env.sh
  - bash ./setup_env.sh
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY

build:
  stage: build
  script:
    - docker pull $IMAGE:web || true
    - docker pull $IMAGE:nginx || true
    - docker-compose -f docker-compose.yml build
    - docker push $IMAGE:web
    - docker push $IMAGE:nginx

deploy:
  stage: deploy
  script:
    - mkdir -p ~/.ssh
    - echo "$PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - cat ~/.ssh/id_rsa
    - chmod 700 ~/.ssh/id_rsa
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H 'gitlab.com' >> ~/.ssh/known_hosts
    - chmod +x ./deploy.sh
    - scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.yml user@$VPS_IP_ADDRESS:/web
    - bash ./deploy.sh
setup_env.sh
echo DEBUG=$DEBUG >> .env
echo SQL_ENGINE=django.db.backends.postgresql >> .env
echo DATABASE=postgres >> .env
echo SECRET_KEY=$SECRET_KEY >> .env
echo SQL_DATABASE=$SQL_DATABASE >> .env
echo SQL_USER=$SQL_USER >> .env
echo SQL_PASSWORD=$SQL_PASSWORD >> .env
echo SQL_HOST=$SQL_HOST >> .env
echo SQL_PORT=$SQL_PORT >> .env
echo WEB_IMAGE=$IMAGE:web >> .env
echo NGINX_IMAGE=$IMAGE:nginx >> .env
echo CI_REGISTRY_USER=$CI_REGISTRY_USER >> .env
echo CI_JOB_TOKEN=$CI_JOB_TOKEN >> .env
echo CI_REGISTRY=$CI_REGISTRY >> .env
echo IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME >> .env
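A side note on setup_env.sh: unquoted `echo VAR=$VALUE` lines can mangle values containing spaces or shell metacharacters. A minimal sketch of a safer pattern using printf (the values below are placeholders standing in for the real CI variables, and the file name .env.sketch is chosen here just for illustration):

```shell
# Sketch: write .env entries with printf so values that contain spaces or
# shell metacharacters survive intact. Placeholder values, not real secrets.
SECRET_KEY='s3cr3t with spaces'
SQL_PASSWORD='p@ss-word'
{
  printf '%s=%s\n' SECRET_KEY "$SECRET_KEY"
  printf '%s=%s\n' SQL_PASSWORD "$SQL_PASSWORD"
} > .env.sketch
cat .env.sketch
```

With plain echo and no quoting, a SECRET_KEY containing spaces would be word-split before it ever reached the file.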
deploy.sh
#!/bin/sh
ssh -o StrictHostKeyChecking=no user@$VPS_IP_ADDRESS << 'ENDSSH'
cd /web
export $(cat .env | xargs)
docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
docker pull $IMAGE:web
docker pull $IMAGE:nginx
docker-compose -f docker-compose.yml up -d
ENDSSH
This is all the information I can provide. Please help me out, guys!
Thanks!
If I understand your question, the problem is in
- export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
- export WEB_IMAGE=$IMAGE:web
- export NGINX_IMAGE=$IMAGE:nginx
Doing this gives you a registry path like .../PROJECTNAME:web, and that image does not exist.
You should create an image in the registry and then tag it latest (or whatever you want), so that your image path becomes .../PROJECTNAME/web:latest, for example.
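One way to line the names up, sketched below: give each service in docker-compose.yml an explicit image name so that docker-compose build tags exactly what the pipeline later pushes and pulls. WEB_IMAGE and NGINX_IMAGE refer to the variables already exported in before_script; the registry path in the comment is only an example.

```yaml
# Sketch: explicit image names keep build, push and pull in agreement.
services:
  web:
    build: .
    image: ${WEB_IMAGE}      # e.g. registry.gitlab.com/namespace/project:web
  nginx:
    build: ./nginx
    image: ${NGINX_IMAGE}
```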
Related
I am trying to run two separate Django apps using Docker (building on a Linux server). The first application runs smoothly (using the default ports). The second one apparently runs as well (it says "Starting development server at http://0.0.0.0:5000"), and looking inside Portainer shows no issues: everything is running. But when I try to connect to the page, it fails.
docker-compose:
version: '3'
services:
  vrt:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
    volumes:
      - ./nuovoProgetto:/VehicleRammingTool
    command: >
      sh -c "python3 manage.py wait_for_db &&
             python3 manage.py migrate &&
             python3 manage.py runserver 0.0.0.0:5000"
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:14.1-alpine
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - db:/var/lib/postgresql/data
  redis:
    image: redis:alpine
  celery:
    restart: always
    build:
      context: .
    command: celery -A nuovoProgetto worker --pool=solo --loglevel=info
    volumes:
      - ./nuovoProgetto:/VehicleRammingTool
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    depends_on:
      - vrt
      - redis
volumes:
  db:
    driver: local
Dockerfile:
FROM ubuntu:18.04
ENV http_proxy=http://++++++++++proxyhere
ENV https_proxy=http://+++++++++proxyhere
ENV PATH="/root/miniconda3/bin:${PATH}"
ARG PATH="/root/miniconda3/bin:${PATH}"
RUN apt-get update
RUN apt-get install -y wget && rm -rf /var/lib/apt/lists/*
RUN wget \
https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir /root/.conda \
&& bash Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
RUN python --version
RUN conda install -c conda-forge django psycopg2 celery redis-py django-leaflet django-celery-beat django-celery-results django-crispy-forms osmnx geopy geocoder pathos
RUN mkdir /VehicleRammingTool
COPY ./nuovoProgetto /VehicleRammingTool
WORKDIR /VehicleRammingTool
EXPOSE 5000
EDIT
I can cURL the page from the command line using the proxy option, but I still can't reach it in a browser.
I am trying to deploy docker application, I am getting following error in Application Start phase:
[stderr]unable to prepare context: unable to evaluate symlinks in
Dockerfile path: lstat /opt/codedeploy-agent/Dockerfile: no such file
or directory
My appspec.yml file is as follows:
version: 0.0
os: linux
files:
  - source: /
    destination: /
    file_exists_behavior: OVERWRITE
hooks:
  ApplicationStop:
    - location: scripts/kill_container.sh
      timeout: 300
      runas: ec2-user
  BeforeInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: ec2-user
  ApplicationStart:
    - location: scripts/start_container.sh
      timeout: 300
      runas: ec2-user
I have tried setting different destinations, as follows:
destination: /
destination: /home/ec2-user/Deployment
None of the above works for me.
My kill_container.sh code:
#!/usr/bin/env bash
set -e
echo "Stopping and removing the running container========"
docker rm -f python-api-docker || true
My install_dependencies.sh code:
#!/usr/bin/env bash
set -e
echo "====================================================="
echo $PWD
pwd
ls
My start_container.sh code:
#!/usr/bin/env bash
echo "Starting container==="
set -e
echo $PWD
pwd
ls
docker build -t python-api-docker .
docker run -p 5000:5000 python-api-docker
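The "lstat /opt/codedeploy-agent/Dockerfile" message suggests the hook runs `docker build .` from the agent's own working directory rather than from the deployed bundle. One hedged sketch: derive the bundle root from the script's own path before building (this assumes the hook scripts live in a scripts/ folder under the bundle root, as in the appspec above; the docker commands are left commented because they need the daemon):

```shell
#!/usr/bin/env bash
# Sketch: resolve the bundle root from this script's location, so the
# Dockerfile is found regardless of the agent's working directory.
set -e
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
BUNDLE_ROOT="$(dirname "$SCRIPT_DIR")"   # scripts/ sits one level below the root
echo "building from: $BUNDLE_ROOT"
# docker build -t python-api-docker "$BUNDLE_ROOT"
# docker run -d -p 5000:5000 python-api-docker
```

An alternative is to cd into the fixed destination from appspec.yml (e.g. /home/ec2-user/Deployment) at the top of start_container.sh.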
The docker file is as follows:
FROM python:3.9-alpine
EXPOSE 5000
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "api:app"]
I'm trying to Dockerize my FastApi app, but it crashes with this error right after I run the command:
docker-compose -f local.yml up -d
Can anyone help me, please?
Dockerfile:
FROM python:3.6.11-alpine3.11
ARG MYSQL_SERVER
ARG POSTGRES_SERVER
ENV ENVTYPE=local
ENV PYTHONUNBUFFERED 1
ENV APP_HOME=/home/app/web
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
RUN apk update && apk add --no-cache bash
ADD /compose/scripts.sh $APP_HOME
ADD /requirements/$ENVTYPE.txt $APP_HOME
RUN chmod +x scripts.sh
RUN ./scripts.sh
RUN pip install -r /home/app/web/$ENVTYPE.txt; mkdir /log;
COPY /src/ $APP_HOME
CMD ["uvicorn", "app.main:app", "--reload", "--host", "0.0.0.0", "--port", "8080"]
local.yml file:
version: '3.7'
services:
  nginx:
    env_file: .env
    build:
      context: .
      dockerfile: ./compose/local/nginx.Dockerfile
    restart: always
    ports:
      - "${EX_PORT_NGINX:-8030}:80"
    volumes:
      - ./nginx/site.conf:/etc/nginx/conf.d/default.conf
  core:
    env_file: .env
    build:
      context: .
      dockerfile: ./compose/local/core.Dockerfile
      args:
        MYSQL_SERVER: ${MYSQL_SERVER:-}
        POSTGRES_SERVER: ${POSTGRES_SERVER:-}
    restart: always
    volumes:
      - ./src:/home/app/web/
    logging:
      driver: "json-file"
      options:
        max-size: "5m"
        max-file: "10"
Error:
Cannot start service core: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "uvicorn": executable file not found in $PATH: unknown
Add this to the Dockerfile, before the RUN pip install -r /home/app/web/$ENVTYPE.txt; mkdir /log; line, replacing ${USERNAME} with the container user:
ENV PATH /home/${USERNAME}/.local/bin:${PATH}
If you don't know the current user, add RUN echo $(python3 -m site --user-base) somewhere in the Dockerfile, then use the output of that echo in place of /home/${USERNAME}/.local.
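The same probe can be tried outside Docker to see what it reports; a small sketch (the exact path it prints depends on the user and platform):

```shell
# Sketch: find pip's per-user install prefix; console scripts such as
# uvicorn land in its bin/ subdirectory when installed with "pip install --user".
USER_BASE="$(python3 -m site --user-base)"
echo "user base:   $USER_BASE"
echo "scripts dir: $USER_BASE/bin"
```

Prepending that bin directory to PATH in the Dockerfile is what lets the CMD find uvicorn.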
In my case, prefixing the command with poetry run made it work:
services:
  api:
    ...
    command: [
      "poetry", "run",
      "uvicorn",
      "app:main",
      "--port", "5000"
    ]
I want to deploy my Django app which is dockerized using BitBucket pipelines to AWS EC2 instance. How can I deploy to EC2 using BitBucket pipelines?
docker-compose.yml
version: "3.8"
services:
  db:
    container_name: db
    image: "postgres"
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - dev.env
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=POSTGRES_DB
      - POSTGRES_USER=POSTGRES_USER
      - POSTGRES_PASSWORD=POSTGRES_PASSWORD
  app:
    container_name: app
    build:
      context: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./core:/app
      - ./data/web:/vol/web
    env_file:
      - dev.env
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  postgres_data:
Dockerfile
FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# COPY ./core /app
WORKDIR /app
EXPOSE 8000
COPY ./core/ /app/
COPY ./scripts /scripts
RUN pip install --upgrade pip
COPY requirements.txt /app/
RUN pip install -r requirements.txt && \
    adduser --disabled-password --no-create-home app && \
    mkdir -p /vol/web/static && \
    mkdir -p /vol/web/media && \
    chown -R app:app /vol && \
    chmod -R 755 /vol && \
    chmod -R +x /scripts
USER app
CMD ["/scripts/run.sh"]
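One common pattern, sketched below, is to build and push the image from the pipeline and then restart compose on the instance over SSH. Everything here is an assumption to adapt, not a drop-in file: DOCKERHUB_USER, DOCKERHUB_PASS, SSH_USER and EC2_HOST would be repository variables, the SSH key would be configured under Repository settings, and the pipe version should be checked against the current atlassian/ssh-run release.

```yaml
# bitbucket-pipelines.yml (sketch; names, paths and versions are assumptions)
image: atlassian/default-image:3

pipelines:
  branches:
    main:
      - step:
          name: Build and push image
          services:
            - docker
          script:
            - docker login -u $DOCKERHUB_USER -p $DOCKERHUB_PASS
            - docker build -t $DOCKERHUB_USER/app:latest .
            - docker push $DOCKERHUB_USER/app:latest
      - step:
          name: Deploy on EC2
          script:
            - pipe: atlassian/ssh-run:0.4.1
              variables:
                SSH_USER: $SSH_USER
                SERVER: $EC2_HOST
                COMMAND: "cd ~/app && docker-compose pull && docker-compose up -d"
```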
I'm working on a Django project with a Postgres database using Docker. We are facing some issues with our migrations. I did not add the Django migrations to .gitignore, because I want everyone to have the same database fields and the same migrations. But every time someone changes a model or adds a new one and pushes the code along with the migrations, the migrations are not applied to our database as they should be; we keep hitting errors like "ABC key doesn't exist" or "ABC table doesn't exist". How can I overcome this?
Dockerfile:
EXPOSE 8000
COPY ./core/ /app/
COPY ./scripts /scripts
RUN pip install --upgrade pip
COPY requirements.txt /app/
RUN pip install -r requirements.txt && \
    adduser --disabled-password --no-create-home app && \
    mkdir -p /vol/web/static && \
    mkdir -p /vol/web/media && \
    chown -R app:app /vol && \
    chmod -R 755 /vol && \
    chmod -R +x /scripts
USER app
CMD ["/scripts/run.sh"]
run.sh
#!/bin/sh
set -e
ls -la /vol/
ls -la /vol/web
whoami
python manage.py collectstatic --noinput
python manage.py makemigrations
python manage.py migrate
uwsgi --socket :9000 --workers 4 --master --enable-threads --module myApp.wsgi
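A likely culprit in the run.sh above is calling makemigrations at container start: each environment can then generate its own migration files that never land in git, and the databases drift apart until "column/table does not exist" appears. A hedged sketch of a start script that only applies committed migrations (the guard makes it a harmless no-op outside the app image):

```shell
#!/bin/sh
# Sketch: apply committed migrations at startup; never generate them here.
# makemigrations belongs on a developer machine, with the result committed.
set -e
if [ -f manage.py ]; then            # guard: only act inside the app image
    python manage.py collectstatic --noinput
    python manage.py migrate --noinput
    exec uwsgi --socket :9000 --workers 4 --master --enable-threads --module myApp.wsgi
fi
```

With migrations generated once, reviewed, and committed, migrate on every container brings each database to the same state.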
docker-compose.yml
version: "3.8"
services:
  db:
    container_name: db
    image: "postgres"
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - dev.env
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=POSTGRES_DB
      - POSTGRES_USER=POSTGRES_USER
      - POSTGRES_PASSWORD=POSTGRES_PASSWORD
  app:
    container_name: app
    build:
      context: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./core:/app
      - ./data/web:/vol/web
    env_file:
      - dev.env
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  postgres_data: