Docker django app running but cannot access the webpage - python

I am trying to run two separate Django apps with Docker (building on a Linux server). The first application runs smoothly (using the default ports). The second one apparently runs as well: it says "Starting development server at http://0.0.0.0:5000", and everything looks fine inside Portainer, with no issues reported. But when I try to connect to the page, it fails.
docker-compose:
version: '3'
services:
  vrt:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
    volumes:
      - ./nuovoProgetto:/VehicleRammingTool
    command: >
      sh -c "python3 manage.py wait_for_db &&
             python3 manage.py migrate &&
             python3 manage.py runserver 0.0.0.0:5000"
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:14.1-alpine
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - db:/var/lib/postgresql/data
  redis:
    image: redis:alpine
  celery:
    restart: always
    build:
      context: .
    command: celery -A nuovoProgetto worker --pool=solo --loglevel=info
    volumes:
      - ./nuovoProgetto:/VehicleRammingTool
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    depends_on:
      - vrt
      - redis
volumes:
  db:
    driver: local
Dockerfile:
FROM ubuntu:18.04
ENV http_proxy=http://++++++++++proxyhere
ENV https_proxy=http://+++++++++proxyhere
ENV PATH="/root/miniconda3/bin:${PATH}"
ARG PATH="/root/miniconda3/bin:${PATH}"
RUN apt-get update
RUN apt-get install -y wget && rm -rf /var/lib/apt/lists/*
RUN wget \
https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir /root/.conda \
&& bash Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
RUN python --version
RUN conda install -c conda-forge django psycopg2 celery redis-py django-leaflet django-celery-beat django-celery-results django-crispy-forms osmnx geopy geocoder pathos
RUN mkdir /VehicleRammingTool
COPY ./nuovoProgetto /VehicleRammingTool
WORKDIR /VehicleRammingTool
EXPOSE 5000
EDIT
I can reach the page with cURL from the command line by using the proxy option, but I still can't get to it via the browser.
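For reference, the working call looks roughly like this (the proxy address, server IP, and port are placeholders, not the real values):

curl --proxy http://proxyhost:3128 http://server-ip:5000/

Since this succeeds from the shell while the browser fails, the browser's own proxy configuration is a plausible place to look.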

Related

Airflow docker-compose from another docker container on Azure Machine Learning Compute Cluster

I'm attempting to run Airflow in Docker with docker-compose from inside another container. This container is created by default by an Azure Machine Learning compute cluster, which I want to use to run my Airflow DAG.
The problem is that I get the following error when I try to execute docker-compose build through the ScriptRunConfig class (see the script below).
error creating aufs mount to /var/lib/docker/aufs/mnt/3d54c61133023e1d3700ce9746529dc8328e739346d4123d632951f11acdf122-init: mount target=/var/lib/docker/aufs/mnt/3d54c61133023e1d3700ce9746529dc8328e739346d4123d632951f11acdf122-init data=br:/var/lib/docker/aufs/diff/3d54c61133023e1d3700ce9746529dc8328e739346d4123d632951f11acdf122-init=rw:/var/lib/docker/aufs/diff/d6e643e28d8729f8972b78bedd2de1de602879b65ff56e07e76a236d3096b709=ro+wh:/var/lib/docker/aufs/diff/8d3dde63564e16508dbf64e7baae79a6b2913b3262cb6dba7027e1fdf8bb6f8f=ro+wh:/var/lib/docker/aufs/diff/fa2e9c9ac8ff7af5f5d35e4d2081fac4bb5139d760675e9608d2b8d04f096837=ro+wh:/var/lib/docker/aufs/diff/d34747b3a1646124aa7a7a0fed4790e1264667aa58fc4de2288a10e4b68673ce=ro+wh:/var/lib/docker/aufs/diff/ea46ddfc022eb268aa428376132bc52b00b317e747d63078a38d044bae5d48ec=ro+wh:/var/lib/docker/aufs/diff/b3c155d801b49d8252a3926493328d471c8f4cfd72c553771374e3952a999d95=ro+wh:/var/lib/docker/aufs/diff/a067d04104f7a70c6194a3e742be72fc221759b18628285e7fd16a2d678120f3=ro+wh:/var/lib/docker/aufs/diff/3db994298571c09ee3d40abf21f657f9c8650a6fe0ea2c6c6c7590eb7c6c712f=ro+wh:/var/lib/docker/aufs/diff/273fc331f9ebae1d0a01963d46bf9edca6d91a089d4791066cb170954eb7609c=ro+wh:/var/lib/docker/aufs/diff/419a894bbee2b9b8ec05deed26dcfc21f234276e06d765d03ed939b918d3908f=ro+wh:/var/lib/docker/aufs/diff/e91d472c4e53f2a2eae97aca59f7dcacdf57a4b22d64991c348528e4081500d6=ro+wh:/var/lib/docker/aufs/diff/c23cc64e903b5254c19446e8ddc6db0253cbd19e239860c1dc95440ca65aae94=ro+wh:/var/lib/docker/aufs/diff/5794fefdefed444bf17de20f6c3ecf93743cccef83169c94aba62ec902c8380f=ro+wh,dio,xino=/dev/shm/aufs.xino: invalid argument
What I tried so far:
Discarded option: using a docker:dind custom image in which I installed curl and docker-compose. This fails when building the image, because:
it seems that Azure Machine Learning adds extra steps to the Dockerfile to set up the conda environment needed for my project, but conda is not installed by default in the docker:dind image;
the official Docker image is based on Alpine Linux, whereas Azure Machine Learning is only compatible with system configurations that include Ubuntu, Conda, etc., as per the documentation.
The option presented in this question: using an image already available to Azure Machine Learning, in which I installed the Docker engine and docker-compose. The image builds successfully, but I get the error shown above when the compute cluster executes my script.
Here's the Dockerfile for the custom image (Ubuntu OS) used by the compute cluster to set up the environment. It is referred to as Dockerfile_cluster in the Python script below.
FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20211029.v1
USER root
# install docker
RUN apt-get update -y \
    && apt-get install -y \
        ca-certificates \
        curl \
        gnupg \
        lsb-release \
    && mkdir -p /etc/apt/keyrings \
    && curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg \
    && echo \
        "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
        $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null \
    && apt-get update -y \
    && apt-get install -y docker-ce docker-ce-cli containerd.io
# install docker-compose
RUN curl -L https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
RUN ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
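If it helps during debugging, an optional line (not in the original Dockerfile) can confirm at build time that the binary is on the PATH and executable:

RUN docker-compose --version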
And my docker-compose.yml file (a different Dockerfile than the first one):
services:
  postgres:
    image: postgres:13
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
    ports:
      - "5434:5432"
  init_db:
    build:
      context: .
      dockerfile: Dockerfile
    command: bash -c "airflow db init && airflow db upgrade"
    env_file: .env
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - postgres
  scheduler:
    build:
      context: .
      dockerfile: Dockerfile
    restart: on-failure
    command: bash -c "airflow scheduler"
    env_file: .env
    depends_on:
      - postgres
    ports:
      - "8080:8793"
    volumes:
      - ./airflow_dags:/opt/airflow/dags
      - ./data:/opt/airflow/data
      - ./.git:/opt/airflow/.git
      - ./conf:/opt/airflow/conf
      - ./airflow_logs:/opt/airflow/logs
      - /var/run/docker.sock:/var/run/docker.sock
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
      interval: 30s
      timeout: 30s
      retries: 3
  webserver:
    build:
      context: .
      dockerfile: Dockerfile
    hostname: webserver
    restart: always
    env_file: .env
    depends_on:
      - postgres
    command: bash -c "airflow users create -r Admin -u admin -e admin@example.com -f admin -l user -p admin && airflow webserver"
    volumes:
      - ./airflow_dags:/opt/airflow/dags
      - ./data:/opt/airflow/data
      - ./.git:/opt/airflow/.git
      - ./conf:/opt/airflow/conf
      - ./airflow_logs:/opt/airflow/logs
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "5000:8080"
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
      interval: 30s
      timeout: 30s
      retries: 32
And finally the script I use to submit my experiment to the compute cluster.
from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig
from azureml.core.authentication import ServicePrincipalAuthentication

# Create an environment from conda reqs
env = Environment.from_conda_specification(name="env_name", file_path="./src/conda.yml")

# Use custom image from Dockerfile
env.docker.base_image = None
env.docker.base_dockerfile = "./Dockerfile_cluster"

# Instantiate ServicePrincipalAuth object
svc_pr = ServicePrincipalAuthentication(tenant_id=tenant_id,
                                        service_principal_id=sp_id,
                                        service_principal_password=sp_pwd)

# Instantiate AML Workspace object
ws = Workspace(
    subscription_id=sub_id,
    resource_group=rg_name,
    workspace_name=ws_name,
    auth=svc_pr
)

command = "bash -c 'service docker start && docker-compose build --no-cache'".split()

experiment = Experiment(workspace=ws, name='exp-test')
config = ScriptRunConfig(source_directory='.', command=command, compute_target='cpt-cluster', environment=env)

# Submit experiment
run = experiment.submit(config)
aml_url = run.get_portal_url()
print(aml_url)
run.wait_for_completion(show_output=True)
In the script above, I only tried to build first before running the services.
Normally, the commands I use to run my Airflow DAG locally or using a compute instance are:
docker-compose build --no-cache
docker-compose up postgres
In another terminal:
docker-compose up init_db
docker-compose up scheduler webserver
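The same sequence can also be run from a single shell by detaching each service (a usage note added here, not from the original post):

docker-compose build --no-cache
docker-compose up -d postgres
docker-compose up -d init_db
docker-compose up -d scheduler webserver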
Thank you very much for your help.

Handle Django Migrations

I'm working on a Django project with a Postgres database, using Docker. We are facing some issues with our migrations. I did not add the Django migrations to .gitignore, because I want everyone to have the same database fields and the same migrations. But every time someone changes the models or adds a new model and pushes the code along with the migrations, the migrations are not applied to our database as they should be, and we hit errors like "key ABC doesn't exist" or "table ABC doesn't exist". How can I overcome this?
Dockerfile:
EXPOSE 8000
COPY ./core/ /app/
COPY ./scripts /scripts
RUN pip install --upgrade pip
COPY requirements.txt /app/
RUN pip install -r requirements.txt && \
    adduser --disabled-password --no-create-home app && \
    mkdir -p /vol/web/static && \
    mkdir -p /vol/web/media && \
    chown -R app:app /vol && \
    chmod -R 755 /vol && \
    chmod -R +x /scripts
USER app
CMD ["/scripts/run.sh"]
run.sh
#!/bin/sh
set -e
ls -la /vol/
ls -la /vol/web
whoami
python manage.py collectstatic --noinput
python manage.py makemigrations
python manage.py migrate
uwsgi --socket :9000 --workers 4 --master --enable-threads --module myApp.wsgi
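One detail in this script worth flagging (an observation added here, not part of the original question): running makemigrations at container start generates migration files inside each running environment, which can drift from the migrations committed to the repository and produce exactly this kind of missing-key/missing-table error. The usual convention is to run makemigrations on a developer machine, commit the result, and have the container only apply what is committed, along the lines of this sketch:

#!/bin/sh
set -e
python manage.py collectstatic --noinput
# apply only the migrations committed to the repository;
# makemigrations is run by developers and committed, not at startup
python manage.py migrate
uwsgi --socket :9000 --workers 4 --master --enable-threads --module myApp.wsgi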
docker-compose.yml
version: "3.8"
services:
  db:
    container_name: db
    image: "postgres"
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - dev.env
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=POSTGRES_DB
      - POSTGRES_USER=POSTGRES_USER
      - POSTGRES_PASSWORD=POSTGRES_PASSWORD
  app:
    container_name: app
    build:
      context: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./core:/app
      - ./data/web:/vol/web
    env_file:
      - dev.env
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  postgres_data:

how to install postgres extension with docker on django project

I want to add full-text search to my Django project. I use PostgreSQL and Docker, so I want to add the pg_trgm extension to PostgreSQL for trigram-similarity search. How should I install this extension with the Dockerfile?
I shared my repository link.
FROM python:3.8.10-alpine
WORKDIR /Blog/
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
COPY . .
ENTRYPOINT ["./entrypoint.sh"]
docker-compose
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/Blog
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=helo
      - POSTGRES_PASSWORD=helo
      - POSTGRES_DB=helo
volumes:
  postgres_data:
You can do this the hard way!
$ sudo docker-compose exec db bash
$ psql -U username -d database
$ create extension pg_trgm;
This is not a good method, because you have to remember to reinstall the extension every time the database is recreated.
Or use the built-in Django solution:
https://docs.djangoproject.com/en/4.0/ref/contrib/postgres/operations/#trigramextension
from django.contrib.postgres.operations import TrigramExtension
from django.db import migrations

class Migration(migrations.Migration):
    ...
    operations = [
        TrigramExtension(),
        ...
    ]
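A third route, added here as a sketch (the file name init.sql is an assumption): the official postgres image executes any *.sql files mounted into /docker-entrypoint-initdb.d when the data volume is first initialized, so the extension can be created automatically on a fresh database:

-- init.sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

db:
  image: postgres:12.0-alpine
  volumes:
    - postgres_data:/var/lib/postgresql/data/
    - ./init.sql:/docker-entrypoint-initdb.d/init.sql

Note this only runs when the volume is first created; an existing database still needs one of the approaches above.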

Unknown command bash or /bin/bash on Docker Ubuntu:20.04 with docker-compose version 3.5

I am setting up Docker with a Django application, but the bash command is not working with the Ubuntu:20.04 image and docker-compose file version 3.5.
My Docker version is Docker version 20.10.7, build f0df350, and my docker-compose version is Docker Compose version 2.0.0-beta.4.
Can anyone help me resolve the issue?
Below are my docker files:
Dockerfile:
FROM ubuntu:20.04
ENV PYTHONUNBUFFERED 1
RUN apt-get -y update
RUN apt-get install -y --no-install-recommends default-libmysqlclient-dev
RUN apt-get install -y gcc git libc-dev python3-dev python3-pip
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN mkdir /app
WORKDIR /app
ADD . /app
RUN pip install --upgrade pip && pip install -r requirements.txt
EXPOSE 8000
ENTRYPOINT [ "/app/manage.py" ]
docker-compose.yml
version: '3.5'
services:
db:
image: mysql:5.6
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: "mydb"
MYSQL_ROOT_PASSWORD: "root"
volumes:
- mysql_data:/var/lib/mysql
restart: always
networks:
default:
aliases:
- app-db
django:
build: .
command: bash -c "while true; do runserver 0.0.0.0:8000; sleep 10; done"
stdin_open: true
tty: true
volumes:
- .:/app
depends_on:
- db
ports:
- "8000:8000"
restart: always
environment:
MYSQL_DATABASE: "mydb"
MYSQL_USER: "root"
MYSQL_ROOT_PASSWORD: "root"
MYSQL_HOST: "app-db"
volumes:
mysql_data: {}
I am getting an error on this command while running docker-compose up --build:
bash -c "while true; do runserver 0.0.0.0:8000; sleep 10; done"
Error:
Unknown command: 'bash'
Thanks in advance
When you run the container, the ENTRYPOINT and CMD are combined into a single command; even when the command part is overridden by Docker Compose, the container still runs only that single combined command. So in effect you're asking for the main container process to be
/app/manage.py bash -c "while true; do runserver 0.0.0.0:8000; sleep 10; done"
and the complaint is that the Django runner doesn't understand manage.py bash as a subcommand.
In your Dockerfile itself, you probably want the default command to be to launch the server. Having ENTRYPOINT as an arbitrary "half of the command" tends to be a little more confusing, and leads to needing to override that too; it's probably better to just put this as the standard container CMD.
# No ENTRYPOINT, but
CMD ["/app/manage.py", "runserver", "0.0.0.0:8000"]
You don't need to put the restart loop into the container command since Docker already allows you to specify a restart policy for containers. You should be able to trim the docker-compose.yml section down to:
django:
  build: .
  # command: is built into the image
  # don't usually need stdin_open: or tty:
  # don't overwrite the image code with volumes:
  depends_on:
    - db
  ports:
    - "8000:8000"
  restart: always    # replaces the "while true ... done" shell loop
  environment: *as-in-the-question

Add PHP functionality to Docker Compose Python/Flask Container

I have a Flask-based webapp built in Docker Compose using Gunicorn, Redis, Celery and Postgres. The app needs to call a 3rd-party math function that is written in PHP and will be hosted within the app structure as a PHP file. Unfortunately it's not possible to rewrite this function in Python, so I need to get PHP running inside my main webapp container so that I can access the file. I have the relevant subprocess code ready, but I am struggling with how to get PHP running within the relevant container. The important files are as follows:
docker-compose.yml:
version: '2'
services:
  postgres:
    image: 'postgres:9.5'
    restart: always
    env_file:
      - '.env'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    ports:
      - '5432:5432'
  redis:
    image: 'redis:3.0-alpine'
    command: redis-server --requirepass devpassword
    volumes:
      - 'redis:/var/lib/redis/data'
    ports:
      - '6379:6379'
  web:
    image: my_app_web:rv19
    build: .
    restart: always
    command: >
      gunicorn -c "python:config.gunicorn" --reload --timeout 5 "my_app.app:create_app()"
    env_file:
      - '.env'
    volumes:
      - '.:/my_app'
    ports:
      - '8000:8000'
    depends_on:
      - postgres
    links:
      - redis
      - celery
  celery:
    build: .
    command: celery worker -B -l info -A my_app.blueprints.contact.tasks
    env_file:
      - '.env'
    volumes:
      - '.:/my_app'
    links:
      - redis
    depends_on:
      - redis
volumes:
  postgres:
  redis:
And the Dockerfile:
FROM python:3.7-slim
MAINTAINER AAAAA AAAAA <aaaa#aaaa.a>
RUN apt-get update && apt-get install -qq -y \
    build-essential libpq-dev --no-install-recommends
ENV INSTALL_PATH /my_app
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
RUN pip install --editable .
CMD gunicorn -c "python:config.gunicorn" "my_app.app:create_app()"
.env file:
COMPOSE_PROJECT_NAME=my_app
POSTGRES_USER=my_app
POSTGRES_PASSWORD=devpassword
PYTHONUNBUFFERED=true
Checking for the availability of PHP within the app:
php = os.system('php --version')
print(php)
returns:
php: not found
web_1 | 32512
(os.system returns the shell's wait status; 32512 is 127 << 8, i.e. exit code 127, "command not found".)
Can anybody please advise on how I can get PHP running alongside Python within my main web container so that I can call the function from my Python code?
If you don't need to serve PHP files and only need to execute them via the CLI, just install php in the container. You can do that in your Dockerfile by adding php to the apt-get command:
RUN apt-get update && apt-get install -qq -y \
    build-essential libpq-dev php --no-install-recommends
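Once PHP is in the image, the call from Python could look like the following sketch (the script path and argument are placeholders; the question's actual subprocess code isn't shown):

import subprocess

# run the PHP file and capture its stdout; path and argument are hypothetical
result = subprocess.run(
    ["php", "/my_app/math_function.php", "42"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)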
