Long story short
I would like to run aiohttp backend services behind an nginx webserver, with both running in Docker containers. My Angular frontend application should then access the backend services through nginx.
Expected behaviour
I expect the nginx webserver to be able to connect to my aiohttp backend services running in Docker.
Actual behaviour
I always get an error in the Docker logs when I issue a GET request against my aiohttp backend service.
nginx_1 | 2018/09/29 13:48:03 [error] 6#6: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: , request: "GET /toolservice/volatility?command=pslist HTTP/1.1", upstream: "http://172.19.0.2:80/toolservice/volatility?command=pslist", host: "localhost"
nginx_1 | 172.19.0.1 - - [29/Sep/2018:13:48:03 +0000] "GET /toolservice/volatility?command=pslist HTTP/1.1" 502 576 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" "-"
docker-compose.yml
version: '3'
services:
  nginx:
    build: ./nginx
    restart: always
    depends_on:
      - toolservice
      - ifs
    ports:
      - "80:80"
  ifs:
    restart: always
    build: ../ifsbackend
    ports:
      - "8002:8000"
  toolservice:
    restart: always
    build: ../ToolService
    ports:
      - "8001:8000"
Dockerfile nginx webserver
FROM nginx:1.13-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY conf/server.conf /etc/nginx/conf.d/
Dockerfile aiohttp backend
FROM python:3.6.6-alpine
COPY tool /
COPY requirements.txt /
COPY toolservice_config.yaml /
RUN apk update && apk add \
    python3-dev \
    musl-dev \
    gcc \
    && pip install -r requirements.txt \
    && pip install virtualenv
RUN python3 -m virtualenv --python=python3 virtualenv
EXPOSE 8080
CMD [ "python", "server.py" ]
Nginx webserver config
#upstream toolservice {
#    server 0.0.0.0:8001 fail_timeout=0;
#}
server {
    listen 80;
    #server_name localhost;

    proxy_buffers 8 16k;
    proxy_buffer_size 32k;

    location /toolservice {
        proxy_pass http://toolservice;
        proxy_redirect default;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /ifs {
        proxy_pass http://ifs;
        proxy_redirect default;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Aiohttp toolservice backend
from aiohttp import web
from routes import setup_routes
from settings import config
app = web.Application()
setup_routes(app)
app['config'] = config
web.run_app(app, port=8001)
Aiohttp is running on port 8001 in the toolservice container, but you are proxying to port 80:
proxy_pass http://toolservice;
Try proxying to 8001:
proxy_pass http://toolservice:8001;
You may also need to fix the port publishing for the toolservice container (I'm not 100% sure):
ports:
  - "8001:8001"
Related
I've been working with nginx for about a week. I have four services set up with Docker (Django, PostgreSQL, FastAPI, and nginx), but nginx does not serve Django's static files: I'm facing a 403 error. I have tried the solutions from all the similar questions (granting file permissions, checking file paths), but none of them work. I'm sharing the files I use below; please help.
docker-compose.yml:
django_gunicorn:
  build: .
  command: gunicorn sunucu.wsgi:application --bind 0.0.0.0:7800 --workers 3
  volumes:
    - ./static:/root/Kodlar/kodlar/static
  env_file:
    - .env
  environment:
    - DATABASE_URL="**"
  ports:
    - "7800"
  depends_on:
    - db
nginx:
  build: ./nginx
  volumes:
    - ./static:/root/Kodlar/kodlar/static
  ports:
    - "80:80"
  depends_on:
    - django_gunicorn
volumes:
  static_files:
Django Dockerfile:
FROM python:3.8-slim-buster
WORKDIR /app
ENV PYTHONUNBUFFERED=1
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
RUN python manage.py migrate --no-input
RUN python manage.py collectstatic --no-input
CMD ["gunicorn", "server.wsgi:application", "--bind", "0.0.0.0:7800","--workers","3"]
Django settings.py:
STATIC_URL = '/static/'
STATIC_ROOT = '/root/Kodlar/kodlar/static/'
DEBUG = False
Nginx Dockerfile:
FROM nginx:1.19.0-alpine
COPY ./default.conf /etc/nginx/conf.d/default.conf
Nginx Conf File:
upstream django {
    server django_gunicorn:7800;
}

server {
    listen 80;
    server_name mydomain.com;

    error_page 404 /404.html;
    location = /404.html {
        root /root/Kodlar/kodlar/templates;
        internal;
    }

    if ($host != 'mydomain.com') {
        return 404;
    }

    location / {
        proxy_pass http://django;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /static/ {
        alias /root/Kodlar/kodlar/static/;
    }
}
I want my Django service to run under gunicorn and have nginx serve the static files.
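A hedged guess rather than a confirmed fix: in the official nginx image the worker processes run as the unprivileged nginx user, and /root is typically mode 0700, so nginx cannot traverse into /root/Kodlar/kodlar/static/ and answers 403 even when the files exist. A sketch of the kind of change that avoids this, where /app/static is a hypothetical path chosen to be readable by the worker (STATIC_ROOT and both volume mounts would need to move with it):
location /static/ {
    # /app/static is an assumed location outside /root that the nginx
    # worker user can read; mount ./static here in the nginx container.
    alias /app/static/;
}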
I have a program written in Python consisting of FastAPI, Celery, Flower, and nginx. I use Docker Compose to build the images and deploy them to Azure App Service as a multi-container app.
My issue is that I cannot access Flower when deployed to Azure App Service. Locally, it works fine.
My docker-compose-build.yml, which builds the images that are then pushed to ACR:
version: '3.4'
services:
  fast_api:
    container_name: fast_api
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - .:/app
    ports:
      - 8080:8080
    depends_on:
      - redis
  celery_worker:
    container_name: celery_worker
    build: .
    command: celery -A app.celery.worker worker --loglevel=warning --pool=eventlet --concurrency=1000 -O fair
    volumes:
      - .:/app
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - fast_api
      - redis
  redis:
    container_name: redis
    image: redis:6-alpine
    ports:
      - 6379:6379
  flower:
    container_name: flower
    build: .
    command: celery -A app.celery.worker flower --port=5555 --url_prefix=flower
    ports:
      - 5555:5555
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - fast_api
      - redis
      - celery_worker
  nginx:
    container_name: nginx
    build:
      context: .
      dockerfile: ./Dockerfile.nginx
    ports:
      - 80:80
    depends_on:
      - flower
      - fast_api
My docker-compose.yml which is used by Azure App Service:
version: '3.4'
services:
  fast_api:
    container_name: fast_api
    image: name.azurecr.io/name_fast_api:latest
    volumes:
      - ${WEBAPP_STORAGE_HOME}/app
    ports:
      - 8080:8080
    depends_on:
      - redis
  celery_worker:
    container_name: celery_worker
    image: name.azurecr.io/name_celery_worker:latest
    command: celery -A app.celery.worker worker --loglevel=warning --pool=eventlet --concurrency=1000 -O fair
    volumes:
      - ${WEBAPP_STORAGE_HOME}/app
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - fast_api
      - redis
  redis:
    container_name: redis
    image: name.azurecr.io/redis:6-alpine
    ports:
      - 6379:6379
  flower:
    container_name: flower
    image: name.azurecr.io/name_flower:latest
    command: celery -A app.celery.worker flower --port=5555 --url_prefix=flower
    ports:
      - 5555:5555
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - fast_api
      - redis
      - celery_worker
  nginx:
    container_name: nginx
    image: name.azurecr.io/name_nginx:latest
    # volumes:
    #   - ${WEBAPP_STORAGE_HOME}/etc/nginx/nginx.conf # Storage:/etc/nginx/nginx.conf #/etc/nginx/ #/usr/share/nginx/html ##/etc/nginx/nginx.conf
    ports:
      - 80:80
    depends_on:
      - flower
      - fast_api
Initially, in the docker-compose.yml, I pulled the nginx image directly from Docker Hub and stored the nginx.conf file in an Azure File Share mounted to the App Service.
I had a suspicion that the nginx.conf file was not actually being used by nginx. Therefore, I built a custom nginx image with a Dockerfile.nginx that copies in the nginx.conf file:
FROM nginx:stable
COPY nginx.conf /etc/nginx/nginx.conf
My nginx.conf file:
events {
    worker_connections 4096;  ## Default: 1024
}

http {
    server {
        listen 80 default_server;
        #listen [::]:80;
        server_name _;  #app-name.azurewebsites.net;

        location / {
            proxy_pass http://fast_api:8080/;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }

        location /test {
            proxy_pass http://fast_api:8080/;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }

        location /flower {
            proxy_pass http://flower:5555/flower;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
When I then go to https://app-name.azurewebsites.net/flower I get:
{
  "detail": "Not Found"
}
I can access https://app-name.azurewebsites.net/docs without any problem, and the API works perfectly.
Does anyone have an idea why I cannot access Flower when deployed to Azure?
Any help and ideas are appreciated, as I have run out of things to try!
The issue has been solved. All of the services' ports were being published externally, and port 8080 was used by the fast_api service, which may have conflicted with App Service itself listening on port 8080 or 80 (https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?pivots=container-linux#configure-port-number).
The solution was to publish only the nginx service's port with the ports parameter; all other services should only be exposed internally with the expose parameter.
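As an illustrative sketch of that change (abridged to three services, other settings omitted), only nginx publishes a port, while the backends switch from ports to expose so they are reachable solely over the compose network:
services:
  nginx:
    image: name.azurecr.io/name_nginx:latest
    ports:
      - 80:80            # the only externally published port
  fast_api:
    image: name.azurecr.io/name_fast_api:latest
    expose:
      - 8080             # internal only: nginx reaches it as fast_api:8080
  flower:
    image: name.azurecr.io/name_flower:latest
    expose:
      - 5555             # internal only: nginx reaches it as flower:5555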
I'm running a Flask Python app in which I make some Ajax calls.
Running it with the Flask development server works fine: the calls run in the background and I can keep using the app.
After moving to a gunicorn and nginx reverse-proxy setup, the app seems to block until that Ajax call is processed (often ending in a timeout). Why is that? Does this have something to do with multithreading? I'm new to gunicorn/nginx. Thanks for the help.
The setup is pretty much the same as described here: https://testdriven.io/blog/dockerizing-flask-with-postgres-gunicorn-and-nginx/#docker
The nginx config:
upstream app {
    server web:5000;
}

server {
    listen 80;

    location / {
        proxy_pass http://app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /home/app/web/project/static/;
    }

    location /media/ {
        alias /home/app/web/project/media/;
    }
}
docker-compose file:
version: '3.8'
services:
  web:
    container_name: app
    restart: always
    build:
      context: ./services/web
      dockerfile: Dockerfile.prod
    expose:
      - 5005
    env_file:
      - ./.env.prod
    command: gunicorn --bind 0.0.0.0:5005 manage:app
    volumes:
      - static_volume:/home/hello_flask/web/app/static
      - media_volume:/home/hello_flask/web/app/media
    depends_on:
      - db
  db:
    container_name: app_prod_db
    restart: always
    image: postgres:13-alpine
    volumes:
      - postgres_data_prod:/var/lib/postgresql/data/
    env_file:
      - ./.env.prod.db
  nginx:
    container_name: nginx
    restart: always
    build: ./services/nginx
    volumes:
      - static_volume:/home/app/web/app/static
      - media_volume:/home/app/web/app/start/media
    image: "nginx:latest"
    ports:
      - "5000:80"
    depends_on:
      - web
volumes:
  postgres_data_prod:
  static_volume:
  media_volume:
I don't think the Ajax call is the issue, but here it is just in case:
$("#load_account").on('submit', function(event) {
$.ajax({
data : {
vmpro : $('#accountInput').val()
},
type : 'POST',
url : '/account/load_account'
})
.done(function(data) {
if (data.error) {
$('#errorAlert_accountvmproInput').text(data.error).show();
$('#successAlert_accountInput').hide();
}
else {
$('#successAlert_accountInput').text(data.overview).show();
$('#errorAlert_accountInput').hide();
}
});
Solved:
gunicorn was running a single worker of the default sync class.
Increasing the number of workers to 4 solved the problem.
However, I ultimately opted for gevent-class workers instead.
My updated docker-compose.yml includes:
command: gunicorn -k gevent -w 2 --bind 0.0.0.0:5005 manage:app
This is detailed in the gunicorn documentation HERE.
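For reference, the same settings can also live in a gunicorn config file instead of on the command line; a minimal sketch, assuming the gevent package is installed in the image:
# gunicorn.conf.py -- loaded automatically by gunicorn from the working directory
bind = "0.0.0.0:5005"
worker_class = "gevent"  # cooperative workers, so slow Ajax requests no longer block others
workers = 2
The compose command then shrinks to gunicorn manage:app.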
So I have been banging my head on this issue for the past hours, and I have got a docker-compose.yml file:
version: '2'
services:
  web:
    restart: always
    build: ./web_app
    expose:
      - "8000"
    ports:
      - "8000:8000"
    volumes:
      - ./web_app:/data/web
    command: /usr/local/bin/gunicorn web_interface:app -w 4 -t 90 --log-level=debug -b 0.0.0.0:8000 --reload
    depends_on:
      - postgres
  nginx:
    restart: always
    build: ./nginx
    ports:
      - "8080:80"
    volumes_from:
      - web
    depends_on:
      - web
  postgres:
    restart: always
    image: postgres:latest
    volumes_from:
      - data
    volumes:
      - ./postgres/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
      - ./backups/postgresql:/backup
    expose:
      - "5432"
  data:
    restart: always
    image: alpine
    volumes:
      - /var/lib/postgresql
    tty: true
However, when I docker-compose up and then navigate to localhost:8880, nothing is served. It's as if the nginx server is not accepting connections through localhost.
nginx.conf
server {
    listen 80;
    server_name localhost;
    charset utf-8;

    location /static/ {
        alias /data/web/crm/web_interface;
    }

    location = /favicon.ico {
        alias /data/web/crm/web_interface/static/favicon.ico;
    }

    location / {
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
nginx/Dockerfile
FROM nginx:latest
RUN rm /etc/nginx/conf.d/default.conf
COPY ./nginx.conf /etc/nginx/conf.d/nginx.conf
And this is what's in my terminal:
I have been following this tutorial fairly closely, but it can't seem to serve the Flask app that I have created. Any ideas?
Try changing the port mapping for the nginx service so the published host port matches the address you are browsing to (compose ports entries are "HOST:CONTAINER"):
ports:
  - "8880:80"
Alternatively, keep the existing "8080:80" mapping and navigate to localhost:8080 instead.
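A quick, hedged way to verify which host port is actually published once the stack is up:
docker-compose ps nginx    # the PORTS column should show e.g. 0.0.0.0:8880->80/tcp
curl -I http://localhost:8880/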
Two days of work and I'm still stuck. I'm running separate nginx and application containers; the application container runs a Flask app under a gunicorn process on port 8000.
Every time I navigate to localhost:8080, which is the host port that nginx's port 80 is mapped to, I get a loading screen and then an nginx 504 error.
This is what I see in the terminal:
docker-compose.yml
version: '2'
services:
  web:
    restart: always
    build: ./web_app
    expose:
      - "8000"
    ports:
      - "8000:8000"
    volumes:
      - ./web_app:/data/web
    command: /usr/local/bin/gunicorn web_interface:app -w 4 -t 90 --log-level=info -b :8000 --reload
    depends_on:
      - postgres
  nginx:
    restart: always
    build: ./nginx
    ports:
      - "8080:80"
    volumes_from:
      - web
    depends_on:
      - web
  postgres:
    restart: always
    image: postgres:latest
    volumes_from:
      - data
    volumes:
      - ./postgres/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
      - ./backups/postgresql:/backup
    expose:
      - "5432"
  data:
    restart: always
    image: alpine
    volumes:
      - /var/lib/postgresql
    tty: true
nginx.conf
server {
    listen 80;
    server_name localhost;
    charset utf-8;

    location /static/ {
        alias /data/web/crm/web_interface;
    }

    location = /favicon.ico {
        alias /data/web/crm/web_interface/static/favicon.ico;
    }

    location / {
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
nginx Dockerfile
FROM nginx:latest
RUN rm /etc/nginx/conf.d/default.conf
COPY ./nginx.conf /etc/nginx/conf.d/nginx.conf
I'll provide more info if needed; I'd appreciate some help on this issue I'm struggling with.
An nginx 504 response indicates a gateway timeout: nginx could not connect to the backend server, so you can locate the issue at the proxy_pass directive.
My guess is that nginx cannot resolve the web domain name. There are two solutions:
Use the IP directly instead of the name:
location / {
    proxy_pass http://<real_ip>:8000;
}
Or use an upstream block (the port belongs on the server line, and proxy_pass then references just the upstream name):
upstream web {
    server <real_ip>:8000;
}
location / {
    proxy_pass http://web;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
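As background worth noting: on a compose network, service names such as web are normally resolvable via Docker's embedded DNS at 127.0.0.11, so a plain proxy_pass http://web:8000; usually works without an upstream block. An explicit resolver only matters when proxy_pass uses a variable, which defers the lookup to request time; a sketch:
location / {
    resolver 127.0.0.11 valid=10s;   # Docker's embedded DNS
    set $backend http://web:8000;
    proxy_pass $backend;
}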
Ok, so after three days of bashing my head against this, I restarted from the ground up: I rebuilt the app container and ran gunicorn.
From there I was able to determine that the gunicorn process was timing out because the database host name was incorrect. Instead of an error being returned through my application, the failure was silent.
I fixed this by linking the postgres container and the web container. In my code I was then able to use "postgres" (the name of the container) as the postgres host name.
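As an illustration of that fix, with user, password, and database name as hypothetical placeholders (the exact setting depends on how the app configures its database):
# The compose service name doubles as the DB host on the shared network.
SQLALCHEMY_DATABASE_URI = "postgresql://user:password@postgres:5432/mydb"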
Check the addresses of your external hosts.