Two days of work and I'm still stuck. I'm running separate nginx and application containers. The application container has a Flask app that runs under a gunicorn process on port 8000.
Every time I navigate to localhost:8080 (the host port that the nginx container's port 80 is mapped to), I get a loading screen and then an nginx 504 error.
This is what I see on the terminal:
docker-compose.yml
version: '2'
services:
  web:
    restart: always
    build: ./web_app
    expose:
      - "8000"
    ports:
      - "8000:8000"
    volumes:
      - ./web_app:/data/web
    command: /usr/local/bin/gunicorn web_interface:app -w 4 -t 90 --log-level=info -b :8000 --reload
    depends_on:
      - postgres
  nginx:
    restart: always
    build: ./nginx
    ports:
      - "8080:80"
    volumes_from:
      - web
    depends_on:
      - web
  postgres:
    restart: always
    image: postgres:latest
    volumes_from:
      - data
    volumes:
      - ./postgres/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
      - ./backups/postgresql:/backup
    expose:
      - "5432"
  data:
    restart: always
    image: alpine
    volumes:
      - /var/lib/postgresql
    tty: true
nginx.conf
server {
    listen 80;
    server_name localhost;
    charset utf-8;

    location /static/ {
        alias /data/web/crm/web_interface;
    }

    location = /favicon.ico {
        alias /data/web/crm/web_interface/static/favicon.ico;
    }

    location / {
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
nginx Dockerfile
FROM nginx:latest
RUN rm /etc/nginx/conf.d/default.conf
COPY ./nginx.conf /etc/nginx/conf.d/nginx.conf
I'll provide more info if needed; I'm really struggling with this issue.
An nginx 504 response indicates a gateway timeout: nginx cannot reach the backend server, so the problem is most likely around the proxy_pass directive.
My guess is that nginx cannot resolve the web hostname. There are two possible solutions:
Use the backend's IP address instead of the hostname:
location / {
    proxy_pass http://<real_ip>:8000;
}
Or use an upstream block (the port goes on the server line, and proxy_pass then references only the upstream name):
upstream web {
    server <real_ip>:8000;
}

location / {
    proxy_pass http://web;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
OK, so after three days of bashing my head against this, I restarted from the ground up. I rebuilt the app container and ran gunicorn.
From there I was able to determine that the gunicorn process was timing out because the database host name was incorrect. Instead of an error being returned through my application, the failure was silent.
I fixed this by linking the postgres container and the web container. In my code I was able to use "postgres" (the name of the service) as the postgres host name.
Check the addresses of your external hosts.
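For anyone hitting the same silent failure, a minimal sanity check, assuming the Compose service is named postgres as above and the web image has Python on PATH (it runs gunicorn, so it does):

# does the service name resolve on the Compose network?
docker-compose exec web getent hosts postgres

# does anything accept connections on 5432? (standard library only)
docker-compose exec web python -c "import socket; socket.create_connection(('postgres', 5432), timeout=3); print('ok')"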
Related
I have a program written in Python consisting of FastAPI, Celery, Flower, and nginx. I use Docker Compose to build the images and deploy them to Azure App Service as a multi-container app.
My issue is that I cannot access Flower when deployed to Azure App Service. Locally, it works fine.
My docker-compose-build.yml, which is used to build the images that are then pushed to ACR:
version: '3.4'
services:
  fast_api:
    container_name: fast_api
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - .:/app
    ports:
      - 8080:8080
    depends_on:
      - redis
  celery_worker:
    container_name: celery_worker
    build: .
    command: celery -A app.celery.worker worker --loglevel=warning --pool=eventlet --concurrency=1000 -O fair
    volumes:
      - .:/app
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - fast_api
      - redis
  redis:
    container_name: redis
    image: redis:6-alpine
    ports:
      - 6379:6379
  flower:
    container_name: flower
    build: .
    command: celery -A app.celery.worker flower --port=5555 --url_prefix=flower
    ports:
      - 5555:5555
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - fast_api
      - redis
      - celery_worker
  nginx:
    container_name: nginx
    build:
      context: .
      dockerfile: ./Dockerfile.nginx
    ports:
      - 80:80
    depends_on:
      - flower
      - fast_api
My docker-compose.yml which is used by Azure App Service:
version: '3.4'
services:
  fast_api:
    container_name: fast_api
    image: name.azurecr.io/name_fast_api:latest
    volumes:
      - ${WEBAPP_STORAGE_HOME}/app
    ports:
      - 8080:8080
    depends_on:
      - redis
  celery_worker:
    container_name: celery_worker
    image: name.azurecr.io/name_celery_worker:latest
    command: celery -A app.celery.worker worker --loglevel=warning --pool=eventlet --concurrency=1000 -O fair
    volumes:
      - ${WEBAPP_STORAGE_HOME}/app
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - fast_api
      - redis
  redis:
    container_name: redis
    image: name.azurecr.io/redis:6-alpine
    ports:
      - 6379:6379
  flower:
    container_name: flower
    image: name.azurecr.io/name_flower:latest
    command: celery -A app.celery.worker flower --port=5555 --url_prefix=flower
    ports:
      - 5555:5555
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - fast_api
      - redis
      - celery_worker
  nginx:
    container_name: nginx
    image: name.azurecr.io/name_nginx:latest
    # volumes:
    #   - ${WEBAPP_STORAGE_HOME}/etc/nginx/nginx.conf # Storage:/etc/nginx/nginx.conf #/etc/nginx/ #/usr/share/nginx/html ##/etc/nginx/nginx.conf
    ports:
      - 80:80
    depends_on:
      - flower
      - fast_api
Initially, in the docker-compose.yml, I pulled the image directly from Docker Hub and stored the nginx.conf file in an Azure File Share which I mounted to the App Service.
I had a suspicion that the nginx.conf file was not being used by nginx, so I built a custom nginx image by creating Dockerfile.nginx, where I copy in the nginx.conf file:
FROM nginx:stable
COPY nginx.conf /etc/nginx/nginx.conf
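To check which configuration the container actually loaded (a local sanity check; nginx -T prints the fully parsed configuration):

# dump the configuration nginx parsed inside the running container
docker-compose exec nginx nginx -T | head -n 40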
My nginx.conf file:
events {
    worker_connections 4096;  ## Default: 1024
}

http {
    server {
        listen 80 default_server;
        #listen [::]:80;
        server_name _;  #app-name.azurewebsites.net;

        location / {
            proxy_pass http://fast_api:8080/;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }

        location /test {
            proxy_pass http://fast_api:8080/;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }

        location /flower {
            proxy_pass http://flower:5555/flower;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
When I then go to https://app-name.azurewebsites.net/flower I get:
{
    "detail": "Not Found"
}
I can access https://app-name.azurewebsites.net/docs without a problem and the API works perfectly.
Does anyone have an idea why I cannot access Flower when deployed to Azure?
Any help and ideas are appreciated as I have run out of things to try!
The issue has been solved. The problem was that all of the Docker services' ports were published externally, and port 8080 was used by the fast_api service, which may have conflicted with App Service itself listening on port 8080 or 80 (https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?pivots=container-linux#configure-port-number).
The solution was to publish only the nginx service's ports with the ports parameter; all other services should only be exposed internally with the expose parameter.
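For illustration, a minimal sketch of that change, showing only the relevant keys (image names as in the compose file above):

services:
  nginx:
    image: name.azurecr.io/name_nginx:latest
    ports:
      - 80:80        # the only externally published port
  fast_api:
    image: name.azurecr.io/name_fast_api:latest
    expose:
      - 8080         # internal only; nginx reaches it as fast_api:8080
  flower:
    image: name.azurecr.io/name_flower:latest
    expose:
      - 5555         # internal only; nginx reaches it as flower:5555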
I'm running a Flask Python app, within which I make some Ajax calls.
Running it with the Flask development server works fine: the calls run in the background and I can continue using the app.
When moving to a gunicorn and nginx reverse-proxy setup, the app seems to wait for that Ajax call to be processed (often ending in a timeout). Why is that? Does this have something to do with multithreading? I'm new to gunicorn/nginx. Thanks for the help.
The setup is pretty much the same as described here: https://testdriven.io/blog/dockerizing-flask-with-postgres-gunicorn-and-nginx/#docker
The nginx config:
upstream app {
    server web:5000;
}

server {
    listen 80;

    location / {
        proxy_pass http://app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /home/app/web/project/static/;
    }

    location /media/ {
        alias /home/app/web/project/media/;
    }
}
docker-compose file:
version: '3.8'
services:
  web:
    container_name: app
    restart: always
    build:
      context: ./services/web
      dockerfile: Dockerfile.prod
    expose:
      - 5005
    env_file:
      - ./.env.prod
    command: gunicorn --bind 0.0.0.0:5005 manage:app
    volumes:
      - static_volume:/home/hello_flask/web/app/static
      - media_volume:/home/hello_flask/web/app/media
    depends_on:
      - db
  db:
    container_name: app_prod_db
    restart: always
    image: postgres:13-alpine
    volumes:
      - postgres_data_prod:/var/lib/postgresql/data/
    env_file:
      - ./.env.prod.db
  nginx:
    container_name: nginx
    restart: always
    build: ./services/nginx
    volumes:
      - static_volume:/home/app/web/app/static
      - media_volume:/home/app/web/app/start/media
    image: "nginx:latest"
    ports:
      - "5000:80"
    depends_on:
      - web
volumes:
  postgres_data_prod:
  static_volume:
  media_volume:
I don't think the Ajax call is the issue, but just in case, here it is:
$("#load_account").on('submit', function(event) {
$.ajax({
data : {
vmpro : $('#accountInput').val()
},
type : 'POST',
url : '/account/load_account'
})
.done(function(data) {
if (data.error) {
$('#errorAlert_accountvmproInput').text(data.error).show();
$('#successAlert_accountInput').hide();
}
else {
$('#successAlert_accountInput').text(data.overview).show();
$('#errorAlert_accountInput').hide();
}
});
Solved:
gunicorn was running a single worker of the default sync class.
Increasing the number of workers to 4 solved the problem.
However, I actually opted for gevent-class workers in the end.
My updated docker-compose.yml includes:
command: gunicorn -k gevent -w 2 --bind 0.0.0.0:5005 manage:app
This is detailed in the gunicorn documentation.
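One assumption worth spelling out: the gevent package has to be present in the web image for the -k gevent worker class to start, so the image build needs something along these lines (a sketch; adjust to however the image installs its dependencies):

# hypothetical addition to the web image's build (e.g. Dockerfile.prod or requirements)
pip install gevent

# gunicorn can then be started with the gevent worker class, as in the compose command above
gunicorn -k gevent -w 2 --bind 0.0.0.0:5005 manage:app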
Description of problem
I am unable to get the static files to load when running Kiwi TCMS via Docker in a subdirectory (e.g. www.mysiteweb.com/kiwi) behind a reverse proxy (nginx).
As suggested in the docs, I made a local_settings.py file. It contains the following:
MEDIA_URL = '/kiwi/uploads/'
STATIC_URL = '/kiwi/static/'
Actual results
The path seems to be correct, e.g. https://www.mysiteweb.com/kiwi/static/patternfly/dist/js/patternfly.min.js, but it fails to load and gives me a 404.
Expected results
Run Kiwi TCMS in a subdirectory (www.mysiteweb.com/kiwi) and have the static files load correctly.
Additional info
I use the default docker-compose.yml config as a base, but use my own database (I've been able to perform the schema migration successfully).
I changed the base_url directly in the database to https://www.mysiteweb.com/kiwi
Here is my current nginx configuration (it is proxying correctly):
# kiwi // port 8005
location /kiwi {
    proxy_pass http://localhost:8005;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
Here is my docker-compose.yml
version: '2'
services:
  web:
    container_name: kiwi_web
    restart: always
    image: kiwitcms/kiwi:latest
    ports:
      - 8005:8443
    volumes:
      - /data/kiwitcms/Kiwiuploads:/Kiwi/uploads:Z
      - ./local_settings.py:/venv/lib64/python3.6/site-packages/tcms/settings/local_settings.py
    environment:
      KIWI_DB_HOST: 172.17.0.1
      KIWI_DB_PORT: 3306
      KIWI_DB_NAME: kiwi
      KIWI_DB_USER: kiwi
      KIWI_DB_PASSWORD: XXXXXXXX
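A diagnostic sketch that may help narrow this down, assuming the 8005:8443 mapping above serves HTTPS: ask the Kiwi container directly, since the location /kiwi block forwards the full /kiwi/static/... URI unchanged to the backend.

# does the container answer on the prefixed path the browser requests?
curl -k -o /dev/null -w '%{http_code}\n' https://localhost:8005/kiwi/static/patternfly/dist/js/patternfly.min.js

# compare with the unprefixed path the image serves by default
curl -k -o /dev/null -w '%{http_code}\n' https://localhost:8005/static/patternfly/dist/js/patternfly.min.js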
So I have been banging my head against this issue for the past few hours, and I have got this docker-compose.yml file:
version: '2'
services:
  web:
    restart: always
    build: ./web_app
    expose:
      - "8000"
    ports:
      - "8000:8000"
    volumes:
      - ./web_app:/data/web
    command: /usr/local/bin/gunicorn web_interface:app -w 4 -t 90 --log-level=debug -b 0.0.0.0:8000 --reload
    depends_on:
      - postgres
  nginx:
    restart: always
    build: ./nginx
    ports:
      - "8080:80"
    volumes_from:
      - web
    depends_on:
      - web
  postgres:
    restart: always
    image: postgres:latest
    volumes_from:
      - data
    volumes:
      - ./postgres/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
      - ./backups/postgresql:/backup
    expose:
      - "5432"
  data:
    restart: always
    image: alpine
    volumes:
      - /var/lib/postgresql
    tty: true
However, when I docker-compose up and then navigate to localhost:8880, nothing is served. It's like the nginx server is not accepting connections on localhost.
nginx.conf
server {
    listen 80;
    server_name localhost;
    charset utf-8;

    location /static/ {
        alias /data/web/crm/web_interface;
    }

    location = /favicon.ico {
        alias /data/web/crm/web_interface/static/favicon.ico;
    }

    location / {
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
nginx/Dockerfile
FROM nginx:latest
RUN rm /etc/nginx/conf.d/default.conf
COPY ./nginx.conf /etc/nginx/conf.d/nginx.conf
And this is what's in my terminal:
I have been following this tutorial fairly closely, but it can't seem to serve the Flask app that I have created. Any ideas?
Try changing the port mapping for the nginx service as below:
ports:
  - "8880:80"
or browse to localhost:8080 instead, since the current "8080:80" mapping already publishes nginx there.
I am following this example and this answer on Stack Overflow, and I am stuck.
I am running this example on a DigitalOcean VPS. My file structure is as follows:
project structure
docker-compose.yml
mainweb/
nginx/
README
docker-compose.yml
version: '2'
services:
  app:
    restart: always
    build: ./mainweb
    command: gunicorn -w 2 -b :5000 wsgi:app
    networks:
      - mainnet
    expose:
      - "5000"
    ports:
      - "5000:5000"
  nginx:
    restart: always
    build: ./nginx
    networks:
      - mainnet
    links:
      - app
    volumes:
      - /www/static
    expose:
      - 8080
    ports:
      - "8880:8080"
networks:
  mainnet:
mainweb/
app.py
Dockerfile
requirements.txt
templates/
wsgi.py
mainweb/app.py
from flask import Flask, render_template

app=Flask(__name__)

@app.route('/')
def home()():
    return render_template('templates/home.html')

if __name__=="__main__":
    app.run(host="0.0.0.0", port=5000)
mainweb/Dockerfile
FROM python:3.5
MAINTAINER castellanprime
RUN mkdir /mainweb
COPY . /mainweb
WORKDIR /mainweb
RUN pip install -r requirements.txt
mainweb/templates/
home.html
mainweb/templates/home.html
<!doctype html>
<html>
<head>
<title> My website </title>
</head>
<body>
<h1> I am here </h1>
</body>
</html>
mainweb/wsgi.py
from app import app

if __name__=="__main__":
    app.run()
nginx
Dockerfile
sites-enabled.conf
static/
nginx/Dockerfile
FROM nginx:1.13.1-alpine
MAINTAINER castellanprime
ADD sites-enabled.conf /etc/nginx/conf.d/sites-enabled.conf
ADD static/ /www/static/
nginx/sites-enabled.conf
server {
    listen 8080;
    server_name app;  # Should I put my actual www.XXXXXX.XXXXX address here?
    charset utf-8;

    location /static {
        alias /www/static/;
    }

    location / {
        proxy_pass http://app:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
nginx/static
css/
js/
After I run the command docker-compose up -d, I check www.XXXXXX.com:8880 or www.XXXXXX.com:8080 from a web client on another system.
I get the standard nginx web page.
How do I redirect it to home.html?
Take a step back and run the Flask app alone.
You have some syntax errors.
from flask import Flask, render_template

app=Flask(__name__)

@app.route('/')
def home():  # Remove double brackets
    return render_template('home.html')  # The templates folder is already picked up

if __name__=="__main__":
    app.run(host="0.0.0.0", port=5000)
Then run it in a Docker container, without gunicorn:
FROM python:3.5
RUN mkdir /mainweb
COPY . /mainweb
WORKDIR /mainweb
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python3","/mainweb/app.py"]
Build and run it, and see if it works:
cd mainweb
docker build -t flask:test .
docker run --rm -p 5000:5000 flask:test
Open http://server:5000
Then move on to docker-compose with just that container, and define nginx if you want.
nginx/Dockerfile
FROM nginx:1.13.1-alpine
ADD flask.conf /etc/nginx/conf.d/
EXPOSE 8080
nginx/flask.conf (I changed this based on a file that I have in a project)
server {
    listen 8080;  # This is the port to EXPOSE in the nginx container
    server_name app;  # You can change this, but it's not necessary
    charset utf-8;

    location ^~ /static/ {
        alias /usr/share/nginx/html/;
    }

    location / {
        try_files $uri $uri/ @flask;
    }

    location @flask {
        proxy_pass http://app:5000;  # This is the port the Flask container EXPOSEs
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
And finally, the compose file. You don't want your site to expose both 5000 and 80 (you don't want people to bypass nginx), so just don't expose 5000:
version: '2'
services:
  app:
    restart: always
    build: ./mainweb
    networks:
      - mainnet
  nginx:
    restart: always
    build: ./nginx
    networks:
      - mainnet
    links:
      - app
    volumes:
      - ./mainweb/static:/usr/share/nginx/html
    ports:
      - "80:8080"
networks:
  mainnet:
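A quick way to exercise the whole stack after these changes (the server host name is a placeholder):

# rebuild both images and start the stack in the background
docker-compose up -d --build

# nginx is published on host port 80, so the site should answer here
curl -I http://<your-server>/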