Serving Flask via nginx and gunicorn in Docker - Python

Playing around with Flask, I would like to get a real setup up and running in Docker; that means Flask should be served via nginx and gunicorn. I set up a sample code repository at https://github.com/geoHeil/pythonServing but so far can't get nginx to work properly.
Flask is served on application:5000, and Docker should resolve application to the name of the respective container.
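For reference, the gunicorn command in the compose file below (wsgi:application) expects a module wsgi.py exposing a Flask object named application. A minimal sketch of such a module (the repository's actual file may differ):
# wsgi.py -- minimal Flask app exposed as "application" for gunicorn
from flask import Flask

application = Flask(__name__)

@application.route("/")
def index():
    # Trivial endpoint so the reverse proxy has something to return
    return "Hello from Flask behind gunicorn"

if __name__ == "__main__":
    # Local debugging only; in the container gunicorn binds :5000
    application.run(host="0.0.0.0", port=5000)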
Nginx config is as follows:
server {
    listen 8080;
    server_name application;
    charset utf-8;

    location / {
        proxy_pass http://application:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
which looks good to me. So far I can't find the problem.
Edit
The compose file is below. The commands to start it were
docker-compose build
docker-compose up
version: '2'
services:
  application:
    restart: always
    build: ./application
    command: gunicorn -w 4 --bind :5000 wsgi:application
    links:
      - db
    expose:
      - "5000"
    ports:
      - "5000:5000"

  nginx:
    restart: always
    build: ./nginx
    links:
      - application
    expose:
      - 8080
    ports:
      - "8880:8080"

Your nginx config file is in the wrong location.
Steps to fix:
sudo docker-compose down
Delete the nginx image. First list the images:
sudo docker images
REPOSITORY            TAG      IMAGE ID       CREATED              SIZE
pythonserving_nginx   latest   152698f13c7a   About a minute ago   54.3 MB
then remove it:
sudo docker rmi pythonserving_nginx
Now change the nginx Dockerfile:
FROM nginx:1.11.8-alpine
MAINTAINER geoheil
ADD sites-enabled.conf /etc/nginx/conf.d/sites-enabled.conf
Please note the location of the nginx config.
Now try this docker-compose file (Using User-defined Networks):
version: '2'
services:
  application:
    restart: always
    build: ./application
    command: gunicorn -w 4 --bind :5000 wsgi:application
    networks:
      - testnetwork
    expose:
      - "5000"
    ports:
      - "5000:5000"

  db:
    restart: always
    image: postgres:9.6.1-alpine
    networks:
      - testnetwork
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=d
      - POSTGRES_PASSWORD=d
      - POSTGRES_DB=d
    volumes:
      - ./postgres:/var/lib/postgresql

  nginx:
    restart: always
    build: ./nginx
    networks:
      - testnetwork
    expose:
      - 8080
    ports:
      - "8880:8080"

networks:
  testnetwork:
And bring up the containers:
sudo docker-compose up
Browse to http://localhost:8880
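To smoke-test the chain from the host once the stack is up, here is a small Python check using only the standard library (ports taken from the compose file above; adjust if yours differ):
# check_proxy.py -- hits nginx (8880) and gunicorn directly (5000)
from urllib.request import urlopen

for url in ("http://localhost:8880/", "http://localhost:5000/"):
    with urlopen(url, timeout=5) as resp:
        print(url, resp.status)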

Sample Dockerfile
FROM python:3.5
RUN apt-get update
RUN apt-get install -y --no-install-recommends \
libatlas-base-dev gfortran nginx supervisor
RUN pip3 install uwsgi
COPY ./requirements.txt /project/requirements.txt
RUN pip3 install -r /project/requirements.txt
RUN useradd --no-create-home nginx
RUN rm /etc/nginx/sites-enabled/default
RUN rm -r /root/.cache
COPY nginx.conf /etc/nginx/
COPY flask-site-nginx.conf /etc/nginx/conf.d/
COPY uwsgi.ini /etc/uwsgi/
COPY supervisord.conf /etc/
COPY /app /project
WORKDIR /project
CMD ["/usr/bin/supervisord"]

Related

Django and nginx are not working on Docker

I am a beginner with Docker and I want to deploy my Django project using nginx and Postgres on a VPS using Docker. I created a Dockerfile and a docker-compose file, but it is not working: Postgres is up on its port, but Django and nginx are not working. I don't have any idea; can you help me?
My Dockerfile:
FROM python:3.8-slim-buster
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y build-essential libpq-dev
RUN rm -rf /var/lib/apt/lists/*
COPY . .
RUN pip install --upgrade pip && pip install -r requirements.txt
My docker-compose:
version: '3.8'
services:
  database:
    container_name: database
    image: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
    volumes:
      - postgres:/var/lib/postgresql/data
    restart: always

  app:
    build:
      context: .
    container_name: django-app
    command: >
      sh -c "python3 manage.py migrate &&
             gunicorn config.wsgi:application --bind 0.0.0.0:8000"
    volumes:
      - static:/usr/src/app/static
      - media:/usr/src/app/media
    depends_on:
      - database
    environment:
      - DEBUG=False
      - ALLOWD_HOST=*
      - DATABASE-NAME=postgres
      - DATABASE-USER=postgres
      - DATABASE-PASSWORD=postgres
      - DATABASE-HOST=database
      - DATABASE-PORT=5432

  nginx:
    image: nginx
    container_name: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx:/etc/nginx/conf.d
      - static:/var/www/static
      - media:/var/www/media

volumes:
  postgres:
  static:
  media:
I presume you want to serve the Django static files from Nginx (using it as a reverse proxy in that case). What is missing here is binding Nginx to the port Gunicorn serves on and to the static file directory.
So you will need to configure nginx to do that in its conf file.
For example, in a file default.conf:
upstream backend {
    server app:8000;
}

server {
    listen 80;

    # Django admin (if implemented)
    location /admin {
        proxy_pass http://backend;
        autoindex off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    # Django static assets.
    location /static/ {
        autoindex off;
        alias /usr/src/app/staticfiles/;  # if that's where you copied your app files in the docker container
    }
}
Static files also need to be configured in settings.py (this assumes the usual import os and BASE_DIR at the top of the file):
STATIC_URL = "/static/"
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
STATIC_ROOT here points at a staticfiles directory in the project root.
You will also have to create some directories in the Django Dockerfile before you copy the project, in order to avoid an error from collectstatic:
WORKDIR your_workdir_path_in_docker_container
# Make static files dirs in order to avoid an error from collectstatic.
# admin: if the admin is enabled - to keep its css/js
# rest_framework: if you are using DRF - to keep its css/js
RUN mkdir $WORKDIR/staticfiles && \
    mkdir $WORKDIR/staticfiles/admin && \
    mkdir $WORKDIR/staticfiles/rest_framework
The last thing is to add the collectstatic command to the docker-compose section where Gunicorn is started, like:
command: >
  sh -c "python manage.py migrate --noinput &&
         python manage.py collectstatic --no-input &&
         gunicorn config.wsgi:application --bind 0.0.0.0:8000"
It may also be worth creating a separate Dockerfile for Nginx:
FROM nginx:1.21-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY your_path_for_file/default.conf /etc/nginx/conf.d
CMD ["nginx", "-g", "daemon off;"] # To start Nginx
You can start that additional container in docker-compose by specifying the path to its Dockerfile:
dockerfile: your_path_to_nginx_dockerfile/Dockerfile.nginx
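For completeness, the gunicorn target used in the compose command (config.wsgi:application) is the standard Django WSGI module. It typically looks like this (a sketch, assuming the project package is named config):
# config/wsgi.py -- standard Django WSGI entry point used by gunicorn
import os

from django.core.wsgi import get_wsgi_application

# Point Django at the settings module; adjust if yours lives elsewhere.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings")

application = get_wsgi_application()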

docker nginx django gunicorn not serving static files in production

Trying to deploy a website with nginx + gunicorn + docker + django, but nginx isn't serving static files. The configurations are as follows:
Django project structure
Settings file production.py:
STATIC_URL = "/static/"
"""STATICFILES_DIRS = (
    os.path.join(BASE_DIR, 'static'),
)"""
STATIC_ROOT = "/app/forex/static/admin/"
Dockerfile for nginx:
FROM nginx:1.19.0
COPY ./default.conf /etc/nginx/conf.d/default.conf
nginx configurations
upstream django {
    server website:8000;
}

server {
    listen 80;
    client_max_body_size 100M;
    proxy_set_header X-Forwarded-Proto $scheme;

    location / {
        proxy_pass http://django;
    }

    location /media/ {
        alias /app/media/;
    }

    location /static/ {
        alias /app/forex/static/admin/;
    }
}
Gunicorn Dockerfile:
FROM python:3
ADD requirements.txt /app/requirements.txt
ADD . /app
WORKDIR /app
EXPOSE 8000:8000
RUN pip install --upgrade pip && pip install -r /app/requirements.txt
RUN python manage.py collectstatic --no-input --settings=forex.settings.production
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "3", "forex.wsgi:application", "DJANGO_SETTINGS_MODULE=forex.settings.production"]
docker-compose.yml
services:
  website:
    build:
      context: .
      dockerfile: Dockerfile.app
    env_file:
      - env
    container_name: website_container_8

  nginx:
    build: ./nginx
    volumes:
      - static:/app/forex/static/admin/
    ports:
      - "80:80"
    depends_on:
      - website

volumes:
  static:
The static files aren't being copied into the nginx container.
What do I need to change to make it work?
Your files are located in your website container; you need to share them with the nginx container:
services:
  website:
    build:
      context: .
      dockerfile: Dockerfile.app
    env_file:
      - env
    container_name: website_container_8
    volumes:
      - static:/app/forex/static/admin/  # <-- you want to share this

  nginx:
    build: ./nginx
    volumes:
      - static:/app/forex/static/admin/  # <-- with this folder
    ports:
      - "80:80"
    depends_on:
      - website

volumes:
  static:  # <-- you can do it through this

uwsgi connect() failed (111: Connection refused) - DOCKER - PYTHON [duplicate]

This question already has answers here:
Nginx and uwsgi connection refused when placed in separate docker containers
(1 answer)
Connection refused: when uwsgi and nginx in different containers
(1 answer)
Closed last year.
I'm trying to build an infrastructure with Docker. I can't use Flask or Django.
Here are my files. I tried a lot of uwsgi parameters but I still have this error:
connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.240.1, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "uwsgi://127.0.0.1:8080", host: "127.0.0.1:8080", referrer: "http://127.0.0.1:8080/"
I'm a student. I have already read all the documentation and articles that I could find, but I still don't know how to fix my problem.
Here are my different files.
[./default.conf]
upstream uwsgicluster {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    #root /usr/share/nginx/html;

    location / {
        #try_files $uri #wsgi;
        uwsgi_pass uwsgicluster;
        include /etc/nginx/uwsgi_params;
    }
}
[python/app.ini]
[uwsgi]
wsgi-file = run.py # The file containing the callable (app)
callable = python # The callable object itself
socket = :5000 # The socket uwsgi will listen on
master = true
chmod-socket = 666
vacuum = true
process = 2
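For context, run.py is not shown in the question; wsgi-file = run.py together with the callable option points uwsgi at a plain WSGI callable inside that file (uwsgi expects callable to name that Python object). A minimal sketch of such a file, assuming the callable is named app:
# run.py -- minimal plain WSGI callable (no Flask/Django), served by uwsgi
def app(environ, start_response):
    body = b"Hello from uwsgi"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]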
[python/Dockerfile]
FROM python:3.6.5
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
RUN apt-get update && apt-get install -y uwsgi-plugin-python3
EXPOSE 5000
COPY . .
RUN uwsgi --ini app.ini
[docker-compose.yml]
services:
  mysql_db:
    container_name: "mysql_db"
    image: mysql:5.7
    command: mysqld --character-set-server=utf8 --collation-server=utf8_unicode_ci
    volumes:
      - db_volume:/var/lib/mysql
    environment:  # Set up mysql database name and password
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: employees
      MYSQL_USER: user
      MYSQL_PASSWORD: password
    networks:
      - app-db

  python3:
    restart: always
    build: ./python
    container_name: "python3"
    working_dir: "/root/"
    tty: true
    environment:
      - APP_NAME=DOCKER-TEST
    depends_on:
      - mysql_db
    networks:
      - app-front
      - app-db
    volumes:
      - ./python:/root
      - pycache_volume:/root/.cache
    links:
      - mysql_db

  front:
    image: nginx:latest
    container_name: "front"
    volumes:
      - ./front/default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 8080:80
    networks:
      - app-front
    links:
      - python3

networks:
  app-front:
    driver: bridge
  app-db:
    driver: bridge

volumes:
  db_volume:
  pycache_volume:
Can you help me, please?
Thank you!

Docker Compose Nginx Internal Server Error

I have a Flask app that I want to host behind nginx via my docker-compose file, but when I do, it gives me an Internal Server Error.
Here are some important files:
docker-compose.yml
version: "3.8"
services:
th3pl4gu3:
container_name: portfolix
build: ./
networks:
- portfolix_net
ports:
- 8084:8084
restart: always
server:
image: nginx:1.17.10
container_name: nginx
depends_on:
- th3pl4gu3
volumes:
- ./reverse_proxy/nginx.conf:/etc/nginx/nginx.conf
ports:
- 80:80
networks:
- portfolix_net
networks:
portfolix_net:
name: portfolix_network
driver: bridge
nginx.conf:
server {
    listen 80;

    location / {
        include uwsgi_params;
        uwsgi_pass th3pl4gu3:8084;
    }
}
Flask Dockerfile
# Using python 3.8 in Alpine
FROM python:3.8-alpine3.11
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Dependencies for uWSGI
RUN apk add python3-dev build-base linux-headers pcre-dev && pip install -r requirements.txt && apk update
# In case bash is needed
#RUN apk add --no-cache bash
# Tell the port number the container should expose
EXPOSE 8084
# Run the command
ENTRYPOINT ["uwsgi", "app.ini"]
app.ini
[uwsgi]
module = run:app
master = true
processes = 5
http-socket = 0.0.0.0:8084
chmod-socket = 660
vacuum = true
die-on-term = true
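For reference, module = run:app in app.ini points uWSGI at a Flask object named app inside run.py; a minimal sketch (the project's actual run.py is not shown here):
# run.py -- the Flask app object that "module = run:app" refers to
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Portfolio app behind uWSGI"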
Now when I run this docker-compose without the nginx service, it works, but I want it to run behind the nginx server. Any idea why I am getting the Internal Server Error?
I was able to solve it with the following docker-compose:
version: "3.8"
services:
th3pl4gu3:
container_name: portfolix
build: ./
networks:
- portfolix_net
expose:
- 8084
restart: always
server:
image: nginx:1.17.10
container_name: nginx
depends_on:
- th3pl4gu3
volumes:
- ./reverse_proxy/nginx.conf:/etc/nginx/nginx.conf
ports:
- 8084:80
networks:
- portfolix_net
networks:
portfolix_net:
name: portfolix_network
driver: bridge
The issue was with my 8084:8084 port mapping.

How to Hot-Reload in ReactJS Docker

This might sound simple, but I have this problem.
I have two Docker containers running: one for my front-end and the other for my backend services.
These are the Dockerfiles for both services.
front-end Dockerfile :
# Use an official node runtime as a parent image
FROM node:8
WORKDIR /app
# Install dependencies
COPY package.json /app
RUN npm install --silent
# Add rest of the client code
COPY . /app
EXPOSE 3000
CMD npm start
backend Dockerfile :
FROM python:3.7.7
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY server.py /usr/src/app
COPY . /usr/src/app
EXPOSE 8083
# CMD ["python3", "-m", "http.server", "8080"]
CMD ["python3", "./server.py"]
I am building images with the docker-compose.yaml as below:
version: "3.2"
services:
frontend:
build: ./frontend
ports:
- 80:3000
depends_on:
- backend
backend:
build: ./backends/banuka
ports:
- 8080:8083
How can I make these two services update whenever there is a change to the front-end or the back-end?
I found this repo, which is a boilerplate for ReactJS, Python Flask and PostgreSQL, and which says it has hot reload enabled for both the ReactJS frontend and the Python Flask backend, but I couldn't find anything related to that. Can someone help me?
repo link
What I want is: after every code change the container should be up-to-date automatically!
Try this in your docker-compose.yml
version: "3.2"
services:
frontend:
build: ./frontend
environment:
CHOKIDAR_USEPOLLING: "true"
volumes:
- /app/node_modules
- ./frontend:/app
ports:
- 80:3000
depends_on:
- backend
backend:
build: ./backends/banuka
environment:
CHOKIDAR_USEPOLLING: "true"
volumes:
- ./backends/banuka:/app
ports:
- 8080:8083
Basically you need the CHOKIDAR_USEPOLLING environment variable to enable hot reloading, and you need the volume bindings so that the code on your machine is the code the container actually runs. See if this works.
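On the Python side, a bind-mounted source tree only helps if the server process reloads itself when files change. If the backend is a Flask app (as in the linked boilerplate), the simplest option for development is to run it with the built-in reloader. A minimal sketch, assuming server.py holds a Flask app:
# server.py -- Flask dev server with auto-reload for local Docker development
from flask import Flask

app = Flask(__name__)

@app.route("/api/ping")
def ping():
    return {"status": "ok"}

if __name__ == "__main__":
    # debug=True enables the reloader, so edits in the bind-mounted
    # ./backends/banuka volume restart the server automatically
    app.run(host="0.0.0.0", port=8083, debug=True)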
If you are mapping your react container's port to a different port:
ports:
  - "30000:3000"
you may need to tell the WebSocketClient to look at the correct port:
environment:
  - CHOKIDAR_USEPOLLING=true  # create-react-app <= 5.x
  - WATCHPACK_POLLING=true    # create-react-app >= 5.x
  - FAST_REFRESH=false
  - WDS_SOCKET_PORT=30000     # the mapped port on your host machine
See related issue:
https://github.com/facebook/create-react-app/issues/11779
