Situation: I have a Django application that I want to deploy. The tools I use are Nginx and Gunicorn, and everything runs inside Docker containers via Docker Desktop.
Problem: I can view the Django app locally using my Docker IP, my machine's IP, and the loopback IP. However, when I try to access it from my laptop (another machine connected to the same Wi-Fi), I can't reach it.
My machine: Windows 10. I have already allowed port 80 through the Windows Firewall, both inbound and outbound.
Steps taken: I've tried running python -m http.server 80 on my machine, and that works perfectly fine, so I suspect the problem lies with Docker Desktop's Hyper-V networking or with my nginx configuration.
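For completeness, here is a minimal reachability check that can be run from the laptop (a sketch; DESKTOP_IP is a placeholder for the desktop's Wi-Fi address as reported by ipconfig):

# Quick check from the laptop: is port 80 on the desktop reachable at all?
# DESKTOP_IP is a placeholder -- substitute the Wi-Fi address shown by `ipconfig`.
import socket

DESKTOP_IP = "192.168.1.50"  # placeholder

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(3)
    try:
        sock.connect((DESKTOP_IP, 80))
        print("Port 80 is reachable, so the firewall/port-publishing path looks fine.")
    except OSError as exc:
        print("Port 80 is NOT reachable:", exc)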
My docker-compose file
version: '3'

services:
  dashboard:
    build: .
    volumes:
      - .:/opt/services/dashboard/src
      - static_volume:/opt/services/dashboard/src/static
    networks: # <-- here
      - nginx_network

  nginx:
    image: nginx:1.13
    ports:
      - 0.0.0.0:80:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/opt/services/dashboard/src/static
    depends_on:
      - dashboard
    networks: # <-- here
      - nginx_network

networks: # <-- and here
  nginx_network:
    driver: bridge

volumes:
  static_volume: # <-- declare the static volume
My dockerfile
# start from an official image
FROM python:3.6
# arbitrary location choice: you can change the directory
RUN mkdir -p /opt/services/dashboard/src
WORKDIR /opt/services/dashboard/src
# install our dependencies
RUN pip install gunicorn django requests jira python-dateutil
# copy our project code
COPY . /opt/services/dashboard/src
# expose the port 80
EXPOSE 80
# define the default command to run when starting the container
CMD ["gunicorn", "--bind", ":80", "dashboard.wsgi:application"]
My nginx config file
# first we declare our upstream server, which is our Gunicorn application
upstream dashboard_server {
# docker will automatically resolve this to the correct address
# because we use the same name as the service: "dashboard"
server dashboard:80;
}
# now we declare our main server
server {
listen 80;
server_name localhost;
location / {
# everything is passed to Gunicorn
proxy_pass http://dashboard_server;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
}
location /static/ {
alias /opt/services/dashboard/src/static/;
}
}
Here is an image of my folder structure.
QUESTION: How do I at least make it viewable on my laptop, which is connected to the same Wi-Fi as my desktop machine? I've already tried accessing it using my machine's IP.
Restarting the router/switch fixed it; after that it worked perfectly.
Related
I've been dealing with nginx for about a week. I have four services set up with Docker: Django, PostgreSQL, FastAPI, and nginx, but nginx does not serve Django's static files and I'm facing a 403 error. I've tried the solutions from all the similar questions (granting file permissions, checking the file paths), but it doesn't work. Below I'm sharing the files I use; please help.
docker-compose.yml:
django_gunicorn:
  build: .
  command: gunicorn sunucu.wsgi:application --bind 0.0.0.0:7800 --workers 3
  volumes:
    - ./static:/root/Kodlar/kodlar/static
  env_file:
    - .env
  environment:
    - DATABASE_URL="**"
  ports:
    - "7800"
  depends_on:
    - db

nginx:
  build: ./nginx
  volumes:
    - ./static:/root/Kodlar/kodlar/static
  ports:
    - "80:80"
  depends_on:
    - django_gunicorn

volumes:
  static_files:
Django Dockerfile:
FROM python:3.8-slim-buster
WORKDIR /app
ENV PYTHONUNBUFFERED=1
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
RUN python manage.py migrate --no-input
RUN python manage.py collectstatic --no-input
CMD ["gunicorn", "server.wsgi:application", "--bind", "0.0.0.0:7800","--workers","3"]
Django settings.py:
STATIC_URL = '/static/'
STATIC_ROOT = '/root/Kodlar/kodlar/static/'
DEBUG = False
Nginx Dockerfile:
FROM nginx:1.19.0-alpine
COPY ./default.conf /etc/nginx/conf.d/default.conf
Nginx Conf File:
upstream django {
server django_gunicorn:7800;
}
server {
listen 80;
server_name mydomain.com;
error_page 404 /404.html;
location = /404.html {
root /root/Kodlar/kodlar/templates;
internal;
}
if ($host != 'mydomain.com') {
return 404;
}
location / {
proxy_pass http://django;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /static/ {
alias /root/Kodlar/kodlar/static/;
}
}
I want my Django service to run under gunicorn and have nginx serve its static files.
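To help narrow the 403 down, here is a small sanity check (a sketch of my own; the path simply mirrors STATIC_ROOT and the nginx alias above) that can be run with Python inside any container that mounts ./static, for example via docker-compose exec django_gunicorn python:

# Minimal check: are there any collected files in STATIC_ROOT, and are they readable?
# Assumption: run inside a container that mounts /root/Kodlar/kodlar/static.
import os, stat

STATIC_ROOT = "/root/Kodlar/kodlar/static"  # must match settings.STATIC_ROOT and the nginx alias

if not os.path.isdir(STATIC_ROOT):
    print("STATIC_ROOT does not exist in this container")
else:
    entries = list(os.scandir(STATIC_ROOT))
    if not entries:
        print("STATIC_ROOT is empty: collectstatic output never reached this volume")
    for entry in entries:
        mode = stat.S_IMODE(entry.stat(follow_symlinks=False).st_mode)
        print(entry.path, oct(mode))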
Currently I'm trying to deploy a Django project to AWS EB, but I'm facing a lot of problems. I was able to dockerize the project and deploy it on AWS Elastic Beanstalk, but when I try to access the site I always see: 502 Bad Gateway. Locally, the project runs smoothly. I'm not really familiar with nginx and I have no idea how to solve this problem.
This is my project structure:
This is my Dockerfile:
# Creating image based on official python3 image
FROM python:3
MAINTAINER Jaron Bardenhagen
# Sets dumping log messages directly to stream instead of buffering
ENV PYTHONUNBUFFERED 1
# Creating and putting configurations
RUN mkdir /config
ADD config/app /config/
# Installing all python dependencies
RUN pip install -r /config/requirements.txt
# Open port 8000 to outside world
EXPOSE 8000
# When container starts, this script will be executed.
# Note that it is NOT executed during building
CMD ["sh", "/config/on-container-start.sh"]
# Creating and putting application inside container
# and setting it to working directory (meaning it is going to be default)
RUN mkdir /app
WORKDIR /app
ADD app /app/
This is my docker-compose file:
# File structure version
version: '3'

services:
  db:
    image: postgres
    environment:
      POSTGRES_DB_PORT: "5432"
      POSTGRES_DB_HOST: "*******"
      POSTGRES_PASSWORD: "*******"
      POSTGRES_USER: Jaron
      POSTGRES_DB: ebdb

  # Build from remote dockerfile
  # Connect local app folder with image folder, so changes will be pushed to image instantly
  # Open port 8000
  app:
    build:
      context: .
      dockerfile: config/app/Dockerfile
    hostname: app
    volumes:
      - ./app:/app
    expose:
      - "8000"
    depends_on:
      - db

  # Web server based on official nginx image
  # Connect external 8000 (which you can access from browser)
  # with internal port 8000 (which will be linked to app port 8000 in configs)
  # Connect local nginx configuration with image configuration
  nginx:
    image: nginx
    hostname: nginx
    ports:
      - "8000:8000"
    volumes:
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - app
This is the Dockerrun.aws.json file:
{
"AWSEBDockerrunVersion": "1",
"Image": {
"Name": "******/******:latest",
"Update": "true"
},
"Ports": [
{
"ContainerPort": "8000"
}
]
}
On-container-start.sh file:
# Create migrations based on django models
python manage.py makemigrations
# Migrate created migrations to database
python manage.py migrate
# Start gunicorn server at port 8000 and keep an eye for app code changes
# If changes occur, kill worker and start a new one
gunicorn --reload project.wsgi:application -b 0.0.0.0:8000
And here is the file for the nginx settings (app.conf):
# define group app
upstream app {
# balancing by ip
ip_hash;
# define server app
server app:8000;
}
# portal
server {
# all other requests proxies to app
location / {
proxy_pass http://app/;
}
# only respond to port 8000
listen 8000;
# domain localhost
server_name localhost;
}
I really appreciate any kind of help!
First, check this question on Stack Overflow: link to question.
If that doesn't solve your problem, try the suggestion below.
Try replacing the Postgres database with RDS, and make sure your RDS instance and EB environment are in the same VPC. I had this problem when I tried to deploy a Django project on EB.
Removing the Postgres Docker image and connecting the app to RDS solved my issue.
For some reason, EB keeps stopping the Postgres containers.
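For what it's worth, here is a minimal sketch of the settings.py side of that change: when an RDS instance is attached to the EB environment, Elastic Beanstalk exposes its connection details as RDS_* environment variables (engine and names here are generic placeholders, not values from this project):

# Sketch of Django database settings once RDS is attached to the EB environment.
# Elastic Beanstalk injects the RDS_* variables automatically.
import os

if "RDS_HOSTNAME" in os.environ:
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql_psycopg2",
            "NAME": os.environ["RDS_DB_NAME"],
            "USER": os.environ["RDS_USERNAME"],
            "PASSWORD": os.environ["RDS_PASSWORD"],
            "HOST": os.environ["RDS_HOSTNAME"],
            "PORT": os.environ["RDS_PORT"],
        }
    }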
I have a Django app and am implementing WebSocket support with Channels and the Channels API. I am using a demultiplexer with bindings to my models, so that, for example, saving a model sends the change to my open WebSocket connections.
Everything works fine if I run ./manage.py runserver 0:80 and keep everything in one container. But if I split my app into uWSGI, Daphne, and worker containers using Docker, the signals are not triggered. For example, I want any Celery worker (task) to trigger the signal and send an update over the WebSocket. In my multi-container setup the WebSocket connection is established fine and the web app works, but nothing triggers those signals.
How the signals are defined you can see here on GitHub.
I am using Django 1.9.12, Python 2.7, and Docker, built on Debian stretch.
docker-compose.yml
web:
  build: .
  ports: ["8001:80"]

daphne:
  build: .
  command: daphne -b 0.0.0.0 -p 8000 --access-log - -v 2 my_proj.asgi:channel_layer

ws_worker:
  build: .
  command: ./manage.py runworker -v 2

celery_worker:
  build: .
  command: /usr/local/bin/celery -A my_proj worker
nginx.conf
upstream django {
server unix:/home/docker/app.sock;
}
server {
listen 80;
server_name 127.0.0.1;
charset utf-8;
client_max_body_size 1000M;
location /static/ {
alias /home/docker/static/;
}
# proxy to other container
location /ws/ {
proxy_pass http://daphne:8000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location / {
uwsgi_pass django;
include /home/docker/uwsgi_params;
}
}
My problem was that the signals were not loading because I had defined the binding classes somewhere other than models.py. If I import them after the models are loaded, in my_app/config.py, it works across multiple containers:
from django.apps import AppConfig as DefaultAppConfig

class AppConfig(DefaultAppConfig):
    def ready(self):
        # for websockets bindings
        from my_app.websockets.bindings_one import *
        from my_app.websockets.bindings_two import *
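One follow-up detail (my own assumption about how the ready() hook behaves, not something stated in the original answer): on Django 1.9 this custom AppConfig is only used if it is explicitly selected for the app, for example via default_app_config in the package's __init__.py or by listing the dotted path in INSTALLED_APPS. A minimal sketch:

# my_app/__init__.py (sketch): point Django at the custom AppConfig so that ready()
# -- and therefore the binding imports -- actually run when the app registry loads.
default_app_config = "my_app.config.AppConfig"

# Alternatively, list the dotted path directly in settings.INSTALLED_APPS:
# INSTALLED_APPS = [..., "my_app.config.AppConfig", ...]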
I am following this example and this answer on Stack Overflow, and I am stuck.
I am running this example on a DigitalOcean VPS. My file structure is as follows:
project structure
docker-compose.yml
mainweb/
nginx/
README
docker-compose.yml
version: '2'
services:
  app:
    restart: always
    build: ./mainweb
    command: gunicorn -w 2 -b :5000 wsgi:app
    networks:
      - mainnet
    expose:
      - "5000"
    ports:
      - "5000:5000"
  nginx:
    restart: always
    build: ./nginx
    networks:
      - mainnet
    links:
      - app
    volumes:
      - /www/static
    expose:
      - 8080
    ports:
      - "8880:8080"
networks:
  mainnet:
mainweb/
app.py
Dockerfile
requirements.txt
templates/
wsgi.py
mainweb/app.py
from flask import Flask, render_template
app=Flask(__name__)

#app.route('/')
def home()():
    return render_template('templates/home.html')

if __name__=="__main__":
    app.run(host="0.0.0.0", port=5000)
mainweb/Dockerfile
FROM python:3.5
MAINTAINER castellanprime
RUN mkdir /mainweb
COPY . /mainweb
WORKDIR /mainweb
RUN pip install -r requirements.txt
mainweb/templates/
home.html
mainweb/templates/home.html
<!doctype html>
<html>
<head>
<title> My website </title>
</head>
<body>
<h1> I am here </h1>
</body>
</html>
mainweb/wsgi.py
from app import app
if __name__=="__main__":
app.run()
nginx
Dockerfile
sites-enabled.conf
static/
nginx/Dockerfile
FROM nginx:1.13.1-alpine
MAINTAINER castellanprime
ADD sites-enabled.conf /etc/nginx/conf.d/sites-enabled.conf
ADD static/ /www/static/
nginx/sites-enabled.conf
server{
listen 8080;
server_name app; # Should I put my actual www.XXXXXX.XXXXX address here
charset utf-8;
location /static{
alias /www/static/;
}
location / {
proxy_pass http://app:5000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X_Forwared-For $proxy_add_x_forwarded_for;
}
}
nginx/static
css/
js/
After I run the command docker-compose up -d, I open www.XXXXXX.com:8880, or www.XXXXXX.com:8080, from a web client on another system.
I get the standard nginx welcome page.
How do I redirect it to home.html?
Take a step back and run the Flask app alone.
You have some syntax errors.
from flask import Flask, render_template
app=Flask(__name__)

@app.route('/')
def home():  # Remove double brackets
    return render_template('home.html')  # The templates folder is already picked up

if __name__=="__main__":
    app.run(host="0.0.0.0", port=5000)
Then put it in a Docker container, without gunicorn:
FROM python:3.5
RUN mkdir /mainweb
COPY . /mainweb
WORKDIR /mainweb
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python3","/mainweb/app.py"]
And run it, and see if it works.
cd mainapp
docker build -t flask:test .
docker run --rm -p 5000:5000 flask:test
Open http://server:5000
Then start on docker-compose with just that container and define nginx if you want.
nginx/Dockerfile
FROM nginx:1.13.1-alpine
ADD flask.conf /etc/nginx/conf.d/
EXPOSE 8080
nginx/flask.conf (I changed this based on a file that I have in a project)
server {
listen 8080; # This is the port to EXPOSE in nginx container
server_name app; # You can change this, but not necessary
charset utf-8;
location ^~ /static/ {
alias /usr/share/nginx/html/;
}
location / {
try_files $uri $uri/ @flask;
}
location @flask {
proxy_pass http://app:5000; # This is the port Flask container EXPOSE'd
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
And finally, the compose file. You don't want your site exposing both 5000 and 80 (you don't want people to bypass nginx), so just don't expose 5000:
version: '2'
services:
  app:
    restart: always
    build: ./mainweb
    networks:
      - mainnet
  nginx:
    restart: always
    build: ./nginx
    networks:
      - mainnet
    links:
      - app
    volumes:
      - ./mainweb/static:/usr/share/nginx/html
    ports:
      - "80:8080"
networks:
  mainnet:
Two days of work and I'm still stuck. I'm running separate nginx and application containers. The application container runs a Flask app under a gunicorn process on port 8000.
Every time I navigate to localhost:8080, which is the host port that the nginx container's port 80 is mapped to, I get a loading screen and then an nginx 504 error.
This is what I see in the terminal:
docker-compose.yml
version: '2'
services:
  web:
    restart: always
    build: ./web_app
    expose:
      - "8000"
    ports:
      - "8000:8000"
    volumes:
      - ./web_app:/data/web
    command: /usr/local/bin/gunicorn web_interface:app -w 4 -t 90 --log-level=info -b :8000 --reload
    depends_on:
      - postgres
  nginx:
    restart: always
    build: ./nginx
    ports:
      - "8080:80"
    volumes_from:
      - web
    depends_on:
      - web
  postgres:
    restart: always
    image: postgres:latest
    volumes_from:
      - data
    volumes:
      - ./postgres/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
      - ./backups/postgresql:/backup
    expose:
      - "5432"
  data:
    restart: always
    image: alpine
    volumes:
      - /var/lib/postgresql
    tty: true
nginx.conf
server {
listen 80;
server_name localhost;
charset utf-8;
location /static/ {
alias /data/web/crm/web_interface;
}
location = /favicon.ico {
alias /data/web/crm/web_interface/static/favicon.ico;
}
location / {
proxy_pass http://web:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
nginx Dockerfile
FROM nginx:latest
RUN rm /etc/nginx/conf.d/default.conf
COPY ./nginx.conf /etc/nginx/conf.d/nginx.conf
I'll provide more info if needed to get some help on this issue that I'm struggling with.
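One quick way to narrow this down (a sketch of my own that relies only on the 8000:8000 port mapping above): hit gunicorn directly on the published port, bypassing nginx, to separate "the app itself hangs" from "nginx cannot reach the app":

# Sketch: talk to gunicorn directly on the published host port, bypassing nginx.
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:8000/", timeout=10) as resp:
        print("gunicorn answered with HTTP", resp.getcode())
except Exception as exc:
    print("gunicorn itself did not answer:", exc)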
An NGINX 504 response indicates a gateway timeout, meaning NGINX could not get a timely response from the backend server, so you can narrow the issue down to the proxy_pass directive.
My guess is that NGINX cannot resolve the web hostname. There are two solutions:
Use the backend's real IP address instead of the service name:
location / {
proxy_pass http://<real_ip>:8000;
}
Or use an upstream block:
upstream web {
server <real_ip>;
}
location / {
proxy_pass http://web:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
OK, so after three days of bashing my head against this, I restarted from the ground up: I rebuilt the app container and ran gunicorn on its own.
From there I was able to determine that the gunicorn process was timing out because the database hostname was incorrect. Instead of an error being returned through my application, the failure was silent.
I fixed it by linking the postgres container and the web container. In my code I was then able to use "postgres" (the name of the container) as the Postgres hostname.
Check the addresses of your external hosts.
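For illustration, a minimal sketch of what that looks like in code: once the containers are linked (or share a compose network), the service name doubles as the hostname. The database name and credentials below are placeholders, not values from the original setup:

# Sketch: connect to the database container by its compose service name.
import psycopg2

conn = psycopg2.connect(
    host="postgres",         # the container/service name, not localhost
    port=5432,
    dbname="app_db",         # placeholder
    user="app_user",         # placeholder
    password="app_password"  # placeholder
)
print("connected:", conn.closed == 0)
conn.close()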