Django signals not working with Channels in a multi-container setup - python

I have a Django app and am implementing WebSocket support with Channels and the Channels API. I am using a demultiplexer with bindings to my models, so that, for example, saving a model sends the change to my open WebSocket connections.
Everything works fine if I run ./manage.py runserver 0:80 with everything in one container. But if I split the app into separate uWSGI, daphne and worker containers using Docker, the signals are no longer triggered. For example, I want a Celery worker (task) to trigger the signal and send an update over the WebSocket. In the multi-container setup the WebSocket connection is established and the web app works, but nothing triggers the signals.
You can see how the signals are defined here on GitHub.
I am using Django 1.9.12 and Python 2.7, with Docker images built on Debian Stretch.
docker-compose.yml
web:
  build: .
  ports: ["8001:80"]
daphne:
  build: .
  command: daphne -b 0.0.0.0 -p 8000 --access-log - -v 2 my_proj.asgi:channel_layer
ws_worker:
  build: .
  command: ./manage.py runworker -v 2
celery_worker:
  build: .
  command: /usr/local/bin/celery -A my_proj worker
nginx.conf
upstream django {
    server unix:/home/docker/app.sock;
}

server {
    listen 80;
    server_name 127.0.0.1;
    charset utf-8;
    client_max_body_size 1000M;

    location /static/ {
        alias /home/docker/static/;
    }

    # proxy to other container
    location /ws/ {
        proxy_pass http://daphne:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {
        uwsgi_pass django;
        include /home/docker/uwsgi_params;
    }
}

My problem was that the signals were not loading, because I had defined the binding classes somewhere other than models.py. If I import them after the models are loaded, in my_app/config.py, it works across multiple containers:
from django.apps import AppConfig as DefaultAppConfig

class AppConfig(DefaultAppConfig):
    def ready(self):
        # for websocket bindings
        from my_app.websockets.bindings_one import *
        from my_app.websockets.bindings_two import *
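For ready() to run at all, Django has to actually use this AppConfig class. On Django 1.9 that is typically done with the default_app_config hook in the app package's __init__.py; the module path below is an assumption matching the example above:

```python
# my_app/__init__.py
# Django 1.9-style hook: tells Django to load my_app.config.AppConfig
# (and therefore run its ready() method) instead of the default AppConfig.
default_app_config = 'my_app.config.AppConfig'
```

Alternatively, the dotted path 'my_app.config.AppConfig' can be listed directly in INSTALLED_APPS instead of 'my_app'.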

Related

Nginx reverse proxy on unix socket for uvicorn not working

Files:
# main.py:
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}
-
# nginx.conf:
events {
    worker_connections 128;
}

http {
    server {
        listen 0.0.0.0:8080;
        location / {
            include uwsgi_params;
            uwsgi_pass unix:/tmp/uvi.sock;
        }
    }
}
-
# Dockerfile
FROM python:3
COPY main.py .
RUN apt-get -y update && apt-get install -y htop tmux vim nginx
RUN pip install fastapi uvicorn
COPY nginx.conf /etc/nginx/
Setup:
docker build -t nginx-uvicorn:latest .
docker run -it --entrypoint=/bin/bash --name nginx-uvicorn -p 80:8080 nginx-uvicorn:latest
Starting uvicorn as usual:
$ uvicorn --host 0.0.0.0 --port 8080 main:app
Works - I can access http://127.0.0.1/ from my browser.
Starting uvicorn behind nginx:
$ service nginx start
[ ok ] Starting nginx: nginx.
$ uvicorn main:app --uds /tmp/uvi.sock
INFO: Started server process [40]
INFO: Uvicorn running on unix socket /tmp/uvi.sock (Press CTRL+C to quit)
INFO: Waiting for application startup.
INFO: Application startup complete.
If I now request http://127.0.0.1/ then:
Nginx: Responds with 502 Bad Gateway
uvicorn: Responds with WARNING: Invalid HTTP request received.
Hence a connection is established, but something is wrong with the configuration.
Any ideas?
You are using nginx's uwsgi module, which speaks the uwsgi protocol, but uvicorn serves an ASGI application over plain HTTP. Therefore you should use a reverse-proxy (proxy_pass) configuration instead of a uwsgi configuration.
You can find more info in the uvicorn documentation: https://www.uvicorn.org/deployment/#running-behind-nginx (see the proxy_pass line)
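Concretely, the uwsgi_pass location from the question can be replaced with a proxy_pass to the same unix socket, along the lines of the uvicorn deployment docs (a fragment; keep the events block from the question's nginx.conf, and adjust header lines to taste):

```nginx
http {
    server {
        listen 0.0.0.0:8080;
        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # speak plain HTTP to uvicorn's unix socket
            # instead of the uwsgi protocol
            proxy_pass http://unix:/tmp/uvi.sock;
        }
    }
}
```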

Nginx (proxy_pass) + Gunicorn can’t be reached

I want to run Django with gunicorn and nginx as a proxy server on a remote Ubuntu VPS.
The site works with Django's dev server:
python manage.py runserver 0.0.0.0:8000
The site also works with gunicorn's server (though static files don't load):
gunicorn my_project.wsgi --bind 0.0.0.0:8000
But with nginx on top I get the following error:
This site can’t be reached ... refused to connect. ERR_CONNECTION_REFUSED
Also, both nginx log files (error.log & access.log) are empty.
Here is how I configured nginx:
server {
    listen 80;
    server_name my_ip_address;

    location / {
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Scheme $scheme;
    }
}
In this case gunicorn runs with --bind 127.0.0.1:8001 of course.
Status check (service nginx status) returns:
● nginx.service - A high performance web server and a reverse proxy server
Active: active (running) since Fri 2019-09-20 07:41:00 UTC; 1min 19s ago
Starting A high performance web server and a reverse proxy server...
nginx.service: Failed to parse PID from file /run/nginx.pid: Invalid argument
Started A high performance web server and a reverse proxy server.
First, check your configuration with nginx -t. The configuration you posted is not valid as a standalone config file, but I assume you are using the common nginx config structure of having a main nginx.conf and sites-available and sites-enabled directories.
If it does not complain, introduce an error, e.g. by removing a closing bracket, and try again. If it still doesn't complain, your configuration is not being picked up by nginx.
In this case, check if you created a correct symlink from sites-enabled/your_config to sites-available/your_config.
If that all seems correct:
check if nginx is actually running: ps aux | grep nginx
check if nginx is listening to port 80: netstat -tulpen | grep ":80"
check firewall rules

Deploying Django using Nginx Docker Container

Situation: I have a Django application that I want to deploy with nginx and Gunicorn, all of it running inside a Docker container via Docker Desktop.
Problem: I'm able to view the Django app locally using the IP of my Docker VM, the IP of my machine, and the loopback IP. However, when I try to access it from my laptop (another machine connected to the same Wi-Fi), I can't reach it.
My machine: Windows 10. I have already enabled port 80 in the Windows Firewall inbound and outbound rules.
Steps taken: I've tried running python -m http.server 80 on my machine, and that is reachable from the laptop, so I suspect the issue is with Docker Desktop's Hyper-V networking or with the nginx configuration.
My docker-compose file
version: '3'

services:
  dashboard:
    build: .
    volumes:
      - .:/opt/services/dashboard/src
      - static_volume:/opt/services/dashboard/src/static
    networks:  # <-- here
      - nginx_network
  nginx:
    image: nginx:1.13
    ports:
      - 0.0.0.0:80:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/opt/services/dashboard/src/static
    depends_on:
      - dashboard
    networks:  # <-- here
      - nginx_network

networks:  # <-- and here
  nginx_network:
    driver: bridge

volumes:
  static_volume:  # <-- declare the static volume
My dockerfile
# start from an official image
FROM python:3.6
# arbitrary location choice: you can change the directory
RUN mkdir -p /opt/services/dashboard/src
WORKDIR /opt/services/dashboard/src
# install our dependencies
RUN pip install gunicorn django requests jira python-dateutil
# copy our project code
COPY . /opt/services/dashboard/src
# expose the port 80
EXPOSE 80
# define the default command to run when starting the container
CMD ["gunicorn", "--bind", ":80", "dashboard.wsgi:application"]
My nginx config file
# first we declare our upstream server, which is our Gunicorn application
upstream dashboard_server {
    # docker will automatically resolve this to the correct address
    # because we use the same name as the service: "dashboard"
    server dashboard:80;
}

# now we declare our main server
server {
    listen 80;
    server_name localhost;

    location / {
        # everything is passed to Gunicorn
        proxy_pass http://dashboard_server;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /opt/services/dashboard/src/static/;
    }
}
Here is an image of my folder structure.
QUESTION: How do I at least make it viewable on my laptop, which is connected to the same Wi-Fi as my desktop machine? I've tried accessing it using the IP of my machine.
Restarted the router switch and it worked perfectly.

Multiple Django app using nginx and gunicorn on Ubuntu 14.04 trusty server

I am new to server configuration. After some googling, I configured a Django app with gunicorn and nginx on an Ubuntu 14.04 (trusty) server. The first Django app uses port 80, and my config files are:
/etc/init/gunicorn.conf :-
description "Gunicorn application server handling myproject"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid
setgid www-data
chdir /home/myserver/my_virtualenv_path/myproject
exec /home/myserver/my_virtualenv_path/myproject/gunicorn --workers 2 --bind unix:/home/myserver/my_virtualenv_path/myproject/myproject.sock myproject.wsgi:application
My nginx configuration file for first django app:
/etc/nginx/site-available :-
server {
    listen 80;
    server_name myapp.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/myserver/my_virtualenv_path/myproject;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/myserver/my_virtualenv_path/myproject/myproject.sock;
    }
}
After that, I linked the site into sites-enabled.
Next, I created a second Django app inside the first app's virtualenv, like:
FirstApp_Virtual_Env\first_djangoapp\app files
FirstApp_Virtual_Env\Second_djangoapp\app files
Now I configured gunicorn for the second app like this:
/etc/init/gunicorn_t :-
description "Gunicorn application server handling myproject2"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid
setgid www-data
chdir /home/myserver/my_virtualenv_path/myproject2
exec /home/myserver/my_virtualenv_path/myproject/gunicorn --workers 2 --bind unix:/home/myserver/my_virtualenv_path/myproject2/myproject2.sock myproject2.wsgi:application
My nginx configuration file for second django app:
/etc/nginx/site-available :-
server {
    listen 8000;
    server_name myapp2.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/myserver/my_virtualenv_path/myproject2;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/myserver/my_virtualenv_path/myproject2/myproject2.sock;
    }
}
After that I linked the site into sites-enabled.
Now here is my problem: when I type myapp.com, my first Django app works fine, but for the second app, typing myapp2.com shows the nginx default page, while myapp2.com:8000 works fine. I've googled this but can't find a solution. I am a newbie at this, so please give me a hint on how to fix the problem. Thanks for your time.
You configured nginx to serve myapp2.com on port 8000:
server {
    listen 8000;
    server_name myapp2.com;
    # ...
}
so why would you expect nginx to serve it on port 80 ?
[edit] I thought the above was enough to make the problem clear, but obviously not, so let's start again:
You configured nginx to serve myapp2.com on port 8000 (the listen 8000; line in your conf), so nginx does what you asked for: it serves myapp2.com on port 8000.
If you want nginx to serve myapp2.com on port 80 (which is the implied default port for HTTP, so you don't have to specify it explicitly in your URL; in other words, 'http://myapp2.com/' is a shortcut for 'http://myapp2.com:80/'), all you have to do is configure nginx to serve it on port 80 just like you did for 'myapp.com': replace listen 8000; with listen 80;.
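Applied to the second app's config from the question, that change looks like this (only the listen line differs):

```nginx
server {
    listen 80;  # was: listen 8000;
    server_name myapp2.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/myserver/my_virtualenv_path/myproject2;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/myserver/my_virtualenv_path/myproject2/myproject2.sock;
    }
}
```

nginx distinguishes the two port-80 server blocks by server_name, so myapp.com and myapp2.com can both listen on port 80.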
If you don't type in a port, your client will automatically use port 80.
Typing myapp2.com is the same as typing myapp2.com:80
But myapp2.com is not being served on port 80; it's being served on port 8000.
In production you can make myapp2.com reachable without typing the port, but DNS alone cannot map a domain to a port: you register myapp2.com with a DNS name server, point it at your server's IP, and configure nginx to listen on port 80 for that server_name.

Running 2 Gunicorn Apps and Nginx with Supervisord

This problem has admittedly stumped me for months. I've procrastinated by fixing other bugs and putting this aside until now, when it HAS to be fixed --
I am trying to run 2 separate gunicorn apps and start nginx from the same supervisord.conf file. When I start supervisor, the handlecalls app runs successfully, but when I go to the website that commentbox is responsible for, I get an internal server error (500).
When I run the handlecalls and commentbox apps separately with the commands in their command fields, the apps run fine. Why does the commentbox program give me a 500 error when I try to run both with supervisord?
my supervisord script:
[program:nginx]
directory = /var/www/vmail
command = service nginx start -g "daemon off;"
autostart = True
[program:commentbox]
directory = /var/www/vmail
command = gunicorn app:app -bind 0.0.0.0:8000
autostart = True
[program:handlecalls]
directory = /var/www/vmail
command = gunicorn handle_calls:app --bind 0.0.0.0:8000
autostart = True
[supervisord]
directory = /var/www/vmail
logfile = /var/www/vmail/supervisorerrs.log
loglevel = trace
This has nothing to do with supervisord. Supervisord is just a way for you to start/stop/restart your server processes. This has more to do with your server's configuration.
The basics: to serve two gunicorn apps behind nginx, you have to run them on two different ports, then configure nginx to proxy_pass requests to their respective ports. The reason is: once a process is listening on a port, that port cannot be used by another process.
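The port conflict is easy to reproduce in plain Python (a sketch, independent of gunicorn): the second bind to an already-bound port raises OSError (EADDRINUSE), which is exactly what happens when both gunicorn commands use 0.0.0.0:8000.

```python
import socket

# First "app" grabs a port (0 lets the OS pick a free one).
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(("127.0.0.1", 0))
port = s1.getsockname()[1]
s1.listen(1)

# Second "app" tries to bind the same port, like the two
# gunicorn processes both binding port 8000.
s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s2.bind(("127.0.0.1", port))
    conflict = False
except OSError:  # EADDRINUSE
    conflict = True
finally:
    s2.close()
    s1.close()

print(conflict)  # True
```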
So change the configuration in your supervisord script to:
[program:commentbox]
directory = /var/www/vmail
command = gunicorn app:app --bind 0.0.0.0:8000
autostart = True
[program:handlecalls]
directory = /var/www/vmail
command = gunicorn handle_calls:app --bind 0.0.0.0:8001
autostart = True
Then, in your nginx server configuration for handlecalls:
proxy_pass http://127.0.0.1:8001;
Update: Here is the basics of deploying a web application
As mentioned above, a port can only be listened on by one process.
You can use nginx as a http server, listening to port 80 (or 443 for https), then passing the request to other applications listening to other ports (for example, commentbox on port 8000 and handlecalls on port 8001)
You can add rules for how nginx should serve your applications by adding server configuration files in /etc/nginx/sites-available/ (by default; it differs in some setups). The rules should give nginx a way to know which application to send each request to, for example:
To reuse the same http port (80), each application should be assigned to a different domain. i.e: commentbox.yourdomain.com for commentbox and handlecalls.yourdomain.com for handlecalls
A way to serve two different apps on the same domain, is for them to serve on different ports. For example: yourdomain.com would serve commentbox and yourdomain.com:8080 would serve handlecalls
A way to serve two different apps on the same domain and the same ports, is for them to serve on two different endpoints. For example yourdomain.com/commentbox would serve commentbox and yourdomain.com/handlecalls would serve handlecalls
After adding configuration files to /etc/nginx/sites-available/, you must symlink those files to /etc/nginx/sites-enabled/, well, to tell nginx that you want to enable them. You can add the files directly to /etc/nginx/sites-enabled/, but I don't recommend it, since it doesn't give you a convenient way to enable/disable your application.
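For completeness, the third option above (same domain, same port, different endpoints) might look like the sketch below; the paths and ports are assumptions, and each app must cope with being served under (or stripped of) its prefix:

```nginx
server {
    listen 80;
    server_name yourdomain.com;

    # requests under /commentbox/ go to the first app;
    # the trailing slash on proxy_pass strips the prefix
    location /commentbox/ {
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:8000/;
    }

    # requests under /handlecalls/ go to the second app
    location /handlecalls/ {
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:8001/;
    }
}
```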
Update: Here is how to configure nginx to serve gunicorn applications using two different subdomains:
Add two subdomains commentbox.yourdomain.com and handlecalls.yourdomain.com, and point them both to your server's IP.
Create a configuration file for commentbox at /etc/nginx/sites-available/commentbox with the following content (edit as fit):
server {
    listen 80;
    server_name commentbox.yourdomain.com;
    root /path/to/your/application/static/folder;

    location / {
        try_files $uri @app;
    }

    location @app {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:8000;
    }
}
Create a configuration file for handlecalls at /etc/nginx/sites-available/handlecalls with the following content (edit as fit):
server {
    listen 80;
    server_name handlecalls.yourdomain.com;
    root /path/to/your/application/static/folder;

    location / {
        try_files $uri @app;
    }

    location @app {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:8001;
    }
}
Create symlinks to enable those servers:
sudo ln -s /etc/nginx/sites-available/commentbox /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/handlecalls /etc/nginx/sites-enabled/
Restart nginx to take effect
sudo service nginx restart
