Django not routing Heroku WebSocket to dyno - python

I am trying to enable Channels v2 for a Django app deployed on Heroku.
The WSGI web dyno works perfectly, but the second web dyno for the ASGI/Channels app never gets the requests, so when I try to open a WebSocket connection I get a 404 response.
Here is the Procfile:
web: gunicorn app_xxx.wsgi --log-file -
web2: daphne app_xxx.routing:application --port $PORT --bind 0.0.0.0 -v2
I have also tried with Uvicorn like:
web: gunicorn app_xxx.wsgi --log-file -
web2: gunicorn app_xxx.asgi:application -b 0.0.0.0:$PORT -w 1 -k uvicorn.workers.UvicornWorker
Everything seems to be in place; I just need to find a way to EXPOSE the wss endpoint.

To make Channels work on Heroku, you should first add a Redis add-on, then make sure the CHANNEL_LAYERS setting in your settings.py points at that Redis host. Below you can see an example:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [config('CHANNEL_LAYERS_HOST')],
        },
    },
}
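For reference, channels_redis accepts either a full redis:// URL or a (host, port) tuple in "hosts", so on Heroku you can usually pass the REDIS_URL config var through unchanged. A minimal sketch of converting between the two forms (the helper name is hypothetical):

```python
# Sketch: channels_redis accepts either a full redis:// URL or a (host, port)
# tuple in "hosts"; this hypothetical helper converts a URL to the tuple form.
from urllib.parse import urlparse

def redis_host_tuple(url):
    """Turn a redis:// URL into the (host, port) tuple Channels also accepts."""
    parsed = urlparse(url)
    return (parsed.hostname, parsed.port or 6379)

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [redis_host_tuple("redis://localhost:6379")],
        },
    },
}
```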

Related

Flask app with Nginx and Gunicorn in Ubuntu

I have a Flask application on a VPS that processes incoming requests and returns a result in the format {result: OK} or {result: BAD}. There is also an nginx + gunicorn stack in front of it. When gunicorn runs alone, everything works as it should, but as soon as I connect nginx, the server returns a response in the form of the HTML code below.
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.18.0 (Ubuntu)</center>
</body>
</html>
Here is my '/etc/nginx/sites-available/myproject' file:
server {
    listen 80;
    server_name 0.0.0.0;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/sammy/myproject/myproject.sock;
    }
}
My myproject.service configuration:
[Unit]
Description=Gunicorn instance to serve myproject
After=network.target
[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/myproject
Environment="PATH=/home/sammy/myproject/myprojectenv/bin"
ExecStart=/home/sammy/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:myproject.sock -m 007 wsgi:app
[Install]
WantedBy=multi-user.target
Both nginx and myproject report an "active" status and show no errors.
I can't figure out what I'm doing wrong; I've read a lot of guides and documentation, but the result is still unsuccessful.
Once again: if I work without nginx, everything works fine, but I need to connect it too.
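One way to narrow down a 502 like this is nginx's own error log, which names the exact reason for the failure. With the socket living under /home/sammy, a common cause is that the www-data user nginx runs as cannot traverse the home directory (paths below assume a default Ubuntu install):

```
sudo tail -n 20 /var/log/nginx/error.log
# if it reports "Permission denied" on the socket, let www-data traverse the path:
sudo chmod 755 /home/sammy
```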

Deploying django channels on heroku

I have created a standard Django application with startproject, startapp, etc., and I want to deploy it on Heroku. When I was using gunicorn I solved the directory issue like so:
web: gunicorn --pythonpath enigma enigma.wsgi
with the --pythonpath option. But now I am using Django Channels, and so it is daphne. Is there an equivalent? I have tried everything, but for the life of me I can't get the project to start. I always get issues with the settings file, apps not being loaded, or another assortment of cwd-related issues.
As given in the Heroku Django channels tutorial, I have tried:
daphne enigma.asgi:channel_layer --port 8888
This led to a variety of module not found errors with asgi and settings.
I also tried
daphne enigma.enigma.asgi:channel_layer --port 8888
This led to module not found enigma.settings errors.
I also tried
cd enigma && daphne enigma.asgi:channel_layer --port 8888
Which led to Django apps not ready errors.
I also tried moving the Procfile and pipfiles into the project directory and deploying that subdirectory but once again I got apps not ready errors.
I have now started temporarily using
cd enigma && python manage.py runserver 0.0.0.0:$PORT
But I know that you're not supposed to do this in production.
Try this:
Procfile
web: daphne enigma.asgi:application --port $PORT --bind 0.0.0.0 -v2
chatworker: python manage.py runworker --settings=enigma.settings -v2
settings.py
if DEBUG:
    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "channels_redis.core.RedisChannelLayer",
            "CONFIG": {
                "hosts": [("localhost", 6379)],
            },
        },
    }
else:
    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "channels_redis.core.RedisChannelLayer",
            "CONFIG": {
                "hosts": [os.environ.get('REDIS_URL', 'redis://localhost:6379')],
            },
        },
    }
asgi.py
import os

import django
from channels.routing import get_default_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'enigma.settings')
django.setup()
application = get_default_application()
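As for the original --pythonpath question: daphne does not take gunicorn's --pythonpath flag, but setting PYTHONPATH in the Procfile entry is one way to get the same effect (a sketch, assuming the package lives in the enigma subdirectory as above):

```
web: PYTHONPATH=enigma daphne enigma.asgi:application --port $PORT --bind 0.0.0.0 -v2
```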

Deploy a Django project to AWS EB using Docker and nginx

Currently I am trying to deploy a Django project to AWS EB, but I am facing a lot of problems. I managed to dockerize the project and deploy it on AWS Elastic Beanstalk, but when I try to access the site I always see: 502 Bad Gateway. Locally, the project runs smoothly. I am not really into nginx, and I have no idea how to solve this problem.
This is my project structure:
This is my Dockerfile:
# Creating image based on official python3 image
FROM python:3
MAINTAINER Jaron Bardenhagen
# Sets dumping log messages directly to stream instead of buffering
ENV PYTHONUNBUFFERED 1
# Creating and putting configurations
RUN mkdir /config
ADD config/app /config/
# Installing all python dependencies
RUN pip install -r /config/requirements.txt
# Open port 8000 to outside world
EXPOSE 8000
# When container starts, this script will be executed.
# Note that it is NOT executed during building
CMD ["sh", "/config/on-container-start.sh"]
# Creating and putting application inside container
# and setting it to working directory (meaning it is going to be default)
RUN mkdir /app
WORKDIR /app
ADD app /app/
This is my docker-compose file:
# File structure version
version: '3'

services:
  db:
    image: postgres
    environment:
      POSTGRES_DB_PORT: "5432"
      POSTGRES_DB_HOST: "*******"
      POSTGRES_PASSWORD: "*******"
      POSTGRES_USER: Jaron
      POSTGRES_DB: ebdb

  # Build from remote Dockerfile
  # Connect local app folder with image folder, so changes will be pushed to the image instantly
  # Open port 8000
  app:
    build:
      context: .
      dockerfile: config/app/Dockerfile
    hostname: app
    volumes:
      - ./app:/app
    expose:
      - "8000"
    depends_on:
      - db

  # Web server based on the official nginx image
  # Connect external port 8000 (which you can access from the browser)
  # with internal port 8000 (which will be linked to app port 8000 in the configs)
  # Connect local nginx configuration with image configuration
  nginx:
    image: nginx
    hostname: nginx
    ports:
      - "8000:8000"
    volumes:
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - app
This is the Dockerrun.aws File:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "******/******:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8000"
    }
  ]
}
On-container-start.sh file:
# Create migrations based on django models
python manage.py makemigrations
# Migrate created migrations to database
python manage.py migrate
# Start gunicorn server at port 8000 and keep an eye for app code changes
# If changes occur, kill worker and start a new one
gunicorn --reload project.wsgi:application -b 0.0.0.0:8000
And here is the file for the nginx settings (app.conf):
# define group app
upstream app {
    # balancing by ip
    ip_hash;
    # define server app
    server app:8000;
}

# portal
server {
    # all other requests are proxied to app
    location / {
        proxy_pass http://app/;
    }
    # only respond to port 8000
    listen 8000;
    # domain localhost
    server_name localhost;
}
I really appreciate any kind of help!
First check this question on Stack Overflow: link to question.
If this doesn't solve your problem, try the suggestion below.
Try replacing the Postgres database with RDS, and make sure your RDS instance and EB environment are in the same VPC. I had this problem when I tried to deploy a Django project on EB.
Removing the Postgres Docker image and connecting to RDS solved my issue.
For some reason, EB keeps stopping Postgres images.
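When the RDS database is attached to the EB environment, EB exposes its connection details as RDS_* environment properties. A sketch of pointing Django's settings.py at them (the default values here are only illustrative):

```python
# Sketch: read the RDS_* environment properties that Elastic Beanstalk sets
# when a database is attached to the environment; defaults are illustrative.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("RDS_DB_NAME", "ebdb"),
        "USER": os.environ.get("RDS_USERNAME", ""),
        "PASSWORD": os.environ.get("RDS_PASSWORD", ""),
        "HOST": os.environ.get("RDS_HOSTNAME", ""),
        "PORT": os.environ.get("RDS_PORT", "5432"),
    }
}
```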

Nginx reverse proxy on unix socket for uvicorn not working

Files:
# main.py:
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}
-
# nginx.conf:
events {
    worker_connections 128;
}

http {
    server {
        listen 0.0.0.0:8080;
        location / {
            include uwsgi_params;
            uwsgi_pass unix:/tmp/uvi.sock;
        }
    }
}
-
# Dockerfile
FROM python:3
COPY main.py .
RUN apt-get -y update && apt-get install -y htop tmux vim nginx
RUN pip install fastapi uvicorn
COPY nginx.conf /etc/nginx/
Setup:
docker build -t nginx-uvicorn:latest .
docker run -it --entrypoint=/bin/bash --name nginx-uvicorn -p 80:8080 nginx-uvicorn:latest
Starting uvicorn as usual:
$ uvicorn --host 0.0.0.0 --port 8080 main:app
Works - I can access http://127.0.0.1/ from my browser.
Starting uvicorn behind nginx:
$ service nginx start
[ ok ] Starting nginx: nginx.
$ uvicorn main:app --uds /tmp/uvi.sock
INFO: Started server process [40]
INFO: Uvicorn running on unix socket /tmp/uvi.sock (Press CTRL+C to quit)
INFO: Waiting for application startup.
INFO: Application startup complete.
If I now request http://127.0.0.1/ then:
Nginx: Responds with 502 Bad Gateway
uvicorn: Responds with WARNING: Invalid HTTP request received.
Hence a connection is established, but something is wrong with the configuration.
Any ideas?
You are using the uwsgi module of nginx, which speaks the uwsgi protocol. Uvicorn serves an ASGI application over plain HTTP, so you should use a "reverse proxy" configuration instead of a uwsgi configuration.
You can get more info in the uvicorn documentation: https://www.uvicorn.org/deployment/#running-behind-nginx (see the proxy_pass line)
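Concretely, the server block from the question could be rewritten along these lines (a sketch of the reverse-proxy form; header handling is trimmed to a minimum, see the linked uvicorn docs for the full recommended config):

```
server {
    listen 0.0.0.0:8080;
    location / {
        proxy_pass http://unix:/tmp/uvi.sock;
        proxy_set_header Host $host;
    }
}
```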

Docker Nginx does not listen to browser

I have this docker-compose.yml file:
version: '2'

services:
  nginx:
    image: nginx:latest
    container_name: nz01
    ports:
      - "8001:8000"
    volumes:
      - ./src:/src
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - web

  web:
    build: .
    container_name: dz01
    depends_on:
      - db
    volumes:
      - ./src:/src
    expose:
      - "8000"

  db:
    image: postgres:latest
    container_name: pz01
    ports:
      - "5433:5432"
    volumes:
      - postgres_database:/var/lib/postgresql/data:Z

volumes:
  postgres_database:
    external: true
And this Dockerfile:
FROM python:3.5
ENV PYTHONUNBUFFERED 1
RUN mkdir /src
RUN mkdir /static
WORKDIR /src
ADD ./src /src
RUN pip install -r requirements.pip
CMD python manage.py collectstatic --no-input;python manage.py migrate; gunicorn computationalMarketing.wsgi -b 0.0.0.0:8000
Neither the web container nor the postgres container returns any error in its logs, just success messages when I run docker-compose build and docker-compose up -d.
At this moment the three containers are running, but when I go to the browser and navigate to localhost:8001, it does not work.
It shows the "connection has been reset" error message.
Despite that, the web server still does not log any error, so I guess that I have everything properly configured in my Django app. I really believe the problem is related to nginx, because when I review the nginx log (using Kitematic) it is still empty.
Why wouldn't Nginx be listening to connections?
Hint:
This error is happening in a new project. To check whether I had anything wrong, I ran an old project and it works perfectly. I then copied the working project into my new folder, removed all existing containers, and tried to run the old project from the new folder, and there is the surprise: it does not work now, despite being an exact copy of the project that works in the other folder...
EDIT
In my repo I have a config/nginx folder with the helloworld.conf file:
upstream web {
    ip_hash;
    server web:8000;
}

server {
    location /static/ {
        autoindex on;
        alias /src/static/;
    }
    location / {
        proxy_pass http://web/;
    }
    listen 8001;
    server_name localhost;
}
Still the same error... and I do not see any error in the logs.
Django container log
Operations to perform:
Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
No migrations to apply.
[2018-11-05 13:00:09 +0000] [8] [INFO] Starting gunicorn 19.7.1
[2018-11-05 13:00:09 +0000] [8] [INFO] Listening at: http://0.0.0.0:8000 (8)
[2018-11-05 13:00:09 +0000] [8] [INFO] Using worker: sync
[2018-11-05 13:00:09 +0000] [11] [INFO] Booting worker with pid: 11
Your nginx config should look like this:
upstream web {
    ip_hash;
    server web:8000;
}

server {
    location / {
        proxy_pass http://web/;
    }
    # docker-compose maps host port 8001 to container port 8000,
    # so nginx inside the container must listen on 8000
    listen 8000;
    server_name localhost;
}
Since this kind of problem is usually difficult to debug/reproduce, I have created a dummy example that just runs a Django app and serves it via nginx. You can try to adjust it to your needs. Please forgive me if I have missed something or done something that shouldn't be done; I'm unfamiliar with the Django framework.
Dockerfile for Django container:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code && \
    pip install django
WORKDIR /code
ADD helloworld /code
docker-compose.yml:
version: '3'

services:
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - "80:80"
    volumes:
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - web

  web:
    build: .
    container_name: django
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./helloworld:/code
    expose:
      - "8000"
config/nginx/django.conf:
upstream web {
    ip_hash;
    server web:8000;
}

server {
    listen 80;
    server_name localhost;
    location / {
        proxy_pass http://web/;
    }
}
The Django app is inside the helloworld folder.
For this example, traffic is simply passed through. The proper way would be to use Unix sockets instead of ports, but again I'm unfamiliar with Django.
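For completeness, the unix-socket variant mentioned above would mean binding gunicorn to a socket on a volume shared by both containers, roughly like this (the /sockets path is hypothetical):

```
# gunicorn side (in the web container, with /sockets mounted in both services):
#   gunicorn computationalMarketing.wsgi -b unix:/sockets/gunicorn.sock
# nginx side, replacing the upstream that used port 8000:
upstream web {
    server unix:/sockets/gunicorn.sock;
}
```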
