I'm deploying a Django project on the OpenShift cloud. The project uses Channels and WebSockets to work asynchronously. The problem is that I can't successfully connect WebSockets from the browser to the Daphne server I have running on the server side.
I'm using the django (python2.7) and redis cartridges to run it.
The post_deploy script I'm using looks like this:
...
python manage.py runworker -v2 && daphne myapp.asgi:channel_layer -p 8443 -b $OPENSHIFT_REDIS_HOST -v2
...
Here is my Django configuration. In settings.py:
...
ALLOWED_HOSTS = [
    socket.gethostname(),
    os.environ.get('OPENSHIFT_APP_DNS'),
]
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            # note the '@' between the password and the host in the redis URL
            "hosts": ["redis://:{}@{}:{}/0".format(
                OPENSHIFT_REDIS_PASSWORD,
                OPENSHIFT_REDIS_HOST,
                OPENSHIFT_REDIS_PORT
            )],
        },
        "ROUTING": "myapp.routing.channel_routing",
    },
}
...
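Note that OPENSHIFT_REDIS_PASSWORD, OPENSHIFT_REDIS_HOST and OPENSHIFT_REDIS_PORT are bare Python names in the snippet above, so settings.py has to define them first. A minimal sketch, assuming they come from the gear's environment (the exact password variable name depends on the redis cartridge; check with env | grep -i redis on the gear):

import os

# Assumed variable names; adjust to whatever your cartridge actually exports.
OPENSHIFT_REDIS_HOST = os.environ.get('OPENSHIFT_REDIS_HOST')
OPENSHIFT_REDIS_PORT = os.environ.get('OPENSHIFT_REDIS_PORT')
OPENSHIFT_REDIS_PASSWORD = os.environ.get('REDIS_PASSWORD')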
In routing.py:
...
ws_routing = [
    routing.route("websocket.connect", ws_connect),
    routing.route("websocket.receive", ws_receive),
    routing.route("websocket.disconnect", ws_disconnect),
]

channel_routing = [
    include(ws_routing, path=r"^/sync"),
]
...
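The daphne command in the post_deploy script loads myapp.asgi:channel_layer, which the question never shows. With Channels 1.x that module is typically no more than this (a sketch, assuming the default project layout):

# myapp/asgi.py -- minimal Channels 1.x ASGI entry point (assumed layout)
import os
from channels.asgi import get_channel_layer

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.settings")
channel_layer = get_channel_layer()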
In consumers.py:

from channels import Group

def ws_connect(message):
    Group("notifications").add(message.reply_channel)

def ws_disconnect(message):
    Group("notifications").discard(message.reply_channel)

def ws_receive(message):
    print "Receiving: '%s' from %s" % (message.content['text'], message.content['reply_channel'])
On the client side, I'm running this code:
var ws_scheme = window.location.protocol == "https:" ? "wss" : "ws";
var path = ws_scheme + '://' + window.location.host + ':8443/sync';
var ws = new WebSocket(path);

ws.onmessage = function(message) {
    console.log(message.data);
}

ws.onopen = function() {
    this.send('WS Connecting to receive updates!');
}
Notice that I'm using port 8443 in the Daphne settings and in the WebSocket settings because of this documentation. Also, Daphne is bound to the OPENSHIFT host address because it is not possible to bind it to 0.0.0.0 on OpenShift (permissions problem).
The output looks fine: everything appears to be OK on the client side. But if you remember, in consumers.py I had this:
def ws_receive(message):
    print "Receiving: '%s' from %s" % (message.content['text'], message.content['reply_channel'])
So in my terminal the server should be printing out something like Receiving: '…' from …, but it's not. What am I missing here?
tl;dr: Client-side websocket looks like it's connected correctly but the server is not printing out a message to confirm it.
I managed to make it work. The problem was related to port forwarding, which made it impossible to connect WebSockets through the Apache server on the OpenShift cloud to my Daphne server.
To solve this problem:
1) With the default cartridge for Django projects I was unable to modify the Apache conf file, or even upgrade Apache to install mod_proxy_wstunnel for WebSocket support (mod_proxy_wstunnel needs Apache 2.4, but the default cartridge runs 2.2), so I decided to change cartridges.
The Channels docs recommend using nginx, so I found a cartridge that let me use it along with uWSGI and Django.
I followed the instructions in that repo, but before pushing my code I tweaked the action hooks a bit to get the latest versions of those packages and replaced the sample Django project with mine. I did the same with the requirements.txt.
2) After pushing it, I added the redis cartridge.
3) Then I proceeded to tweak the uwsgi.yaml and nginx.conf templates that the cartridge provides, setting the proper values:
uwsgi.yaml
uwsgi:
    socket: $OPENSHIFT_DIY_IP:15005
    pidfile: $OPENSHIFT_TMP_DIR/uwsgi.pid
    pythonpath: $OPENSHIFT_REPO_DIR/$APP_DIR
    module: $APP_NAME.wsgi:application
    virtualenv: $OPENSHIFT_DATA_DIR/virtualenv
nginx.conf
...
http {
    ...
    server {
        listen $OPENSHIFT_DIY_IP:$OPENSHIFT_DIY_PORT;
        server_name localhost;

        set_real_ip_from $OPENSHIFT_DIY_IP;
        real_ip_header X-Forwarded-For;

        location / {
            uwsgi_pass $OPENSHIFT_DIY_IP:15005;
            include uwsgi_params;
        }

        location /sync {
            proxy_pass http://$OPENSHIFT_DIY_IP:8443;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
        ...
    }
}
In my post_deploy script I have the following:
...
python manage.py runworker -v2 &
daphne myapp.asgi:channel_layer -p 8443 -b $OPENSHIFT_DIY_IP -v2 &
...
So Daphne is listening on $OPENSHIFT_DIY_IP:8443 (note that runworker and daphne are now backgrounded with & rather than chained with &&, so both processes actually run), and nginx receives WebSocket requests from the client like this:
var path = 'wss://' + window.location.host + ':8443/sync';
var ws = new WebSocket(path);

ws.onmessage = function(message) {
    alert(message.data);
}

ws.onopen = function() {
    this.send('WS Connecting to receive updates!');
}
Now I can see the expected Receiving: ... lines in my terminal, and the messages arriving in the browser, so I know it is working. I hope this helps someone else besides me.
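For anyone replicating this, a quick way to poke the /sync endpoint without a browser is the third-party websocket-client package (pip install websocket-client); a sketch, where APP_DNS is a placeholder for your app's OpenShift hostname:

import websocket

# A successful send should show up as a "Receiving: ..." line
# in the runworker terminal.
ws = websocket.create_connection("wss://APP_DNS:8443/sync")
ws.send("hello from websocket-client")
ws.close()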
I'm new to Python and have been put on the task of building out a spreadsheet parser. I've created a Python script that reads an xlsx file and parses the data. I have an Nginx server set up that this will be hosted on. I need this script to be an API endpoint so I can pass the parsed data back as JSON. I have been reading about WSGI for the production server and have tried to follow the route of building that out. I am able to serve a path on the server and have it run the WSGI Python script. The script has the following:
def application(environ, start_response):
    status = '200 OK'
    html = '<html>\n' \
           '<body>\n' \
           '  Hooray, mod_wsgi is working\n' \
           '</body>\n' \
           '</html>\n'
    response_header = [('Content-type', 'text/html')]
    start_response(status, response_header)
    return [html]
I'm a little confused as to how to receive a request and send back JSON with my Excel parser class. Thanks, and I hope I'm being clear. I do have a Flask server that works, but I do not know how to keep it constantly running to serve my endpoint:
app = Flask(__name__)

@app.route('/parser/direct_energy', methods=['GET'])
def get_data():
    return jsonify(commissions_data)

if __name__ == '__main__':
    app.run(host='0.0.0.0')
You don't want to use raw WSGI for this.
Use a package such as FastAPI (or Flask) to make everything easier for you.
For instance, using FastAPI, an app with an endpoint that receives a binary (Excel) file and returns a JSON response is approximately:
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/process")
def process_file(file: UploadFile = File(...)):
    data = file.file.read()  # raw bytes of the uploaded spreadsheet
    response = my_data_processing_function(data)  # your parser goes here
    return {"response": response}
See:
To get going: https://fastapi.tiangolo.com/tutorial/
To process files: https://fastapi.tiangolo.com/tutorial/request-files/
To deploy your service (behind Nginx): https://fastapi.tiangolo.com/deployment/
I use Python/Flask for development and gunicorn for production.
To get it to accept HTTP requests, I use function decorators. It's the most common way.
import flask
from flask import abort

# Assumed setup: the linked repos define the app object like this.
application = flask.Flask(__name__)

@application.route('/epp/api/v1.0/request', methods=['POST'])
def eppJSON():
    if flask.request.json is None:
        return abort(400, "No JSON data was POSTed")
    # jsonRequest is the project's own handler (see the linked repos)
    return jsonRequest(flask.request.json, flask.request.remote_addr)
So here, the URL /epp/api/v1.0/request accepts POSTed JSON and returns JSON.
When you run Flask in dev mode, it listens on http://127.0.0.1:5000
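If you just want the endpoint up while developing, running the module directly starts Flask's built-in server (the gunicorn setup above replaces this in production); a minimal sketch, assuming the app object is named application as above:

if __name__ == '__main__':
    application.run()  # Flask's dev server, http://127.0.0.1:5000 by default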
https://github.com/james-stevens/epp-restapi/blob/master/epprest.py
https://github.com/james-stevens/dnsflsk/blob/master/dnsflsk.py
These are both python/flask projects of mine. Feel free to copy. They each run multiple instances of the python code in a single container load-balanced by nginx - pretty neat combination.
UPDATE
I got things working through Nginx, Flask, and Gunicorn. However, my Flask app only works when I go to '/'. If I go to a route such as /parser/de/v1, I get a 404 Not Found.
Here is my setup for NGinx:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html/excel_parser;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html index.php;

    server_name 208.97.141.147;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_pass http://127.0.0.1:5000;
        proxy_connect_timeout 75s;
        proxy_read_timeout 300s;
        try_files $uri $uri/ =404;
    }

    # pass PHP scripts to FastCGI server
    #
    #location ~ \.php$ {
    #    include snippets/fastcgi-php.conf;
    #
    #    # With php-fpm (or other unix sockets):
    #    fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    #    # With php-cgi (or other tcp sockets):
    #    fastcgi_pass 127.0.0.1:9000;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
My nginx.conf looks slightly different, partly because I am running multiple WSGI instances and then getting nginx to load-balance over them:
worker_processes 3;

events {
    worker_connections 1024;
}

user daemon;

http {
    access_log off;
    error_log stderr error;
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    upstream dns_servers {
        server unix:/ram/dnsflsk_1.sock;
        server unix:/ram/dnsflsk_2.sock;
        server unix:/ram/dnsflsk_3.sock;
    }

    server {
        listen 800 ssl;
        server_name localhost;

        ssl_certificate certkey.pem;
        ssl_certificate_key certkey.pem;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        location / {
            proxy_pass http://dns_servers;
        }
    }
}
But with this, all the URLs are passed to the Python/WSGI backend. (In your config, the try_files $uri $uri/ =404 line in the proxied location answers from disk first, so any route that doesn't exist as a file under root returns a 404 before it ever reaches Flask.)
I'm building a web server with Django, nginx, and uWSGI.
I'm on a MacBook Pro (mid-2014) running Mojave.
When I include the port number, Django works, but without the port number it doesn't. I don't know what is wrong with my code.
If I use this line, it works:

uwsgi --http-socket 0.0.0.0:8091 --wsgi-file test.py

But like this, it won't:

uwsgi --socket projectname.sock --wsgi-file test.py

I've tried to solve this error a couple of times, but it still won't work.
This is projectname_nginx.conf:
upstream django {
    # server 127.0.0.1:8001;
    server unix:/Users/myname/Desktop/projectnae/name/projectname.sock;
}

server {
    listen 8999;
    server_name localhost;
    charset utf-8;
    client_max_body_size 75M;

    location /media {
        alias /Users/myname/Desktop/projectnae/name/media;
    }

    location /static {
        alias //Users/myname/Desktop/projectnae/name/static
    }

    location / {
        uwsgi_pass django;
        include /Users/myname/Desktop/projectname/fido/uwsgi_params;
    }
}
And this is projectname_uwsgi.ini:

[uwsgi]
chdir = /Users/myname/Desktop/projectname/name/
module = projectname.wsgi
home = /Users/junbeomkwak/Desktop/venv/
master = true
process = 10
socket = /Users/myname/Desktop/projectnae/name/projectname.sock;
vacuum = true
Solved
See bottom for fixes etc.
I'm trying to connect my Django app with nginx via uWSGI, but it seems that the passing of data to uWSGI does not happen. I've tested that the uWSGI server is running properly, and I do not get any log output on either end.
uwsgi.ini

[uwsgi]
module = MyDjangoApp.wsgi:application
master = True
;http-socket = :8001  # to run uwsgi on its own, to ensure that it works
socket = :8001
vacuum = True
max-requests = 5000
plugin = python3
enable-threads = True
/etc/nginx/sites-available file tree
default
serverDjango_nginx.conf
serverDjango_nginx.conf:

# the upstream component nginx needs to connect to
upstream django {
    # server unix:///path/to/your/mysite/mysite.sock; # for a file socket
    server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 8000;

    # the domain name it will serve for
    server_name 127.0.0.1; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    # Django media
    # location /media {
    #     alias /path/to/your/mysite/media; # your Django project's media files
    # }

    # location /static {
    #     alias /path/to/your/mysite/static; # your Django project's static files
    # }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /home/pi/Server/uwsgi_params; # the uwsgi_params file you installed
    }
}
UPDATE:
First, the site wasn't enabled, so I put a link to it in /etc/nginx/sites-enabled/ as the documentation said. Now I get this weird error:

2020/03/29 12:14:18 [emerg] 4344#4344: open() "/etc/nginx/sites-enabled/serverDjango_nginx.conf" failed (2: No such file or directory) in /etc/nginx/nginx.conf:63

I looked into the corresponding config file and found

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;

and now I am wondering why it does not find the file I've linked to:

sudo ln -s ~/etc/nginx/sites-available/serverDjango_nginx.conf /etc/nginx/sites-enabled/

Update No. 2
The link path was wrong because of

sudo ln -s ~/etc/nginx/sites-available/serverDjango_nginx.conf

the tilde there, which made the source resolve relative to my home directory, invalidating the link. Dropping the tilde (sudo ln -s /etc/nginx/sites-available/serverDjango_nginx.conf /etc/nginx/sites-enabled/) fixes it.
This site is a great tool for generating your Nginx config files. In your server block you should set listen to either 80 or 443 (if you want the site accessible via the standard http/s ports). You should also set server_name to your domain, such as www.google.com google.com (yes, include both) or whatever domain(s) you want to serve your Django site on.
I don't use uwsgi_pass under location like you do, either. I just use proxy_pass, like proxy_pass http://localhost:8001, and then pass an include for my proxy config.
Well, I just set up NGINX and now it's working.
As my backend web server under NGINX I have Python Tornado running.
I only use NGINX to allow big (large-sized) uploads, so one of my URLs (the upload one) is served by NGINX and the rest of the URLs are served by Tornado.
I use the sessions provided by Tornado (running at http://localhost:8080/), and NGINX is running at http://localhost:8888/.
Well, this is my nginx config file:
location /images/upload {
    upload_pass /after_upload;
    ...
}

location /after_upload {
    proxy_pass http://localhost:8080/v1/upload/;
}
As you can see, there is nothing about authentication in NGINX. The URL behind proxy_pass requires a valid session (provided by Tornado).
The scheme of the system is the following: when a user logs in, the system creates a Tornado session on the server and in the user's browser, so I need to pass authentication through NGINX and then continue that authentication process in the Tornado service.
How do I change NGINX to authenticate against Tornado?
Thanks in advance
Well, Nginx works as a proxy, so it is not necessary to make changes in Tornado or in your application. For my application I just added rewrites from the NGINX URLs to the Tornado URLs. This covers all traffic (auth, etc.) and all HTTP structures, just as if you were working directly with Tornado.
server {
    listen 8888; ## listen for ipv4
    server_name localhost;

    access_log /var/log/nginx/localhost.access.log;
    client_max_body_size 100000M;

    location / {
        # Real location URL for Tornado.
        proxy_pass http://localhost:8080/;
    }
}
The key is proxy_pass: every request on port 8888 is passed to port 8080 on localhost. Everything is passed from Nginx to the Tornado backend.
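For context, a minimal Tornado backend that would sit behind that proxy looks like this (a sketch, assuming Tornado is installed; the handler and route are placeholders, not the poster's app):

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("served by Tornado behind nginx")

app = tornado.web.Application([(r"/", MainHandler)])
app.listen(8080)  # nginx forwards localhost:8888 -> localhost:8080
tornado.ioloop.IOLoop.current().start()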
I am running Ubuntu 10.04, Django 1.3, Nginx 0.8.54 and uWSGI 0.9.7.
Both Nginx and uWSGI load without error. However, when you access my site, it sits for a LONG time and then eventually loads a "504 Gateway Time-out" error.
Here is my Nginx Virtual Host conf file:
server {
    listen 80;
    server_name www.mysite.com mysite.com;

    error_log /home/mysite/log/error.log;
    access_log /home/mysite/log/access.log;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /home/mysite/public/passwd;
        include uwsgi_params;
        uwsgi_pass unix:///home/mysite/public/myapp.sock;
    }

    location /media {
        alias /home/mysite/public/myapp/media;
    }

    error_page 401 /coming_soon.html;

    location /coming_soon.html {
        root /home/mysite/public/error_pages/401;
    }

    location /401/images {
        alias /home/mysite/public/error_pages/401/images;
    }

    location /401/style {
        alias /home/mysite/public/error_pages/401/style;
    }
}
My site log shows this:
SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request / !!!
My error log shows this:
upstream timed out (110: Connection timed out) while reading response header from upstream
I have two other sites on this server with the same configuration and they load PERFECTLY.
Has anyone else encountered this problem? There are several threads on here that are similar to my issue and I've tried several of those solutions but nothing seems to work.
Thank you in advance for your help!
That error is produced when requests exceed the NGINX uwsgi_read_timeout setting. After NGINX exceeds this limit it closes the socket, and then uWSGI tries to write to the closed socket, producing the error that you see from uWSGI.
Make sure your NGINX timeouts are at least as high as your uWSGI timeouts (HARAKIRI_TIMEOUT); for example, uwsgi_read_timeout 300s; in the nginx location block and a matching harakiri = 300 in the uWSGI config.
unix:///home/mysite/public/myapp.sock;

This syntax is not correct; use it like this:

unix:/home/mysite/public/myapp.sock;