Can I use the uwsgi protocol to call http? - python

Here's a data flow:
http <--> nginx <--> uWSGI <--> python webapp
I guess nginx translates HTTP into the uwsgi protocol, and uWSGI translates it back for the webapp.
What if I want to directly call uWSGI to test an API in a webapp?
Actually I'm using Pyramid. I just configure a [uwsgi] section in the .ini file and run uWSGI, but I want to test whether uWSGI serves the webapp correctly, and the uWSGI socket is not directly reachable over HTTP.

Try using uwsgi_curl
$ pip install uwsgi-tools
$ uwsgi_curl 10.0.0.1:3030 /path
or, if you need to make more requests, try uwsgi_proxy from the same package
$ uwsgi_proxy 10.0.0.1:3030
Proxying remote uWSGI server 10.0.0.1:3030 "" to local HTTP server 127.0.0.1:3030...
so you can browse it locally at http://127.0.0.1:3030/.
If your application only allows a certain Host header, you can specify the host name as well
$ uwsgi_curl 10.0.0.1:3030 host.name/path
$ uwsgi_proxy 10.0.0.1:3030 -n host.name
If the application has static files, you can redirect such requests to your front server using the -s argument. You can also specify a different local port if needed.
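Under the hood these tools just speak the binary uwsgi protocol to the application socket. As a rough sketch of what uwsgi_curl does (the address, port, path and host below are placeholders, and error handling is omitted), you can send such a request yourself from Python:
import socket
import struct

def uwsgi_get(addr, port, path="/", host="localhost"):
    # WSGI request variables, the same kind nginx's uwsgi_params would pass
    env = {
        "REQUEST_METHOD": "GET",
        "PATH_INFO": path,
        "QUERY_STRING": "",
        "SERVER_PROTOCOL": "HTTP/1.1",
        "SERVER_NAME": host,
        "SERVER_PORT": str(port),
        "HTTP_HOST": host,
    }
    # each variable is <key length><key><value length><value>, lengths are 16-bit little-endian
    body = b"".join(
        struct.pack("<H", len(k)) + k.encode() + struct.pack("<H", len(v)) + v.encode()
        for k, v in env.items()
    )
    # 4-byte header: modifier1 = 0 (plain WSGI request), body size, modifier2 = 0
    packet = struct.pack("<BHB", 0, len(body), 0) + body
    with socket.create_connection((addr, port)) as s:
        s.sendall(packet)
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    # for modifier1 = 0 uWSGI answers with an ordinary HTTP response, e.g. b"HTTP/1.1 200 OK\r\n..."
    return b"".join(chunks)

print(uwsgi_get("10.0.0.1", 3030, "/path").decode(errors="replace"))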

From your question I'm assuming you want to run your WSGI-compliant app directly with uWSGI and open an HTTP socket. You can do so by configuring your uwsgi.ini (or whatever the filename is) with
http=127.0.0.1:8080
uWSGI will now open an HTTP socket that listens on port 8080 for incoming connections from localhost (see the documentation: http://uwsgi-docs.readthedocs.org/en/latest/HTTP.html).
Alternatively, you can start your process directly from the command line with the http parameter:
$ uwsgi --http=127.0.0.1:8080 --module=yourapp:wsgi_entry_point
If you configure uWSGI with a Unix socket, nginx can communicate with that socket via the uwsgi protocol (http://uwsgi-docs.readthedocs.org/en/latest/Protocol.html).
Keep in mind that if you usually serve static content (CSS, JavaScript, images) through nginx, you will need to set that up too when running uWSGI directly. But if you only want to test a REST API, this should work for you.
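For completeness, here is a minimal WSGI callable that would satisfy the --module=yourapp:wsgi_entry_point reference above (yourapp and wsgi_entry_point are placeholder names; a Pyramid app would instead expose the WSGI app returned by config.make_wsgi_app()):
# yourapp.py -- placeholder module for the --module example above
def wsgi_entry_point(environ, start_response):
    body = b"hello from uWSGI\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
With that running under uwsgi --http=127.0.0.1:8080 --module=yourapp:wsgi_entry_point, a plain curl http://127.0.0.1:8080/ should return the response.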

First, consider these questions:
On which port is uWSGI running?
Is uWSGI running on your or on a remote machine?
If it's running on a remote machine, is the port accessible from your computer? (iptables rules might forbid external access)
If you made sure you have access, you can just call http://hostname:port/path/to/uWSGI for direct API access.
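If you are unsure about the last two points, a quick reachability check from Python (a sketch; host and port are placeholders) only tells you whether the TCP port is open, not whether it speaks HTTP or the uwsgi protocol:
import socket

try:
    # adjust host/port to wherever uWSGI is listening
    socket.create_connection(("10.0.0.1", 3030), timeout=3).close()
    print("port is reachable")
except OSError as exc:
    print("port is not reachable:", exc)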

I know this is an old question, but I just needed this and found that this Docker + nginx solution works best for me:
cat > /tmp/nginx.conf << EOF
events {}
http {
    server {
        listen 8000;
        location / {
            include uwsgi_params;
            uwsgi_pass 127.0.0.1:3031;
        }
    }
}
EOF
docker run -it --network=host --rm --name uswgi-proxy -v /tmp/nginx.conf:/etc/nginx/nginx.conf:ro nginx
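Once the proxy container is running, the app behind the uWSGI socket can be exercised over plain HTTP on the port nginx listens on, for example (a sketch; /path is a placeholder and requests is a third-party package):
import requests  # pip install requests

resp = requests.get("http://127.0.0.1:8000/path")
print(resp.status_code)
print(resp.text[:200])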

Related

nginx how to host both react and django

I have a React frontend imported inside a Django backend. Communication between the two is done through django-rest-framework. On the React side, fetching is done through relative paths, so in my package.json I have added the line:
"proxy": "http://127.0.0.1:8000",
Django hosts the React app locally without problems when I run python3 manage.py runserver.
On the remote side I am trying to use nginx with gunicorn to deploy this app on an AWS Ubuntu instance, and I run into a problem:
First, I run python3 manage.py collectstatic.
Then I point nginx at those static files for the index.html.
Success! nginx serves the React static files.
I use gunicorn myapp.wsgi -b 127.0.0.1:8000 to run Django.
Problem! The nginx-served React files do not fetch anything. fetch does not call the local path but instead calls the public IP of the AWS instance. Also, I cannot simulate GET/POST requests to the Django backend, because I think nginx "covers" the paths served by Django's gunicorn.
Please tell me how I can connect the nginx-served React frontend to the gunicorn-run Django backend.
My nginx sites-enabled/example:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /home/ubuntu/fandigger/frontend/build/;
    server_name public_ip_without_port;
    location / {
        try_files $uri $uri/ =404;
    }
}

How to Deploy Django App on Centos server having domain secured With SSL?

I have a Django App which I want to deploy on a Centos Linux server having a global/public IP which is assigned to a domain and is secured with SSL.
The system configuration is:
centos-release-6-10.el6.centos.12.3.x86_64
Apache/2.2.15 (CentOS)
When I run the server using:
python manage.py runserver 0.0.0.0:8000,
then it is only accessible from the browser via a local IP, say
http://192.xxx.xx.xxx:8000/django_app/home
But I want to access it from the public/global IP; it gives an error when I replace the local IP with the global/public IP or the domain assigned to that IP:
105.168.296.58 took too long to respond.
Try:
Checking the connection
Checking the proxy and the firewall
Running Windows Network Diagnostics
ERR_CONNECTION_TIMED_OUT
When I simply put this public IP in the browser as https://105.168.296.58, it redirects the browser to the app/website, say https://mywebsite.com/home, which is running on port 80 on Apache, and shows the assigned domain instead of the IP, so the IP itself is working fine.
In settings.py, all hosts are allowed: ALLOWED_HOSTS = ['*']
Methods I have tried:
Using django-extensions:
python manage.py runserver_plus --cert certname 0.0.0.0:8000
Using django-sslserver:
python manage.py runsslserver 0.0.0.0:8000
Both of these methods run the app only locally at http://192.xxx.xx.xxx:8000/django_app/home; neither of them works from the public IP.
How can I deploy or configure this Django app to run outside the local network?
I would deploy Django like this:
Install uWSGI in your virtualenv:
pip install uwsgi
Create a uWSGI config file (everything inside <> has to be edited to your needs):
# mysite_uwsgi.ini file
[uwsgi]
# Django-related settings
# the base directory (full path)
chdir = </path/to/your/project>
# Django's wsgi file
module = <project>.wsgi
# the virtualenv (full path)
home = </path/to/virtualenv>
# process-related settings
# master
master = true
# maximum number of worker processes
processes = 10
# the socket (use the full path to be safe)
socket = </path/to/your/project/mysite.sock>
# ... with appropriate permissions - may be needed
# chmod-socket = 664
# clear environment on exit
vacuum = true
Install nginx and create the following config:
# mysite_nginx.conf
# the upstream component nginx needs to connect to
upstream django {
    server unix:///<path/to/your/mysite/mysite.sock>;
}
# configuration of the server
server {
    # the port your site will be served on
    listen 80;
    # the domain name it will serve for
    server_name <example.com>;  # substitute your machine's IP address or FQDN
    charset utf-8;
    # max upload size
    client_max_body_size 75M;  # adjust to taste
    # Django media
    location /media {
        alias </path/to/your/mysite/media>;  # your Django project's media files - amend as required
    }
    location /static {
        alias </path/to/your/mysite/static>;  # your Django project's static files - amend as required
    }
    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include </path/to/your/mysite/uwsgi_params>;  # the uwsgi_params file you installed
    }
}
Download and copy uwsgi_params to your directory
https://github.com/nginx/nginx/blob/master/conf/uwsgi_params
Symlink the config into /etc/nginx/sites-enabled:
sudo ln -s /etc/nginx/sites-available/mysite_nginx.conf /etc/nginx/sites-enabled/
Start uwsgi and nginx
uwsgi --ini mysite_uwsgi.ini
sudo service nginx restart
Don't forget to run collectstatic so that static files can be served.
We added the mysite_uwsgi.ini and uwsgi_params files to our repository.
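For reference, the module referenced by module = <project>.wsgi above is the stock wsgi.py that django-admin startproject generates (shown here as a sketch, with <project> as a placeholder):
# <project>/wsgi.py
import os

from django.core.wsgi import get_wsgi_application

# point Django at the project's settings module before building the WSGI app
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "<project>.settings")

application = get_wsgi_application()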

Apache config on a Synology DS for use with mod_wsgi / Django

I've started developing a new site using Django. For realistic testing I wanted to run it on a Synology DS212J NAS.
Following the official Synology guides I installed ipkg and with it the mod_wsgi package.
As the next step, following the standard tutorial, I made a virtualenv and installed Django in it, opened a new project and adjusted the settings according to: https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-apache-and-mod_wsgi-on-ubuntu-16-04
I'm able to reach the "Hello World" site from Django by using manage.py.
As suggested, I want to replace manage.py with the Apache server on the NAS. So I think I should edit the Apache config files to, e.g., define a virtual host...
However, I can't locate those files, as it seems they were moved in DSM 6 (which I use) compared to other guides.
Where do I need to enter the values from the tutorial on the Synology?
As I'm quite new to the topic: do I need to explicitly load the mod_wsgi module for Apache, and if so, where?
Is it a good idea to use the basic (embedded) mode of mod_wsgi instead of daemon mode? I'm not sure which Django modules will be used later in development...
Thanks for the support!
Activate the Python 3 package and Web Station.
In Web Station > General Settings, enable nginx as the main HTTP server.
In Control Panel > Network > DSM Settings, enable a custom domain: "test"
(this allows us to access the NAS by entering test.local and simplifies the task later).
Enable the SSH connection in Control Panel > Terminal & SNMP.
We use Synology's DDNS service to have external access, in our case "test.synology.me".
In Control Panel > Security > Certificate, we generate our SSL certificate with Let's Encrypt.
Connect to the NAS over SSH.
Become root: sudo -i
Install virtualenv: easy_install virtualenv
We set up our virtual environment: virtualenv -p python3 flasktest
Install Flask and gunicorn:
pip install flask gunicorn
We create our web application, file __init__.py (a minimal sketch is shown below).
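The original answer does not show the file's contents; assuming it lives in a package called app (matching the app:app target passed to gunicorn below), a minimal sketch could be:
# app/__init__.py -- minimal Flask application matching gunicorn's app:app target
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from the Synology NAS"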
We launch our web application with gunicorn:
gunicorn --certfile /usr/syno/etc/certificate/system/default/cert.pem --keyfile /usr/syno/etc/certificate/system/default/privkey.pem -b 127.0.0.1:5000 app:app
In /etc/nginx/sites-enabled we create a server configuration file; we will use nginx as a proxy. In our case the file is flasktest.conf.
flasktest.conf file:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    gzip on;
    server_name test.synology.me;
    location / {
        proxy_pass https://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    error_log /volume1/projects/flasktest/logs/error.log;
    access_log /volume1/projects/flasktest/logs/access.log;
}
Open the port in Control Panel > External Access > Router Configuration > Create > Built-in application, enable the checkbox for Web Station and apply.
We check the server configuration file with the command nginx -t.
We restart nginx: synoservicecfg --restart nginx
You now have access to your Python web application from outside over HTTPS at https://test.synology.me
A little more information...
To keep your application permanently accessible (across reboots, crashes, ...), you can create a script that restarts gunicorn. Otherwise Web Station takes over: if you enter the NAS IP locally you will not see your Python web app, because we did not modify the main configuration file /etc/nginx/nginx.conf, so the default index.html page of Web Station is what gets displayed.
Example:
cd /volume1/projects/flasktest
source bin/activate
gunicorn --certfile /usr/syno/etc/certificate/system/default/cert.pem --keyfile /usr/syno/etc/certificate/system/default/privkey.pem -b 127.0.0.1:5000 app:app </dev/null 2>&1 &
This method works with other Python frameworks as well.

uwsgi + nginx + flask: upstream prematurely closed

I created an endpoint in my Flask app which generates a spreadsheet from a database query (remote DB) and then sends it as a download in the browser. Flask doesn't throw any errors. uWSGI doesn't complain.
But when I check nginx's error.log I see a lot of:
2014/12/10 05:06:24 [error] 14084#0: *239436 upstream prematurely
closed connection while reading response header from upstream, client:
34.34.34.34, server: me.com, request: "GET /download/export.csv HTTP/1.1", upstream: "uwsgi://0.0.0.0:5002", host: "me.com", referrer:
"https://me.com/download/export.csv"
I deploy uWSGI like this:
uwsgi --socket 0.0.0.0:5002 --buffer-size=32768 --module server --callable app
My nginx config:
server {
    listen 80;
    merge_slashes off;
    server_name me.com www.me.com;
    location / { try_files $uri @app; }
    location @app {
        include uwsgi_params;
        uwsgi_pass 0.0.0.0:5002;
        uwsgi_buffer_size 32k;
        uwsgi_buffers 8 32k;
        uwsgi_busy_buffers_size 32k;
    }
}
server {
    listen 443;
    merge_slashes off;
    server_name me.com www.me.com;
    location / { try_files $uri @app; }
    location @app {
        include uwsgi_params;
        uwsgi_pass 0.0.0.0:5002;
        uwsgi_buffer_size 32k;
        uwsgi_buffers 8 32k;
        uwsgi_busy_buffers_size 32k;
    }
}
Is this an nginx or uwsgi issue, or both?
As mentioned by @mahdix, the error can be caused by nginx sending a request with the uwsgi protocol while uwsgi is listening on that port for HTTP packets.
When in the Nginx config you have something like:
upstream org_app {
    server 10.0.9.79:9597;
}
location / {
    include uwsgi_params;
    uwsgi_pass org_app;
}
Nginx will use the uwsgi protocol. But if in uwsgi.ini you have something like (or its equivalent in the command line):
http-socket=:9597
uwsgi will speak http, and the error mentioned in the question appears. See native HTTP support.
A possible fix is to have instead:
socket=:9597
In which case Nginx and uwsgi will communicate with each other using the uwsgi protocol over a TCP connection.
Side note: if Nginx and uwsgi are in the same node, a Unix socket will be faster than TCP. See using Unix sockets instead of ports.
Change nginx.conf to include
sendfile on;
client_max_body_size 20M;
keepalive_timeout 0;
See the self-answer "uwsgi upstart on amazon linux" for a full example.
In my case, the problem was that nginx was sending a request with the uwsgi protocol while uwsgi was listening on that port for HTTP packets. So I either had to change the way nginx connects to uwsgi, or change uwsgi to listen using the uwsgi protocol.
I had the same sporadic errors in an Elastic Beanstalk single-container Docker WSGI app deployment. On the EC2 instance of the environment, the upstream configuration looks like:
upstream docker {
    server 172.17.0.3:8080;
    keepalive 256;
}
With this default upstream, a simple load test like:
siege -b -c 16 -t 60S -T 'application/json' 'http://host/foo POST {"foo": "bar"}'
...on the EC2 instance led to an availability of ~70%. The rest were 502 errors caused by upstream prematurely closed connection while reading response header from upstream.
The solution was either to remove the keepalive setting from the upstream configuration or, which is easier and more reasonable, to enable HTTP keep-alive on uWSGI's side as well, with --http-keepalive (available since 1.9).
Replace uwsgi_pass 0.0.0.0:5002; with uwsgi_pass 127.0.0.1:5002; or better use unix sockets.
It seems many causes can lie behind this error message. I know you are using uwsgi_pass, but for those having the problem on long requests when using proxy_pass, setting http-timeout on uWSGI may help (it is not the harakiri setting).
There are many potential causes and solutions for this problem. In my case, the back-end code was taking too long to run. Modifying these variables fixed it for me.
Nginx:
proxy_connect_timeout, proxy_send_timeout, proxy_read_timeout, fastcgi_send_timeout, fastcgi_read_timeout, keepalive_timeout, uwsgi_read_timeout, uwsgi_send_timeout, uwsgi_socket_keepalive.
uWSGI: limit-post.
I fixed this issue by passing the socket-timeout = 65 option (uwsgi.ini file) or --socket-timeout=65 (uwsgi command line) to uwsgi. You may have to try different values depending on the web traffic. The value socket-timeout = 65 in the uwsgi.ini file worked in my case.
I fixed this by reverting to pip3 install uwsgi.
I was trying out the setup on Ubuntu and Amazon Linux side by side. I initially used a virtual environment and did pip3 install uwsgi; both systems worked fine. Later, I continued the setup with the virtual env turned off. On Ubuntu I installed uWSGI using pip3 install uwsgi, and on Amazon Linux using yum install uwsgi -y. That was the source of the problem for me.
Ubuntu worked fine, but Amazon Linux did not.
The fix:
yum remove uwsgi, then pip3 install uwsgi, restart, and it works fine.
This issue can also be caused by a mismatch between timeout values.
I had this issue when nginx had a keepalive_timeout of 75s, while the upstream server's value was a few seconds.
This caused the upstream server to close the connection when its timeout was reached, and nginx logged Connection reset by peer errors.
When you get such abrupt "connection closed" errors, check that the upstream timeout values are higher than nginx's values (see Raphael's answer for a good list to check).

Problems deploying Pyramid with uWSGI on Nginx

I seem to be having some slight problems deploying a Pyramid web application. The problem seems to lie in the init script that I am using to start my web application on boot. For some reason, uWSGI will not work unless my socket is owned by "nobody.nobody" OR nginx is started after my uwsgi init script. I've changed my init script to reflect this, but it does not seem to be working. The init script (or the part that starts uwsgi) looks like so:
#!/sbin/runscript
args="--ini-paste /var/www/pyramid/app1/development.ini"
command="/var/www/pyramid/bin/uwsgi"
pidfile="/var/run/uwsgi.pid"
sock="/var/tmp/proxy/uwsgi.sock"
nobody="nobody.nobody"
start() {
    ebegin "Starting app1"
    chown $nobody $sock
    start-stop-daemon --start --exec $command -- $args \
        --pidfile $pidfile
    chown $nobody $sock
    einfo "app1 started"
    eend $?
}
My Nginx configuration looks like so:
location / {
    include uwsgi_params;
    uwsgi_pass unix:///var/tmp/proxy/uwsgi.sock;
    uwsgi_param SCRIPT_NAME "";
}
My ini file includes the following:
[uwsgi]
socket = /var/tmp/proxy/uwsgi.sock
pidfile = /var/run/uwsgi.pid
master = true
processes = 1
home = /var/www/pyramid
daemonize = /var/log/uwsgi.log
virtualenv = /var/www/pyramid/
pythonpath = /var/www/pyramid/bin
What happens is that nginx starts, and then uwsgi starts. Performing an "ls -la" in /var/tmp/proxy reveals that the ownership of uwsgi.sock is set to "root root" instead of "nobody nobody". However, restarting nginx fixes the problem, regardless of what the socket's permissions are (but nginx has to be started first).
Thus, the ways I can get this to work is:
start uwsgi
start nginx
restart nginx
or
start nginx
start uwsgi
restart nginx
I'm at a complete loss as to why this isn't working. If anyone has any advice I'd greatly appreciate it!
You can use the following settings in your ini file to change the permissions and ownership of the socket:
chmod-socket = 777 # socket permission
gid = www-data # socket group
uid = www-data # socket user
Another thing to consider is whether you actually want uWSGI to run as root. If you pass --uid and --gid arguments to uwsgi, uwsgi will masquerade as a different (preferably non-root) user.
For example, nginx usually runs as the www-data user and www-data group. So if you set up your wsgi app to run with "--gid www-data" and then add at least group-write permissions to your socket file with "--chmod-socket 020", then nginx will be able to write to the socket and you'll be in business.
See my blog post on the subject: http://blog.jackdesert.com/common-hurdles-to-deploying-uwsgi-apps-part-1
