I'm trying to build a simple torrent tracker with Flask, but I've run into a problem.
If the client is behind a NAPT network, the port included in the request is incorrect. I want to get the client's connection port with Flask (PHP exposes this as $_SERVER['REMOTE_PORT']).
How can I get the client port with Flask?
You can get it from request.environ:
request.environ.get('REMOTE_PORT')
If Flask is behind a reverse proxy, request.environ.get('REMOTE_PORT') will not give you what you want: you will get the port used by the reverse proxy instead.
If you are using Nginx, add the proxy_set_header line below to your config file:
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass ...;
        proxy_set_header WHATEVER $remote_port;
    }
}
Then you can get the client port with:
request.headers.get('WHATEVER')
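Since Flask builds its request object from the WSGI environ, the lookup can be illustrated with the standard library alone. This is a minimal sketch, assuming the placeholder header name WHATEVER from the config above; it is not Flask itself, just a plain WSGI app showing where the value comes from:

```python
# Minimal WSGI app showing where Flask reads the client port from.
# REMOTE_PORT is set by the WSGI server for direct connections;
# behind a proxy, the forwarded header (here "WHATEVER", a placeholder
# name from the Nginx config above) arrives as HTTP_WHATEVER.

def app(environ, start_response):
    direct_port = environ.get("REMOTE_PORT")        # peer's TCP source port
    forwarded_port = environ.get("HTTP_WHATEVER")   # set by the proxy, if any
    port = forwarded_port or direct_port or "unknown"
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [port.encode()]
```

In Flask the equivalent lookups are request.environ.get('REMOTE_PORT') and request.headers.get('WHATEVER'), as shown above.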
I registered a domain (for example, example.com) at godaddy.com, and I ran the commands to obtain a Let's Encrypt certificate:
(venv) ubuntu2@212.../microblog$ wget https://dl.eff.org/certbot-auto
(venv) ubuntu2@212.../microblog$ chmod a+x ./certbot-auto
(venv) ubuntu2@212...~/microblog$ ../certbot-auto certonly --webroot -w /home/ubuntu2/microblog -d example.com --email example@aa.com
But there is an error, as follows:
Requesting to rerun ./certbot-auto with root privileges...
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator webroot, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for example.com
Using the webroot path /home/ubuntu2/microblog for all unmatched domains.
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. example.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://example.com/.well-known/acme-challenge/V9B6Dz7gPx7RhyLmpYIlwYUhs1d4rWJF2HlpJbNbjbY: "<!DOCTYPE html><body style="padding:0; margin:0;"><html><body><iframe src="http://mcc.godaddy.com/park/MaO2MaO2LKWaYaOvrt==/fe/M"
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: example.com
Type: unauthorized
Detail: Invalid response from
http://example.com/.well-known/acme-challenge/V9B6Dz7gPx7RhyLmpYIlwYUhs1d4rWJF2HlpJbNbjbY:
"<!DOCTYPE html><body style="padding:0;
margin:0;"><html><body><iframe
src="http://mcc.godaddy.com/park/MaO2MaO2LKWaYaOvrt==/fe/M"
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
And /etc/nginx/sites-enabled/microblog is as follows:
server {
    # listen on port 80 (http)
    listen 80;
    server_name example.com;

    location / {
        # redirect any requests to the same URL but on https
        return 301 https://$host$request_uri;
    }
}
server {
    # listen on port 443 (https)
    listen 443 ssl;
    server_name example.com;

    # location of the self-signed SSL certificate
    #ssl_certificate /home/ubuntu/microblog2/certs/cert.pem;
    #ssl_certificate_key /home/ubuntu/microblog2/certs/key.pem;

    # write access and error logs to /var/log
    access_log /var/log/microblog_access.log;
    error_log /var/log/microblog_error.log;

    location / {
        # forward application requests to the gunicorn server
        proxy_pass http://127.0.0.1:8000;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /static {
        # handle static files directly, without forwarding to the application
        alias /home/ubuntu2/microblog/static;
        expires 30d;
    }

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root /home/ubuntu2/microblog/;
    }

    location = /.well-known/acme-challenge/ {
        return 404;
    }
}
I don't know what is wrong. Could you help me solve this issue? Thanks!
It appears from the error message that example.com is not pointing to your Nginx server address yet. Notice this in the error message:
Detail: Invalid response from
http://example.com/.well-known/acme-challenge/V9B6Dz7gPx7RhyLmpYIlwYUhs1d4rWJF2HlpJbNbjbY:
"<!DOCTYPE html><body style="padding:0;
margin:0;"><html><body><iframe
src="http://mcc.godaddy.com/park/MaO2MaO2LKWaYaOvrt==/fe/M" <<<<<<<<<<<<<<<
The godaddy.com/park part likely means that the domain is parked at GoDaddy, and is not reaching your server yet.
The error message provided is also helpful for understanding the problem:
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
Check your DNS settings and try again. Also note that it could be related to DNS propagation time, if the DNS change was recent. In that case you need to wait for the change to propagate, which can take up to 24 hours.
P.S. You can confirm this by running curl example.com on the command line. Make sure it returns the home page of your server. When it does, try again.
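If you'd rather check from Python, here is a stdlib-only sketch of the same sanity check: resolve the name and compare against your server's IP (the domain and IP in the usage comment are placeholders, not from the question):

```python
import socket

def resolves_to(domain, expected_ip):
    """Return True if `domain` currently resolves to `expected_ip`."""
    try:
        # gethostbyname_ex returns (hostname, aliases, [ip, ip, ...])
        ips = socket.gethostbyname_ex(domain)[2]
    except socket.gaierror:
        return False  # the name does not resolve at all yet
    return expected_ip in ips

# Example usage -- substitute your real domain and server IP:
# resolves_to("example.com", "203.0.113.10")
```

Until this returns True for your server's address, certbot's http-01 challenge will keep hitting the GoDaddy parking page instead of your Nginx.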
I am running a Django, uWSGI, nginx server.
My server works fine for GET and POST requests of smaller size, but when POSTing requests of large size, nginx returns a 502:
nginx error.log is:
2016/03/01 13:52:19 [error] 29837#0: *1482 sendfile() failed (32: Broken pipe) while sending request to upstream, client: 175.110.112.36, server: server-ip, request: "POST /me/amptemp/ HTTP/1.1", upstream: "uwsgi://unix:///tmp/uwsgi.sock:", host: "servername"
So, in order to find where the real problem was, I ran uWSGI on a different port and checked whether the same request caused any error. The request was successful, so the problem is with the nginx or unix socket configuration.
Nginx configuration:
# the upstream component nginx needs to connect to
upstream django {
    server unix:///tmp/uwsgi.sock; # for a file socket
    # server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 80;
    # the domain name it will serve for
    server_name 52.25.29.179; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    # Django media
    location /media {
        alias /home/usman/Projects/trequant/trequant-python/trequant/media; # your Django project's media files - amend as required
    }

    location /static {
        alias /home/usman/Projects/trequant/trequant-python/trequant/static; # your Django project's static files - amend as required
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        ######## proxy_pass http://127.0.0.1:8000;
        ######## proxy_set_header Host $host;
        ######## proxy_set_header X-Real-IP $remote_addr;
        ######## proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        uwsgi_pass django;
        uwsgi_read_timeout 600s;
        uwsgi_send_timeout 600s;
        include /etc/nginx/uwsgi_params; # the uwsgi_params file you installed
    }
}
So, any idea what I am doing wrong? Thank you in advance.
Supposedly, setting post-buffering = 8192 in your uwsgi.ini file will fix this. I got this from a 2.5-year-old answer here, which implies this fix does not address the root cause. Hope it helps!
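For context, the setting goes in the [uwsgi] section of the ini file; a sketch of where it would sit (only the post-buffering line is from the answer above, the rest is illustrative):

```ini
[uwsgi]
; buffer up to 8 KB of request body before handing the request to
; the application, avoiding the broken-pipe error on larger POSTs
post-buffering = 8192
```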
Another fix is to use a TCP socket instead of a unix socket in your conf files:
In uwsgi.ini, use something like socket = 127.0.0.1:8000 in the [uwsgi] section instead of:
socket = /tmp/uwsgi.sock
chown-socket = nginx:nginx
chmod-socket = 664
In your nginx.conf file (btw in Ubuntu, I'm referring to /etc/nginx/conf.d/nginx.conf, NOT the one simply in /etc/nginx/) use uwsgi_pass 127.0.0.1:8000; instead of include uwsgi_params;
I've posted this as a separate answer because either answer may work, and I'm interested to see which answer helps others the most.
In my case this seemed to happen for requests that would have resulted in a 308 redirect. I think my Node backend was sending the response before the POST data was fully received. Updating the client to hit the new endpoint directly (no redirect) may permanently fix my case. Seems promising.
Setting a higher body buffer size, e.g. client_body_buffer_size 1M;, will fix this.
References:
http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size
https://www.nginx.com/resources/wiki/start/topics/examples/full/
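A sketch of where that directive might go (the 1M value is from the answer above; placing it at http level applies it to all servers, and server or location level is equally valid):

```nginx
http {
    # buffer request bodies up to 1 MB in memory before passing upstream
    client_body_buffer_size 1M;
    ...
}
```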
I am using nginx as a proxy server for a Django app using gunicorn. The Django app is bound to http://127.0.0.1:8000. Here's my nginx setup from /etc/nginx/sites-enabled/parkitbackend:
server {
    server_name AA.BB.CC.DD;
    access_log off;

    location /static/ {
        autoindex on;
        alias /home/zihe/parkitbackend/parkitbackend/common-static/;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
I am using python requests module:
requests.post("http://AA.BB.CC.DD/dashboard/checkin/", data=unicode(json.dumps(payload), "utf8"))
to post JSON objects to my django app called dashboard, where I have a function in dashboard/views.py called checkin to process the JSON object.
I did not receive any errors from running the JSON posting script. However, nginx does not seem to pass the request to gunicorn, which is bound at 127.0.0.1:8000. What should I do so that nginx passes the JSON to my django app? Thank you!
Additional notes:
I am quite sure the JSON posting code and my django app work properly, since I tested them by binding the Django app to http://AA.BB.CC.DD:8000 and running this code in python:
requests.post("http://AA.BB.CC.DD:8000/dashboard/checkin/", data=unicode(json.dumps(payload), "utf8"))
and my django app received the JSON as expected.
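The body of the checkin view is not shown in the question; a hypothetical, stdlib-only sketch of the JSON handling such a view would do (the names parse_payload and the example keys are assumptions, not from the question):

```python
import json

def parse_payload(body):
    """Decode a JSON request body into a dict, tolerating bad input.

    A Django view like dashboard.views.checkin could call this with
    request.body and return a 400 response when it gets None back.
    """
    try:
        payload = json.loads(body.decode("utf-8"))
    except (UnicodeDecodeError, ValueError):
        return None  # malformed JSON -> caller should answer 400
    if not isinstance(payload, dict):
        return None  # expect a JSON object, not a list or scalar
    return payload
```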
I checked error.log, located in /var/log/nginx/. It turns out that the JSON I was sending was too large, which produced this error:
[error] 3450#0: *9 client intended to send too large body: 1243811 bytes, client: 127.0.0.1, server: _, request: "POST /dashboard/checkin/ HTTP/1.1", host: "127.0.0.1"
After reading up on this link: http://gunicorn-docs.readthedocs.org/en/19.3/deploy.html#nginx-configuration
I reduced the size of the JSON and modified /etc/nginx/sites-enabled/parkitbackend to be like this:
upstream app_server {
    server 127.0.0.1:8000;
}

server {
    listen AA.BB.CC.DD:80;
    server_name _;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_server;
    }

    location /static/ {
        autoindex on;
        alias /home/username/parkitbackend/parkitbackend/common-static/;
    }
}
and replaced this line in /etc/nginx/nginx.conf:
include /etc/nginx/sites-enabled/*;
with this:
include /etc/nginx/sites-enabled/parkitbackend;
And the problem is resolved.
Well, I just set up NGINX and now it's working.
As my backend web server behind NGINX I have Python Tornado running.
I only use NGINX to allow big (large-sized) uploads, so one of my URLs (for upload) is served by NGINX and the rest of the URLs are served by Tornado.
I use sessions provided by Tornado (running at http://localhost:8080/), and NGINX is running at http://localhost:8888/.
This is my nginx config file:
location /images/upload {
    upload_pass /after_upload;
    .....
    .....
    .....
}

location /after_upload {
    proxy_pass http://localhost:8080/v1/upload/;
}
As you can see, there isn't anything about authentication in NGINX, but the URL behind proxy_pass requires a valid session (provided by Tornado).
The scheme of the system is the following: when users log in, the system creates a Tornado session on the server and in the user's browser, so I need to pass authentication through NGINX and continue the authentication process in the Tornado service.
How do I change NGINX to authenticate against Tornado?
Thanks in advance.
Well, Nginx works as a proxy, therefore it is not necessary to make changes in Tornado or in your application. For my application I just added rewrites from NGINX URLs to Tornado URLs. This passes through all traffic (auth, etc.) and all HTTP structures, as if you were working directly with Tornado.
server {
    listen 8888; ## listen for ipv4
    server_name localhost;

    access_log /var/log/nginx/localhost.access.log;
    client_max_body_size 100000M;

    location / {
        # Real location URL for Tornado.
        proxy_pass http://localhost:8080/;
    }
}
The key is proxy_pass: every request to port 8888 is passed to port 8080 on localhost. Everything is passed from Nginx to the Tornado backend, including authentication.
I've got a Flask app daemonized via supervisor. I want to proxy_pass a subfolder on localhost to the Flask app. The Flask app runs correctly when run directly, but it gives 404 errors when called through the proxy. Here is the nginx config file:
upstream apiserver {
    server 127.0.0.1:5000;
}

location /api {
    rewrite /api/(.*) /$1 break;
    proxy_pass_header Server;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Scheme $scheme;
    proxy_pass http://apiserver;
    proxy_next_upstream error timeout http_502;
    proxy_buffering off;
}
For instance, when I go to http://127.0.0.1:5000/me, I get a valid response from the app. However, when I go to http://127.0.0.1/api/me I get a 404 from the Flask app (not nginx). Also, the Flask SERVER_NAME variable is set to 127.0.0.1:5000, in case that's relevant.
I'd really appreciate any suggestions; I'm pretty stumped! If there's anything else I need to add, let me know!
I suggest not setting SERVER_NAME.
If SERVER_NAME is set, it will 404 any requests that don't match the value.
Since Flask is handling the request, you could just add a little bit of information to the 404 error to help you understand what's passing through to the application and give you some real feedback about what effect your nginx configuration changes cause.
from flask import request

@app.errorhandler(404)
def page_not_found(error):
    return 'This route does not exist {}'.format(request.url), 404
So when you get a 404 page, it will helpfully tell you exactly what Flask was handling, which should help you to very quickly narrow down your problem.
I ran into the same issue. Flask should really provide more verbose errors here since the naked 404 isn't very helpful.
In my case, SERVER_NAME was set to my domain name (e.g. example.com).
nginx was forwarding requests without the server name, and as @Zoidberg notes, this caused Flask to trigger a 404.
The solution was to configure nginx to use the same server name as Flask.
In your nginx configuration file (e.g. sites_available or nginx.conf, depending on where you're defining your server):
server {
    listen 80;
    server_name example.com; # this should match Flask SERVER_NAME
    ...etc...
}