How to capture the client IP of a web application in Flask? [duplicate] - python

This question already has answers here:
What is going on when I set app.wsgi_app = ProxyFix(app.wsgi_app) when running a Flask app on gunicorn?
(2 answers)
X-Forwarded-Proto and Flask
(1 answer)
Closed 2 years ago.
I am trying to capture the client IP in my logs. I get the correct IP on the development server, but after deployment on the SIT server I get a wrong IP by default (10.46.0.0). Can someone suggest what else can be used in Flask/Python? Thanks in advance.
Code:
from flask import request

ip = request.environ['REMOTE_ADDR']

The IP address you retrieve is part of the CIDR block 10.0.0.0/8, which is reserved for private networks. This tells me that your application is deployed behind a reverse proxy, which performs requests to your application on behalf of the original users. A reverse proxy typically informs the upstream service (your application) about the originating IP address by adding the X-Forwarded-For header to the proxied request. This depends on the configuration of the reverse proxy, so you should ask the people in charge of the deployment server about its configuration.
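If the proxy is set up to send X-Forwarded-* headers, one common option is Werkzeug's ProxyFix middleware (the subject of the duplicate linked above). A minimal sketch, assuming exactly one trusted proxy in front of the app:
from flask import Flask, request
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)
# Trust one level of X-Forwarded-For / -Proto / -Host set by the proxy.
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1)

@app.route('/ip')
def ip():
    # With ProxyFix applied, remote_addr is the original client address,
    # not the proxy's private 10.x address.
    return request.remote_addr
Only trust as many X-Forwarded-For entries as you have proxies, since clients can spoof the header themselves.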

It seems like you got the address from a Reverse Proxy. The original client IP may be forwarded as part of the proxy chain.
This worked for me.
client_ip = request.headers.get(
    'X-Forwarded-For',
    request.headers.get('Client-Ip', request.remote_addr))
Note the difference between the parsed WSGI headers and the raw CGI-style environ:
WSGI: request.headers
CGI: request.environ
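A small sketch comparing the two access styles (the route name is made up for the example):
from flask import Flask, request

app = Flask(__name__)

@app.route('/whoami')
def whoami():
    # Parsed headers (case-insensitive lookup)
    via_headers = request.headers.get('X-Forwarded-For', request.remote_addr)
    # Raw WSGI environ with CGI-style keys
    via_environ = request.environ.get('HTTP_X_FORWARDED_FOR',
                                      request.environ.get('REMOTE_ADDR'))
    return f'headers: {via_headers}, environ: {via_environ}'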

Related

Create a default vhost to serve HTTP requests in uWSGI

I have uWSGI 2.0.19 on Linux running with the Python plugin. I serve HTTP(S) traffic with different applications, each for a specific record of my managed domain, using this kind of configuration to register them with the front uWSGI servers.
subscribe2 = server=x.x.x.x:4443,key=domain.com,sni_key=/etc/ssl/private/domain.com.key,sni_cert=/etc/ssl/certs/domain.com.crt
subscribe2 = server=x.x.x.x:4443,key=domain.com:443,sni_key=/etc/ssl/private/domain.com.key,sni_cert=/etc/ssl/certs/domain.com.crt
subscribe2 = server=y.y.y.y:4443,key=domain.com,sni_key=/etc/ssl/private/domain.com.key,sni_cert=/etc/ssl/certs/domain.com.crt
subscribe2 = server=y.y.y.y:4443,key=domain.com:443,sni_key=/etc/ssl/private/domain.com.key,sni_cert=/etc/ssl/certs/domain.com.crt
Now when I reach one of the front servers for a non-existent host, I receive this error (I assume the TCP connection is closed):
curl: (52) Empty reply from server
I would like to have a default/catch-all key for such cases that returns an HTTP 404, as I would do in Apache with the _default_ vhost. Is that possible?
To implement this, you need to define a fallback application using http-subscription-fallback-key on the front uWSGI server:
http-subscription-fallback-key=default
default is a standard application registered on the front uWSGI server like any other application:
subscribe2 = server=x.x.x.x:4443,key=default
subscribe2 = server=x.x.x.x:4443,key=default:80
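The fallback key just points at an application you register yourself; a minimal sketch of such a catch-all WSGI app (the file name and response body are assumptions, not from the original answer):
# default_app.py: registered under key=default like the subscribe2 lines above
def application(environ, start_response):
    # Answer 404 for any host that no other application has subscribed for.
    start_response('404 Not Found', [('Content-Type', 'text/plain')])
    return [b'Not Found\n']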

Python Flask API remote access with a public IP [duplicate]

This question already has answers here:
Are a WSGI server and HTTP server required to serve a Flask app?
(3 answers)
Configure Flask dev server to be visible across the network
(17 answers)
Access localhost from the internet [closed]
(6 answers)
Closed 2 years ago.
Hello guys, I want to host an API on a Raspberry Pi and access it from computers on other networks.
This is my simple test code; I only need to be able to access it remotely via the public IP:
import flask

app = flask.Flask(__name__)

@app.route('/', methods=['GET'])
def home():
    return '123'

app.run(host='0.0.0.0', port=3138)
So I have created a port forward in my router settings on port 3138, linked to the static internal IP of the Raspberry Pi, and I tried to access it remotely like this: <public_ip>:3138/. It should show "123", but it shows nothing and won't even load. Do you have any ideas how to access it this way?
Can you do some tests:
1. Have you tried to access it from your local network first (to make sure the app itself responds)?
2. Try running netcat on the Raspberry Pi (to rule out a problem in your program): nc -l 3138, then access the port from your mobile phone (it should not be connected to your network). A Python equivalent of this check is sketched below.
3. Set up your PC to use the same internal IP and disconnect the Raspberry Pi (to make sure the port forward itself works).
4. Check that you actually have a public IP and are not behind Carrier-grade NAT (https://en.wikipedia.org/wiki/Carrier-grade_NAT). If your router's WAN address starts with 10.x.x.x, 172.16.x.x, 100.64.x.x or 192.168.x.x, that is an indication of CGNAT or another NAT layer in front of you.
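If netcat isn't handy, here is a quick Python equivalent of that reachability test (the address below is a documentation placeholder; replace it with your public IP and forwarded port, and run it from outside your own network):
import socket

def port_open(host, port, timeout=5.0):
    # Returns True if a TCP connection to host:port can be established.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open('203.0.113.10', 3138))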
It's not advisable to use the Flask development server in production. I'd advise you to use one of the WSGI servers suited for production (you may use waitress):
1. pip install waitress
2. Create a file server.py (or whatever name suits you):
# content of server.py
from waitress import serve

import main  # the module that creates your Flask app object

serve(main.app, host='0.0.0.0', port=8080)
3. Run server.py.
4. Access your app via:
http://<public_ip>:8080

Python socket.io server error 400 (NodeJS server works)

I'm trying to connect a JavaScript client to a Python websocket server through an Apache2 proxy.
The client is dead simple:
const socket = io({
    transports: ['websocket']
});
I have a NodeJS websocket server and a working Apache2 reverse proxy setup.
Now I want to replace the NodeJS server with a Python server, but none of the example implementations from socket.io work. With each of them, my client reports an "error 400" when setting up the websocket connection.
The Python server examples come from here:
https://github.com/miguelgrinberg/python-socketio/tree/master/examples/server
Error 400 stands for "Bad Request" - but I know that my requests are fine because my NodeJS server understands them.
When not running behind a proxy, all the Python examples work fine.
What could be the problem?
I found the solution: all the Python socket.io server examples that I referred to are not configured to run behind a reverse proxy. The reason is that the socket.io server manages a list of allowed request origins, and the automatic creation of that list fails in the reverse-proxy situation.
This function creates the automatic list of allowed origins (engineio/asyncio_server.py):
def _cors_allowed_origins(self, environ):
    default_origins = []
    if 'wsgi.url_scheme' in environ and 'HTTP_HOST' in environ:
        default_origins.append('{scheme}://{host}'.format(
            scheme=environ['wsgi.url_scheme'], host=environ['HTTP_HOST']))
        if 'HTTP_X_FORWARDED_HOST' in environ:
            scheme = environ.get(
                'HTTP_X_FORWARDED_PROTO',
                environ['wsgi.url_scheme']).split(',')[0].strip()
            default_origins.append('{scheme}://{host}'.format(
                scheme=scheme, host=environ['HTTP_X_FORWARDED_HOST'].split(
                    ',')[0].strip()))
As you can see, it only adds URLs with {scheme} as the protocol. When behind a reverse proxy that terminates TLS, {scheme} will be "http" unless the proxy forwards X-Forwarded-Proto. So if the initial request was HTTPS-based, its origin will not be in the list of allowed origins.
The solution to this problem is very simple: when creating the socket.io server, you have to either tell it to allow all origins or specify your origin explicitly:
import socketio
sio = socketio.AsyncServer(cors_allowed_origins="*") # allow all
# or
sio = socketio.AsyncServer(cors_allowed_origins="https://example.com") # allow specific

Flask SERVER_NAME setting best practices

Since my app has background tasks, I use the Flask context. For the context to work, the Flask setting SERVER_NAME should be set.
When SERVER_NAME is set, incoming requests are checked against this value, otherwise the route isn't found. When placing nginx (or another web server) in front, SERVER_NAME should also include the port, and the reverse proxy should handle the rewriting, hiding the port number from the outside world (which it does).
For session cookies to work in modern browsers, the hostname passed by the proxy should be the same as SERVER_NAME, otherwise the browser refuses to send the cookies. This can be solved by adding the official hostname to /etc/hosts and pointing it to 127.0.0.1.
There is one thing I haven't figured out yet: the URLs in the background tasks. url_for() is used with the _external option to generate URLs in the mail the app sends out, but that URL includes the port, which differs from port 443 used by my nginx instance.
Removing the port from the SERVER_NAME makes the stuff described in the first paragraph fail.
So what are my best options for handling url_for in the mail? Create a separate config setting? Write my own url_for?
You should use url_for(location, _external=True),
or include proxy_params in your nginx config if you use nginx.
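As a rough sketch of the url_for approach from background code (the endpoint name and config values are assumptions, not from the question):
from flask import Flask, url_for

app = Flask(__name__)
app.config['SERVER_NAME'] = 'example.com'        # public hostname, port hidden by the proxy
app.config['PREFERRED_URL_SCHEME'] = 'https'     # so _external links use https

@app.route('/')
def home():
    return 'hello'

def build_mail_link():
    # Background tasks run outside a request, so push an app context first.
    with app.app_context():
        return url_for('home', _external=True)   # -> https://example.com/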

How to pass the correct client IP from nginx running in one container to a Python Flask app running in a 2nd container? [duplicate]

Heroku proxies requests from a client to server, so you have to parse the X-Forwarded-For to find the originating IP address.
The general format of the X-Forwarded-For is:
X-Forwarded-For: client1, proxy1, proxy2
Using Werkzeug with Flask, I'm trying to come up with a solution to access the originating IP of the client.
Does anyone know a good way to do this?
Thank you!
Werkzeug (and Flask) store headers in an instance of werkzeug.datastructures.Headers. You should be able to do something like this:
provided_ips = request.headers.getlist("X-Forwarded-For")
# Each entry is one X-Forwarded-For header occurrence; a single entry may still
# be a comma-separated list, so split it; the first address is the client's IP.
Alternatively, you could use request.access_route (thanks @Bastian for pointing that out!):
provided_ips = request.access_route
# First entry in the list is the client's IP
This is what I use in Django. See this https://docs.djangoproject.com/en/dev/ref/request-response/#django.http.HttpRequest.get_host
Note: at least on Heroku, HTTP_X_FORWARDED_FOR will be a comma-separated list of IP addresses. The first one is the client IP; the rest are proxy server IPs.
in settings.py:
USE_X_FORWARDED_HOST = True
in your views.py:
if 'HTTP_X_FORWARDED_FOR' in request.META:
    ip_adds = request.META['HTTP_X_FORWARDED_FOR'].split(",")
    ip = ip_adds[0]
else:
    ip = request.META['REMOTE_ADDR']
