I have an AWS ELB connected to multiple EC2 instances that are running a Flask server. I am not sure whether the ELB passes the complete request through to the EC2 instances. I know we can apply restrictions at the ELB level, but I want to restrict only one endpoint and, in Flask, verify the hostname of the server that invoked it. Is that possible?
You could try the following:
import socket
from flask import Flask, request

app = Flask(__name__)

@app.route("/your_route", methods=["GET"])
def your_route():
    # Reverse-resolve the caller's address into a hostname
    hostname, aliaslist, ipaddrlist = socket.gethostbyaddr(request.remote_addr)
    return hostname
Note that relying on remote_addr is unreliable; however, as this is unrelated to the topic, I will refer to the answer that makes use of ProxyFix.
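For reference, a minimal ProxyFix setup might look like this (a sketch, assuming exactly one proxy hop, the ELB, sets X-Forwarded-For and X-Forwarded-Proto):

from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)
# Trust exactly one hop of X-Forwarded-For / X-Forwarded-Proto (the ELB)
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1)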
For more information on socket.gethostbyaddr(), please check out the Python socket module documentation.
I suggest you use the decorator pattern for such cases, i.e. add a new config option IP_LIST holding a comma-separated set of allowed addresses.
IP_LIST = "127.0.0.1,127.0.0.2,..."
After that, add a new decorator function and decorate any endpoint that needs the check.
from functools import wraps

from flask import current_app, request

def ip_verified(fn):
    """
    A custom decorator that checks if the client IP is in IP_LIST and
    otherwise blocks access.
    """
    @wraps(fn)
    def decorated_view(*args, **kwargs):
        ip_list_str = current_app.config['IP_LIST']
        ip_list = ip_list_str.split(",") if ip_list_str else []
        # Behind a proxy the client address arrives in X-Forwarded-For;
        # otherwise fall back to the direct peer address.
        if request.headers.getlist("X-Forwarded-For"):
            remote_ip = request.headers.getlist("X-Forwarded-For")[0]
        else:
            remote_ip = request.remote_addr
        if remote_ip not in ip_list:
            return "Not sufficient privileges", 403
        return fn(*args, **kwargs)
    return decorated_view

@app.route("/your_route", methods=["GET"])
@ip_verified
def your_route():
    ...
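One caveat with the sketch above: the leftmost X-Forwarded-For entry originates from the client and can be spoofed, so behind an ELB it is safer to trust only the address appended by the load balancer itself (for example by normalizing the header with ProxyFix as shown earlier).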
One option is to use a Network Load Balancer which preserves the IP address of the client making the request. You can even have the NLB do the TLS termination just like an ELB. An NLB does not alter the data in the network request, with the exception of TLS termination if you choose to use that.
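In that setup Flask sees the client connection directly, so request.remote_addr should already be the real client address and no X-Forwarded-For handling is needed (assuming client IP preservation is enabled on the target group, which it is by default for instance targets).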
In my tool the users can provide a mail backend using certain fields on a model and send their mails via the backend which gets created from those values. This all works, but I would love to have a quick check that the provided backend will actually work before using it. Something like the check_mail_connection below doesn't work, as it actually returns False even though I entered valid connection parameters.
from django.core.mail import get_connection

class User(models.Model):
    ...

    def get_mail_connection(self, fail_silently=False):
        return get_connection(host=self.email_host,
                              port=self.email_port,
                              username=self.email_username,
                              password=self.email_password ... )

    def check_mail_connection(self) -> bool:
        from socket import error as socket_error
        from smtplib import SMTP, SMTPConnectError

        smtp = SMTP(host=self.email_host, port=self.email_port)
        try:
            smtp.connect()
            return True
        except SMTPConnectError:
            return False
        except socket_error:
            return False
I don't want to send a test mail to confirm, as this can easily get lost or fail in a different part of the system. This feature is for sending out emails from the users' mail servers, as I suspect most of my users have a mail server anyway, and I basically offer white labeling and similar features to them.
Your code has the line smtp.connect(), which attempts to make a connection. If you look at the documentation for smtplib, the signature of this method is:
SMTP.connect(host='localhost', port=0)
This means you are trying to connect to localhost on port 25 (the standard SMTP port). Of course there is no server listening there, so you get a ConnectionRefusedError, which you catch and return False. In fact, you don't even need to call connect, because the documentation states:
If the optional host and port parameters are given, the SMTP connect() method is called with those parameters during initialization.
Hence you can simply write:
def check_mail_connection(self) -> bool:
    from smtplib import SMTP
    try:
        smtp = SMTP(host=self.email_host, port=self.email_port)
        return True
    except OSError:
        return False
You can also simply use the open method of the email backend's instance rather than creating the SMTP instance and calling connect yourself:
def check_mail_connection(self) -> bool:
    try:
        email_backend = self.get_mail_connection()
        # open() returns None only when fail_silently swallowed an error
        silent_exception = email_backend.open() is None
        email_backend.close()
        return not silent_exception
    except OSError:
        return False
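For context, and worth double-checking against your Django version: the SMTP backend's open() returns True when it opens a new connection and False when one already exists; with fail_silently set, it swallows the connection error and falls through without returning, implicitly yielding None, which is exactly what the is None check detects.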
I have a few questions for you, and I would like you to answer them before we can go further.
What type of OS are you running the server on?
What mail client and tutorial did you follow? Postfix?
Can a user on the server send local mail to another user on the server?
What ports are open, and what type of security features do you have installed?
What did your logs say when the email failed?
Are you self-hosting / acting as the server admin?
(It's fine if this is your first time. Everyone had a first day.)
SSL and an FQDN aren't too important if you're just sending mail out. The system will still work; you just won't be able to receive mail.
(I'm speaking in the sense of making sure it will at least send an email. You should still use SSL, as it can be obtained for free.)
If you have checked all of these things, there may be a part of the mail client you are using that won't send mail out unless it has approval. There are a lot of variables.
All of these things matter, or it won't work.
Sorry, I meant to make this a comment. I'm not used to posting on here.
I am setting up an HTTP proxy in Python to filter web content. I found a good example on StackOverflow which does exactly this using Twisted. However, I need another proxy to access the web, so my proxy needs to forward requests to another proxy. What is the best way to do this using twisted.web.proxy?
I found a related question which needs something similar, but from a reverse proxy.
My best guess is that it should be possible to build a chained proxy by modifying or subclassing twisted.web.proxy.ProxyClient to connect to the next proxy instead of connecting to the web directly. Unfortunately, I didn't find any clues in the documentation on how to do this.
The code I have so far:
from twisted.python import log
from twisted.web import http, proxy

class ProxyClient(proxy.ProxyClient):
    def handleResponsePart(self, buffer):
        proxy.ProxyClient.handleResponsePart(self, buffer)

class ProxyClientFactory(proxy.ProxyClientFactory):
    protocol = ProxyClient

class ProxyRequest(proxy.ProxyRequest):
    protocols = dict(http=ProxyClientFactory)

class Proxy(proxy.Proxy):
    requestFactory = ProxyRequest

class ProxyFactory(http.HTTPFactory):
    protocol = Proxy

portstr = "tcp:8080:interface=localhost"  # serve on localhost:8080

if __name__ == '__main__':
    import sys
    from twisted.internet import endpoints, reactor

    log.startLogging(sys.stdout)
    endpoint = endpoints.serverFromString(reactor, portstr)
    endpoint.listen(ProxyFactory())
    reactor.run()
This is actually not hard to implement using Twisted. Let me give you a simple example.
Suppose the first proxy is proxy1.py, like the code you pasted in your question, and the second proxy is proxy2.py.
For proxy1.py, you just need to override the process method of the ProxyRequest class, like this:
from urllib import parse as urllib_parse

from twisted.web import proxy

class ProxyRequest(proxy.ProxyRequest):
    def process(self):
        parsed = urllib_parse.urlparse(self.uri)
        protocol = parsed[0]
        host = parsed[1].decode('ascii')
        port = self.ports[protocol]
        if ':' in host:
            host, port = host.split(':')
            port = int(port)
        rest = urllib_parse.urlunparse((b'', b'') + parsed[2:])
        if not rest:
            rest = rest + b'/'
        class_ = self.protocols[protocol]
        headers = self.getAllHeaders().copy()
        if b'host' not in headers:
            headers[b'host'] = host.encode('ascii')
        self.content.seek(0, 0)
        s = self.content.read()
        clientFactory = class_(self.method, rest, self.clientproto, headers, s, self)
        # NeedGoToSecondProxy, your_second_proxy_server_ip and
        # your_second_proxy_port are placeholders for your own values.
        if NeedGoToSecondProxy:
            self.reactor.connectTCP(your_second_proxy_server_ip, your_second_proxy_port, clientFactory)
        else:
            self.reactor.connectTCP(host, port, clientFactory)
For proxy2.py, you just need to set up another simple proxy. One problem needs attention, though: you may have to override the process method in proxy2.py again, because self.uri may no longer be valid after the first proxy forwards the request (the chaining).
For example, the original self.uri might be http://www.google.com/something?para1=xxx, but at the second proxy you may find it as /something?para1=xxx only. So you need to extract the host info from self.headers and rebuild self.uri so that your second proxy can deliver the request to the correct destination.
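A minimal sketch of that fix-up (my own illustration, assuming plain HTTP and that the Host header survived the first hop) could look like this:

from twisted.web import proxy

class SecondProxyRequest(proxy.ProxyRequest):
    def process(self):
        # After the first proxy forwards, self.uri may be relative,
        # e.g. b'/something?para1=xxx'. Rebuild an absolute URI from
        # the Host header so the parent class can parse it as usual.
        if not self.uri.startswith(b'http'):
            host = self.getAllHeaders().get(b'host', b'')
            if host:
                self.uri = b'http://' + host + self.uri
        proxy.ProxyRequest.process(self)

Wire SecondProxyRequest into its own Proxy and ProxyFactory classes the same way as in proxy1.py.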
I want to allow only certain IPs to access the site.
For example:
bottle server IP: 192.168.0.1
I want to let 192.168.0.1/29 access the site, so 192.168.0.2 can access it, but 192.168.0.11 can't.
My approach is to create a function that checks the client IP and returns status 403 if it is out of range.
The IP-checking function looks like this:
from bottle import abort
from netaddr import IPSet, IPAddress

def authIP(clientIP=None):
    rules = IPSet(['192.168.0.1/29'])
    if IPAddress(clientIP) in rules:
        return 'ok.'
    else:
        abort(403, 'access denied.')
But this way I would have to add this check to every route function, like:
@route('/ip')
def tip():
    cip = request.environ['REMOTE_ADDR']
    return authIP(cip)
Are there any other ideas?
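One way to avoid repeating the check in every route (a sketch of my own, not from the original post) is bottle's before_request hook, which runs ahead of every request:

from bottle import abort, hook, request, route, run
from netaddr import IPSet, IPAddress

RULES = IPSet(['192.168.0.1/29'])

@hook('before_request')
def check_ip():
    # Reject any request whose peer address is outside the allowed block
    cip = request.environ.get('REMOTE_ADDR')
    if cip is None or IPAddress(cip) not in RULES:
        abort(403, 'access denied.')

@route('/ip')
def tip():
    return 'ok.'

run(host='0.0.0.0', port=8080)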
I have Python code that needs to make use of various ShadowSocks proxy servers that I have set up, in order to use the IPs of those servers.
Say, for example, I would like to use:
1.1.1.1:5678
2.2.2.2:5678
3.3.3.3:5678
i.e., all these servers have the same remote port, and the local ports are all 1080.
My preference is to have the three proxies rotate randomly, so that each time I send a urlopen() request (in urllib2), my code randomly connects to one of the proxies, sends the request via that proxy, and disconnects when the request is complete.
The IPs could be hard-coded or stored in some config file.
The problem is that all the samples I have found online seem to require the connection to be pre-established, with the Python code simply using whatever is on localhost:1080 instead of actively making connections.
I am just wondering if anyone could lend me a helping hand to accomplish this in code.
Thanks!
If you have a look at the source of urllib2, you can see that when a default opener is installed, it really just takes an object with an open method. So you just need to create an object whose open method delegates to a randomly chosen opener. Something like the following (untested) should work:
import random
import urllib2

class RandomOpener(object):
    def __init__(self, ip_list):
        self.ip_list = ip_list

    def open(self, *args, **kwargs):
        # Build a one-off opener around a randomly chosen proxy
        proxy = random.choice(self.ip_list)
        handler = urllib2.ProxyHandler({'http': 'http://' + proxy})
        opener = urllib2.build_opener(handler)
        return opener.open(*args, **kwargs)

my_opener = RandomOpener(['1.1.1.1:5678',
                          '2.2.2.2:5678',
                          '3.3.3.3:5678'])
urllib2.install_opener(my_opener)
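With that installed, each plain urlopen call should then go out through a randomly chosen proxy, for example:

response = urllib2.urlopen('http://example.com/')
print response.read()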
I am trying to make a very simple XML-RPC server in Python that provides basic authentication plus the ability to obtain the connected user's IP. Let's take the example provided in http://docs.python.org/library/xmlrpclib.html:
import xmlrpclib
from SimpleXMLRPCServer import SimpleXMLRPCServer

def is_even(n):
    return n % 2 == 0

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(is_even, "is_even")
server.serve_forever()
So now, the first idea behind this is to make the user supply credentials and process them before allowing him to use the functions. I need very simple authentication; for example, just a code. Right now I force the user to supply this code in the function call and test it with an if-statement.
The second idea is to get the user's IP when he calls a function, or to store it once he connects to the server.
Moreover, I already have an Apache server running, and it might be simpler to integrate this into it.
What do you think?
This is a related question that I found helpful:
IP address of client in Python SimpleXMLRPCServer?
What worked for me was to grab the client_address in an overridden finish_request method of the server, stash it in the server itself, and then access it in an overridden _dispatch routine. You might be able to access the server itself from within the method, too, but I was just trying to add the IP address as an automatic first argument to all my method calls. The reason I used a dict is that I am also going to add a session token and perhaps other metadata as well.
from xmlrpc.server import DocXMLRPCServer
from socketserver import BaseServer

class NewXMLRPCServer(DocXMLRPCServer):
    def finish_request(self, request, client_address):
        # Stash the client address on the server so _dispatch can read it
        self.client_address = client_address
        BaseServer.finish_request(self, request, client_address)

    def _dispatch(self, method, params):
        metadata = {'client_address': self.client_address[0]}
        newParams = (metadata,) + params
        return DocXMLRPCServer._dispatch(self, method, newParams)
Note this will BREAK introspection functions like system.listMethods() because that isn't expecting the extra argument. One idea would be to check the method name for "system." and just pass the regular params in that case.
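That check might look something like this (an untested sketch of the idea above):

    def _dispatch(self, method, params):
        # Let introspection calls like system.listMethods() pass through
        # with their original arguments.
        if method.startswith('system.'):
            return DocXMLRPCServer._dispatch(self, method, params)
        metadata = {'client_address': self.client_address[0]}
        return DocXMLRPCServer._dispatch(self, method, (metadata,) + params)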