Python SimpleXMLRPCServer: get user IP and simple authentication

I am trying to make a very simple XML-RPC server in Python that provides basic authentication plus the ability to obtain the connected user's IP. Let's take the example provided in http://docs.python.org/library/xmlrpclib.html:
import xmlrpclib
from SimpleXMLRPCServer import SimpleXMLRPCServer

def is_even(n):
    return n % 2 == 0

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(is_even, "is_even")
server.serve_forever()
So the first idea is to make the user supply credentials and validate them before allowing any function calls. I need very simple authentication, for example just a code. Right now I force the user to supply this code as an argument to every function call and test it with an if-statement.
The second is to be able to get the user's IP when they call a function, or to store it when they connect to the server.
Moreover, I already have an Apache server running, and it might be simpler to integrate this into it.
What do you think?

This is a related question that I found helpful:
IP address of client in Python SimpleXMLRPCServer?
What worked for me was to grab the client_address in an overridden finish_request method of the server, stash it in the server itself, and then access this in an overridden server _dispatch routine. You might be able to access the server itself from within the method, too, but I was just trying to add the IP address as an automatic first argument to all my method calls. The reason I used a dict was because I'm also going to add a session token and perhaps other metadata as well.
from xmlrpc.server import DocXMLRPCServer
from socketserver import BaseServer

class NewXMLRPCServer(DocXMLRPCServer):
    def finish_request(self, request, client_address):
        # Stash the client address on the server so _dispatch can see it.
        self.client_address = client_address
        BaseServer.finish_request(self, request, client_address)

    def _dispatch(self, method, params):
        # Prepend a metadata dict as an automatic first argument.
        metadata = {'client_address': self.client_address[0]}
        newParams = (metadata,) + params
        return DocXMLRPCServer._dispatch(self, method, newParams)
Note this will BREAK introspection functions like system.listMethods(), because they aren't expecting the extra argument. One idea would be to check the method name for "system." and just pass the regular params in that case, as in the sketch below.
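A minimal sketch of that guard, reusing the subclass above (only the "system." check is new):

    def _dispatch(self, method, params):
        # Introspection methods (system.listMethods, system.methodHelp, ...)
        # don't expect the metadata argument, so pass params through untouched.
        if method.startswith('system.'):
            return DocXMLRPCServer._dispatch(self, method, params)
        metadata = {'client_address': self.client_address[0]}
        return DocXMLRPCServer._dispatch(self, method, (metadata,) + params)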

Related

Trying to run a simple web server to receive post data from my utility company that uses greenbuttondata

I have this running in a simple Python script that runs an HTTP server; it's supposed to retrieve data from a POST call from my utility company. I've noticed, though, that as soon as I run it and navigate to https://www.yougetsignal.com/tools/open-ports/ to check whether port 443 is open, it says closed (even though it was open before).
I put this in a test.py and run it so I can wait for the POST request and download the XML data, but for whatever reason it closes my port when I go and check it:
from pgesmd_self_access.api import SelfAccessApi
from pgesmd_self_access.server import SelfAccessServer
from pgesmd_self_access.helpers import save_espi_xml

pge_api = SelfAccessApi.auth( < i changed this to reflect my auth file > )
SelfAccessServer(pge_api, save_file=save_espi_xml)
I thought the port should stay open on my router? It doesn't seem like normal behavior.
I think I need to enable SO_REUSEADDR in his code, according to some socket documentation, but I'm not sure how to do that (see the sketch after the code below):
# Excerpt from the pgesmd_self_access package (imports omitted here).
class SelfAccessServer:
    """Server class for PGE SMD Self Access API."""

    def __init__(
        self, api_instance, save_file=None, filename=None, to_db=True, close_after=False
    ):
        """Initialize and start the server on construction."""
        PgePostHandler.api = api_instance
        PgePostHandler.save_file = save_file
        PgePostHandler.filename = filename
        PgePostHandler.to_db = to_db

        server = HTTPServer(("", 7999), PgePostHandler)
        server.socket = ssl.wrap_socket(
            server.socket,
            certfile=api_instance.cert[0],
            keyfile=api_instance.cert[1],
            server_side=True,
        )

        if close_after:
            server.handle_request()
        else:
            server.serve_forever()
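For the SO_REUSEADDR part: a minimal sketch of how that option is typically enabled on a socketserver-based server. Note that http.server.HTTPServer already sets allow_reuse_address by default, so this is shown mainly for illustration:

import socket
from http.server import HTTPServer

class ReusableHTTPServer(HTTPServer):
    # socketserver applies SO_REUSEADDR before bind() when this is true;
    # HTTPServer sets it already, made explicit here for clarity.
    allow_reuse_address = True

    def server_bind(self):
        # Equivalent explicit form of the same option.
        self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        super().server_bind()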

Verify hostname of the server who invoked the API

I have an AWS ELB connected with multiple EC2s that are running a Flask server. I am not sure whether the AWS ELB passes the complete request to the EC2s or not. I know we can do restrictions at the ELB level, but I want to put restrictions on only one endpoint and verify the hostname of the server that invoked the endpoint in Flask. Is that possible?
You could try the following:
import socket
from flask import request

@app.route("/your_route", methods=["GET"])
def your_route():
    # Reverse-resolve the client's IP address to a hostname.
    hostname, aliaslist, ipaddrlist = socket.gethostbyaddr(request.remote_addr)
    ...
Note that relying on remote_addr is unreliable; however, as this is unrelated to the topic, I will refer to this answer, which makes use of ProxyFix (see the sketch below).
For more information on socket.gethostbyaddr(), please check the socket.gethostbyaddr() documentation.
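A minimal sketch of the ProxyFix approach, assuming a single trusted proxy (the ELB) in front of the app, so that request.remote_addr reflects the real client address:

from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)
# Trust one hop of X-Forwarded-For set by the load balancer.
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1)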
I suggest you use the decorator pattern for such cases, i.e. add a new config option IP_LIST containing a comma-separated set of addresses:
IP_LIST = "127.0.0.1,127.0.0.2,..."
After that, add a new decorator function and decorate any endpoint with it.
from functools import wraps
from flask import current_app, request

def ip_verified(fn):
    """
    A custom decorator that checks if the client IP is in the list,
    and otherwise blocks access.
    """
    @wraps(fn)
    def decorated_view(*args, **kwargs):
        ip_list_str = current_app.config['IP_LIST']
        ip_list = ip_list_str.split(",") if ip_list_str else []
        # Behind a proxy/ELB the client IP is the first X-Forwarded-For entry.
        if request.headers.getlist("X-Forwarded-For"):
            remote_ip = request.headers.getlist("X-Forwarded-For")[0]
        else:
            remote_ip = request.remote_addr
        if remote_ip not in ip_list:
            return "Not sufficient privileges", 403
        return fn(*args, **kwargs)
    return decorated_view
@app.route("/your_route", methods=["GET"])
@ip_verified
def your_route():
    ...
One option is to use a Network Load Balancer which preserves the IP address of the client making the request. You can even have the NLB do the TLS termination just like an ELB. An NLB does not alter the data in the network request, with the exception of TLS termination if you choose to use that.

Twisted - forwarding proxy requests to another proxy (proxy chain)

I am setting up an HTTP proxy in Python to filter web content. I found a good example on Stack Overflow which does exactly this using Twisted. However, I need another proxy to access the web, so my proxy needs to forward requests to another proxy. What is the best way to do this using twisted.web.proxy?
I found a related question which needs something similar, but from a reverse proxy.
My best guess is that it should be possible to build a chained proxy by modifying or subclassing twisted.web.proxy.ProxyClient to connect to the next proxy instead of connecting to the web directly. Unfortunately I didn't find any clues in the documentation on how to do this.
The code I have so far (adapted from that answer):
from twisted.python import log
from twisted.web import http, proxy

class ProxyClient(proxy.ProxyClient):
    def handleResponsePart(self, buffer):
        proxy.ProxyClient.handleResponsePart(self, buffer)

class ProxyClientFactory(proxy.ProxyClientFactory):
    protocol = ProxyClient

class ProxyRequest(proxy.ProxyRequest):
    protocols = dict(http=ProxyClientFactory)

class Proxy(proxy.Proxy):
    requestFactory = ProxyRequest

class ProxyFactory(http.HTTPFactory):
    protocol = Proxy

portstr = "tcp:8080:interface=localhost"  # serve on localhost:8080

if __name__ == '__main__':
    import sys
    from twisted.internet import endpoints, reactor

    log.startLogging(sys.stdout)
    endpoint = endpoints.serverFromString(reactor, portstr)
    d = endpoint.listen(ProxyFactory())
    reactor.run()
This is actually not hard to implement using Twisted. Let me give you a simple example.
Suppose the first proxy is proxy1.py, like the code you pasted in your question, and the second proxy is proxy2.py.
For proxy1.py, you just need to override the process method of the ProxyRequest class, like this:
from urllib import parse as urllib_parse

class ProxyRequest(proxy.ProxyRequest):
    def process(self):
        parsed = urllib_parse.urlparse(self.uri)
        protocol = parsed[0]
        host = parsed[1].decode('ascii')
        port = self.ports[protocol]
        if ':' in host:
            host, port = host.split(':')
            port = int(port)
        rest = urllib_parse.urlunparse((b'', b'') + parsed[2:])
        if not rest:
            rest = rest + b'/'
        class_ = self.protocols[protocol]
        headers = self.getAllHeaders().copy()
        if b'host' not in headers:
            headers[b'host'] = host.encode('ascii')
        self.content.seek(0, 0)
        s = self.content.read()
        clientFactory = class_(self.method, rest, self.clientproto, headers, s, self)
        if NeedGoToSecondProxy:  # placeholder for your own routing condition
            self.reactor.connectTCP(your_second_proxy_server_ip, your_second_proxy_port, clientFactory)
        else:
            self.reactor.connectTCP(host, port, clientFactory)
For proxy2.py, you just need to set up another simple proxy. One problem needs to be noted, though: you may need to override the process method in proxy2.py again, because self.uri may not be valid after the proxy forwards it (in a chain).
For example, the original self.uri might be http://www.google.com/something?para1=xxx, but at the second proxy you may find only /something?para1=xxx. So you need to extract the host info from self.headers and complete self.uri, so that your second proxy can deliver the request to the correct destination, as sketched below.
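A minimal sketch of that repair step for proxy2.py, assuming the request arrives with a relative URI and a Host header (untested; attribute names follow twisted.web.proxy.ProxyRequest as used above):

class SecondProxyRequest(proxy.ProxyRequest):
    def process(self):
        # Rebuild an absolute URI from the Host header if the chained
        # proxy forwarded only a path like /something?para1=xxx.
        if not self.uri.startswith(b'http'):
            host = self.getAllHeaders().get(b'host', b'')
            self.uri = b'http://' + host + self.uri
        proxy.ProxyRequest.process(self)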

Rotate through Shadowsocks proxy via Python

I have Python code that needs to make use of various Shadowsocks proxy servers that I have set up, in order to use the IPs of those servers.
Say for example I would like to use:
1.1.1.1:5678
2.2.2.2:5678
3.3.3.3:5678
i.e., all these servers have the same remote port, and the local ports are all 1080.
My preference is to have the 3 proxies rotate randomly, so that each time I send a urlopen() request (in urllib2), my code randomly connects to one of the proxies, sends the request via that proxy, and disconnects when the request is complete.
The IPs could be hard-coded or stored in some config file.
The problem is that, at the moment, all the samples I have found online seem to require the connection to be pre-established; the Python code is expected to simply use whatever is on localhost:1080 instead of actively making connections.
I am just wondering if anyone could lend me a helping hand to accomplish this in the code.
Thanks!
If you have a look at the source of urllib2, you can see that when a default opener is installed, it really just takes an object with an open method. So you just need to create an object whose open method builds and uses a randomly chosen opener. Something like the following (untested) should work:
import urllib2
import random

class RandomOpener(object):
    def __init__(self, ip_list):
        self.ip_list = ip_list

    def open(self, *args, **kwargs):
        # Pick a random proxy for each request.
        proxy = random.choice(self.ip_list)
        handler = urllib2.ProxyHandler({'http': 'http://' + proxy})
        opener = urllib2.build_opener(handler)
        return opener.open(*args, **kwargs)

my_opener = RandomOpener(['1.1.1.1:5678',
                          '2.2.2.2:5678',
                          '3.3.3.3:5678'])
urllib2.install_opener(my_opener)
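After install_opener(), each call like the following (example.com as a stand-in URL) is routed through one of the proxies at random:

print(urllib2.urlopen('http://example.com/').read())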

Python 3: How to log SSL handshake errors from server side

I'm using HTTPServer for a basic HTTP server using SSL. I would like to log any time a client initiates an SSL Handshake (or perhaps any time a socket is accepted?) along with any associated errors. I imagine that I'd need to extend some class or override some method, but I'm not sure which or how to properly go about implementing it. I'd greatly appreciate any help. Thanks in advance!
Trimmed down sample code:
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn
from threading import Thread
import ssl
import logging
import sys

class MyHTTPHandler(BaseHTTPRequestHandler):
    def log_message(self, format, *args):
        logger.info("%s - - %s" % (self.address_string(), format % args))

    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write('test'.encode("utf-8"))

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    pass

logger = logging.getLogger('myserver')
handler = logging.FileHandler('server.log')
formatter = logging.Formatter('[%(asctime)s] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

server = ThreadedHTTPServer(('', 443), MyHTTPHandler)
server.socket = ssl.wrap_socket(server.socket, keyfile='server.key', certfile='server.crt',
                                server_side=True, cert_reqs=ssl.CERT_REQUIRED, ca_certs='client.crt')
Thread(target=server.serve_forever).start()

try:
    quitcheck = input("Type 'quit' at any time to quit.\n")
    if quitcheck == "quit":
        server.shutdown()
except KeyboardInterrupt:
    server.shutdown()
From looking at the ssl module, most of the relevant magic happens in the SSLSocket class.
ssl.wrap_socket() is just a tiny convenience function that basically serves as a factory for an SSLSocket with some reasonable defaults, and wraps an existing socket.
Unfortunately, SSLSocket does not seem to do any logging of its own, so there's no easy way to turn up a logging level, set a debug flag or register any handlers.
So what you can do instead is to subclass SSLSocket, override the methods you're interested in with your own that do some logging, and create and use your own wrap_socket helper function.
Subclassing SSLSocket
First, copy over ssl.wrap_socket() from your Python's .../lib/python2.7/ssl.py into your code. (Make sure that any code you copy and modify actually comes from the Python installation you're using - the code may have changed between different Python versions).
Now adapt your copy of wrap_socket() to:
- create an instance of a LoggingSSLSocket (which we'll implement below) instead of SSLSocket,
- and use constants from the ssl module where necessary (ssl.CERT_NONE and ssl.PROTOCOL_SSLv23 in this example).
def wrap_socket(sock, keyfile=None, certfile=None,
                server_side=False, cert_reqs=ssl.CERT_NONE,
                ssl_version=ssl.PROTOCOL_SSLv23, ca_certs=None,
                do_handshake_on_connect=True,
                suppress_ragged_eofs=True,
                ciphers=None):
    return LoggingSSLSocket(sock=sock, keyfile=keyfile, certfile=certfile,
                            server_side=server_side, cert_reqs=cert_reqs,
                            ssl_version=ssl_version, ca_certs=ca_certs,
                            do_handshake_on_connect=do_handshake_on_connect,
                            suppress_ragged_eofs=suppress_ragged_eofs,
                            ciphers=ciphers)
Now change your line
server.socket = ssl.wrap_socket (server.socket, ...)
to
server.socket = wrap_socket(server.socket, ...)
in order to use your own wrap_socket().
Now for subclassing SSLSocket. Create a class LoggingSSLSocket that subclasses SSLSocket by adding the following to your code:
class LoggingSSLSocket(ssl.SSLSocket):
    def accept(self, *args, **kwargs):
        logger.debug('Accepting connection...')
        result = super(LoggingSSLSocket, self).accept(*args, **kwargs)
        logger.debug('Done accepting connection.')
        return result

    def do_handshake(self, *args, **kwargs):
        logger.debug('Starting handshake...')
        result = super(LoggingSSLSocket, self).do_handshake(*args, **kwargs)
        logger.debug('Done with handshake.')
        return result
Here we override the accept() and do_handshake() methods of ssl.SSLSocket - everything else stays the same, since the class inherits from SSLSocket.
Generic approach to overriding methods
I used a particular pattern for overriding these methods in order to make it easier to apply to pretty much any method you'll ever override:
def methodname(self, *args, **kwargs):
The *args, **kwargs signature makes sure our method accepts any number of positional and keyword arguments, if any. accept() doesn't actually take any, but it still works because of Python's packing / unpacking of argument lists.
    logger.debug('Before call to superclass method')
Here you get the opportunity to do your own thing before calling the superclass' method.
    result = super(LoggingSSLSocket, self).methodname(*args, **kwargs)
This is the actual call to the superclass' method. See the docs on super() for details on how this works, but it basically calls .methodname() on LoggingSSLSocket's superclass (SSLSocket). Because we pass *args, **kwargs on, we don't even need to know what the arguments are; the method signatures will always match.
Because some methods (like accept()) return a result, we store it and return it at the end of our method, after doing our post-call work:
    logger.debug('After call.')
    return result
Logging more details
If you want to include more information in your logging statements, you'll likely have to completely overwrite the respective methods. So copy them over and modify them as required, and make sure you satisfy any missing imports.
Here's an example for accept() that includes the IP address and local port of the client that's trying to connect:
def accept(self):
    """Accepts a new connection from a remote client, and returns
    a tuple containing that new connection wrapped with a server-side
    SSL channel, and the address of the remote client."""
    newsock, addr = socket.accept(self)
    logger.debug("Accepting connection from '%s'..." % (addr, ))
    newsock = self.context.wrap_socket(newsock,
                                       do_handshake_on_connect=self.do_handshake_on_connect,
                                       suppress_ragged_eofs=self.suppress_ragged_eofs,
                                       server_side=True)
    logger.debug('Done accepting connection.')
    return newsock, addr
(Make sure to include from socket import socket in your imports at the top of your code; refer to the ssl module's imports to determine where to import missing names from if you get a NameError. A good text editor with PyFlakes configured is very helpful in pointing out those missing imports.)
This method will result in logging output like this:
[2014-10-24 22:01:40,299] Accepting connection from '('127.0.0.1', 64152)'...
[2014-10-24 22:01:40,300] Done accepting connection.
[2014-10-24 22:01:40,301] Accepting connection from '('127.0.0.1', 64153)'...
[2014-10-24 22:01:40,302] Done accepting connection.
[2014-10-24 22:01:40,306] Accepting connection from '('127.0.0.1', 64155)'...
[2014-10-24 22:01:40,307] Done accepting connection.
[2014-10-24 22:01:40,308] 127.0.0.1 - - "GET / HTTP/1.1" 200 -
Because it involves quite a few changes scattered all over the place, here's a gist containing all the changes to your example code.
