Twisted: ReconnectingClientFactory connection to different servers - python

I have a Twisted ReconnectingClientFactory and I can successfully connect to a given IP and port pair with this factory, and it works well.
reactor.connectTCP(ip, port, myHandsomeReconnectingClientFactory)
In this situation, when the server is gone, myHandsomeReconnectingClientFactory tries to reconnect to the same IP and port (as expected).
My goal is to connect to a backup server (which has a different IP and port) when the server serving on the given IP and port is gone.
Any ideas/comments on how to achieve this goal will be appreciated.

I'd try something like:
from twisted.internet import protocol

class myHandsomeReconnectingClientFactory(protocol.ReconnectingClientFactory):

    def __init__(self, hosts):
        # hosts should be a list of (host, port) tuples
        self._hosts = hosts

    def clientConnectionFailed(self, connector, reason):
        if self.continueTrying:
            self._try_next_host(connector)

    def clientConnectionLost(self, connector, unused_reason):
        if self.continueTrying:
            self._try_next_host(connector)

    def _try_next_host(self, connector):
        # round-robin over the configured servers
        to_try = self._hosts.pop(0)
        self._hosts.append(to_try)
        connector.host, connector.port = to_try
        self.connector = connector
        self.retry()
I haven't actually tested it, but at least it should give you a good starting point. Good luck.

ReconnectingClientFactory doesn't have this capability. You can build your own factory which implements this kind of reconnection logic, mostly by hooking into the clientConnectionFailed factory method. When this is called and the reason seems to justify switching servers (e.g., twisted.internet.error.ConnectionRefused), pick the next address on your list and use the appropriate reactor.connectXYZ method to try connecting to it.
You could also try constructing this as an endpoint (the newer high-level connection setup API that some prefer), but handling reconnection with endpoints is not yet a well-documented topic.
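A minimal sketch of that first idea, assuming a plain ClientFactory and an illustrative list of (host, port) tuples (FailoverFactory and its wiring are hypothetical, not an existing Twisted class):

from twisted.internet import reactor, error
from twisted.internet.protocol import ClientFactory

class FailoverFactory(ClientFactory):
    # Hypothetical factory that walks a list of servers on failure.

    def __init__(self, protocol_class, servers):
        self.protocol = protocol_class
        self._servers = list(servers)  # [(host, port), ...]

    def connect(self):
        host, port = self._servers[0]
        reactor.connectTCP(host, port, self)

    def clientConnectionFailed(self, connector, reason):
        # Only fail over for reasons suggesting the server is down.
        if reason.check(error.ConnectionRefusedError):
            self._servers.append(self._servers.pop(0))  # rotate to the backup
            self.connect()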

Related

python imaplib gmail connection with user pass proxy [duplicate]

Neither poplib nor imaplib seems to offer proxy support, and I couldn't find much info about it despite my google-fu attempts.
I'm using python to fetch emails from various imap/pop enabled servers and need to be able to do it through proxies.
Ideally, I'd like to be able to do it in python directly but using a wrapper (external program/script, OSX based) to force all traffic to go through the proxy might be enough if I can't find anything better.
Could anyone give me a hand? I can't imagine I'm the only one who ever needed to fetch emails through a proxy in python...
EDIT: Title edited to remove HTTP, because I shouldn't type so fast when I'm tired. Sorry for that, guys.
The proxies I'm planning to use allow socks in addition to http.
POP or IMAP over HTTP wouldn't make much sense (stateful vs. stateless), but my understanding is that SOCKS would allow me to do what I want.
So far the only way to achieve what I want seems to be dirty hacking of imaplib... I would rather avoid that if I can.
You don't need to dirtily hack imaplib. You could try using the SocksiPy package, which supports SOCKS4, SOCKS5 and HTTP proxy (CONNECT).
Something like this; obviously you'd want to handle the setproxy options better, via extra arguments to a custom __init__ method, etc.:
from imaplib import IMAP4, IMAP4_SSL, IMAP4_PORT, IMAP4_SSL_PORT
from socks import socksocket, PROXY_TYPE_SOCKS4, PROXY_TYPE_SOCKS5, PROXY_TYPE_HTTP

class SocksIMAP4(IMAP4):

    def open(self, host, port=IMAP4_PORT):
        self.host = host
        self.port = port
        # Route the connection through a SOCKS5 proxy instead of connecting directly.
        self.sock = socksocket()
        self.sock.setproxy(PROXY_TYPE_SOCKS5, 'socks.example.com')
        self.sock.connect((host, port))
        self.file = self.sock.makefile('rb')
You could do something similar with IMAP4_SSL. Just take care to wrap it in an SSL socket:
import ssl

class SocksIMAP4SSL(IMAP4_SSL):

    def open(self, host, port=IMAP4_SSL_PORT):
        self.host = host
        self.port = port
        self.sock = socksocket()
        # Privoxy's actual default setting, but as said, you may want to parameterize it.
        self.sock.setproxy(PROXY_TYPE_HTTP, "127.0.0.1", 8118)
        self.sock.connect((host, port))
        self.sslobj = ssl.wrap_socket(self.sock, self.keyfile, self.certfile)
        self.file = self.sslobj.makefile('rb')
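Hypothetical usage, assuming a reachable IMAP server and the proxy settings above (the host and credentials are placeholders):

# Connects through the configured proxy exactly like a normal IMAP4 client would.
imap = SocksIMAP4('imap.example.com')
imap.login('user', 'password')
imap.select('INBOX')
imap.logout()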
Answer to my own question...
There's a quick and dirty way to force traffic from a Python script to go through a proxy without hassle, using SocksiPy (thanks MattH for pointing me that way):
import socks
import socket

socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS4, proxy_ip, port, True)
socket.socket = socks.socksocket
That global socket override is obviously a bit brutal, but it works as a quick fix until I find the time to properly subclass IMAP4 and IMAP4_SSL.
If I understand you correctly you're trying to put a square peg in a round hole.
An HTTP proxy only knows how to "talk" HTTP, so it can't connect to a POP or IMAP server directly.
If you want to do this you'll need to implement your own server somewhere to talk to the mail servers. It would receive HTTP requests and then make the appropriate calls to the mail server.
How practical this would be I don't know since you'd have to convert a stateful protocol into a stateless one.

Blocking an ip from joining a socket server [duplicate]

I have a python socket server that listens on a port, and accepts all incoming connections using:
(conn, address) = socket.accept()
However, I wish to accept connections only from certain IP addresses.
Currently, I close the connection if the address isn't registered, to accomplish this.
But is there a better way to do this, by directly rejecting connections from unregistered addresses, instead of accepting connections and then closing them?
It's not possible to indicate Connection refused to clients from some IP addresses while establishing the connection for clients from other IP addresses. This is not a Python limitation, but a lower-level BSD socket layer limitation: you can't do it even from C.
The closest you can get in Python is to close the connection quickly after it has been accepted:
sock, addr = server_socket.accept()
if addr[0] != '12.34.56.78':
    sock.close()
    return
...
Then the client would see the connection being accepted, and very shortly after that the client would see EOF when reading from it, and it wouldn't be able to write to it.
However, it's possible to limit by interface (i.e. network card) at bind time, by using one of:
server_socket.bind(('', 65432))             # Bind on any interface.
server_socket.bind(('127.0.0.1', 65432))    # Bind on loopback (localhost clients only).
server_socket.bind(('34.56.78.91', 65432))  # Bind on one specific interface address.
So in the 127.0.0.1 version, telnet 127.0.0.1 65432 (as a client) would work, but telnet myhostname 65432 would yield Connection refused (and the server_socket.accept() call won't get this connection).
If you read the docs you can find BaseServer.verify_request(request, client_address), which tells you this:
Must return a Boolean value; if the value is True, the request will be processed, and if it’s False, the request will be denied. This function can be overridden to implement access controls for a server. The default implementation always returns True.
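A minimal sketch using the standard library's socketserver module (Python 3 naming; the handler and the whitelist are placeholders):

import socketserver

ALLOWED = {'12.34.56.78'}  # hypothetical whitelist

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Echo a single chunk back to the client.
        self.request.sendall(self.request.recv(1024))

class RestrictedTCPServer(socketserver.TCPServer):
    def verify_request(self, request, client_address):
        # Returning False denies the request before handle() is ever called.
        return client_address[0] in ALLOWED

with RestrictedTCPServer(('0.0.0.0', 65432), EchoHandler) as server:
    server.serve_forever()

Note that verify_request runs after accept(), so like the snippet above it closes unwanted connections rather than refusing them.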
Microsoft appears to support this functionality via the SO_CONDITIONAL_ACCEPT socket option.
This appears to require using WSAAccept to accept connections.
This constant does not appear in Python's socket module on my Windows 8 machine. I don't think there is an option to use WSAAccept via Python's built-in socket module.
If I understand correctly, this will allow your server to respond to SYN packets immediately with RST packets when configured to do so, instead of finishing the handshake and exchanging FIN packets. Note that usage of this flag moves the responsibility for handling connections from the operating system to the application, so there is plenty of room for errors and performance hits to occur. If a performance boost was the goal, it might not be worth pursuing.
It is possible to do this at the C level on Windows. Python's ctypes module allows interfacing with C code, so it is technically possible to do via a Python interface, but it likely requires a non-trivial amount of effort. If you are certain you require this feature, it may be less effort to find a C socket library that supports this out of the box, and then make a ctypes wrapper for that.

Releasing resources when Pyro4 client disconnects unexpectedly

I have a Pyro4 distributed system with multiple clients connecting to a single server. These clients connect to a remote object, and that object may allocate some resources in the system (virtual devices, in my case).
Once a client disconnects (let's say because of a crash), I need to release those resources. What is the proper way to detect that a specific client has disconnected from a specific object?
I've tried different things:
Overriding the Daemon.clientDisconnected method. I get a connection parameter from this method, but I can't correlate that to an object, because I have no access to which remote object that connection refers to.
Using Pyro4.current_context in Daemon.clientDisconnected. This doesn't work because it is a thread-local object; if I have more clients connected than threads in my pool, I get repeated contexts.
Using Proxy._pyroAnnotations as in the "usersession" example available in the Pyro4 project. This doesn't help me, because again I get the annotation from the Pyro4.core.current_context.annotations attribute, which shows me wrong annotations when Daemon.clientDisconnected is called (I imagine due to thread-related issues).
Using instance_mode="session" and the __del__ method in the remote class (as each client gets a separate instance of the class, the instance is supposed to be destroyed once the client disconnects). But this relies on the __del__ method, which has some problems, as many Python programmers would point out.
I added my current solution as an answer, but I really would like to know if there's a more elegant way of doing this with Pyro4, as this scenario is a recurrent pattern in network programming.
Pyro 4.63 will probably have some built-in support for this, to make it easier to do. You can read about it at http://pyro4.readthedocs.io/en/latest/tipstricks.html#automatically-freeing-resources-when-client-connection-gets-closed and try it out if you clone the current master from GitHub. Maybe you can take a look and see if that would make your use case simpler?
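Based on that documentation page, the usage looks roughly like this (a sketch assuming Pyro 4.63+; the track_resource call is taken from the linked docs and is untested here, and Resource is a hypothetical class with a close() method):

import Pyro4

@Pyro4.expose
class RemoteObject:
    def allocate_resource(self, name):
        resource = Resource(name)  # hypothetical resource exposing close()
        # The daemon calls close() on every tracked resource when the
        # client connection that registered it goes away.
        Pyro4.current_context.track_resource(resource)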
I use the Proxy._pyroHandshake attribute as a client ID on the client side and override Daemon.validateHandshake and Daemon.clientDisconnect. This way, on every new connection I map the handshake data (which is unique per client) to a connection. But I really wanted to know whether there's a more elegant way to do this in Pyro4, since it is a pattern that comes up very often in network programming.
Notice that instead of using the Proxy as an attribute of Client, Client could also extend Pyro4.Proxy and use _pyroAnnotations to send the client ID along with all the remote calls; see the sketch after the code below.
import uuid
import Pyro4

class Client:

    def __init__(self):
        self._client_id = uuid.uuid4()
        # The URI port is illustrative; it must match the daemon created below.
        self._proxy = Pyro4.Proxy("PYRO:server@127.0.0.1:9090")
        # Send the client ID as handshake data so the server can map it
        # to this connection.
        self._proxy._pyroHandshake = self._client_id
        self._proxy._pyroBind()

    def allocate_resource(self, resource_name):
        self._proxy.allocate_resource(self._client_id, resource_name)

class Server:

    def __init__(self):
        self._client_id_by_connection = {}
        self._resources_by_client_id = {}

    def client_connected(self, connection, client_id):
        self._client_id_by_connection[connection] = client_id
        self._resources_by_client_id[client_id] = []

    def client_disconnected(self, connection):
        client_id = self._client_id_by_connection[connection]
        for resource in self._resources_by_client_id[client_id]:
            resource.free()

    @Pyro4.expose
    def allocate_resource(self, client_id, resource_name):
        new_resource = Resource(resource_name)  # Resource is the application's own class
        self._resources_by_client_id[client_id].append(new_resource)

daemon = Pyro4.Daemon(host="127.0.0.1", port=9090)
server = Server()
daemon.register(server, objectId="server")
daemon.clientDisconnect = server.client_disconnected
daemon.validateHandshake = server.client_connected
daemon.requestLoop()
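For the _pyroAnnotations variant mentioned above, a rough, untested sketch (it relies on Pyro4's overridable Proxy._pyroAnnotations hook that the "usersession" example also uses; the CLID key name is arbitrary):

import uuid
import Pyro4

CLIENT_ID = str(uuid.uuid4())  # one ID per client process (illustrative)

class AnnotatedProxy(Pyro4.Proxy):
    def _pyroAnnotations(self):
        # Pyro4 annotations are a dict mapping 4-letter keys to bytes values;
        # they travel with every remote call made through this proxy.
        return {"CLID": CLIENT_ID.encode()}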

DHT TCP API using UDP internally to serve requests (twisted)

Not sure if this is the right title for my problem, but here it goes:
I am currently implementing a Distributed Hash Table (DHT) with an API that can be contacted through TCP. It can serve multiple API calls like PUT, GET and TRACE, while listening on multiple IP/port combinations, like this:
factory = protocol.ServerFactory()
factory.protocol = DHTServer
for ip in interfaces:
    for port in ports:
        reactor.listenTCP(int(port), factory, interface=ip)
        print("Listening to: " + ip + " on Port: " + port)
reactor.run()
Now those "external" API calls are going to be executed by the underlying DHT implementation (Kademlia, Chord or Pastry). Those underlying DHT implementations are using different protocols to communicate with one another. Kademlia for example uses RPC through UDP.
The protocol for the TCP API (DHTServer in the Code above) has an internal DHT protocol like this:
self.protocol = Kademlia(8088, [("192.168.2.1", 8088)])
Now if a client makes two separate API requests one after another, I get this error message on the second request:
line 197, in _bindSocket
    raise error.CannotListenError(self.interface, self.port, le)
twisted.internet.error.CannotListenError: Couldn't listen on any:8088: [Errno 10048] Normally each socket address (protocol, network address or port) may only be used once.
This basically says that each socket address may only be used once. I am not quite sure, but I guess it is because a new DHTServer protocol instance is created for each API request, which in turn also creates a new Kademlia instance, and both Kademlia instances try to listen on the same address. But why is this the case? Shouldn't the first DHTServer protocol instance be destroyed after the first request is served? What am I doing wrong? Is there a better way of doing this? I only recently started working with Twisted, so please be patient.
Thanks a lot!
I don't know anything about Twisted, but Kademlia is a stateful network service that has to maintain its routing table and so on.
Consider sharing a single Kademlia instance (and thus the underlying UDP socket) across your requests.
My solution to this was to write my own factory with the inner protocol already pre-defined; that way I can access it from every protocol instance and it stays the same. A sketch follows.
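A minimal sketch of that approach, assuming the DHTServer protocol and the Kademlia class from the question (the TCP port and the wiring are illustrative):

from twisted.internet import protocol, reactor

class DHTServerFactory(protocol.ServerFactory):
    protocol = DHTServer  # the TCP API protocol from the question

    def __init__(self, dht_port, bootstrap_nodes):
        # One Kademlia instance (and thus one UDP socket), shared by
        # every TCP connection this factory creates.
        self.dht = Kademlia(dht_port, bootstrap_nodes)

factory = DHTServerFactory(8088, [("192.168.2.1", 8088)])
reactor.listenTCP(7001, factory)
reactor.run()

Inside the protocol, each connection can then reach the shared instance as self.factory.dht instead of constructing its own.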

Python Twisted: restricting access by IP address

What would be the best method to restrict access to my XML-RPC server by IP address? I see that the class CGIScript in web/twcgi.py has a render method that accesses the request... but I am not sure how to gain access to this request in my server. I saw an example where someone patched twcgi.py to set environment variables and then accessed those environment variables in the server... but I figure there has to be a better solution.
Thanks.
When a connection is established, a factory's buildProtocol is called to create a new protocol instance to handle that connection. buildProtocol is passed the address of the peer which established the connection and buildProtocol may return None to have the connection closed immediately.
So, for example, you can write a factory like this:
from twisted.internet.protocol import ServerFactory

class LocalOnlyFactory(ServerFactory):
    def buildProtocol(self, addr):
        # Returning None closes the connection immediately.
        if addr.host == "127.0.0.1":
            return ServerFactory.buildProtocol(self, addr)
        return None
And only local connections will be handled (but all connections will still be accepted initially since you must accept them to learn what the peer address is).
You can apply this to the factory you're using to serve XML-RPC resources. Just subclass that factory and add logic like this (or you can do a wrapper instead of a subclass).
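For example, with an XML-RPC resource served by twisted.web the factory is a Site, so the subclass approach looks roughly like this (the resource and the whitelisted address are placeholders):

from twisted.web.server import Site
from twisted.web.xmlrpc import XMLRPC
from twisted.internet import reactor

class Echo(XMLRPC):
    def xmlrpc_echo(self, x):
        return x

class RestrictedSite(Site):
    def buildProtocol(self, addr):
        # Same trick as above: refuse by returning None.
        if addr.host == "127.0.0.1":
            return Site.buildProtocol(self, addr)
        return None

reactor.listenTCP(7080, RestrictedSite(Echo()))
reactor.run()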
iptables or some other platform firewall is also a good idea for some cases, though. With that approach, your process never even has to see the connection attempt.
Okay, another answer is to get the address from the transport, inside any protocol. Note that getPeer() gives the remote client's address (getHost() would return the local end):
peer = self.transport.getPeer()
print(peer.type, peer.host, peer.port)
Then use the value to filter it in any way you want.
I'd use a firewall on Windows, or iptables on Linux.
