python imaplib gmail connection with user pass proxy [duplicate]

Neither poplib nor imaplib seems to offer proxy support, and I couldn't find much info about it despite my google-fu attempts.
I'm using Python to fetch emails from various IMAP/POP-enabled servers and need to be able to do it through proxies.
Ideally, I'd like to be able to do it in Python directly, but using a wrapper (an external program/script, OS X based) to force all traffic through the proxy might be enough if I can't find anything better.
Could anyone give me a hand? I can't imagine I'm the only one who ever needed to fetch emails through a proxy in Python...
** EDIT: title edited to remove HTTP, because I shouldn't type so fast when I'm tired; sorry for that, guys **
The proxies I'm planning to use allow SOCKS in addition to HTTP.
Running POP or IMAP over HTTP wouldn't make much sense (stateful vs. stateless), but my understanding is that SOCKS would let me do what I want.
So far the only way to achieve what I want seems to be dirty hacking of imaplib... I'd rather avoid that if I can.

You don't need to dirtily hack imaplib. You could try using the SocksiPy package, which supports SOCKS4, SOCKS5, and HTTP proxies (CONNECT).
Something like this; obviously you'd want to handle the setproxy options better, via extra arguments to a custom __init__ method, etc.:
from imaplib import IMAP4, IMAP4_SSL, IMAP4_PORT, IMAP4_SSL_PORT
from socks import socksocket, PROXY_TYPE_SOCKS4, PROXY_TYPE_SOCKS5, PROXY_TYPE_HTTP

class SocksIMAP4(IMAP4):
    def open(self, host, port=IMAP4_PORT):
        self.host = host
        self.port = port
        self.sock = socksocket()
        # Placeholder proxy; in real code, pass this in via __init__.
        self.sock.setproxy(PROXY_TYPE_SOCKS5, 'socks.example.com')
        self.sock.connect((host, port))
        self.file = self.sock.makefile('rb')
You could do something similar with IMAP4_SSL. Just take care to wrap the proxied socket in an SSL socket:
import ssl
# create_connection is provided by newer SocksiPy forks (e.g. PySocks);
# the keyword arguments below match that signature.
from socks import create_connection

class SocksIMAP4SSL(IMAP4_SSL):
    def open(self, host, port=IMAP4_SSL_PORT):
        self.host = host
        self.port = port
        # 127.0.0.1:8118 is Privoxy's default; as said, you may want to
        # parameterize it.
        self.sock = create_connection((host, port),
                                      proxy_type=PROXY_TYPE_HTTP,
                                      proxy_addr="127.0.0.1",
                                      proxy_port=8118)
        self.sslobj = ssl.wrap_socket(self.sock, self.keyfile, self.certfile)
        self.file = self.sslobj.makefile('rb')
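Once defined, these classes drop in wherever you'd use imaplib directly. A rough usage sketch (the server, credentials, and mailbox are placeholders; the proxy is whatever open() sets up above, and Gmail wants IMAPS on port 993):

# Hypothetical usage; replace the account details with your own.
M = SocksIMAP4SSL('imap.gmail.com', IMAP4_SSL_PORT)
M.login('user@gmail.com', 'password')
M.select('INBOX')
typ, msgnums = M.search(None, 'ALL')   # msgnums[0] is the list of message numbers
M.logout()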

Answer to my own question...
There's a quick and dirty way to force traffic from a Python script to go through a proxy without hassle, using SocksiPy (thanks MattH for pointing me that way):
import socks
import socket

# The final True enables remote DNS resolution (rdns) on the proxy.
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS4, proxy_ip, port, True)
socket.socket = socks.socksocket
That global socket override is obviously a bit brutal, but it works as a quick fix until I find the time to properly subclass IMAP4 and IMAP4_SSL.
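Put together with imaplib, the whole quick fix looks roughly like this; the proxy address, server, and credentials are placeholders:

import socks
import socket

# Patch before any connection is opened; the final True asks SocksiPy
# to resolve hostnames on the proxy where the protocol supports it.
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS4, '127.0.0.1', 1080, True)
socket.socket = socks.socksocket

import imaplib

M = imaplib.IMAP4_SSL('imap.example.com')
M.login('user', 'password')
M.select('INBOX')
typ, msgnums = M.search(None, 'UNSEEN')
M.logout()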

If I understand you correctly, you're trying to put a square peg in a round hole.
An HTTP proxy only knows how to "talk" HTTP, so it can't connect to a POP or IMAP server directly.
If you want to do this, you'll need to implement your own server somewhere to talk to the mail servers: it would receive HTTP requests and then make the appropriate calls to the mail server on the client's behalf.
How practical this would be I don't know, since you'd have to convert a stateful protocol into a stateless one.
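To make the mismatch concrete, here's a hedged sketch of such a gateway; every stateless HTTP request has to open and tear down a full stateful IMAP session, which is exactly why this scales poorly. The server name and credentials are placeholders:

import imaplib
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

class MailGateway(BaseHTTPRequestHandler):
    def do_GET(self):
        # One full IMAP login/select/logout per HTTP request: the price
        # of mapping a stateful protocol onto a stateless one.
        M = imaplib.IMAP4_SSL('imap.example.com')
        M.login('user', 'password')
        typ, data = M.select('INBOX')   # data[0] holds the message count
        M.logout()
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        self.wfile.write('INBOX has %s messages\n' % data[0])

HTTPServer(('127.0.0.1', 8080), MailGateway).serve_forever()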

Related

How to Use IMAP through a proxy? [duplicate]


Twisted reverse proxy SSL backend

I'm fairly new to Twisted, and trying to use twisted.web.proxy.ReverseProxyResource to create a reverse proxy. Ultimately I want clients to connect to it using SSL; I'll then validate the request and pass it on, but only to an SSL backend server.
I'm starting out with the (very) basic code below, but I'm struggling to get it to connect to an SSL backend, and am finding the documentation lacking. Would anyone be able to give me some good pointers, or ideally some example code?
The code below obviously won't work because it's expecting to hit a plain HTTP server; how would I "SSL" this?
As always, any help is very, very much appreciated.
Thanks
Alex
from twisted.internet import reactor
from twisted.web import proxy, server
from twisted.web.resource import Resource

class Simple(Resource):
    isLeaf = False

    def getChild(self, name, request):
        print "getChild called with name:'%s'" % name
        #host = request.getAllHeaders()['host']
        host = "127.0.0.1"  # yes, there is an SSL host listening here
        return proxy.ReverseProxyResource(host, 443, "/" + name)

simple = Simple()
site = server.Site(simple)
reactor.listenTCP(8000, site)
reactor.run()
ReverseProxyResource does not support TLS. When you write ReverseProxyResource(host, 443, "/"+name), you're creating a resource which will establish a normal TCP connection to host on port 443. The TCP connection attempt will succeed, but the TLS handshake will definitely fail, because the client won't even attempt one.
This is a limitation of the current ReverseProxyResource: it doesn't support the feature you want. It's somewhat likely that this feature could be implemented fairly easily. Since ReverseProxyResource was implemented, Twisted has introduced the concept of "endpoints", which make it much easier to write transport-agnostic code.
ReverseProxyResource could be updated to work in terms of endpoints (preserving backwards compatibility with the current API, as Twisted requires). This wouldn't complicate the implementation much (it may actually simplify it) and would allow you to proxy over any kind of transport for which an endpoint implementation exists (there is one for TLS, and many other kinds besides).
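Until that happens, one unsupported workaround is to hand ReverseProxyResource a wrapper "reactor" whose connectTCP actually performs connectSSL. This is only a sketch: it assumes your Twisted's ReverseProxyResource accepts a reactor argument and reaches its backend solely through reactor.connectTCP, which is true of current implementations but not a documented guarantee:

from twisted.internet import reactor, ssl
from twisted.web import proxy

class TLSConnectingReactor(object):
    """Forwards connectTCP calls as connectSSL; everything else passes through."""
    def __init__(self, wrapped):
        self._wrapped = wrapped

    def connectTCP(self, host, port, factory, timeout=30, bindAddress=None):
        # ClientContextFactory does no certificate validation; acceptable
        # for a sketch, not for production.
        return self._wrapped.connectSSL(host, port, factory,
                                        ssl.ClientContextFactory(),
                                        timeout, bindAddress)

    def __getattr__(self, name):
        return getattr(self._wrapped, name)

# Inside getChild, build the resource with the wrapper:
#     return proxy.ReverseProxyResource(host, 443, "/" + name,
#                                       reactor=TLSConnectingReactor(reactor))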

Connecting to .onion network with python

I want Python to fetch .onion sites from the console. The example below can use Tor in Python, but when I try to connect to .onion sites it gives errors such as "Name or service not known". How do I fix this?
Sample code:
import socket
import socks
import httplib

def connectTor():
    # Route all new sockets through Tor's SOCKS5 port; the final True
    # asks for remote (proxy-side) DNS resolution.
    socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9050, True)
    socket.socket = socks.socksocket
    print "Connected to tor"

def newIdentity():
    # Talk to Tor's control port directly to request a fresh circuit.
    HOST = '127.0.0.1'
    socks.setdefaultproxy()  # clear the proxy for the control connection
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((HOST, 9051))
    s.send("AUTHENTICATE\r\n")
    response = s.recv(128)
    if response.startswith("250"):
        s.send("SIGNAL NEWNYM\r\n")
    s.close()
    connectTor()

def readPage(page):
    conn = httplib.HTTPConnection(page)
    conn.request("GET", "/")
    response = conn.getresponse()
    print(response.read())

def main():
    connectTor()
    print "Tor Ip Address :"
    readPage("my-ip.heroku.com")
    print "\n\n"
    readPage("od6j46sy5zg7aqze.onion")
    return 0

if __name__ == '__main__':
    main()
I think this is your problem, but I may be wrong.
You're relying on monkeypatching socket.socket to force HTTPConnection to use your SOCKS5 proxy to talk to Tor. But HTTPConnection calls socket.create_connection, which in turn calls socket.getaddrinfo to resolve the name before it ever calls socket.socket to create the socket. And getaddrinfo doesn't go through socket.socket, so it's not patched, so it's not talking to your SOCKS5 proxy, so it's using your default name resolver.
This works fine for proxying connections to normal internet hosts, because Tor is going to return the same DNS result for "my-ip.heroku.com" as your normal name resolver. But it won't work for "od6j46sy5zg7aqze.onion", because there is no .onion TLD in your normal name resolver.
If you're curious, you can see the source to HTTPConnection.connect, socket.create_connection, and getaddrinfo (the last in C, and scattered throughout the module depending on your platform).
So, how do you solve this? Well, looking at two of the SOCKS5 modules that are called socks: one has a function that could be directly monkeypatched in place of socket.create_connection (its API is not identical, but it's close enough for what HTTPConnection needs); the other doesn't, but you could pretty easily write one yourself (just call socks.socksocket and then call its connect method; a sketch follows below). Or you could modify HTTPConnection to create a socket.socket and call its connect method.
Finally, you may be wondering why most of the different socks modules have a setdefaultproxy function with a parameter named remote_dns that specifically claims to make DNS resolution happen remotely, when that doesn't actually work here. Well, it does work if you use a socks.socksocket directly; it just can't help once the name has already gone through socket.getaddrinfo.
By the way, if you haven't read DnsResolver and TorifyHOWTO, read them before going any further, because just trying to slap together code that works without knowing why it works is almost guaranteed to lead to you (or your users) leaking information when you thought you were being anonymous.
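For the "write one yourself" option above, a replacement create_connection might look like this sketch; it assumes socks.setdefaultproxy was already called with remote DNS enabled:

import socket
import socks

_DEFAULT = socket._GLOBAL_DEFAULT_TIMEOUT

def socks_create_connection(address, timeout=_DEFAULT, source_address=None):
    # Mirrors socket.create_connection's signature closely enough for
    # HTTPConnection. The hostname in address goes to the proxy
    # untouched, so no local getaddrinfo call is ever made.
    sock = socks.socksocket()
    if timeout is not _DEFAULT:
        sock.settimeout(timeout)
    if source_address is not None:
        sock.bind(source_address)
    sock.connect(address)
    return sock

socket.create_connection = socks_create_connection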
You can add port 80 to the onion address to avoid the DNS lookup, e.g. readPage("od6j46sy5zg7aqze.onion:80").
With urllib2 you also need to specify the protocol (i.e. http), e.g.:
import urllib2
print urllib2.urlopen("http://od6j46sy5zg7aqze.onion:80").read()

Which ports will python use to send html request? with urllib or urllib2

I'm using web.py to make a small site. When I want to use OAuth, I find that the firewall blocks HTTP requests to any site; I can't even use IE to browse the Internet.
So I asked the administrator to open some ports for me, but I don't know which ports Python or IE will use to send HTTP requests.
Thanks!
I assume you're talking about the remote ports. In that case, just tell the admin to open the standard web ports. Really, if your admin doesn't know how to make IE work through the firewall, he's hopeless. I suggest walking up to random people on the street and saying "80 and 443" until someone looks up, then firing your admin and hiring that guy; he can't be any worse.
If your admin does know what he's doing, and wants you to use an HTTP proxy instead of connecting directly, ask him to set up the proxy for you in IE, look at the settings he uses, then come back here and search for how to use HTTP proxies in Python (there are lots of answers on that), and ask for help if you get stuck.
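For reference, the stock urllib2 way to go through an HTTP proxy looks like this; the proxy address is a placeholder your admin would supply:

import urllib2

# Hypothetical proxy address; ask your admin for the real host:port.
proxy = urllib2.ProxyHandler({'http': 'http://proxy.example.com:3128',
                              'https': 'http://proxy.example.com:3128'})
opener = urllib2.build_opener(proxy)
print opener.open('http://example.com/').read()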
If you're talking about the local ports, because you've got an insane firewall, they'll be picked at random from some large range. If you want to cover every common OS, you need all of 1024-65535 to be opened up, although if you only need to deal with a single platform, most use a smaller range than that, and if the machine won't be doing much but running your program, most have a way to restrict it to an even smaller range (e.g., as small as 255 ports on Windows). Google "ephemeral port" for details.
If you need to restrict your local port, the key is to call bind on your socket before calling connect. If you think you're talking about the local ports, you're probably wrong; go ask your admin (or the new one you just hired) and make sure. But if you are…
If you're using urllib/urllib2, it has no way to do what you want, so you can't use it any more. You can drop down to httplib, which lets you pass a source_address, a (host, port) tuple that it will use to bind the socket before connecting. It's not as simple as what you're using, but it's a lot easier than implementing HTTP from scratch.
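A minimal sketch of the httplib route (needs Python 2.7+ for source_address; the host and local port are placeholders):

import httplib

# Bind the outgoing socket to local port 50000 before connecting.
conn = httplib.HTTPConnection('example.com', 80,
                              source_address=('0.0.0.0', 50000))
conn.request('GET', '/')
resp = conn.getresponse()
print resp.status, resp.reason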
You might also want to look at requests, which I know has its own native OAuth support, and probably has a way to specify a source address. (I haven't looked, but I usually find that whenever I want to know whether requests can do X, it can, and in the most obvious way I can think of…) The API for requests is generally similar to urllib2's where urllib2 is sane, and simpler and cleaner where urllib2 is messy, so it's usually pretty easy to port things.
At any rate, however you do this, you will have to consider the fact that only one socket can be bound to a given local port at a time. So, what happens if two programs are running at once, they both need to make outbound connections, and your admin has only given you one port? One of them will fail. Is that acceptable?
If not, what you really need to do is open a range of ports and write code that does a random.shuffle on the range, then tries to bind each port until one succeeds (see the sketch below). That means you'll need an HTTP library that lets you feed in a socket factory or a pre-opened socket instead of just specifying a port, which most of them do not, so you'll probably end up hacking up a copy of the httplib source.
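The shuffle-and-bind part is the easy bit; it's feeding the resulting socket into an HTTP library that takes the surgery. A sketch:

import random
import socket

def socket_from_port_range(lo, hi):
    # Try the allowed local ports in random order until one binds; a
    # colliding process just fails bind() and moves on to the next port.
    ports = range(lo, hi + 1)
    random.shuffle(ports)
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind(('0.0.0.0', port))
            return s
        except socket.error:
            s.close()
    raise socket.error("no free port in %d-%d" % (lo, hi))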
If all else fails, you can always set up a local proxy that binds to whatever source port (or port range) you want when proxying outward. Then you can just use your favorite high-level library, as-is, and connect to the local proxy, and there's no way the firewall can tell what's going on behind the proxy.
As you can see, this is not easy. That's mainly because you very rarely actually need this.
Generally when a program wants to use a port but doesn't care which number it has, it uses an "ephemeral port." This is typical for client applications, where the remote port is fixed (by the server), but the local port doesn't make any difference.
Often a firewall will allow outgoing connections from any port, simply blocking incoming connections to unknown ports, on the theory that internal machines making outgoing requests should be allowed to decide what is proper, and that bad actors are all on the "public" side.
You may find that your administrator requires you to use an "HTTP proxy." If so, here are the instructions for Ruby which I imagine you can port to Python: Ruby and Rails - oauth and http proxy

DNS over proxy?

I've been pulling my hair out over the past few days looking around for a good solution to prevent DNS leaks over a socks4/5 proxy.
I've looked into the SocksiPy(-branch) module, and tried to wrap a number of things (urllib,urllib2,dnstools), but they all seem to still leak DNS requests. So does pyCurl.
I know that proxychains/proxyresolv can throw DNS requests over a socks4/5 proxy, and they do all their magic with some LD_PRELOAD libraries that monkey-patch socket's functions, much like SocksiPy does, but I can't seem to figure out why it doesn't send DNS over either a socks4 or socks5 proxy.
I suppose on Linux I may be able to use ctypes with libproxychains.so to do my resolution, but I'm looking for something multi-platform, so I think monkey-patching the socket module is the way to go.
Has anyone figured out a good way to get around this? I want to do it all in-code for portability's sake, and I don't want to resort to running another proxy server!
Thanks!
Well, I figured it out. You need to set your default proxy BEFORE you start using the socket module (i.e. before you import anything that uses it). You also need to monkeypatch the getaddrinfo part of socket; then everything works fine.
import socks
import socket

# The proxy can be socks4 or socks5.
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS4, '127.0.0.1', 9050)
socket.socket = socks.socksocket

# Magic! Return the hostname unresolved so the SOCKS socket hands it to
# the proxy, which performs the lookup on our behalf.
def getaddrinfo(*args):
    return [(socket.AF_INET, socket.SOCK_STREAM, 6, '', (args[0], args[1]))]
socket.getaddrinfo = getaddrinfo

import urllib
This works, and proxies all DNS requests for whatever module you import in place of urllib here. Hope it helps someone out there!
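For example, after the snippet above, an ordinary urllib fetch resolves and connects entirely through the proxy (the URL is just an example):

# The hostname is handed to the proxy verbatim; no local DNS query is made.
print urllib.urlopen('http://check.torproject.org/').read()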
EDIT: You can find updated code and stuff on my blog
