I've got a Python script that simply grabs a page with urllib2 and then uses BeautifulSoup to parse it. The code is:
import sys
import urllib2

class Foo(Bar):
    def fetch(self):
        try:
            # grab the device's main page, giving up after 30 seconds
            self.mypage = urllib2.urlopen(self.url + 'MainPage.htm', timeout=30).read()
        except urllib2.URLError:
            sys.stderr.write("Error: system at %s not responding\n" % self.url)
            sys.exit(1)
The system I'm trying to access is remote, behind a Linux router that does port forwarding between the public static IP and the LAN IP of the actual system.
I was getting failures on some systems, and at first I suspected a bug in urllib2/Python, or some weird TCP issue (the HTTP server is actually an embedded card in some industrial system). But then I tried other systems and urllib2 works as expected, and I can also access the HTTP server correctly using links2 or wget even on the systems where urllib2 fails.
Ubuntu 10.04 LTS 32-bit behind an Apple AirPort NAT on remote ADSL: everything works
Mac OS X 10.6, in the LAN with the server, remote behind NAT, etc.: everything works
Ubuntu 10.04 LTS 64-bit with a public IP: urllib2 times out; links and wget work
Gentoo Linux with a public IP: urllib2 times out; links and wget work
I have verified with tcpdump on the Linux router (the HTTP server's end) that urllib2 always completes the TCP handshake, even from the problematic systems, but then it seems to hang there. I tried toggling syncookies and ECN on and off, but that didn't change anything.
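The next thing I plan to try is urllib2's HTTP-level debugging, which should show whether the GET request goes out at all after the handshake. A minimal sketch (the URL is a placeholder):

import urllib2

# debuglevel=1 makes the underlying httplib print the raw request and
# response headers, revealing exactly where the exchange stalls
opener = urllib2.build_opener(urllib2.HTTPHandler(debuglevel=1))
page = opener.open('http://203.0.113.5/MainPage.htm', timeout=30).read()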
How could I debug and possibly solve this issue?
You could also switch to using httplib2.
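If you try that, a minimal sketch of the equivalent fetch with httplib2 (the URL is a placeholder) would be:

import httplib2

h = httplib2.Http(timeout=30)  # same 30-second timeout as the urllib2 version
response, content = h.request('http://203.0.113.5/MainPage.htm')
print response.status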
After nearly 17 months, I no longer have access to that specific system, so I won't be able to accept any real answer to this question.
At least I can tell future readers what answers are not good:
changing to httplib2
no, we're not getting ICMP redirects
no, we're not dropping ICMP fragmentation packets either
cheers.
I'm very new to network programming and faced the following problem:
I'm working on a VMware CentOS 7 virtual machine on a Windows 10 host.
My script should send WHOIS queries and parse their output (e.g. expiration date).
However, an attempt to send a query leads to a connection error:
>>> import whois
>>> whois.query('google.com')
WhoisCommandFailed: connect: Network is unreachable
I tried running whois from the terminal, but the error was the same.
When I tried to use whois directly from Windows, which hosts the virtual machine, the error looked the same as well (connection timeout).
As I found out, it was most likely related to access to port 43. Following a guide, I created rules (inbound and outbound) for this port in the Windows firewall, but the error persisted.
It looks like access to this port is blocked by the ISP (the ping command works, however).
To sum up, I have two questions:
1) (less important) How can I check whether port 43 is blocked by a firewall or by the ISP?
2) (most important) Is it possible to reconfigure WHOIS to use another port (e.g. 23) for queries sent by a Python script?
Unfortunately, the ISP's security policy doesn't allow them to open port 43.
Most ISPs don't block any ports, but admittedly that is not 100% true.
Testing the connection:
Run tcpdump on CentOS (install it with yum install tcpdump): tcpdump -peni any tcp and port 43
You should see lines containing text like 192.168.1.1.57350 > 192.34.234.30.43, where 192.34.234.30 is the IP address of the remote whois server.
Try to telnet to the remote server's TCP/43 port: telnet 192.34.234.30 43
You should see the following:
Trying 192.34.234.30...
Connected to 192.34.234.30.
Escape character is '^]'.
If you don't see output like that and you get your prompt back immediately, there is a firewall rule somewhere blocking the connection. I recommend switching the firewall off temporarily and testing again.
You cannot change the port number, because it is configured on the remote side, on the server.
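To see why: WHOIS (RFC 3912) is nothing more than plain text over TCP port 43, and the server side decides where it listens. A raw-socket query in Python, roughly equivalent to the telnet test above (the server name and domain are just examples), looks like this:

import socket

# connect to the .com registry whois server on the protocol's fixed port
s = socket.create_connection(('whois.verisign-grs.com', 43), timeout=10)
s.sendall('google.com\r\n')      # a query is just the domain plus CRLF
response = ''
while True:
    data = s.recv(4096)
    if not data:                 # the server closes the connection when done
        break
    response += data
s.close()
print response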
Can the CentOS 7 server communicate with the internet at all? For example, can you install packages?
Is there any router between the Windows machine and the ISP?
I have a Python script (using the pysnmp lib) running on a real device with Ubuntu 14.04 LTS, which does internal polling. It sends keepalives and SNMPv3 traps to Nagios. snmptrapd receives the traps and passes them to SNMPTT, which works very well.
I've been trying the same scenario in VirtualBox with the same distribution, Ubuntu 14.04,
but on the Nagios side I always get
snmptrapd[7540]: Authentication failed for hostname
I couldn't figure out what the problem is. Capturing with Wireshark, I can see both sets of packets coming in, from the real host and from the VirtualBox guest. With the createUser directive I add two users with the same engine ID, SHA and AES encryption, but only the keepalives and SNMPv3 traps from the real host are logged and passed to SNMPTT, not those from VirtualBox.
Is there anything I am missing?
Any suggestions are highly appreciated.
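For reference, the sending side is roughly the following minimal pysnmp sketch (the engine ID, credentials and addresses are placeholders, and it assumes a pysnmp version that provides the hlapi interface):

from pysnmp.hlapi import *

errorIndication, errorStatus, errorIndex, varBinds = next(
    sendNotification(
        # the sender is authoritative for SNMPv3 traps, so its engine ID is pinned here
        SnmpEngine(OctetString(hexValue='8000000001020304')),
        UsmUserData('trapuser', 'authpass123', 'privpass123',
                    authProtocol=usmHMACSHAAuthProtocol,
                    privProtocol=usmAesCfb128Protocol),
        UdpTransportTarget(('192.168.1.10', 162)),
        ContextData(),
        'trap',
        NotificationType(ObjectIdentity('1.3.6.1.6.3.1.1.5.1'))  # coldStart
    )
)
if errorIndication:
    print(errorIndication)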
Try this:
disableAuthorization yes
in /etc/sysconfig/snmptrapd.conf on the Nagios host.
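Since SNMPv3 traps are authenticated against the sender's engine ID, also double-check that the createUser line on the Nagios side carries the exact engine ID of each sending host. A sketch of the relevant snmptrapd.conf lines (the engine ID and credentials are placeholders):

createUser -e 0x8000000001020304 trapuser SHA authpass123 AES privpass123
disableAuthorization yes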
I'm running Anaconda Python 2.7 and the latest Requests library on a Windows 7 desktop connected to a corporate network with an outbound proxy server at 10.0.0.255.
My python script reads as follows:
import requests
r = requests.get("http://google.com")
I've also tried many different intranet and internet URLs, HTTP and HTTPS, all with the same result: a 503 error.
I suspected the proxy might be at fault, so I added the proxies=prox argument with the following definition:
prox = {
    "http":  "10.0.0.255:80",
    "https": "10.0.0.225:443",
}
That made no difference, but it's entirely possible that my ports are wrong, as the documentation is a bit sparse on this option (only one example).
I did try localhost and it gave me a different error:
ConnectionError: ('Connection aborted.', error(10061, 'No connection could be made because the target machine actively refused it'))
My machine hates me. Great.
At this point I'm stumped. It's probably something related to all the security c_rp on this machine, but I'm not sure what my next move is.
I am a n00b to Python, and haven't coded in 20 years. That said, I wrote hard-core C and ran memory debugging deep in architectures to find leaks, so I'm not completely dumb, just very, very rusty.
Doing a GET request on localhost won't do anything unless there is a webserver running on localhost:80. Set up a node.js webserver running on localhost and then try again.
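If you don't have node.js handy, Python's own SimpleHTTPServer module is a quick stand-in (it serves the current directory, here on port 8000):

python -m SimpleHTTPServer 8000

and then in the script:

import requests
r = requests.get("http://localhost:8000")
print r.status_code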
Most corporate proxies use port 8080 for all traffic.
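Putting those two points together: the values in the proxies dict normally include a scheme, and HTTPS traffic is usually tunneled through the same proxy port as HTTP. A sketch along those lines (the address and port 8080 are guesses; check your browser's proxy settings for the real values):

import requests

proxies = {
    "http":  "http://10.0.0.255:8080",   # assumed proxy port, verify locally
    "https": "http://10.0.0.255:8080",   # HTTPS usually goes through the same port
}
r = requests.get("http://google.com", proxies=proxies, timeout=10)
print r.status_code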
I have a NodeJS-socketIO server with clients listening from JS, PHP & Python. It works like a charm when the communication happens over a plain HTTP/WS channel.
Now, when I try to secure this communication, the websocket transport stops working and falls back to the xhr-polling (long polling) transport. xhr-polling still works for the JS client, but not for Python, which depends purely on the socket transport.
Things I tried:
On Node, using https (with commercial certificates) instead of http - works fine for serving pages via Node, but not for socketIO
Proxy via HAProxy (1.15-dev19), from HTTPS (HAProxy) to HTTP (Node) - couldn't get the websocket transport working; it falls back to xhr-polling on JS, and Python gets a 502 on handshake
Proxy via STunnel (for HTTPS) -> HAProxy (websocket proxy) -> Node (socketIO) - this doesn't work either; the Python client still gets a 502 on handshake
Proxy via STunnel (HTTPS) -> Node (socketIO) - this doesn't work either; I'm not sure STunnel supports websocket proxying
node-http-proxy - throws a 500 (An error has occurred: {"code":"ECONNRESET"}) on websocket and falls back to xhr-polling
I'm sure this is a common use case and a solution exists. I would really appreciate any help.
Thanks in advance!
My case seems to be a rare one. I built this whole environment on an EC2 instance based on Amazon Linux. As almost all of its yum packages are out of date, I had to install pretty much every package from source. In doing so I may have left some configuration unchanged or missing, or a library HAProxy requires may not have been the latest.
In any case, I tried building the environment again on an Ubuntu 12.04-based EC2 instance. HAProxy worked like a charm with a bit of configuration tweaking (see the sketch below). I can now connect to my SocketIO server from JS, Python & PHP over SSL without any problem. I could also create a secured TCP Amazon ELB that listens on 443 and proxies to a non-standard port (8xxx).
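The tweaks were essentially SSL termination plus a generous tunnel timeout; a rough sketch of what the relevant HAProxy sections might look like (the certificate path, names and ports are placeholders, and details vary by HAProxy version):

frontend wss-in
    bind *:443 ssl crt /etc/haproxy/cert.pem   # terminate SSL at HAProxy
    default_backend socketio

backend socketio
    timeout tunnel 1h                          # keep long-lived websocket connections open
    server node1 127.0.0.1:8000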
Let me know if anyone else encounters a similar problem, I will be happy to help!
The code below:
import urllib2
file = urllib2.urlopen("http://foo.bar.com:82")
works just fine on my Mac (OS X 10.8.4 running Python 2.7.1). It opens the URL and I can parse the file with no problems.
When I try the EXACT same code (these two lines) on GoDaddy (Python 2.7.3 or 2.4), I receive an error:
urllib2.URLError: <urlopen error (111, 'Connection refused')>
The problem has something to do with the port :82, which is an essential part of the address. I have tried using a forwarding address with masking, etc., and nothing works.
Any idea why it would work in one environment and not in the other (ostensibly similar) one? Any ideas how to get around this? I also tried Mechanize, to no avail. Previous posts have suggested focusing on urllib2.HTTPBasicAuthHandler, but it works fine in my OS X environment without anything special.
Ideas are welcome.
"Connection refused" means that your operating system tried to contact the remote host but got a "closed port" message back.
Most likely this is because of a firewall between GoDaddy and foo.bar.com. foo.bar.com is probably only reachable from your computer or your local network, but it could also be GoDaddy blocking outgoing access to unusual ports.
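You can reproduce that distinction outside of urllib2 with a plain socket test run from the GoDaddy host; an immediate error means the connection was actively refused, while a long hang means packets are being silently dropped:

import socket

s = socket.socket()
s.settimeout(10)
try:
    s.connect(('foo.bar.com', 82))
    print 'port 82 is reachable'
except socket.timeout:
    print 'no response at all - packets silently dropped by a firewall'
except socket.error as e:
    print 'rejected:', e          # errno 111 is "connection refused"
finally:
    s.close()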
From a quick look at the GoDaddy support forums, it looks like they only support outgoing requests to ports 80 (HTTP) and 443 (HTTPS) on their shared hosts. See e.g.
http://support.godaddy.com/groups/web-hosting/forum/topic/curl-to-ports-other-than-80/