My Python code downloads a file from our website. The code fails to download the file on certain clients' computers. I can't for the life of me figure out why the download fails when the script runs on some computers but works on others.
The error that occurs on certain computers is:
<urlopen error [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>
The clients confirm they are connected to the internet and can successfully download the same file (same URL) through a web browser. It's incredibly strange that the script works on some computers and not on others, that the machines are connected to the internet yet cannot download the file, and that the file downloads fine through a browser but not through my script. Could the cause be that they are not admin users?
What can cause this kind of error?
My simple code:
import urllib2

def fetch(URL):  # wrapped in a function so that `return` is valid
    try:
        source_buffer = urllib2.urlopen(URL)
        source_code = source_buffer.read()
        source_buffer.close()
        return source_code
    except Exception, e:
        print e
PS: Do you think this is a proxy error? If it is, can you explain what exactly is going wrong? Proxies have always confused me. I understand that when using a proxy, all HTTP, HTTPS, and FTP requests go through a proxy computer (an intermediary) before going out to the internet, but I don't understand how a proxy can cause this error. What's going wrong? What's occurring?
It could be a proxy, or, looking at the error message, it could also be that local/personal firewall settings are blocking the outgoing requests from your application, or blocking the server's responses from reaching your application. Local firewall settings can easily vary between computers, which might account for the problem.
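If you suspect the proxy, a quick way to narrow it down is to compare what proxy settings urllib detects on a working machine versus a failing one, and to try bypassing the proxy entirely. A minimal sketch in Python 3 syntax (the question uses urllib2, where `ProxyHandler` works the same way):

```python
import urllib.request

# getproxies() reports the proxy settings picked up from the environment
# (and, on Windows, the registry); if these differ between the machines
# where the script works and where it fails, the proxy is a likely cause.
print(urllib.request.getproxies())

# An empty ProxyHandler forces a direct connection, bypassing any
# detected proxy; if this opener succeeds where plain urlopen fails
# (or vice versa), you've found the culprit.
opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))
# source_code = opener.open(URL, timeout=30).read()  # URL as in the question
```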
Related
I'm trying to run a simple Python script which connects to a website and reads a document there, but it fails. Any help would be most welcome, thank you!
The code is as follows:
import urllib.request
fhand = urllib.request.urlopen('http://www.py4inf.com/code/romeo.txt')
for line in fhand:
    print(line.strip())
And I'm getting the following two error messages:
TimeoutError: [WinError 10060] A connection attempt failed because the
connected party did not properly respond after a period of time, or
established connection failed because connected host has failed to
respond
urllib.error.URLError: <urlopen error [WinError 10060] A connection
attempt failed because the connected party did not properly respond
after a period of time, or established connection failed because
connected host has failed to respond>
This is the Windows error, not urllib's. The latter has a handy default timeout of never, but the former depends on the settings of your proxy (the default being 60 seconds for the initial connection and 120 for any further GET requests).
I can't help much with Windows, but at least now you know where to look.
I had this same issue, and tried specifying a timeout in the urllib.request.urlopen, but no dice.
What did work was including a header, as described here. The first answer worked, and so did the second, so I used the simpler, second solution.
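For reference, the header fix mentioned above amounts to sending a browser-like User-Agent, since some servers simply never answer requests that identify themselves as Python's default client, which then surfaces as a 10060 timeout. A sketch (the User-Agent string is just an example):

```python
import urllib.request

url = 'http://www.py4inf.com/code/romeo.txt'
# Supplying a browser-like User-Agent via a Request object; some servers
# silently drop requests that advertise a script-like client.
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
# fhand = urllib.request.urlopen(req, timeout=30)  # then iterate as before
```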
I'm relatively new to Python, but I have been writing some basic scripts for my job to check the status of files on specific servers via FTP. I understand there are better modules for FTP, but due to security restrictions on our work computers we are limited to the basic modules installed on our system, which need to handle FTP, SFTP, and FTPS. pycurl is the only module we can currently work with.
Now pycurl successfully tests the connection by printing the directory and pushing or pulling a file to or from a server via FTP, SFTP, or FTPS. That's not our current issue. The issue is the error response that pycurl spits out: it doesn't display the ACTUAL error that occurred, which you would see with verbose output. If we specify the wrong remote directory, it continues to connect after showing the error in verbose mode, then says something like "Could not access user certificates". We would like to handle the errors so they display what actually occurred. We saw options such as BUFFERERROR, but we haven't figured out how to use them properly. Basically, if a server name is incorrect, we would like it to say that.
Does anybody have experience with pycurl, or know of a debugging technique to catch and display the actual errors? I would greatly appreciate it!
You can debug the error by making use of the VERBOSE option:
c = pycurl.Curl()
c.setopt(pycurl.URL,url)
c.setopt(pycurl.HTTPHEADER, ['Authorization: Bearer ' + token])
c.setopt(pycurl.CUSTOMREQUEST, "PUT")
c.setopt(pycurl.POSTFIELDS,data)
c.setopt(pycurl.VERBOSE, 1)
c.perform()
c.close()
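Beyond VERBOSE, pycurl reports the actual libcurl failure through the `pycurl.error` exception, whose args carry the libcurl error code and message; catching it lets you print, for example, that a host name could not be resolved. A sketch with a deliberately unresolvable host (the URL is made up):

```python
import pycurl

c = pycurl.Curl()
c.setopt(pycurl.URL, 'ftp://no-such-host.invalid/')
try:
    c.perform()
except pycurl.error as exc:
    # exc.args is (libcurl error code, human-readable message),
    # e.g. code 6 for "couldn't resolve host"
    code, message = exc.args
    print('curl error %d: %s' % (code, message))
finally:
    c.close()
```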
An application I'm using has the option to enable an API that, when certain events occur, sends some data to a URL. I configured it to send the data to http://localhost:666/simple/ and used a short program (written by someone else in C#) that takes the data and dumps it to a text file. The author said that you need to run the .exe as administrator to be able to listen for http events, and it did indeed work.
I'm trying to achieve the same using python. I took this short example from the requests library and adapted it to the following:
import requests
url = 'http://localhost:666/simple/'
r = requests.get(url, stream=True)
for line in r.iter_lines():
    print(line)
I launched command prompt with administrator privileges, but when I try to run this script I get the following error: ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it.
Since the other program is working correctly, I am assuming I am doing something wrong in my code and I'm looking for some help to fix it or an alternative to try out.
Requests is used for, well, making requests to a server, not for being a server.
You may want to look at the docs
Use the socket module to listen for data on a concrete port.
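To flesh that out: to receive the events you have to bind the port and accept connections yourself, which the standard library's http.server can do without raw sockets. A minimal sketch, assuming the application POSTs its data (switch to `do_GET` if it doesn't):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class DumpHandler(BaseHTTPRequestHandler):
    """Print the body of every POST the application sends us."""

    def do_POST(self):
        length = int(self.headers.get('Content-Length', 0))
        print(self.rfile.read(length).decode('utf-8', errors='replace'))
        self.send_response(200)  # acknowledge so the sender doesn't retry
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence the default per-request logging

# server = HTTPServer(('localhost', 666), DumpHandler)
# server.serve_forever()  # blocks, handling events until interrupted
```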
I'm looking to get more information about IOError: [Errno socket error] [Errno 10060] when using urlopen in Python 2.7. I am using my personal 35MB/s Internet connection (no proxy).
I've been opening multiple webpages from various websites using a Python script and randomly get this error message from time to time:
webpage = urlopen('http://www.thewebpage.com')
IOError: [Errno socket error] [Errno 10060] A connection attempt
failed because the connected party did not properly respond after a
period of time, or established connection failed because connected
host has failed to respond
This error appeared after trying to open pages from different websites. Therefore, it doesn't seem to be related exclusively to the opening of pages from one particular website. I also got this error using mechanize.
My questions are :
Is this error related to the fact that I am sending multiple requests to the same server within a short amount of time? Would a time-out reduce the chance of getting this error?
Is there any way to prevent it? Could I use a conditional statement to prevent the script from crashing?
My script takes around an hour to run and having to rerun it due to this error is fairly unpleasant.
Sending multiple requests to the same server in short succession could very well cause the server to stop responding, since your requests might look like a DDoS attack. You can catch the exception with a try-except clause and try again.
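A minimal sketch of that retry pattern, in Python 3 syntax (the function name and back-off values are arbitrary):

```python
import time
import urllib.request

def fetch_with_retries(url, retries=3, delay=5, timeout=30):
    """Open `url`, retrying on failure instead of letting a single
    Errno 10060 kill an hour-long run."""
    for attempt in range(retries):
        try:
            return urllib.request.urlopen(url, timeout=timeout).read()
        # URLError and socket timeouts are both OSError subclasses in Python 3
        except OSError:
            if attempt == retries - 1:
                raise  # out of attempts: let the caller see the error
            time.sleep(delay)  # back off before asking the server again
```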
I'm using the PyAPNS module and the Bottle framework in a demo of my app to send push notifications to all registered devices.
At the beginning everything works fine; I've followed the manual for PyAPNS. But after my service has been running in the background on the server for some time, I start to receive this error:
SSLError: [Errno 1] _ssl.c:1217: error:1409F07F:SSL routines:SSL3_WRITE_PENDING:bad write retry
After restarting the service everything works fine again. What should I do about it? And how should I run such a service in the background? (For now I'm just running it in another screen session.)
I had the same issue as you did when using this library. (I'm assuming you are in fact using https://github.com/simonwhitaker/PyAPNs, which is what I'm using; there is at least one other lib out there with a similar name, but I don't think you'd be using that.)
AFAIK, when you're using the simple notification service, the APNS server might hang up on you for reasons including using an incorrect token, having a malformed request, etc. Your connection might also be broken if your network connection drops out. The PyAPNS code doesn't handle such a hangup very gracefully right now: it attempts to re-use the socket even after it has been closed. My experience with the SSL3_WRITE_PENDING error was that I would always see an error such as "error: [Errno 110] Connection timed out" on the socket first, and then get SSL3_WRITE_PENDING when PyAPNS tried to re-use the socket.
If you are seeing the server hangup on you and you want to know why it's doing that, it helps to use the enhanced version of APNS, so that the server will write back info about what you did wrong.
As it happens, there is currently a pull request (https://github.com/simonwhitaker/PyAPNs/pull/23/files) that both moves PyAPNS to use enhanced APNS AND handles disconnections more gracefully. You'll see I commented on that pull request and have created my own fork of PyAPNS that handles disconnections in the way that suited my use case the best.
So you can use the code from the pull request to find out why the APNS server is hanging up on you. And/or you could use it to simplify your failure recovery, so you just retry the send if an exception is thrown rather than having to re-create the APNS object.
Hopefully the pull request will be merged to master soon (possibly including my changes as well).