I have this very simple code to check if a site is up or down.
import httplib2
h = httplib2.Http()
response, content = h.request("http://www.folksdhhkjd.com")
if response.status == 200:
    print "Site is Up"
else:
    print "Site is down"
When I enter a valid URL it properly prints "Site is Up", because the status is 200 as expected. But when I enter an invalid URL, should it not print "Site is down"? Instead it raises an exception, something like this:
Traceback (most recent call last):
File "C:\Documents and Settings\kripya\Desktop\1.py", line 3, in <module>
response, content = h.request("http://www.folksdhhkjd.com")
File "C:\Python27\lib\site-packages\httplib2\__init__.py", line 1436, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "C:\Python27\lib\site-packages\httplib2\__init__.py", line 1188, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "C:\Python27\lib\site-packages\httplib2\__init__.py", line 1129, in _conn_request
raise ServerNotFoundError("Unable to find the server at %s" % conn.host)
ServerNotFoundError: Unable to find the server at www.folksdhhkjd.com
How can I catch this exception and print my custom "Site is down" message instead? Any guidance, please?
EDIT
Also one more question... what is the difference between using
h = httplib2.Http('.cache')
and
h = httplib2.Http()
try:
    response, content = h.request("http://www.folksdhhkjd.com")
    if response.status == 200:
        print "Site is Up"
    else:
        print "Site is Down"
except httplib2.ServerNotFoundError:
    print "Site is Down"
The issue with your code is that if the host cannot be found, the request never returns any status code at all, so the library raises an exception instead (I think it is a peculiarity of the library itself, which does DNS resolution before trying to make the request).
h = httplib2.Http('.cache')
Caches the stuff it retrieves in a directory called .cache so if you do the same request twice it might not have to actually get everything twice; a file starting with a dot is hidden in POSIX filesystems (like on Linux).
h = httplib2.Http()
Doesn't cache its results, so every request actually goes out over the network every time.
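As a small illustration (example.com is just a placeholder), httplib2 marks cache hits with response.fromcache:

import httplib2

# With a cache directory, a repeated GET for the same URL may be served
# straight from '.cache', depending on the response's caching headers.
h = httplib2.Http('.cache')
response1, content1 = h.request("http://www.example.com/")
response2, content2 = h.request("http://www.example.com/")
print response2.fromcache  # True if the second response came from the cache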
Related
I have this piece of code that tries to get the page content from a given url.
import httplib2
start_url = "https://www.somesite.com"
http = httplib2.Http(disable_ssl_certificate_validation=True)
status, response = http.request(start_url)
However, when I run it, I get this error:
Traceback (most recent call last):
File "C:\Documents and Settings\DD\Desktop\crawler.py", line 15, in <module>
resp, content = h.request(start_url, "GET")
File "C:\Python27\lib\site-packages\httplib2-0.9-py2.7.egg\httplib2\__init__.py", line 1593, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "C:\Python27\lib\site-packages\httplib2-0.9-py2.7.egg\httplib2\__init__.py", line 1335, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "C:\Python27\lib\site-packages\httplib2-0.9-py2.7.egg\httplib2\__init__.py", line 1257, in _conn_request
conn.connect()
File "C:\Python27\lib\site-packages\httplib2-0.9-py2.7.egg\httplib2\__init__.py", line 1044, in connect
raise SSLHandshakeError(e)
httplib2.SSLHandshakeError: [Errno 1] _ssl.c:510: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac
As you may have seen, I tried to disable the SSL validation, but with no success.
Any Help?
Thanks!
SSL3_GET_RECORD:decryption failed or bad record mac
This has nothing to do with SSL validation. It might be that the server simply does not talk SSL, or that there are other SSL-related problems, but certificate validation is not one of them at this stage of the communication.
If you provide the real URL or a full packet capture (file or at cloudshark.org) one might analyze the information in more detail.
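If you want to narrow it down yourself, here is a minimal sketch (the hostname is a placeholder) that probes just the TLS handshake with Python's ssl module, independent of httplib2:

import socket
import ssl

host = "www.somesite.com"  # placeholder from the question
try:
    raw = socket.create_connection((host, 443), timeout=10)
    wrapped = ssl.wrap_socket(raw)  # Python 2.7-era API, no certificate validation
    print "Handshake OK, negotiated cipher:", wrapped.cipher()
except ssl.SSLError as e:
    print "Handshake failed at the SSL layer:", e

If the handshake fails here as well, the problem is in the TLS exchange itself rather than in httplib2's validation options.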
I am trying to create an automated test in Python for a YouTube API request and response, with all of the calls happening in quick succession.
What I have been getting is an unstable HTTP response from the server.
I am not using the same object for every connection, since each one is created in a separate method, but when testing I call them all in the same method (i.e. call create, edit, and delete consecutively).
Here is the error that I got:
File "/var/lib/jenkins/shiningpanda/jobs/2a430f4f/virtualenvs/d41d8cd9/local/lib/python2.7/site-packages/oauth2client/client.py", line 490, in new_request
redirections, connection_type)
File "/var/lib/jenkins/shiningpanda/jobs/2a430f4f/virtualenvs/d41d8cd9/local/lib/python2.7/site-packages/httplib2/__init__.py", line 1570, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/var/lib/jenkins/shiningpanda/jobs/2a430f4f/virtualenvs/d41d8cd9/local/lib/python2.7/site-packages/httplib2/__init__.py", line 1317, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/var/lib/jenkins/shiningpanda/jobs/2a430f4f/virtualenvs/d41d8cd9/local/lib/python2.7/site-packages/httplib2/__init__.py", line 1286, in _conn_request
response = conn.getresponse()
File "/usr/lib/python2.7/httplib.py", line 1018, in getresponse
raise ResponseNotReady()
ResponseNotReady
I was thinking that I should put a time.sleep() between each HTTP request?
What do you think, and what would you suggest I do in this case, since I am still learning about this?
Thank you for all the suggestions and help ;)
Basic idea of the code I am using:
yt_service = gdata.youtube.service.YouTubeService()
yt_service.email = 'example@gmail.com'
yt_service.password = 'password'

def GetAndPrintUserUploads(username):
    yt_service = gdata.youtube.service.YouTubeService()
    uri = 'http://gdata.youtube.com/feeds/api/users/%s/uploads' % username
    PrintVideoFeed(yt_service.GetYouTubeVideoFeed(uri))
and for testing it I authenticate the user and print the user's uploads a few times after that, consecutively.
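Purely to illustrate the delay idea from the question (not a confirmed fix), here is a sketch that spaces out the consecutive calls, reusing GetAndPrintUserUploads from the snippet above:

import time

# Hypothetical test loop: call the API a few times in a row, pausing
# between calls so each response is fully handled before the next request.
for _ in range(3):
    GetAndPrintUserUploads('someusername')  # placeholder username
    time.sleep(2)  # arbitrary pause; tune or drop as needed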
I have the following code in a Python desktop application that authorizes users before using the AppHarbor API. I am following the steps mentioned in the knowledge base and have the following authentication code:
def OnAuthenticate(self, event):
    client_id = ""  # My App's client id
    client_secret_key = ""  # My App's secret key
    consumer = oauth2.Consumer(key=client_id, secret=client_secret_key)
    request_token_url = "https://appharbor.com/user/authorizations/new?client_id="+client_id+"&redirect_uri=http://localhost:8095"
    client = oauth2.Client(consumer)
    resp, content = client.request(request_token_url, "GET")
    ...
However, on sending the request the response is incorrect; this is the error:
client.request(request_token_url, "GET")
TypeError: must be string or buffer, not None
Is there something that I am missing here?
Edit: Following is the stack trace that is thrown:
resp, content = client.request(request_token_url, "GET")
File "C:\Python27\Lib\site-packages\oauth2-1.5.211-py2.7.egg\oauth2\__init__.py", line 682, in request
connection_type=connection_type)
File "C:\Python27\lib\site-packages\httplib2-0.7.4-py2.7.egg\httplib2\__init__.py", line 1544, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "C:\Python27\lib\site-packages\httplib2-0.7.4-py2.7.egg\httplib2\__init__.py", line 1342, in _request
(response, content) = self.request(location, redirect_method, body=body, headers = headers, redirections = redirections - 1)
File "C:\Python27\Lib\site-packages\oauth2-1.5.211-py2.7.egg\oauth2\__init__.py", line 662, in request
req.sign_request(self.method, self.consumer, self.token)
File "C:\Python27\Lib\site-packages\oauth2-1.5.211-py2.7.egg\oauth2\__init__.py", line 493, in sign_request
self['oauth_body_hash'] = base64.b64encode(sha(self.body).digest())
TypeError: must be string or buffer, not None
Upon debugging into the call, I reached the httplib2._request function, which issued the request:
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
This resulted in the following page with a 302 response (presented in the content object):
<html><head><title>Object moved</title></head><body>
<h2>Object moved to <a href="https://appharbor.com/session/new?returnUrl=%2Fuser%2Fauthorizations%2Fnew%3Foauth_body_hash%3D2jmj7l5rSw0yVb%252FvlWAYkK%252FYBwk%253D%26oauth_nonce%3D85804131%26oauth_timestamp%3D1340873274%26oauth_consumer_key%3D26bacb38-ce5a-4699-9342-8e496c16dc49%26oauth_signature_method%3DHMAC-SHA1%26oauth_version%3D1.0%26redirect_uri%3Dhttp%253A%252F%252Flocalhost%253A8095%26client_id%3D26bacb38-ce5a-4699-9342-8e496c16dc49%26oauth_signature%3DXQtYvWIsvML9ZM6Wfs1Wp%252Fy3No8%253D">here</a>.</h2>
</body></html>
The function then followed the redirect, issuing another request with body set to None, which resulted in the error that was thrown.
(response, content) = self.request(location, redirect_method, body=body, headers = headers, redirections = redirections - 1)
I'm not familiar with the Python lib, but have you considered whether this is because you need to take the user through the three-legged OAuth flow (like Twitter's) and use the URL you mention in your question as the authorize_url? Once you have the code, you retrieve the token by POSTing to this URL: https://appharbor.com/tokens.
You might also want to take a closer look at the desktop OAuth .NET sample to get a better understanding of how this works.
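As a rough sketch only: once the user has authorized your app and you have the code back on your redirect_uri, the token exchange might look something like the following. The parameter names (client_id, client_secret, code) are assumptions on my part; check the AppHarbor knowledge base for the exact fields.

import urllib
import httplib2

def exchange_code_for_token(client_id, client_secret, code):
    # Hypothetical token exchange: POST the authorization code to the
    # /tokens endpoint mentioned above and read back the access token.
    http = httplib2.Http()
    body = urllib.urlencode({
        'client_id': client_id,
        'client_secret': client_secret,
        'code': code,
    })
    headers = {'Content-Type': 'application/x-www-form-urlencoded'}
    response, content = http.request('https://appharbor.com/tokens', 'POST',
                                     body=body, headers=headers)
    return response, content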
I have a web-service deployed in my box. I want to check the result of this service with various input. Here is the code I am using:
import sys
import httplib
import urllib

apUrl = "someUrl:somePort"
fileName = sys.argv[1]
conn = httplib.HTTPConnection(apUrl)

titlesFile = open(fileName, 'r')
try:
    for title in titlesFile:
        title = title.strip()
        params = urllib.urlencode({'search': 'abcd', 'text': title})
        conn.request("POST", "/somePath/", params)
        response = conn.getresponse()
        data = response.read().strip()
        print data + "\t" + title
        conn.close()
finally:
    titlesFile.close()
This code gives an error after the same number of lines have been printed (28233). Error message:
Traceback (most recent call last):
File "testService.py", line 19, in ?
conn.request("POST", "/somePath/", params)
File "/usr/lib/python2.4/httplib.py", line 810, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python2.4/httplib.py", line 833, in _send_request
self.endheaders()
File "/usr/lib/python2.4/httplib.py", line 804, in endheaders
self._send_output()
File "/usr/lib/python2.4/httplib.py", line 685, in _send_output
self.send(msg)
File "/usr/lib/python2.4/httplib.py", line 652, in send
self.connect()
File "/usr/lib/python2.4/httplib.py", line 636, in connect
raise socket.error, msg
socket.error: (99, 'Cannot assign requested address')
I am using Python 2.4.3. I am calling conn.close() as well, so why is this error being raised?
This is not a python problem.
In Linux kernel 2.4 the ephemeral port range is from 32768 through 61000, so the number of available ports is 61000 - 32768 + 1 = 28233. From what I understood, because the web service in question is quite fast (<5 ms actually), all the ports get used up. The program has to wait for about a minute or two for the ports to close.
What I did was count the number of conn.close() calls; when the count reached 28000, I waited for 90 seconds and reset the counter (see the sketch below).
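A compact sketch of that workaround; close_with_backoff is a hypothetical helper name, and 28000/90 are the figures mentioned above:

import time

def close_with_backoff(conn, closed, limit=28000, pause=90):
    # Close the connection as before, but every `limit` closes,
    # pause long enough for TIME_WAIT sockets to drain.
    conn.close()
    closed += 1
    if closed >= limit:
        time.sleep(pause)
        closed = 0
    return closed

Inside the loop from the question, the bare conn.close() would then become closed = close_with_backoff(conn, closed).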
BIGYaN identified the problem correctly and you can verify that by calling "netstat -tn" right after the exception occurs. You will see very many connections with state "TIME_WAIT".
The alternative to waiting for port numbers to become available again is simply to use one connection for all requests. You are not required to call conn.close() after each call to conn.request(); you can just leave the connection open until you are done with your requests, as sketched below.
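A minimal sketch of that single-connection variant, based on the code from the question (the host and path are the placeholders used there):

import sys
import httplib
import urllib

conn = httplib.HTTPConnection("someUrl:somePort")  # placeholder from the question
titlesFile = open(sys.argv[1], 'r')
try:
    for title in titlesFile:
        title = title.strip()
        params = urllib.urlencode({'search': 'abcd', 'text': title})
        conn.request("POST", "/somePath/", params)
        response = conn.getresponse()
        # read the whole body before reusing the connection for the next request
        print response.read().strip() + "\t" + title
finally:
    titlesFile.close()
    conn.close()  # a single close, after all the requests are done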
I too faced a similar issue while executing multiple POST requests using Python's requests library in Spark. To make it worse, I used multiprocessing on each executor to post to a server, so thousands of connections were created within seconds, each taking a few seconds to leave the TIME_WAIT state and release its port for the next set of connections.
Out of all the solutions available on the internet that speak of disabling keep-alive, using requests.Session() et al., I found this one to work: pass 'Connection': 'close' as a header parameter. You may need to put the header content on a separate line outside the post command though.
import requests

headers = {
    'Connection': 'close'
}

with requests.Session() as session:
    # 'files' is assumed to be defined elsewhere in the original code
    response = session.post('https://xx.xxx.xxx.x/xxxxxx/x', headers=headers, files=files, verify=False)
    results = response.json()
    print results
This is my answer to a similar issue, using the above solution.
I am trying to use the python-rest-client ( http://code.google.com/p/python-rest-client/wiki/Using_Connection ) to perform testing of some RESTful webservices. Since I'm just learning, I've been pointing my tests at the sample services provided at http://www.predic8.com/rest-demo.htm.
I have no problems with creating entries, updating entries, or retrieving entries (POST and GET requests). When I try to make a DELETE request, it fails. I can use the Firefox REST Client to perform DELETE requests and they work, and I can also make DELETE requests against other services, but I've been driving myself crazy trying to figure out why it doesn't work in this case. I'm using Python 3 with an updated Httplib2, but I also tried Python 2.5 so that I could use python-rest-client with its included version of Httplib2. I see the same problem in either case.
The code is simple, matching the documented use:
from restful_lib import Connection
self.base_url = "http://www.thomas-bayer.com"
self.conn = Connection(self.base_url)
response = self.conn.request_delete('/sqlrest/CUSTOMER/85')
I've looked at the resulting HTTP requests from the browser tool and from my code and I can't see why one works and the other doesn't. This is the trace I receive:
Traceback (most recent call last):
File "/home/fmk/python/rest-client/src/TestExampleService.py", line 68, in test_CRUD
self.Delete()
File "/home/fmk/python/rest-client/src/TestExampleService.py", line 55, in Delete
response = self.conn.request_delete('/sqlrest/CUSTOMER/85')
File "/home/fmk/python/rest-client/src/restful_lib.py", line 64, in request_delete
return self.request(resource, "delete", args, headers=headers)
File "/home/fmk/python/rest-client/src/restful_lib.py", line 138, in request
resp, content = self.h.request("%s://%s%s" % (self.scheme, self.host, '/'.join(request_path)), method.upper(), body=body, headers=headers )
File "/home/fmk/python/rest-client/src/httplib2/__init__.py", line 1175, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/home/fmk/python/rest-client/src/httplib2/__init__.py", line 931, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/home/fmk/python/rest-client/src/httplib2/__init__.py", line 897, in _conn_request
response = conn.getresponse()
File "/usr/lib/python3.2/http/client.py", line 1046, in getresponse
response.begin()
File "/usr/lib/python3.2/http/client.py", line 346, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.2/http/client.py", line 316, in _read_status
raise BadStatusLine(line)
http.client.BadStatusLine: ''
What's breaking, and what do I do about it? Actually, I'd settle for advice on debugging it. I've changed the domain in my script and pointed it at my own machine so I could view the request, and I've viewed/modified the Firefox requests in Burp Proxy to make them match my script's requests. The modified Burp requests still work and the Python requests still don't.
Apparently the issue is that the server expects there to be some message body for DELETE requests. That's an unusual expectation for a DELETE, but by specifying Content-Length:0 in the headers, I'm able to successfully perform DELETEs.
Somewhere along the way (in python-rest-client or httplib2), the Content-Length header is wiped out if I try to do:
from restful_lib import Connection
self.base_url = "http://www.thomas-bayer.com"
self.conn = Connection(self.base_url)
response = self.conn.request_delete('/sqlrest/CUSTOMER/85', headers={'Content-Length':'0'})
Just to prove the concept, I went to the point in the stack trace where the request was happening:
File "/home/fmk/python/rest-client/src/httplib2/__init__.py", line 897, in _conn_request
response = conn.getresponse()
I printed the headers parameter there to confirm that the content length wasn't there, then I added:
if method == 'DELETE':
    headers['Content-Length'] = '0'
before the request.
I think the real answer is that the service is wonky, but at least I got to know httplib2 a little better. I've seen some other confused people looking for help with REST and Python, so hopefully I'm not the only one who got something out of this.
The following script correctly produces a 404 response from the server:
#!/usr/bin/env python3
import http.client
h = http.client.HTTPConnection('www.thomas-bayer.com', timeout=10)
h.request('DELETE', '/sqlrest/CUSTOMER/85', headers={'Content-Length': 0})
response = h.getresponse()
print(response.status, response.version)
print(response.info())
print(response.read()[:77])
python -V => 3.2
curl -X DELETE http://www.thomas-bayer.com/sqlrest/CUSTOMER/85
curl: (52) Empty reply from server
The Status-Line is not optional; an HTTP server must return it, or at least send a 411 Length Required response.
curl -H 'Content-length: 0' -X DELETE \
http://www.thomas-bayer.com/sqlrest/CUSTOMER/85
Correctly returns 404.