I am working on a tool that queries a number of APIs, one of which is a RESTful API. All of the other functions (API calls) of my program work fine with requests.get(); however, with the REST API I do not seem to be able to access the actual content of the response, only the status code. That is, when I simply print the response (not response.status_code), I get <Response [200]> output to the screen. Any ideas?
Snippet of code:
# The URL is correct in my program, for sure.
url = 'http://APIurl/%s' % entry
try:
    response = requests.get(url)
    # prints <Response [200]>
    print response
    # Fails, expecting JSON that isn't there
    results.append(response.json())
Print the response object's attributes to see what it has available:
print response.__dict__
response.text is your friend if the content is not valid JSON.
You need to print response.content or response.text. Your data is probably there.
Sometimes when your request is wrong, the API returns the whole error page (in HTML). So if you're getting a bunch of HTML code, make sure your request parameters are OK.
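When it's unclear what came back, one defensive pattern is to branch on the Content-Type header before attempting to parse. A minimal sketch — extract_payload is a hypothetical helper, and the stub class below merely stands in for a real requests.Response:

```python
import json

def extract_payload(response):
    # Works on anything response-like with .headers and .text,
    # e.g. a requests.Response.
    content_type = response.headers.get("Content-Type", "")
    if "application/json" in content_type:
        return json.loads(response.text)
    # Not JSON (maybe an HTML error page) -- return the raw text instead.
    return response.text

# Stub standing in for a real response, just for illustration.
class FakeResponse(object):
    headers = {"Content-Type": "application/json"}
    text = '{"user": 1234}'

print(extract_payload(FakeResponse()))  # parsed dict, not <Response [200]>
```

With a real response you would call extract_payload(requests.get(url)); the same branch then tells you whether the API sent JSON or an error page.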
Related
I am testing the Python requests library to see if it is suitable for my work. Here is my sample code for reference:
import requests
url = "http://www.genenetwork.org/webqtl/main.py?cmd=sch&gene=Grin2b&tissue=hip&format=text"
print url
print requests.get(url)
My Output:
http://www.genenetwork.org/webqtl/main.py?cmd=sch&gene=Grin2b&tissue=hip&format=text
<Response [200]>
Output that I get from my browser & my expected result:
What makes the difference? How can I get my expected result? I want to process the data inside the webpage.
Your code is currently printing the Response object itself, whose repr shows only the status code of your GET request. You can access the requested content via the text attribute of the Response object returned by the get method.
import requests
r = requests.get("http://www.genenetwork.org/webqtl/main.py?cmd=sch&gene=Grin2b&tissue=hip&format=text")
print r.text
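Since the URL asks for format=text, the body is presumably plain, tab-separated text rather than HTML, so once you have r.text you can split it into records yourself. A rough sketch — the sample body below is invented, and the real column layout may differ:

```python
# Invented sample standing in for r.text; the real API's columns may differ.
body = "Grin2b\thip\t12.5\nGrin2a\thip\t9.1"

# One record per line, one field per tab.
rows = [line.split("\t") for line in body.splitlines()]
for gene, tissue, score in rows:
    print(gene, tissue, score)
```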
I posted another question but made a bit of a mess of it in the comments section. Basically, I am trying to use the requests library in Python to accomplish what I normally would with curl: processing a GET request to an API. From what I have learned from a very helpful person here, I can process the request, as well as the authorisation header, by doing the following:
Original CURL Command that I would normally use:
curl -X GET -H 'Authorization: exapi:111:58d351234e1a:LA2' 'http://api.example.com/v1.14/user?id=1234'
This is the Python code I am using for my script:
import requests
import json
url = "http://api.example.com/v1.14/user?id=1234"
headers = {"Authorization": "exapi:111:58d351234e1a:LA2"}
response = requests.get(url, headers=headers)
print response.json
However, when I run my code, I receive <bound method Response.json of <Response [200]>> instead of the data that I was expecting from the GET. Can someone help me figure out what I am doing wrong here? I am guessing that I am doing something wrong with my header, but I am not sure.
As @juanpa.arrivilaga has already mentioned, and as the printed message clearly says, you need to call the bound json method. The likely source of confusion is content, which is an attribute.
response.json() # method
response.content # attribute
How about using the json module explicitly:
data = json.loads(response.text)
print data
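Both spellings end up in the same place: response.json() is essentially json.loads(response.text) with some encoding handling on top. Illustrated here with a literal string standing in for response.text (the payload is made up):

```python
import json

# Made-up body standing in for response.text
body = '{"id": 1234, "name": "example"}'

# What json.loads(response.text) -- and, in effect, response.json() -- gives you
data = json.loads(body)
print(data["id"])    # -> 1234
print(data["name"])  # -> example
```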
I'm trying to work with the Cisco Prime API, and it seems to work when using Postman (the output is an XML file). However, when I try to reproduce the same with Python:
response = requests.get(url, verify=False, auth=credentials)
print(response)
the only response I get is <Response [200]> (and a warning for disabling SSL, but that's not relevant)... I use requests.auth.HTTPBasicAuth to generate the "credentials" variable.
You need to print the response body with
print(response.json())
Or
print(response.text)
Well, of course, it's just after posting this that I found the answer: first, you need to request the JSON, not the XML, and second, when you want to parse the JSON, you need to call
response.json()
Do it like this to get the response as text:
response = requests.get(url, verify=False, auth=credentials)
print response.text
Best of luck!
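If Postman gives you XML and you want JSON, the usual lever is the Accept header; Cisco Prime is assumed here to honour content negotiation. The sketch below only builds and prepares the request without sending any network traffic, and the URL and credentials are placeholders:

```python
import requests
from requests.auth import HTTPBasicAuth

url = "https://prime.example.com/webacs/api/v1/data/Devices"  # placeholder URL
credentials = HTTPBasicAuth("user", "secret")                 # placeholder creds

req = requests.Request(
    "GET", url,
    auth=credentials,
    headers={"Accept": "application/json"},  # ask for JSON instead of XML
).prepare()

# Inspect what would go on the wire -- no request is actually sent here.
print(req.headers["Accept"])
print("Authorization" in req.headers)
```

Sending it for real is then requests.get(url, headers={"Accept": "application/json"}, auth=credentials, verify=False), after which response.json() should work.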
I'm trying to crawl my college website, and I set the cookie and add headers, then:
homepage=opener.open("website")
content = homepage.read()
print content
I can get the source code sometimes, but sometimes I get nothing at all.
I can't figure out what happened.
Is my code wrong?
Or the web matters?
Can a single geturl() call resolve two or more redirects?
redirect = urllib2.urlopen(info_url)
redirect_url = redirect.geturl()
print redirect_url
It can return the final URL, but sometimes it gives me an intermediate one.
Rather than working around redirects with urlopen, you're probably better off using a more robust requests library: http://docs.python-requests.org/en/latest/user/quickstart/#redirection-and-history
r = requests.get('website', allow_redirects=True)
print r.text
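To see the whole chain rather than just the endpoint, requests keeps the intermediate hops in r.history and the final address in r.url (the requests counterpart of geturl()). The sketch below spins up a throwaway local server purely so the redirect chain is reproducible; all the paths are invented:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Two hops: /start -> /middle -> /final
        if self.path in ("/start", "/middle"):
            target = "/middle" if self.path == "/start" else "/final"
            self.send_response(302)
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"done")

    def log_message(self, fmt, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), RedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

base = "http://127.0.0.1:%d" % server.server_address[1]
r = requests.get(base + "/start")  # redirects are followed by default for GET
print([hop.status_code for hop in r.history])  # the intermediate 302s
print(r.url)  # final URL, like urllib2's geturl()
server.shutdown()
```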
I need to get the output printed on the screen when accessing a URL with a username and password. When I access the URL through my browser, I get a popup where I enter the credentials and get the output in the browser. How do I do it using a Python script? I tried the following, but it only returns <Response [200]>, which means that the request is successful. The output I want is a simple text message.
import requests
response = requests.get(url, auth=(username, password))
print response
I have tried requests.post also, with same results.
print response tries to print out a Response object. If you want the text of the response, use print response.text.
You may want to read the Quickstart documentation for the python-requests library here: http://docs.python-requests.org/en/latest/user/quickstart/.