I would like to be able to enter a server response code and have Requests tell me what the code means. For example, code 200 --> ok
I found a link to the source code which shows the dictionary structure of the codes and descriptions. I see that Requests will return a response code for a given description:
print requests.codes.processing # returns 102
print requests.codes.ok # returns 200
print requests.codes.not_found # returns 404
But not the other way around:
print requests.codes[200] # returns None
print requests.codes.viewkeys() # returns dict_keys([])
print requests.codes.keys() # returns []
I thought this would be a routine task, but I cannot seem to find an answer in online searches or in the documentation.
Alternatively, for Python 2.x, you can use httplib.responses:
>>> import httplib
>>> httplib.responses[200]
'OK'
>>> httplib.responses[404]
'Not Found'
In Python 3.x, use the http.client module:
In [1]: from http.client import responses
In [2]: responses[200]
Out[2]: 'OK'
In [3]: responses[404]
Out[3]: 'Not Found'
One possibility:
>>> import requests
>>> requests.status_codes._codes[200]
('ok', 'okay', 'all_ok', 'all_okay', 'all_good', '\\o/', '\xe2\x9c\x93')
The first value in the tuple is used as the conventional code key.
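Since the first alias is the conventional key, that mapping can be inverted to get a code-to-name lookup. A sketch of the pattern, shown here on a small hand-copied subset rather than requests' full table:

```python
# Subset of requests.status_codes._codes, copied here for illustration.
_codes = {
    102: ('processing',),
    200: ('ok', 'okay', 'all_ok'),
    404: ('not_found', '-o-'),
}

# Reverse lookup: numeric code -> conventional (first) name.
code_to_name = {code: names[0] for code, names in _codes.items()}

print(code_to_name[200])  # ok
print(code_to_name[404])  # not_found
```

The same dict comprehension works on the real requests.status_codes._codes, though _codes is a private attribute and could change between versions.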
I had the same problem before and found the answer in this question.
Basically:
responsedata.status_code - gives you the integer status code
responsedata.reason - gives the text/string representation of the status code
requests.status_codes.codes.OK
works nicely and makes it more readable in my application code
Notice that in the source code, requests.status_codes.codes is of type LookupDict, which overrides the __getitem__ method.
You can see all the supported keys with dir(requests.status_codes.codes).
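That overridden __getitem__ is why requests.codes[200] silently gives None. A simplified sketch of the behavior (not requests' actual source, but the same idea): entries are stored as attributes, and dict-style lookup of anything else falls back to None instead of raising KeyError.

```python
class LookupDict(dict):
    """Simplified sketch of requests' LookupDict behavior."""

    def __getitem__(self, key):
        # Entries live in the instance __dict__, so dict-style lookup
        # returns None for anything not set as an attribute.
        return self.__dict__.get(key, None)

codes = LookupDict()
codes.ok = 200

print(codes.ok)      # 200
print(codes['ok'])   # 200
print(codes[200])    # None -- the integer 200 was never a key
```

This explains why the attribute lookups in the question work while the integer subscripts quietly return None.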
When using it in combination with Flask, I like to use the following enum from the flask-api plugin:
from flask_api import status
This gives a more descriptive version of the HTTP status codes, as in:
status.HTTP_200_OK
With Python 3.x this will work
>>> from http import HTTPStatus
>>> HTTPStatus(200).phrase
'OK'
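Beyond .phrase, the HTTPStatus enum also exposes the constant's name and a longer description, which can be handy for logging. For example:

```python
from http import HTTPStatus

status = HTTPStatus(404)
print(status.phrase)       # Not Found
print(status.name)         # NOT_FOUND
print(status.description)  # a longer explanation of the status
```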
I am trying to do two things with Python:
Execute a command at the terminal
Save the output of the above command to a variable.
This is my code so far:
import subprocess
ip_address = "http://ipwho.is/104.123.204.11"
query_reply = subprocess.run(["curl", ip_address])
print("The exit code was: %d" % query_reply.returncode)
query_reply only captures the return code, since I am using query_reply.returncode, whereas I want to save the full output of the command curl http://ipwho.is/104.123.204.11 to a variable.
This output is a dict-like structure -
{"ip":"104.123.204.11","success":true,"type":"IPv4","continent":"North America","continent_code":"NA","country":"United States","country_code":"US","region":"California","region_code":"CA","city":"San Jose","latitude":37.3382082,"longitude":-121.8863286,"is_eu":false,"postal":"95113","calling_code":"1","capital":"Washington D.C.","borders":"CA,MX","flag":{"img":"https:\/\/cdn.ipwhois.io\/flags\/us.svg","emoji":"\ud83c\uddfa\ud83c\uddf8","emoji_unicode":"U+1F1FA U+1F1F8"},"connection":{"asn":16625,"org":"Akamai Technologies, Inc.","isp":"Akamai Technologies, Inc.","domain":"gwu.edu"},"timezone":{"id":"America\/Los_Angeles","abbr":"PDT","is_dst":true,"offset":-25200,"utc":"-07:00","current_time":"2022-07-25T11:26:47-07:00"}}
The final goal is to access the fields like the region, city etc. inside the above structure. What is the general process to approach this sort of problem?
There is an argument, capture_output, to capture the output:
import subprocess
import json
r = subprocess.run(["curl", "http://ipwho.is/104.123.204.11"], capture_output=True)
djson = json.loads(r.stdout.decode())
djson["region"], djson["city"]
Or better, skip the subprocess and query it directly:
import requests
with requests.get("http://ipwho.is/104.123.204.11") as response:
    djson = response.json()
djson["region"], djson["city"]
You can use subprocess.check_output, which returns the subprocess output as a string:
import subprocess
import json
raw = subprocess.check_output(["curl", "http://ipwho.is/104.123.204.11"])
data = json.loads(raw)
print(data)
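One thing to note about check_output: if the command exits with a non-zero status, it raises subprocess.CalledProcessError instead of returning the output. A small self-contained sketch (using a Python child process so it runs anywhere):

```python
import subprocess
import sys

try:
    # Run a child process that deliberately exits with code 3;
    # check_output raises CalledProcessError instead of returning.
    subprocess.check_output([sys.executable, "-c", "import sys; sys.exit(3)"])
except subprocess.CalledProcessError as exc:
    print("command failed with exit code", exc.returncode)  # 3
```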
Although this works, it's rarely if ever a good idea to shell out to curl to retrieve a URL in Python. Instead, use the excellent requests library to get the URL directly:
import requests
req = requests.get("http://ipwho.is/104.123.204.11")
req.raise_for_status()
data = req.json()
print(data)
This is simpler, handles errors better (raise_for_status will raise an easy-to-understand exception if the server returns an error code), is faster, and does not depend on the curl program being present.
(Note: although similar to the other answer, the code snippets in this answer should work directly without modification).
Hi, I am new to Python. I use Python 3 on a Mac; I don't know if this is relevant. Now to the question: I need data from an API for school, but I get an error.
<module 'requests' from '/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/__init__.py'>. Can somebody explain what this means?
import requests
requests.get('https://api.github.com')
print(requests)
You are printing the module requests instead of the response of your request.
Try this one:
import requests
res = requests.get('https://api.github.com')
print(res.content)
So I'm trying to send a request to a webpage and read its response. I wrote some code that compares the request's result with the page itself, and I can't get the same page text. Am I using requests correctly?
I really think I misunderstand how the requests function works and what it does. Can someone help me, please?
import requests
import urllib
def search():
    pr = {'q': 'pink'}
    r = requests.get('http://stackoverflow.com/search', params=pr)
    returntext = r.text
    urllibtest(returntext)

def urllibtest(returntext):
    connection = urllib.urlopen("http://stackoverflow.com/search?q=pink")
    output = connection.read()
    connection.close()
    if output == returntext:
        print("ITS THE SAME PAGE")
    else:
        print("ITS NOT THE SAME PAGE")

search()
First of all, there is no good reason to expect two different Stack Overflow searches to return exactly the same response anyway.
There is one logical difference here too: requests automatically decodes the output for you:
>>> type(output)
str
>>> type(r.text)
unicode
You can use the content instead if you don't want it decoded, and use a more predictable source to see the same content returned - for example:
>>> r1 = urllib.urlopen('http://httpbin.org').read()
>>> r2 = requests.get('http://httpbin.org').content
>>> r1 == r2
True
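The underlying distinction is raw bytes versus decoded text. In Python 3 terms, the same idea looks like this (a sketch with a made-up response body):

```python
# .content corresponds to the raw bytes of the body; .text is those
# bytes decoded using the charset the server advertised.
raw = b'\xe2\x9c\x93 search results'   # hypothetical response body (UTF-8)
text = raw.decode('utf-8')

print(type(raw))   # <class 'bytes'>
print(type(text))  # <class 'str'>
print(text)        # starts with a check mark character
```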
I'm playing around in Python and there's a URL that I'm trying to use which goes like this:
https://[username#domain.com]:[password]#domain.com/blah
This is my code:
response =urllib2.urlopen("https://[username#domain.com]:[password]#domain.com/blah")
html = response.read()
print ("data="+html)
This isn't going through; it doesn't like the # symbols, and probably the : too. I tried searching and read something about unquote, but that's not doing anything. This is the error I get:
raise InvalidURL("nonnumeric port: '%s'" % host[i+1:])
httplib.InvalidURL: nonnumeric port: 'password#updates.opendns.com'
How do I get around this? The actual site is "https://updates.opendns.com/nic/update?hostname=
thank you!
URIs have a bunch of reserved characters separating distinguishable parts of the URI (/, ?, &, # and a few others). If any of these characters appears in either username (# does in your case) or password, they need to be percent encoded or the URI becomes invalid.
In Python 3:
>>> from urllib import parse
>>> parse.quote("p#ssword?")
'p%23ssword%3F'
In Python 2:
>>> import urllib
>>> urllib.quote("p#ssword?")
'p%23ssword%3F'
Also, don't put the username and password in square brackets, this is not valid either.
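Putting it together, the encoded pieces can be substituted into the URL. A sketch with hypothetical credentials, using the Python 3 names:

```python
from urllib import parse

# Hypothetical credentials; '@', '#' and '?' must be percent-encoded
# before they may appear in the userinfo part of a URL.
username = parse.quote("username@domain.com", safe="")
password = parse.quote("p#ssword?", safe="")

url = "https://%s:%s@domain.com/blah" % (username, password)
print(url)
# https://username%40domain.com:p%23ssword%3F@domain.com/blah
```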
Use urlencode! Not sure if urllib2 has it, but urllib has an urlencode function. One sec and I'll get back to you.
I did a quick check, and it seems that you need to use urllib instead of urllib2 for that... importing urllib and then using urllib.urlencode(YOUR URL) should work!
import urllib
url = urllib.urlencode(<your_url_here>)
EDIT: it's actually urllib2.quote()!
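For what it's worth, urlencode and quote solve different problems: urlencode builds a query string from key/value pairs, while quote escapes reserved characters in a single URL component. In Python 3 terms (a sketch with made-up parameters):

```python
from urllib.parse import urlencode, quote

# urlencode: mapping -> query string, joined with '&'
print(urlencode({"hostname": "test.example.com", "myip": "1.2.3.4"}))
# hostname=test.example.com&myip=1.2.3.4

# quote: escape reserved characters within one component
print(quote("user@example.com", safe=""))
# user%40example.com
```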
I was looking into the python urllib2 download size question.
Although the method RanRag or jterrace suggested worked fine for me, I was wondering how to use the urllib2.Request.get_header method to achieve the same thing. So, I tried the lines of code below:
>>> import urllib2
>>> req_info = urllib2.Request('http://mirror01.th.ifl.net/releases//precise/ubuntu-12.04-desktop-i386.iso')
>>> req_info.header_items()
[]
>>> req_info.get_header('Content-Length')
>>>
As you can see, get_header returned nothing, and neither did header_items.
So, what is the correct way to use the above methods?
The urllib2.Request class is just "an abstraction of a URL request" (http://docs.python.org/library/urllib2.html#urllib2.Request) and does not actually retrieve any data. You must use urllib2.urlopen to retrieve data. urlopen either takes the URL directly as a string, or you can pass it an instance of the Request object.
For example:
>>> req_info = urllib2.urlopen('https://www.google.com/logos/2012/javelin-2012-hp.jpg')
>>> req_info.headers.keys()
['content-length', 'x-xss-protection', 'x-content-type-options', 'expires', 'server', 'last-modified', 'connection', 'cache-control', 'date', 'content-type']
>>> req_info.headers.getheader('Content-Length')
'52741'
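The same distinction carries over to Python 3's urllib.request: a Request object alone never contacts the server, so it has no headers to report until urlopen performs the retrieval. A sketch (the urlopen call is shown commented out, since it would actually hit the network):

```python
from urllib.request import Request, urlopen

# A Request is just a description of the request; nothing has been
# sent yet, so it carries no headers.
req = Request('http://mirror01.th.ifl.net/releases//precise/ubuntu-12.04-desktop-i386.iso')
print(req.header_items())  # []

# Only urlopen actually performs the retrieval and exposes
# the response headers:
# resp = urlopen(req)
# print(resp.headers.get('Content-Length'))
```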