So I recently stumbled upon this great library for handling HTTP requests in Python, found here: http://docs.python-requests.org/en/latest/index.html.
I love working with it, but I can't figure out how to add headers to my GET requests. Help?
According to the API docs, headers can all be passed to requests.get() via the headers keyword argument:
import requests
r = requests.get("http://www.example.com/", headers={"Content-Type": "text"})
This answer taught me that you can set headers for an entire session:
s = requests.Session()
s.auth = ('user', 'pass')
s.headers.update({'x-test': 'true'})
# both 'x-test' and 'x-test2' are sent
s.get('http://httpbin.org/headers', headers={'x-test2': 'true'})
Bonus: Sessions also handle cookies.
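For example, a quick sketch against httpbin.org (whose /cookies endpoints exist for exactly this kind of testing) shows that a cookie set by one request is sent automatically on the next:
import requests

s = requests.Session()
# The first request sets a cookie; the session stores it in s.cookies.
s.get('http://httpbin.org/cookies/set/sessioncookie/123456789')
# The second request automatically sends the stored cookie back.
r = s.get('http://httpbin.org/cookies')
print(r.json())  # expected: {'cookies': {'sessioncookie': '123456789'}}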
Seems pretty straightforward, according to the docs on the page you linked (emphasis mine).
requests.get(url, params=None, headers=None, cookies=None, auth=None, timeout=None)
Sends a GET request. Returns Response object.
Parameters:
url – URL for the new Request object.
params – (optional) Dictionary of GET Parameters to send with the Request.
headers – (optional) Dictionary of HTTP Headers to send with the Request.
cookies – (optional) CookieJar object to send with the Request.
auth – (optional) AuthObject to enable Basic HTTP Auth.
timeout – (optional) Float describing the timeout of the request.
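Putting a few of those parameters together, here is a minimal sketch; the URL (httpbin.org, an echo service), credentials, and values are placeholders rather than anything from the question:
import requests

# All values below are illustrative placeholders.
r = requests.get(
    "http://httpbin.org/get",
    params={"q": "example"},                 # appended to the URL as ?q=example
    headers={"Accept": "application/json"},  # custom request header
    auth=("user", "pass"),                   # HTTP Basic Auth
    timeout=5.0,                             # seconds to wait before giving up
)
print(r.status_code)
print(r.url)  # shows the query string that was actually sent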
Go to http://myhttpheader.com and copy the attributes you need, typically 'Accept-Language' and 'User-Agent'.
Wrap them in a dictionary (the values below are placeholders for whatever you copied):
headers = {'Accept-Language': '<content copied from myhttpheader>',
           'User-Agent': '<content copied from myhttpheader>'}
Then pass the headers in your request:
requests.get(url=your_url, headers=headers)
I used Selenium for downloading special reports from a webpage where I have to log in. The webpage has an integrated OKTA authentication plugin. I figured it would be better and more efficient to use the internal API requests directly, so I tried to work out how to use the Python requests library with a session, but I have been unsuccessful. I tried this code, but it ends with a 400 error.
payload = {"password":"password","username":"username","options":{"warnBeforePasswordExpired": True,"multiOptionalFactorEnroll": True}}
with requests.Session() as s:
p = s.post('https://sso.johndeere.com/api/v1/authn', data=payload)
r = s.get("requested_url")
print(p)
I am unable to get through the auth. Does anybody have experience with getting past the OKTA auth plugin using the requests library?
Thanks
Best Regards
Merry Christmas and Welcome to Stackoverflow!
Firstly, an HTTP 400 error means that one or more settings are wrong on the client side. You can learn more about it here.
You seem to be missing some important header configuration. You need to set the Content-Type header correctly, otherwise the destination server won't be able to process your data.
Also, as a bonus point: you need to format your payload into a valid JSON string before sending out the request.
import requests
import json

# Set up proper headers
headers = {
    "accept": "application/json, text/plain, */*",
    "content-type": "application/json; charset=UTF-8"
}

# Your body data here
payload = {"password": "password", "username": "username", "options": {"warnBeforePasswordExpired": True, "multiOptionalFactorEnroll": True}}
payload_json = json.dumps(payload)  # Format it into a valid JSON string

with requests.Session() as s:
    p = s.post('https://sso.johndeere.com/api/v1/authn', headers=headers, data=payload_json)
    r = s.get("requested_url")
    print(p.content)
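As an aside, recent versions of requests can do the JSON encoding and the Content-Type header for you via the json= keyword, so a roughly equivalent sketch (same placeholder credentials and URLs as above) would be:
import requests

payload = {"password": "password", "username": "username",
           "options": {"warnBeforePasswordExpired": True,
                       "multiOptionalFactorEnroll": True}}

with requests.Session() as s:
    # json= serializes the dict and sets Content-Type: application/json automatically.
    p = s.post('https://sso.johndeere.com/api/v1/authn', json=payload)
    r = s.get("requested_url")
    print(p.status_code)
    print(p.content)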
This works fine; I can get data returned:
r = urllib2.Request("http://myServer.com:12345/myAction")
data = json.dumps(q)  # q is a python dict
r.add_data(data)
r = urllib2.urlopen(r)
But doing the same with requests package fails:
r=requests.get("http://myServer.com:12345/myAction", data=q)
r.text #This will return a message that says method is not allowed.
It works if I make it a post request: r=requests.post("http://myServer.com:12345/myAction", data=json.dumps(q))
But why?
According to the urllib2.urlopen documentation:
the HTTP request will be a POST instead of a GET when the data parameter is provided.
So r = urllib2.urlopen(r) is actually making a POST request. That is why your requests.get does not work, but requests.post does.
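To make the equivalence explicit, here is a minimal sketch of the requests counterpart of the urllib2 snippet; q and the server URL are placeholders taken from the question:
import json
import requests

q = {"key": "value"}  # placeholder for the dict from the question

# Adding a body to urllib2.Request turned it into a POST, so the
# requests equivalent is an explicit POST with the same JSON body.
r = requests.post("http://myServer.com:12345/myAction", data=json.dumps(q))
print(r.text)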
Set up a session:
import requests

session = requests.Session()
r = session.get("http://myServer.com:12345/myAction", data=q)
print r.content  # or you could use r.raw
I was making slack api calls through python library slackclient which is a wrapper around slack api. However, for some cases I need to make conventional api calls also with url and get/post method. I was trying to open a direct message channel with another user by my bot. The documentation - https://api.slack.com/methods/im.open says to "Present these parameters as part of an application/x-www-form-urlencoded querystring or POST body. application/json is not currently accepted."
Now in Python, I can write:
url = 'https://slack.com/api/im.open'
headers = {'content-type':'x-www-form-urlencoded'}
data = {'token':BOT_TOKEN, 'user':user_id, 'include_locale':'true','return_im':'true'}
r= requests.post(url,headers,data )
print r.text
The message I get is {"ok":false,"error":"not_authed"}
I know the message is "not_authed". Although I use my bot token and another user's id, my hunch is that I'm sending the request in the wrong format, because I just wrote it one way after reading the documentation. I'm not sure exactly how to send these requests.
Any help?
Since the Content-Type header is x-www-form-urlencoded, sending the data as a dictionary does not work. You can try something like this:
import requests

url = 'https://slack.com/api/im.open'
headers = {'content-type': 'x-www-form-urlencoded'}
data = [
    ('token', BOT_TOKEN),
    ('user', user_id),
    ('include_locale', 'true'),
    ('return_im', 'true')
]
r = requests.post(url, data=data, headers=headers)
print r.text
The second parameter in requests.post is used for data, so in your request you're actually posting the headers dictionary. If you want to use headers you can pass arguments by name.
r = requests.post(url, data, headers=headers)
However, this is not necessary in this case, because 'application/x-www-form-urlencoded' is the default Content-Type when posting form data.
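Putting that together, a minimal corrected sketch would look like the following; BOT_TOKEN and user_id are placeholders standing in for the real values from the question:
import requests

BOT_TOKEN = 'xoxb-your-bot-token'  # placeholder
user_id = 'U0XXXXXXX'              # placeholder

url = 'https://slack.com/api/im.open'
data = {'token': BOT_TOKEN, 'user': user_id,
        'include_locale': 'true', 'return_im': 'true'}

# Passing a dict as data= makes requests form-encode the body and set the
# Content-Type: application/x-www-form-urlencoded header automatically.
r = requests.post(url, data=data)
print(r.text)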
I'm trying to make a request to the particle servers in python in a google app engine app.
In my terminal, I can complete the request simply and successfully with requests as:
res = requests.get('https://api.particle.io/v1/devices', params={"access_token": {ACCESS_TOKEN}})
But in my app, the same thing doesn't work with urlfetch, which keeps telling me it can't find the access token:
url = 'https://api.particle.io/v1/devices'
payload = {"access_token": {ACCESS_TOKEN}}
form_data = urllib.urlencode(payload)
res = urlfetch.fetch(
    url=url,
    payload=form_data,
    method=urlfetch.GET,
    headers={
        'Content-Type': 'application/x-www-form-urlencoded'
    },
    follow_redirects=False
)
I have no idea what the problem is, and no way to debug. Thanks!
In a nutshell, your problem is that in your urlfetch sample you're embedding your access token into the request body, and since you're issuing a GET request (which cannot carry a request body), this information gets discarded.
Why does your first snippet work?
Because requests.get() takes that optional params argument that means: "take this dictionary I give you, convert all its key/value pairs into a query string and append it to the main URL"
So, behind the curtains, requests.get() is building a string like this:
https://api.particle.io/v1/devices?access_token=ACCESS_TOKEN
That's the correct endpoint you should point your GET requests to.
Why doesn't your second snippet work?
This time, urlfetch.fetch() uses a different syntax than requests.get() (but an equivalent one). The important bit to note here is that the payload argument doesn't mean the same thing as the params argument you used before with requests.get().
urlfetch.fetch() expects our query string -if any- to be already urlencoded into the URL (that's why urllib.urlencode() comes into play here). On the other hand, payload is where you should put your request body in case you were issuing a POST, PUT or PATCH request, but particle.io's endpoint is not expecting your OAuth access token to be there.
Something like this should work (disclaimer: not tested):
auth = {"access_token": {ACCESS_TOKEN}}
url_params = urllib.urlencode(auth)
url = 'https://api.particle.io/v1/devices?%s' % url_params
res = urlfetch.fetch(
url=url,
method=urlfetch.GET,
follow_redirects=False
)
Notice how we don't need your previous Content-Type header anymore, since we aren't carrying any content after all; hence, the headers parameter can be removed from this example call.
For further reference, take a look at urlfetch.fetch() reference and this SO thread that will hopefully give you a better insight into HTTP methods, parameters and request bodies than my poor explanation here.
PS: If particle.io servers support it (they should), you should move away from this authentication scheme and carry your tokens in an Authorization: Bearer <access_token> header instead. Carrying access tokens in URLs is not a good idea because they are much more visible that way and tend to end up in server logs, which poses a security risk. In a TLS session, on the other hand, all request headers are always encrypted, so your auth tokens are well hidden there.
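If you go that route, a rough sketch of the header-based variant might look like this; it assumes particle.io accepts Bearer tokens (their docs suggest so, but I haven't verified it here), and ACCESS_TOKEN is a placeholder:
from google.appengine.api import urlfetch

ACCESS_TOKEN = 'your-token-here'  # placeholder

res = urlfetch.fetch(
    url='https://api.particle.io/v1/devices',
    method=urlfetch.GET,
    headers={'Authorization': 'Bearer %s' % ACCESS_TOKEN},  # token travels in a header, not the URL
    follow_redirects=False
)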
OK, so, as it turns out, one cannot include a payload with a GET request using urlfetch. Instead, one has to include the parameters in the URL using the '?' syntax, as follows:
url = 'https://api.particle.io/v1/devices'
url = url + '?access_token=' + ACCESS_TOKEN
res = urlfetch.fetch(
    url=url,
    method=urlfetch.GET,
    follow_redirects=False
)
This worked for me.
I am using python requests library for a POST request and I expect a return message with an empty payload. I am interested in the headers of the returned message, specifically the 'Location' attribute. I tried the following code:
response = requests.request(method='POST', url=url, headers={'Content-Type': 'application/json'}, data=data)
print response.headers  # Displays a case-insensitive map
print response.headers['Location']  # blows up
Strangely, the 'Location' attribute is missing from the headers map. If I try the same POST request in Postman, I do get a valid Location attribute. Has anyone else seen this? Is this a bug in the requests library?
Sounds like everything's working as expected? Check your response.history
From the Requests documentation:
Requests will automatically perform location redirection for all verbs except HEAD.
>>> r = requests.get('http://github.com')
>>> r.url
'https://github.com/'
>>> r.status_code
200
>>> r.history
[<Response [301]>]
From the HTTP Location page on wikipedia:
The HTTP Location header field is returned in responses from an HTTP server under two circumstances:
To ask a web browser to load a different web page. In this circumstance, the Location header should be sent with an HTTP status code of 3xx. It is passed as part of the response by a web server when the requested URI has:
Moved temporarily, or
Moved permanently
To provide information about the location of a newly-created resource. In this circumstance, the Location header should be sent with an HTTP status code of 201 or 202.
The requests library follows redirections automatically.
To take a look at the redirections, look at the history of the requests. More details in the docs.
Or pass the extra allow_redirects=False parameter when making the request.
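For example, here is a small sketch showing both approaches; the URL and body are placeholders, not your actual endpoint:
import requests

# Option 1: disable redirects so the original response (and its Location header)
# is returned as-is instead of being followed.
r = requests.post('http://myserver.example/resource',
                  json={'name': 'example'},
                  allow_redirects=False)
print(r.status_code)
print(r.headers.get('Location'))  # present if the server sent one

# Option 2: let requests follow the redirect, then inspect the intermediate
# responses in r.history, where the Location headers live.
r = requests.post('http://myserver.example/resource', json={'name': 'example'})
for resp in r.history:
    print('%s %s' % (resp.status_code, resp.headers.get('Location')))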