Python POST binary data

I am writing some code to interface with Redmine, and I need to upload some files as part of the process, but I am not sure how to do a POST request from Python containing a binary file.
I am trying to mimic this command:
curl --data-binary "@image.png" -H "Content-Type: application/octet-stream" -X POST -u login:password http://redmine/uploads.xml
in Python (below), but it does not seem to work. I am not sure if the problem is somehow related to encoding the file or if something is wrong with the headers.
import urllib2, os

FilePath = r"C:\somefolder\somefile.7z"
FileData = open(FilePath, "rb")
length = os.path.getsize(FilePath)

password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_manager.add_password(None, 'http://redmine/', 'admin', 'admin')
auth_handler = urllib2.HTTPBasicAuthHandler(password_manager)
opener = urllib2.build_opener(auth_handler)
urllib2.install_opener(opener)

request = urllib2.Request(r'http://redmine/uploads.xml', FileData)
request.add_header('Content-Length', '%d' % length)
request.add_header('Content-Type', 'application/octet-stream')

try:
    response = urllib2.urlopen(request)
    print response.read()
except urllib2.HTTPError as e:
    error_message = e.read()
    print error_message
I have access to the server and it looks like an encoding error:
...
invalid byte sequence in UTF-8
Line: 1
Position: 624
Last 80 unconsumed characters:
7z¼¯'ÅÐз2^Ôøë4g¸R<süðí6kĤª¶!»=}jcdjSPúá-º#»ÄAtD»H7Ê!æ½]j):
(further down)
Started POST "/uploads.xml" for 192.168.0.117 at 2013-01-16 09:57:49 -0800
Processing by AttachmentsController#upload as XML
WARNING: Can't verify CSRF token authenticity
Current user: anonymous
Filter chain halted as :authorize_global rendered or redirected
Completed 401 Unauthorized in 13ms (ActiveRecord: 3.1ms)

Basically what you do is correct. Looking at the Redmine docs you linked to, it seems that the suffix after the dot in the URL denotes the type of posted data (.json for JSON, .xml for XML), which agrees with the response you get: Processing by AttachmentsController#upload as XML. I guess there may be a bug in the docs, and to post binary data you should try using the http://redmine/uploads URL instead of http://redmine/uploads.xml.
Btw, I highly recommend the very good and very popular Requests library for HTTP in Python. It's much better than what's in the standard lib (urllib2). It supports authentication as well, but I skipped it for brevity here.
import requests

with open('./x.png', 'rb') as f:
    data = f.read()

res = requests.post(url='http://httpbin.org/post',
                    data=data,
                    headers={'Content-Type': 'application/octet-stream'})

# let's check if what we sent is what we intended to send...
import json
import base64

assert base64.b64decode(res.json()['data'][len('data:application/octet-stream;base64,'):]) == data
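For completeness, here is a rough sketch of the same upload pointed at the Redmine endpoint from the question, with the basic auth that I skipped above (the URL, path and admin/admin credentials are just the ones from the question; per the note above you may also want to try the /uploads URL without the .xml suffix):

import requests
from requests.auth import HTTPBasicAuth

with open(r'C:\somefolder\somefile.7z', 'rb') as f:
    res = requests.post('http://redmine/uploads.xml',
                        data=f,  # requests streams the file object as the raw request body
                        headers={'Content-Type': 'application/octet-stream'},
                        auth=HTTPBasicAuth('admin', 'admin'))
print(res.status_code)
print(res.text)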
UPDATE
To find out why this works with Requests but not with urllib2, we have to examine the difference in what's being sent. To see this, I'm sending the traffic to an HTTP proxy (Fiddler) running on port 8888:
Using Requests
import requests

data = 'test data'
res = requests.post(url='http://localhost:8888',
                    data=data,
                    headers={'Content-Type': 'application/octet-stream'})
we see
POST http://localhost:8888/ HTTP/1.1
Host: localhost:8888
Content-Length: 9
Content-Type: application/octet-stream
Accept-Encoding: gzip, deflate, compress
Accept: */*
User-Agent: python-requests/1.0.4 CPython/2.7.3 Windows/Vista
test data
and using urllib2
import urllib2
data = 'test data'
req = urllib2.Request('http://localhost:8888', data)
req.add_header('Content-Length', '%d' % len(data))
req.add_header('Content-Type', 'application/octet-stream')
res = urllib2.urlopen(req)
we get
POST http://localhost:8888/ HTTP/1.1
Accept-Encoding: identity
Content-Length: 9
Host: localhost:8888
Content-Type: application/octet-stream
Connection: close
User-Agent: Python-urllib/2.7
test data
I don't see any differences which would warrant the different behavior you observe. Having said that, it's not uncommon for HTTP servers to inspect the User-Agent header and vary behavior based on its value. Try changing the headers sent by Requests one by one, making them the same as those sent by urllib2, and see when it stops working.
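A minimal sketch of that experiment, reusing the header names and values from the two captures above (comment entries out one at a time to see which difference, if any, changes the server's behaviour):

import requests

data = 'test data'
# headers copied from the urllib2 capture above
urllib2_like_headers = {
    'Content-Type': 'application/octet-stream',
    'Accept-Encoding': 'identity',
    'User-Agent': 'Python-urllib/2.7',
    'Connection': 'close',
}
res = requests.post('http://localhost:8888', data=data, headers=urllib2_like_headers)
print(res.status_code)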

This has nothing to do with a malformed upload. The HTTP error clearly specifies 401 Unauthorized, and the log tells you the CSRF token could not be verified. Try sending a valid CSRF token with the upload.
More about CSRF tokens here:
What is a CSRF token? What is its importance and how does it work?
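If you want to try that, here is a minimal sketch with requests, assuming the server accepts the token in the X-CSRF-Token header (the usual Rails convention) and that you have already obtained a valid token value; the token string below is a hypothetical placeholder:

import requests

csrf_token = 'TOKEN-VALUE-FROM-THE-SERVER'  # hypothetical placeholder, not a real token

with open(r'C:\somefolder\somefile.7z', 'rb') as f:
    res = requests.post('http://redmine/uploads.xml',
                        data=f,
                        headers={'Content-Type': 'application/octet-stream',
                                 'X-CSRF-Token': csrf_token},
                        auth=('admin', 'admin'))
print(res.status_code)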

You need to add a Content-Disposition header, something like this (I used mod_python here, but the principle should be the same):
request.headers_out['Content-Disposition'] = 'attachment; filename=%s' % myfname
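In the urllib2 code from the question, the equivalent would be something like this (a sketch; the filename is whatever you want the server to record for the upload):

import os
request.add_header('Content-Disposition',
                   'attachment; filename=%s' % os.path.basename(FilePath))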

You can use unirest; it provides an easy way to make POST requests.
import unirest

def callback(response):
    print "code:" + str(response.code)
    print "******************"
    print "headers:" + str(response.headers)
    print "******************"
    print "body:" + str(response.body)
    print "******************"
    print "raw_body:" + str(response.raw_body)

# consume async post request
def consumePOSTRequestASync():
    params = {'test1': 'param1', 'test2': 'param2'}
    # we need to pass a dummy open file handle because unirest does not
    # provide a flag to switch between application/x-www-form-urlencoded
    # and multipart/form-data
    params['dummy'] = open('dummy.txt', 'r')
    url = 'http://httpbin.org/post'
    headers = {"Accept": "application/json"}
    # call the post service with headers and params
    unirest.post(url, headers=headers, params=params, callback=callback)

# post async request multipart/form-data
consumePOSTRequestASync()
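For comparison, a rough equivalent using the requests library recommended earlier: passing a dict via the files= argument is what switches requests to multipart/form-data, while the plain fields go in data= (the field names and dummy.txt file are just the ones from the unirest example above):

import requests

params = {'test1': 'param1', 'test2': 'param2'}
files = {'dummy': open('dummy.txt', 'rb')}
res = requests.post('http://httpbin.org/post',
                    data=params,
                    files=files,
                    headers={'Accept': 'application/json'})
print(res.status_code)
print(res.json())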

Related

POST method in Python: errno 104

I am trying to query a website in Python. I need to use a POST method (according to what is happening in my browser when I monitor it with the developer tools).
If I query the website with cURL, it works well:
curl -i --data "param1=var1&param2=var2" http://www.test.com
I get this header:
HTTP/1.1 200 OK
Date: Tue, 26 Sep 2017 08:46:18 GMT
Server: Apache/1.3.33 (Unix) mod_gzip/1.3.26.1a mod_fastcgi/2.4.2 PHP/4.3.11
Transfer-Encoding: chunked
Content-Type: text/html
But when I do it in Python 3, I get an error 104.
Here is what I have tried so far. First, with urllib (taking inspiration from this thread to use a POST method instead of GET):
import re
from urllib import request as ur
# URL to handle request
url = "http://www.test.com"
data = "param1=var1&param2=var2"
# Build a request dictionary
preq = [re.findall("[^=]+", i) for i in re.findall("[^\&]+", data)]
dreq = {i[0]: i[1] if len(i) == 2 else "" for i in preq}
# Initiate request & add method
ureq = ur.Request(url)
ureq.get_method = lambda: "POST"
# Send request
req = ur.urlopen(ureq, data=str(dreq).encode())
I did basically the same with requests:
import re
import requests
# URL to handle request
url = "http://www.test.com"
data = "param1=var1&param2=var2"
# Build a request dictionary
preq = [re.findall("[^=]+", i) for i in re.findall("[^\&]+", data)]
dreq = {i[0]: i[1] if len(i) == 2 else "" for i in preq}
# Initiate request & add method
req = requests.post(url, data=dreq)
In both cases, I get an Errno 104 error:
ConnectionResetError: [Errno 104] Connection reset by peer
I don't understand this, since the same request works with cURL. I guess I misunderstood something about Python requests, but so far I'm stuck. Any hint would be appreciated!
I've just figured out that I did not pass the data in the right format. I thought it needed to be stored in a dict; that is not the case, and it is therefore much simpler than what I tried previously.
With urllib:
req = ur.urlopen(ureq, data=str(data).encode())
With requests:
req = requests.post(url, data=data)

How to avoid InValidClientError with requests-oauthlib token fetching?

I'm using the standard OAuth2 web flow example from requests-oauthlib to get a token from QuickBooks Online's API, but while I've successfully redirected to the QBO site, authorised, and had a code returned to request a token, I am getting an
oauthlib.oauth2.rfc6749.errors.InvalidClientError
error when I actually make the token request. Having googled a lot, it seems like I am not authorising correctly. QBO says the actual request might look like the following:
POST https://oauth.platform.intuit.com/oauth2/v1/tokens/bearer HTTP/1.1
Accept: application/json
Authorization: Basic UTM0dVBvRDIwanp2OUdxNXE1dmlMemppcTlwM1d2
NzRUdDNReGkwZVNTTDhFRWwxb0g6VEh0WEJlR3dheEtZSlVNaFhzeGxma1l
XaFg3ZlFlRzFtN2szTFRwbw==
Content-Type: application/x-www-form-urlencoded
Host: oauth.platform.intuit.com
Body: grant_type=authorization_code&
code=L3114709614564VSU8JSEiPkXx1xhV8D9mv4xbv6sZJycibMUI&
redirect_uri=https://www.mydemoapp.com/oauth-redirect
And that I need to generate the authorization headers like this:
"Basic " + base64encode(client_id + ":" + client_secret)
I've created the above like this:
auth_code = client_id + ":" + client_secret
auth_code = base64.b64encode(bytes(auth_code, 'utf-8'))
auth_code = auth_code.decode('utf-8')
But passing this through in the token code like:
token = qbo.fetch_token(token_url, code=code, authorization_response=redirect_uri, headers={'Authorization' : auth_code})
Resulted in the InvalidClientError message. I feel like I'm really close but failing at the final OAuth2 hurdle; can someone help me authorise for the token please?
Many thanks,
John
------ edit 1 -----
Thanks stovfl, that's super helpful and I've tried a couple of new things now based on your advice. I've just updated the code so that it reads like this:
headers = {'Accept':'application/json'}
headers.update({'Content-Type': 'application/x-www-form-urlencoded'})
auth = HTTPBasicAuth(client_id, client_secret)
body = 'grant_type=authorization_code&code=' + code + '&redirect_uri=' + redirect_uri_token
token = qbo.fetch_token(token_url, auth=auth, body=body, headers=headers)
That produced another error:
ValueError: Please supply either code or authorization_response parameters.
Checking the actual oauth library code, it's mandatory to give the code/authorization response, so I updated again to get the code from the callback and use it in my fetch_token request:
all_args = request.args.to_dict()
code = all_args['code']
headers = {'Accept':'application/json'}
headers.update({'Content-Type': 'application/x-www-form-urlencoded'})
auth = HTTPBasicAuth(client_id, client_secret)
body = 'grant_type=authorization_code&code=' + code + '&redirect_uri=' + redirect_uri_token
token = qbo.fetch_token(token_url, code=code, auth=auth, body=body, headers=headers)
I've also tried removing the code from the body and just having it in the main call, so the body looks like this:
body = 'grant_type=authorization_code' + '&redirect_uri=' + redirect_uri_token
But either way it now goes back to the previous error of:
oauthlib.oauth2.rfc6749.errors.InvalidClientError: (invalid_client)
I'm not sure where to go from here.
Question: QuickBooks Intuit Developer: Exchange authorization code for access token, using OAuth 2.0.
According to the given Request Example, you should do:
LINE: POST https://oauth.platform.intuit.com/oauth2/v1/tokens/bearer HTTP/1.1
POST to https://oauth.plat... with HTTP version 1.1: these are the defaults when using OAuth2.fetch_token(....
url_endpoint = 'https://oauth.platform.intuit.com/oauth2/v1/tokens/bearer'
LINE: Accept: application/json
You have to accept application/json, OAuth2.fetch_token(... accepts anything by default.
Nothing to do, but for completeness you can define it as follows:
headers = {'Accept':'application/json'}
LINE: Authorization: Basic a2V5OnNlY3JldA==
It's required that you pass auth credentials in a Basic Auth header
from requests.auth import HTTPBasicAuth
auth = HTTPBasicAuth(client_id, client_secret)
LINE: Content-Type: application/x-www-form-urlencoded
The body data has to be passed form-urlencoded.
headers.update({'Content-Type': 'application/x-www-form-urlencoded'})
LINE: Host: oauth.platform.intuit.com
User specific, nothing to do
LINE: Body: grant_type=authorization_code&code=L311...&redirect_uri=https://www.mydemoapp.com/oauth-redirect
Data placement in body.
body = 'grant_type=authorization_code&code=L311...&redirect_uri=https://www.mydemoapp.com/oauth-redirect'
Exchange authorization code for access token
token = qbo.fetch_token(url_endpoint, auth=auth, body=body, headers=headers)
Note: I can't verify with QuickBooks Intuit Developer, but requests gives me the following Request Message:
Request Message:
headers:
Accept: application/json
Accept-Encoding: gzip, deflate
Content-Length: 139
User-Agent: python-requests/2.11.1
Authorization: Basic a2V5OnNlY3JldA==
Host: ... omited, because user specific
Content-Type: application/x-www-form-urlencoded
form:
grant_type: authorization_code
code: L3114709614564VSU8JSEiPkXx1xhV8D9mv4xbv6sZJycibMUI
redirect_uri: https://www.mydemoapp.com/oauth-redirect
Tested with Python:3.4.2 - requests:2.11.1

Create an HTTP object from a string in Python

I have a device which is sending the following HTTP message to my Raspberry Pi:
POST /sinvertwebmonitor/InverterService/InverterService.asmx/CollectInverterData HTTP/1.1
Host: www.automation.siemens.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 349

xmlData=<rd><m>xxxxx</m><s>yyyyyy</s><d t="1483019400" l="600"><p i="1">460380AE</p><p i="2">43655DE7</p><p i="3">4212C986</p><p i="4">424805BC</p><p i="5">4604E3D1</p><p i="6">441F616A</p><p i="7">4155E7F5</p><p i="8">E1</p><p i="9">112</p><p i="C">153</p><p i="D">4</p><p i="E">11ABAC</p><p i="F">22A48C</p><p i="10">0</p></d></rd>
I cannot change anything on the device.
On the Raspberry Pi I'm running a script to listen for and receive the message on a socket.
This works so far, and the received message is the one above.
Now, I would like to create an HTTP object from this message and then comfortably extract the header, content and so on.
Similar to the following example:
r = requests.get('https://www.google.com')
r.status_code
However, without "getting" a URL; I just want to read the string I already have.
Pseudo-example:
r = requests.read(hereComesTheString)
r.status_code
I hope the problem became understandable.
Would be glad to get some hints.
Thanks and best regards,
Christoph
You use the status_code property in your example, but what you are receiving is a request not a response. However you can still create a simple object for accessing the data in the request.
It is probably easiest to create your own custom class:
import mimetools
from StringIO import StringIO

request = """POST /sinvertwebmonitor/InverterService/InverterService.asmx/CollectInverterData HTTP/1.1
Host: www.automation.siemens.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 349

xmlData=<rd><m>xxxxx</m><s>yyyyyy</s><d t="1483019400" l="600"><p i="1">460380AE</p><p i="2">43655DE7</p><p i="3">4212C986</p><p i="4">424805BC</p><p i="5">4604E3D1</p><p i="6">441F616A</p><p i="7">4155E7F5</p><p i="8">E1</p><p i="9">112</p><p i="C">153</p><p i="D">4</p><p i="E">11ABAC</p><p i="F">22A48C</p><p i="10">0</p></d></rd>"""

class Request:
    def __init__(self, request):
        stream = StringIO(request)
        request = stream.readline()
        words = request.split()
        [self.command, self.path, self.version] = words
        # mimetools.Message parses the headers up to the blank line
        self.headers = mimetools.Message(stream, 0)
        self.content = stream.read()

    def __getitem__(self, key):
        return self.headers.get(key, '')

r = Request(request)
print(r.command)
print(r.path)
print(r.version)
for header in r.headers:
    print(header, r[header])
print(r.content)
This outputs:
POST
/sinvertwebmonitor/InverterService/InverterService.asmx/CollectInverterData
HTTP/1.1
('host', 'www.automation.siemens.com')
('content-type', 'application/x-www-form-urlencoded')
('content-length', '349')
xmlData=<rd><m>xxxxx</m><s>yyyyyy</s><d t="1483019400" l="600"><p i="1">460380AE</p><p i="2">43655DE7</p><p i="3">4212C986</p><p i="4">424805BC</p><p i="5">4604E3D1</p><p i="6">441F616A</p><p i="7">4155E7F5</p><p i="8">E1</p><p i="9">112</p><p i="C">153</p><p i="D">4</p><p i="E">11ABAC</p><p i="F">22A48C</p><p i="10">0</p></d></rd>
If you're using plain socket server, then you need to implement an HTTP server so that you can split the request and respond according to the protocol.
It's probably easier just to use an existing HTTP server and app server. Flask is ideal for this:
from flask import Flask
from flask import request

app = Flask(__name__)

@app.route("/sinvertwebmonitor/InverterService/InverterService.asmx/CollectInverterData", methods=['POST'])
def dataCollector():
    data = request.form['xmlData']
    print(data)
    # parseData. Take a look at ElementTree

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
Thanks Alden. Below is your code with a few changes so it works with Python 3.
import email
from io import StringIO

request = """POST /sinvertwebmonitor/InverterService/InverterService.asmx/CollectInverterData HTTP/1.1
Host: www.automation.siemens.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 349

xmlData=<rd><m>xxxxx</m><s>yyyyyy</s><d t="1483019400" l="600"><p i="1">460380AE</p><p i="2">43655DE7</p><p i="3">4212C986</p><p i="4">424805BC</p><p i="5">4604E3D1</p><p i="6">441F616A</p><p i="7">4155E7F5</p><p i="8">E1</p><p i="9">112</p><p i="C">153</p><p i="D">4</p><p i="E">11ABAC</p><p i="F">22A48C</p><p i="10">0</p></d></rd>"""

class Request:
    def __init__(self, request):
        stream = StringIO(request)
        request = stream.readline()
        words = request.split()
        [self.command, self.path, self.version] = words
        # email replaces the Python 2 mimetools module: it parses the
        # remaining headers and keeps the body as the message payload
        message = email.message_from_string(stream.read())
        self.headers = message
        self.content = message.get_payload()

    def __getitem__(self, key):
        return self.headers.get(key, '')

r = Request(request)
print(r.command)
print(r.path)
print(r.version)
for header in r.headers:
    print(header, r[header])
print(r.content)

Python Requests - Is it possible to receive a partial response after an HTTP POST?

I am using the Python Requests module to data-mine a website. As part of the data mining, I have to HTTP POST a form and check if it succeeded by checking the resulting URL. My question is: after the POST, is it possible to ask the server not to send the entire page? I only need to check the URL, yet my program downloads the entire page and consumes unnecessary bandwidth. The code is very simple:
import requests

r = requests.post(URL, payload)
if 'keyword' in r.url:
    success
fail
An easy solution, if it's implementable for you, is to go low-level and use the socket library. For example, say you need to send a POST with some data in its body; I used this in my crawler for one site.
import socket
from urllib import quote  # the POST body is escaped, use quote

req_header = "POST /{0} HTTP/1.1\r\nHost: www.yourtarget.com\r\nUser-Agent: For the lulz..\r\nContent-Type: application/x-www-form-urlencoded; charset=UTF-8\r\nContent-Length: {1}"
req_body = quote("data1=yourtestdata&data2=foo&data3=bar=")
req_url = "test.php"
# plug req_url in as {0} and the length of req_body in as Content-Length
header = req_header.format(req_url, str(len(req_body)))

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # create a socket
s.connect(("www.yourtarget.com", 80))                  # connect it
# send the header, two CRLFs, the body and two more CRLFs to complete the request
s.send(header + "\r\n\r\n" + req_body + "\r\n\r\n")

page = ""
while True:
    buf = s.recv(1024)  # receive up to 1024 bytes; this should be enough to get the header in one try
    if not buf:
        break
    if "\r\n\r\n" in page:  # if we have received the whole header (ending with 2x CRLF), break
        break
    page += buf
s.close()  # close the socket, which should close the TCP connection even if data is still flowing in

# this should leave you with a header in which you should find a 302 redirect
# and your target URL in the "Location:" header.
There's a chance the site uses the Post/Redirect/Get (PRG) pattern. If so, it's enough not to follow the redirect and to read the Location header from the response.
Example
>>> import requests
>>> response = requests.get('http://httpbin.org/redirect/1', allow_redirects=False)
>>> response.status_code
302
>>> response.headers['location']
'http://httpbin.org/get'
If you need more information on what would you get if you had followed redirection then you can use HEAD on the url given in Location header.
Example
>>> import requests
>>> response = requests.get('http://httpbin.org/redirect/1', allow_redirects=False)
>>> response.status_code
302
>>> response.headers['location']
'http://httpbin.org/get'
>>> response2 = requests.head(response.headers['location'])
>>> response2.status_code
200
>>> response2.headers
{'date': 'Wed, 07 Nov 2012 20:04:16 GMT', 'content-length': '352', 'content-type':
'application/json', 'connection': 'keep-alive', 'server': 'gunicorn/0.13.4'}
It would help if you gave some more data, for example, a sample URL that you're trying to request. That being said, it seems to me that generally you're checking if you had the correct URL after your POST request using the following algorithm relying on redirection or HTTP 404 errors:
if original_url == returned request url:
    correct url to a correctly made request
else:
    wrong url and a wrongly made request
If this is the case, what you can do here is use the HTTP HEAD request (another type of HTTP request like GET, POST, etc.) in Python's requests library to get only the header and not also the page body. Then, you'd check the response code and redirection url (if present) to see if you made a request to a valid URL.
For example:
import requests

def attempt_url(url):
    '''Checks the url to see if it is valid, or returns a redirect or error.
    Returns True if valid, False otherwise.'''
    r = requests.head(url)
    if r.status_code == 200:
        return True
    elif r.status_code in (301, 302):
        if r.headers['location'] == url:
            return True
        else:
            return False
    elif r.status_code == 404:
        return False
    else:
        raise Exception("A status code we haven't prepared for has arisen!")
If this isn't quite what you're looking for, additional detail on your requirements would help. At the very least, this gets you the status code and headers without pulling all of the page data.

Python urllib2 Basic Auth Problem

Update: based on Lee's comment I decided to condense my code to a really simple script and run it from the command line:
import urllib2
import sys
username = sys.argv[1]
password = sys.argv[2]
url = sys.argv[3]
print("calling %s with %s:%s\n" % (url, username, password))
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, url, username, password)
urllib2.install_opener(urllib2.build_opener(urllib2.HTTPBasicAuthHandler(passman)))
req = urllib2.Request(url)
f = urllib2.urlopen(req)
data = f.read()
print(data)
Unfortunately it still won't generate the Authorization header (per Wireshark) :(
I'm having a problem sending basic AUTH over urllib2. I took a look at this article, and followed the example. My code:
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, "api.foursquare.com", username, password)
urllib2.install_opener(urllib2.build_opener(urllib2.HTTPBasicAuthHandler(passman)))
req = urllib2.Request("http://api.foursquare.com/v1/user")
f = urllib2.urlopen(req)
data = f.read()
I'm seeing the following on the Wire via wireshark:
GET /v1/user HTTP/1.1
Host: api.foursquare.com
Connection: close
Accept-Encoding: gzip
User-Agent: Python-urllib/2.5
You can see the Authorization header is not sent, vs. when I send a request via curl:
curl -u user:password http://api.foursquare.com/v1/user
GET /v1/user HTTP/1.1
Authorization: Basic =SNIP=
User-Agent: curl/7.19.4 (universal-apple-darwin10.0) libcurl/7.19.4 OpenSSL/0.9.8k zlib/1.2.3
Host: api.foursquare.com
Accept: */*
For some reason my code seems to not send the authentication - anyone see what I'm missing?
thanks
-simon
The problem could be that the Python libraries, per the HTTP standard, first send an unauthenticated request, and only if it's answered with a 401 retry are the correct credentials sent. If the Foursquare servers don't do "totally standard authentication", the libraries won't work.
Try using headers to do authentication:
import urllib2, base64
request = urllib2.Request("http://api.foursquare.com/v1/user")
base64string = base64.b64encode('%s:%s' % (username, password))
request.add_header("Authorization", "Basic %s" % base64string)
result = urllib2.urlopen(request)
Had the same problem as you and found the solution from this thread: http://forums.shopify.com/categories/9/posts/27662
(copy-paste/adapted from https://stackoverflow.com/a/24048772/1733117).
First you can subclass urllib2.BaseHandler or urllib2.HTTPBasicAuthHandler, and implement http_request so that each request has the appropriate Authorization header.
import urllib2
import base64

class PreemptiveBasicAuthHandler(urllib2.HTTPBasicAuthHandler):
    '''Preemptive basic auth.

    Instead of waiting for a 401 to then retry with the credentials,
    send the credentials if the url is handled by the password manager.
    Note: please use realm=None when calling add_password.'''

    def http_request(self, req):
        url = req.get_full_url()
        realm = None
        # this is very similar to the code from retry_http_basic_auth()
        # but returns a request object.
        user, pw = self.passwd.find_user_password(realm, url)
        if pw:
            raw = "%s:%s" % (user, pw)
            auth = 'Basic %s' % base64.b64encode(raw).strip()
            req.add_unredirected_header(self.auth_header, auth)
        return req

    https_request = http_request
Then if you are lazy like me, install the handler globally:
api_url = "http://api.foursquare.com/"
api_username = "johndoe"
api_password = "some-cryptic-value"

auth_handler = PreemptiveBasicAuthHandler()
auth_handler.add_password(
    realm=None,  # default realm.
    uri=api_url,
    user=api_username,
    passwd=api_password)
opener = urllib2.build_opener(auth_handler)
urllib2.install_opener(opener)
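With the opener installed, an ordinary urlopen call should now send the Authorization header on the first request; a short usage sketch against the endpoint from the question:

# the preemptive handler adds the Authorization header up front,
# so no 401 round-trip is needed
response = urllib2.urlopen(api_url + "v1/user")
print(response.read())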
Here's what I'm using to deal with a similar problem I encountered while trying to access MailChimp's API. This does the same thing, just formatted nicer.
import urllib2
import base64

chimpConfig = {
    "headers": {
        "Content-Type": "application/json",
        "Authorization": "Basic " + base64.encodestring("hayden:MYSECRETAPIKEY").replace('\n', '')
    },
    "url": 'https://us12.api.mailchimp.com/3.0/'}

# perform authentication
datas = None
request = urllib2.Request(chimpConfig["url"], datas, chimpConfig["headers"])
result = urllib2.urlopen(request)
The second parameter must be a URI, not a domain name, i.e.:
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, "http://api.foursquare.com/", username, password)
I would suggest that the current solution is to use my package urllib2_prior_auth, which solves this pretty nicely (I am working on its inclusion into the standard lib).
