Is there any way to do HTTP PUT in Python?

I need to upload some data to a server using HTTP PUT in python. From my brief reading of the urllib2 docs, it only does HTTP POST. Is there any way to do an HTTP PUT in python?

I've used a variety of Python HTTP libs in the past, and I've settled on requests as my favourite. Existing libs had pretty usable interfaces, but code can end up being a few lines too long for simple operations. A basic PUT in requests looks like:
>>> payload = {'username': 'bob', 'email': 'bob@bob.com'}
>>> r = requests.put("http://somedomain.org/endpoint", data=payload)
You can then check the response status code with:
r.status_code
or the response with:
r.content
Requests has a lot of syntactic sugar and shortcuts that'll make your life easier.

import urllib2
opener = urllib2.build_opener(urllib2.HTTPHandler)
request = urllib2.Request('http://example.org', data='your_put_data')
request.add_header('Content-Type', 'your/contenttype')
request.get_method = lambda: 'PUT'  # override the method urllib2 would otherwise pick (GET/POST)
url = opener.open(request)

Httplib seems like a cleaner choice.
import httplib
connection = httplib.HTTPConnection('1.2.3.4:1234')
body_content = 'BODY CONTENT GOES HERE'
connection.request('PUT', '/url/path/to/put/to', body_content)
result = connection.getresponse()
# Now result.status and result.reason contain interesting stuff

You can use the requests library; it simplifies things a lot compared to the urllib2 approach. First install it with pip:
pip install requests
See the requests installation docs for more details.
Then set up the PUT request:
import requests
import json
url = 'https://api.github.com/some/endpoint'
payload = {'some': 'data'}
# Create your header as required
headers = {"content-type": "application/json", "Authorization": "<auth-key>" }
r = requests.put(url, data=json.dumps(payload), headers=headers)
See the quickstart for the requests library. I think this is a lot simpler than urllib2, but it does require installing and importing an additional package.

This was made better in Python 3 and is documented in the stdlib documentation: the urllib.request.Request class gained a method=... parameter.
Some sample usage:
import urllib.request
req = urllib.request.Request('https://example.com/', data=b'DATA!', method='PUT')
urllib.request.urlopen(req)

You should have a look at the httplib module. It should let you make whatever sort of HTTP request you want.

I needed to solve this problem too a while back so that I could act as a client for a RESTful API. I settled on httplib2 because it allowed me to send PUT and DELETE in addition to GET and POST. Httplib2 is not part of the standard library, but you can easily get it from the cheese shop (PyPI).
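For reference, a minimal PUT sketch with httplib2 (the URL, body, and content type are placeholders, not from the original question):
import httplib2
h = httplib2.Http()
resp, content = h.request(
    'http://example.org/endpoint',
    method='PUT',
    body='your_put_data',
    headers={'Content-Type': 'text/plain'},
)
print(resp.status, content)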

I also recommend httplib2 by Joe Gregorio. I use this regularly instead of httplib in the standard lib.

Have you taken a look at put.py? I've used it in the past. You can also just hack up your own request with urllib.

You can of course roll your own with the existing standard libraries at any level from sockets up to tweaking urllib.
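For illustration, a very rough Python 3 sketch of a PUT straight over a socket (the host, path, and body are made up, and a real client would need proper response parsing):
import socket
host, path, body = 'example.org', '/endpoint', b'your_put_data'
request = (
    'PUT {} HTTP/1.1\r\n'
    'Host: {}\r\n'
    'Content-Type: text/plain\r\n'
    'Content-Length: {}\r\n'
    'Connection: close\r\n'
    '\r\n'
).format(path, host, len(body)).encode() + body
with socket.create_connection((host, 80)) as sock:
    sock.sendall(request)
    response = b''
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk
print(response.decode(errors='replace'))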
http://pycurl.sourceforge.net/
"PyCurl is a Python interface to libcurl."
"libcurl is a free and easy-to-use client-side URL transfer library, ... supports ... HTTP PUT"
"The main drawback with PycURL is that it is a relative thin layer over libcurl without any of those nice Pythonic class hierarchies. This means it has a somewhat steep learning curve unless you are already familiar with libcurl's C API. "

If you want to stay within the standard library, you can subclass urllib2.Request:
import urllib2
class RequestWithMethod(urllib2.Request):
    def __init__(self, *args, **kwargs):
        self._method = kwargs.pop('method', None)
        urllib2.Request.__init__(self, *args, **kwargs)
    def get_method(self):
        # urllib2.Request is an old-style class, so super() can't be used here
        return self._method if self._method else urllib2.Request.get_method(self)
def put_request(url, data):
    opener = urllib2.build_opener(urllib2.HTTPHandler)
    request = RequestWithMethod(url, method='PUT', data=data)
    return opener.open(request)

You can use requests.request
import requests
url = "https://www.example/com/some/url/"
payload="{\"param1\": 1, \"param1\": 2}"
headers = {
'Authorization': '....',
'Content-Type': 'application/json'
}
response = requests.request("PUT", url, headers=headers, data=payload)
print(response.text)

A more robust way of doing this with requests would be:
import requests
payload = {'username': 'bob', 'email': 'bob@bob.com'}
try:
    response = requests.put(url="http://somedomain.org/endpoint", data=payload)
    response.raise_for_status()
except requests.exceptions.RequestException as e:
    print(e)
    raise
This raises an exception if there is an error in the HTTP PUT request.

Using urllib3
To do that, you will need to manually encode query parameters in the URL.
>>> import urllib3
>>> http = urllib3.PoolManager()
>>> from urllib.parse import urlencode
>>> encoded_args = urlencode({"name":"Zion","salary":"1123","age":"23"})
>>> url = 'http://dummy.restapiexample.com/api/v1/update/15410?' + encoded_args
>>> r = http.request('PUT', url)
>>> import json
>>> json.loads(r.data.decode('utf-8'))
{'status': 'success', 'data': [], 'message': 'Successfully! Record has been updated.'}
Using requests
>>> import requests
>>> r = requests.put('https://httpbin.org/put', data = {'key':'value'})
>>> r.status_code
200

Related

From HTTPResponse to str in Python 3.6

From a POST request to Vimeo API I get a JSON object encoded as HTTPResponse.
r = http.request('POST', 'https://api.vimeo.com/oauth/authorize/client?grant_type=client_credentials', headers={'Authorization': 'basic XXX'})
I cannot find a way to convert the HTTPResponse to a str or JSON object. On Stack Overflow I found and tried the following options:
json.loads(r.decode('utf-8'))
json.loads(r.readall().decode('utf-8'))
str(r, 'utf-8')
but none of them worked.
Please can you help?
Thanks
Try the requests module:
import requests
import json
r = requests.post('https://api.vimeo.com/oauth/authorize/client?grant_type=client_credentials',
                  data=varData,  # varData: whatever request body you need to send
                  headers={'Authorization': 'basic XXX'})
response = json.loads(r.text)
From Python docs (emphasis mine):
class http.client.HTTPResponse(sock, debuglevel=0, method=None, url=None)
Class whose instances are returned upon successful connection. Not instantiated directly by user.
And also:
See also The Requests package is recommended for a higher-level HTTP client interface.
So you're probably better off using requests directly.
After having made your request, just use json.loads(r.text).
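For example, a minimal sketch against the endpoint from the question (the token is a placeholder):
import requests
r = requests.post(
    'https://api.vimeo.com/oauth/authorize/client?grant_type=client_credentials',
    headers={'Authorization': 'basic XXX'},
)
data = r.json()  # equivalent to json.loads(r.text)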
You can use the http.client module. Example:
import http.client
import json
conn = http.client.HTTPSConnection('api.vimeo.com')  # pass the host only, not the full URL
headers = {'Authorization': 'basic XXX'}
params = varData  # whatever request body you need to send
conn.request('POST', '/oauth/authorize/client?grant_type=client_credentials', params, headers)
response = conn.getresponse()
content = bytes.decode(response.read(), 'utf-8')  # returns a string value
res_map = json.loads(content)  # if content is a JSON string
For more information, refer to the http.client docs.

Can someone give a python requests example of uploading a release asset in github?

url = 'https://github.abc.defcom/api/v3/repos/abc/def/releases/401/assets?name=foo.sh'
r = requests.post(url, headers={'Content-Type':'application/binary'}, data=open('sometext.txt','r'), auth=('user','password'))
This is giving me
>>> r.text
u'{"message":"Not Found","documentation_url":"https://developer.github.com/enterprise/2.4/v3"}'
Where am I going wrong?
So I'll preface this with the advice that if you use a library it's as easy as:
from github3 import GitHubEnterprise
gh = GitHubEnterprise(token=my_token)
repository = gh.repository('abc', 'def')
release = repository.release(id=401)
asset = release.upload_asset(content_type='application/binary', name='foo.sh', asset=open('sometext.txt', 'rb'))
With that in mind, I'll also preface this with "application/binary" is not a real media type (see: https://www.iana.org/assignments/media-types/media-types.xhtml)
Next, if you read the documentation, you'll notice that GitHub requires clients that have real SNI (Server Name Indication), so depending on your version of Python, you may also have to install pyOpenSSL, pyasn1, and ndg-httpsclient from PyPI.
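For example, assuming pip is available, something like:
pip install pyOpenSSL pyasn1 ndg-httpsclient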
I'm not sure what the URL looks like for enterprise instances, but for public GitHub, it looks like:
https://uploads.github.com/repos/octocat/Hello-World/releases/1/assets?name=foo.sh
So you're going to have that as url, plus you're going to want your auth credentials (in your case you seem to want to use basic auth). Then you're going to want a valid media-type in the headers, e.g.,
headers = {'Content-Type': 'text/plain'}
And your call would look pretty much exactly correct:
requests.post(url, headers=headers, data=open('file.txt', 'rb'), auth=(username, password))
To get the correct url, you should do:
release = requests.get(release_url, auth=(username, password))
upload_url = release.json().get('upload_url')
Note this is a URITemplate. You'll need to remove the templating or use a library like uritemplate.py to parse it and use it to build your URL for you.
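A hedged sketch using the uritemplate package (upload_url comes from the release lookup above; the asset name here is made up):
from uritemplate import expand
upload_url = release.json().get('upload_url')  # e.g. '.../releases/1/assets{?name,label}'
url = expand(upload_url, name='foo.sh')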
One last reminder, github3.py (the library in the original example) takes care of all of this for you.
APIv3 upload example without any external dependencies
Usage:
GITHUB_TOKEN=<token> ./create-release username/reponame <tag-name> <path-to-upload>
Script:
#!/usr/bin/env python3
import json
import os
import sys
from urllib.parse import urlencode
from urllib.request import Request, urlopen
repo = sys.argv[1]
tag = sys.argv[2]
upload_file = sys.argv[3]
token = os.environ['GITHUB_TOKEN']
url_template = 'https://{}.github.com/repos/' + repo + '/releases'
# Create.
_json = json.loads(urlopen(Request(
    url_template.format('api'),
    json.dumps({
        'tag_name': tag,
        'name': tag,
        'prerelease': True,
    }).encode(),
    headers={
        'Accept': 'application/vnd.github.v3+json',
        'Authorization': 'token ' + token,
    },
)).read().decode())
release_id = _json['id']
# Upload.
with open(upload_file, 'br') as myfile:
    content = myfile.read()
_json = json.loads(urlopen(Request(
    url_template.format('uploads') + '/' + str(release_id) + '/assets?'
        + urlencode({'name': os.path.split(upload_file)[1]}),
    content,
    headers={
        'Accept': 'application/vnd.github.v3+json',
        'Authorization': 'token ' + token,
        'Content-Type': 'application/zip',
    },
)).read().decode())
Superset question with any language: How to release a build artifact asset on GitHub with a script?

Python request with authentication (access_token)

I am trying to use an API query in Python. From the command line I can use curl like so:
curl --header "Authorization:access_token myToken" https://website.example/id
This gives some JSON output. myToken is a hexadecimal variable that remains constant throughout.
I would like to make this call from python so that I can loop through different ids and analyze the output. Before authentication was needed I had done that with urllib2. I have also taken a look at the requests module but couldn't figure out how to authenticate with it.
The requests package has a very nice API for HTTP requests, adding a custom header works like this (source: official docs):
>>> import requests
>>> response = requests.get(
... 'https://website.example/id', headers={'Authorization': 'access_token myToken'})
If you don't want to use an external dependency, the same thing using urllib2 of the standard library looks like this (source: the missing manual):
>>> import urllib2
>>> response = urllib2.urlopen(
...     urllib2.Request('https://website.example/id', headers={'Authorization': 'access_token myToken'}))
I had the same problem when trying to use a token with Github.
The only syntax that has worked for me with Python 3 is:
import requests
myToken = '<token>'
myUrl = '<website>'
head = {'Authorization': 'token {}'.format(myToken)}
response = requests.get(myUrl, headers=head)
>>> import requests
>>> response = requests.get('https://website.com/id', headers={'Authorization': 'access_token myToken'})
If the above doesn't work, try this:
>>> import requests
>>> response = requests.get('https://api.buildkite.com/v2/organizations/orgName/pipelines/pipelineName/builds/1230', headers={ 'Authorization': 'Bearer <your_token>' })
>>> print response.json()
import requests
BASE_URL = 'http://localhost:8080/v3/getPlan'
token = "eyJhbGciOiJSUzI1NiIsImtpZCI6ImR"
headers = {'Authorization': "Bearer {}".format(token)}
auth_response = requests.get(BASE_URL, headers=headers)
print(auth_response.json())
Output:
{
    "plans": [
        {
            "field": false,
            "description": "plan 12",
            "enabled": true
        }
    ]
}
A lot of good answers already, but I didn't see this option yet:
If you're using requests, you could also specify a custom authentication class, similar to HTTPBasicAuth. For example:
from requests.auth import AuthBase
class TokenAuth(AuthBase):
    def __init__(self, token, auth_scheme='Bearer'):
        self.token = token
        self.auth_scheme = auth_scheme
    def __call__(self, request):
        request.headers['Authorization'] = f'{self.auth_scheme} {self.token}'
        return request
This could be used as follows (using the custom auth_scheme from the example):
response = requests.get(
    url='https://example.com',
    auth=TokenAuth(token='abcde', auth_scheme='access_token'),
)
This may look like a more complicated way to set the Request.headers attribute, but it can be advantageous if you want to support multiple types of authentication. Note this allows us to use the auth argument instead of the headers argument.
Have you tried the uncurl package (https://github.com/spulec/uncurl)? You can install it via pip, pip install uncurl. Your curl request returns:
$ uncurl "curl --header \"Authorization:access_token myToken\" https://website.com/id"
requests.get("https://website.com/id",
    headers={
        "Authorization": "access_token myToken"
    },
    cookies={},
)
I'll add a small hint: what you pass as the header key depends on your authorization type; in my case that was PRIVATE-TOKEN:
header = {'PRIVATE-TOKEN': 'my_token'}
response = requests.get(myUrl, headers=header)
One option in Python for retrieving data with a bearer token:
import requests
token = "abcd"  # token retrieved beforehand
headers = {'Authorization': "Bearer {}".format(token)}
response = requests.get(
    'https://<url api>',
    headers=headers,
    verify="root ca certificate"  # path to the root CA certificate bundle
)
print(response.content)
If you get a hostname mismatch error, additional SANs need to be configured on the server for those hostnames.
Hope this helps.

How to do an x-http request (client) with Python

I am trying to reproduce a x-http request captured with Charles (Web Debugging Proxy) with Python but I can't find any documentation (or don't know what or where to look for).
I'd use the requests library for this, as it makes tasks like these easier.
The request you captured seems to be posting JSON data, albeit with a text/javascript content type:
import requests
import json
headers = {'Content-Type': 'text/javascript;charset=utf-8'}
data = json.dumps({'mod': 'calendar.field', 'action': 'mini', 'vars': {"current": 0}})
r = requests.post('http://www.kavka.be/xhttp.mod', data=data, headers=headers)
where data is a JSON string created from the same information as your proxy-captured POST.
Alternatively, if you only want to use the standard library, use urllib2:
import urllib2
import json
headers = {'Content-Type': 'text/javascript;charset=utf-8'}
data = json.dumps({'mod': 'calendar.field', 'action': 'mini', 'vars': {"current": 0}})
req = urllib2.Request('http://www.kavka.be/xhttp.mod', data, headers)
r = urllib2.urlopen(req)

How do you send a HEAD HTTP request in Python 2?

What I'm trying to do here is get the headers of a given URL so I can determine the MIME type. I want to be able to see if http://somedomain/foo/ will return an HTML document or a JPEG image for example. Thus, I need to figure out how to send a HEAD request so that I can read the MIME type without having to download the content. Does anyone know of an easy way of doing this?
urllib2 can be used to perform a HEAD request. This is a little nicer than using httplib since urllib2 parses the URL for you instead of requiring you to split the URL into host name and path.
>>> import urllib2
>>> class HeadRequest(urllib2.Request):
...     def get_method(self):
...         return "HEAD"
...
>>> response = urllib2.urlopen(HeadRequest("http://google.com/index.html"))
Headers are available via response.info() as before. Interestingly, you can find the URL that you were redirected to:
>>> print response.geturl()
http://www.google.com.au/index.html
edit: This answer works, but nowadays you should just use the requests library as mentioned by other answers below.
Use httplib.
>>> import httplib
>>> conn = httplib.HTTPConnection("www.google.com")
>>> conn.request("HEAD", "/index.html")
>>> res = conn.getresponse()
>>> print res.status, res.reason
200 OK
>>> print res.getheaders()
[('content-length', '0'), ('expires', '-1'), ('server', 'gws'), ('cache-control', 'private, max-age=0'), ('date', 'Sat, 20 Sep 2008 06:43:36 GMT'), ('content-type', 'text/html; charset=ISO-8859-1')]
There's also a getheader(name) to get a specific header.
Obligatory Requests way:
import requests
resp = requests.head("http://www.google.com")
print resp.status_code, resp.text, resp.headers
I believe the Requests library should be mentioned as well.
Just:
import urllib2
request = urllib2.Request('http://localhost:8080')
request.get_method = lambda : 'HEAD'
response = urllib2.urlopen(request)
response.info().gettype()
Edit: I've just come to realize there is httplib2 :D
import httplib2
h = httplib2.Http()
resp = h.request("http://www.google.com", 'HEAD')
assert resp[0].status == 200  # note: resp[0]['status'] is the string '200'
assert resp[0]['content-type'] == 'text/html'
...
For completeness to have a Python3 answer equivalent to the accepted answer using httplib.
It is basically the same code, except the library is no longer called httplib but http.client.
from http.client import HTTPConnection
conn = HTTPConnection('www.google.com')
conn.request('HEAD', '/index.html')
res = conn.getresponse()
print(res.status, res.reason)
import httplib
import urlparse
def unshorten_url(url):
    parsed = urlparse.urlparse(url)
    h = httplib.HTTPConnection(parsed.netloc)
    h.request('HEAD', parsed.path)
    response = h.getresponse()
    if response.status / 100 == 3 and response.getheader('Location'):
        return response.getheader('Location')
    else:
        return url
As an aside, when using httplib (at least on 2.5.2), trying to read the response of a HEAD request will block (on readline) and subsequently fail. If you do not issue read on the response, you are unable to send another request on the connection; you will need to open a new one. Or accept a long delay between requests.
I have found that httplib is slightly faster than urllib2. I timed two programs - one using httplib and the other using urllib2 - sending HEAD requests to 10,000 URLs. The httplib one was faster by several minutes.
httplib's total stats were:
real 6m21.334s
user 0m2.124s
sys 0m16.372s
And urllib2's total stats were:
real 9m1.380s
user 0m16.666s
sys 0m28.565s
Does anybody else have input on this?
And yet another approach (similar to Pawel's answer):
import urllib2
import types
request = urllib2.Request('http://localhost:8080')
request.get_method = types.MethodType(lambda self: 'HEAD', request, request.__class__)
Just to avoid having unbound methods at the instance level.
Probably easier: use urllib or urllib2.
>>> import urllib
>>> f = urllib.urlopen('http://google.com')
>>> f.info().gettype()
'text/html'
f.info() is a dictionary-like object, so you can do f.info()['content-type'], etc.
http://docs.python.org/library/urllib.html
http://docs.python.org/library/urllib2.html
http://docs.python.org/library/httplib.html
The docs note that httplib is not normally used directly.
