I can make GET or POST requests using urllib, but how do I make DELETE and PUT requests?
The requests library can handle POST, PUT, DELETE, and all other HTTP methods, and is significantly less scary than urllib, httplib and their variants.
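For example, a minimal sketch (the URL and payload here are placeholders):

import requests

# hypothetical endpoint; substitute your own URL and data
requests.put('http://example.com/resource', data={'name': 'value'})
requests.delete('http://example.com/resource')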
You can override get_method with something like this:
import urllib2

def _make_request(url, data, method):
    request = urllib2.Request(url, data=data)
    request.get_method = lambda: method  # report the desired HTTP method to urlopen
    return urllib2.urlopen(request)

Then you pass "DELETE" as the method.
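For instance, a sketch with a hypothetical URL:

response = _make_request('http://example.com/item/1', None, 'DELETE')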
This answer covers the details.
A PUT request can be performed with httplib2:
http://code.google.com/p/httplib2
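A minimal sketch, assuming httplib2 is installed (the URL and body are placeholders):

import httplib2

http = httplib2.Http()
# PUT a string body to a hypothetical endpoint
response, content = http.request('http://example.com/resource', method='PUT', body='some data')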
http://twistedmatrix.com/documents/current/web/howto/client.html
If you're looking to work with HTTP in Twisted on the client side, I'd suggest checking that out. It demonstrates how you can easily make a request using the Agent class.
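A minimal sketch of a DELETE request with Agent, assuming Twisted is installed (the URL is a placeholder):

from twisted.internet import reactor
from twisted.web.client import Agent

agent = Agent(reactor)
d = agent.request(b'DELETE', b'http://example.com/resource')  # hypothetical URL

def show_status(response):
    print(response.code)

d.addCallback(show_status)
d.addBoth(lambda _: reactor.stop())  # stop the reactor on success or failure
reactor.run()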
As far as I know, urllib and urllib2 only support GET and POST requests. You should probably take a look at httplib or httplib2.
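For instance, a DELETE with httplib might look like this sketch (host and path are placeholders; in Python 3 the module is http.client):

import httplib

conn = httplib.HTTPConnection('example.com')
conn.request('DELETE', '/resource')
resp = conn.getresponse()
print resp.status, resp.reason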
The method is set implicitly in the urlopen call: when you provide the data parameter, a POST will be used.

urllib.request.urlopen(url, data=None[, timeout])
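A minimal sketch of a POST this way (the URL is a placeholder):

import urllib.parse
import urllib.request

data = urllib.parse.urlencode({'name': 'value'}).encode()  # a bytes body makes this a POST
with urllib.request.urlopen('http://example.com', data=data) as response:
    print(response.status)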
I don't think it's possible to use a DELETE HTTP method with urllib because of this note in the documentation:

Request.get_method()
Return a string indicating the HTTP request method. This is only meaningful for HTTP requests, and currently always returns 'GET' or 'POST'.
Consider using httplib, httplib2, or Twisted instead for better support of HTTP methods.
The default HTTP methods in the urllib library are POST and GET:

def get_method(self):
    """Return a string indicating the HTTP request method."""
    default_method = "POST" if self.data is not None else "GET"
    return getattr(self, 'method', default_method)

But we can override get_method() to make a DELETE request:
req = urllib.request.Request(new_url)
req.get_method = lambda: "DELETE"
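Then open the request as usual; a minimal sketch:

response = urllib.request.urlopen(req)  # the request goes out as a DELETE

(On Python 3.3+ you can also pass method="DELETE" directly to urllib.request.Request instead of overriding get_method.)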
Related
Is there a way to send CoAP requests, like HTTP requests, using Python? I tried the one below but got many errors.
rec = reuest.get(coap://localhost:5683/other/block)
You can use a library such as CoAPython to act as a CoAP client:
from coapthon.client.helperclient import HelperClient
client = HelperClient(server=('127.0.0.1', 5683))
response = client.get('other/block')
client.stop()
The response is of type Response. The methods available on the response are listed in the documentation, which you must build yourself.
Since you haven't said what you want to do with the response, use the documentation to find the available methods and get the values you want.
Frustratingly, I need to develop something on Python 2.6.4 and send a DELETE request to a server that seems to support only HTTP/1.1. Here is my code:
httpConnection = httplib.HTTPConnection("localhost:9080")
httpConnection.request('DELETE', remainderURL)
httpResponse = httpConnection.getresponse()
The response code I then get is 505 (HTTP Version Not Supported).
I've tested sending a DELETE request to the same URL via Firefox's RESTClient, and that works.
I can't use urllib2 because it doesn't support the DELETE request. Is the HTTPConnection object HTTP/1.0 only, or am I doing something wrong?
The HTTPConnection class uses HTTP/1.1 throughout, and the 505 seems to indicate it's the server that cannot handle HTTP/1.1 requests.
However, if you need to make DELETE requests, why not use the Requests package instead? A DELETE is as simple as:
import requests
requests.delete(url)
That won't magically solve your HTTP version mismatch, but you can enable verbose logging to figure out what is going on (note that the config keyword argument existed only in pre-1.0 versions of requests):

import sys
requests.delete(url, config=dict(verbose=sys.stderr))
You can use urllib2:

import urllib2

req = urllib2.Request(query_url)
req.get_method = lambda: 'DELETE'  # override the method reported to urlopen
response = urllib2.urlopen(req)  # sends the DELETE request
httplib uses HTTP/1.1 (see the HTTPConnection.putrequest method documentation).
Check httpResponse.version to see what version the server is using.
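For example, after calling getresponse() as above:

print httpResponse.version  # 10 means the server spoke HTTP/1.0, 11 means HTTP/1.1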
For a given URL, how can I detect the final location after HTTP redirects, without downloading the final page (e.g. with a HEAD request), using Python? I am trying to write a mass downloader; my downloading mechanism needs to know the final location of a page before downloading it.
Edit:
I ended up doing this, I hope this helps other people. I am still open to other methods.
import urlparse
import httplib

def getFinalUrl(url):
    """Navigates through redirections to get the final url."""
    parsed = urlparse.urlparse(url)
    conn = httplib.HTTPConnection(parsed.netloc)
    conn.request("HEAD", parsed.path)
    response = conn.getresponse()
    if str(response.status).startswith("3"):
        # httplib reports header names in lower case, so "location" matches
        new_location = [v for k, v in response.getheaders() if k == "location"][0]
        return getFinalUrl(new_location)
    return url
I strongly suggest you use the requests library. It is well coded and actively maintained, and it can do anything you need, such as deferring the response download with prefetch.
From the Requests documentation (http://docs.python-requests.org/en/latest/user/advanced/):
By default, when you make a request, the body of the response is downloaded immediately. You can override this behavior and defer downloading the response body until you access the Response.content attribute with the prefetch parameter:
tarball_url = 'https://github.com/kennethreitz/requests/tarball/master'
r = requests.get(tarball_url, prefetch=False)
At this point only the response headers have been downloaded and the connection remains open, hence allowing us to make content retrieval conditional:
if int(r.headers['content-length']) < TOO_LONG:
    content = r.content
    ...
You can further control the workflow with the Response.iter_content and Response.iter_lines methods, or by reading from the underlying urllib3.HTTPResponse at Response.raw.
You can use httplib to send HEAD requests.
You can also have a look at python-requests, which seems to be the new trendy API for HTTP requests, replacing the possibly awkward httplib2. (see Why Not httplib2)
It also has a head() method for this.
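A minimal sketch of resolving the final location this way (note that allow_redirects defaults to False for head(); the URL is a placeholder):

import requests

r = requests.head('http://example.com/some/path', allow_redirects=True)
print(r.url)      # the final URL after following redirects
print(r.history)  # the chain of intermediate redirect responses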
I don't want to use an HTML file; using only Django, I have to make a POST request, just like urllib2 sends a GET request.
Here's how you'd write the accepted answer's example using python-requests:
import requests

post_data = {'name': 'Gladys'}
response = requests.post('http://example.com', data=post_data)
content = response.content
Much more intuitive. See the Quickstart for more simple examples.
In Python 2, a combination of methods from urllib2 and urllib will do the trick. Here is how I post data using the two:
import urllib
import urllib2

post_data = [('name', 'Gladys')]  # a sequence of two-element tuples
result = urllib2.urlopen('http://example.com', urllib.urlencode(post_data))
content = result.read()
urlopen() is the method you use for opening URLs.
urlencode() converts its argument to a percent-encoded string.
The only thing you should look at now:
https://requests.readthedocs.io/en/master/
You can use urllib2 in Django. After all, it's still Python. To send a POST with urllib2, you can pass the data parameter (from the urllib2 documentation):
urllib2.urlopen(url[, data][, timeout])
[..] the HTTP request will be a POST instead of a GET when the data parameter is provided
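A minimal sketch, with a placeholder URL:

import urllib
import urllib2

data = urllib.urlencode({'name': 'value'})
response = urllib2.urlopen('http://example.com', data)  # data present, so this is a POST
print response.read()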
Pay attention: when you're using requests and make a POST request passing your dictionary in the data parameter, like this:

payload = {'param1': 1, 'param2': 2}
r = requests.post('https://domain.tld', data=payload)

you are passing the parameters form-encoded.

If you want to send a POST request with only JSON (the most popular content type in server-to-server integration), you need to provide a string in the data parameter. For JSON, import the json module and do it like this:

import json

payload = {'param1': 1, 'param2': 2}
r = requests.post('https://domain.tld', data=json.dumps(payload))

The documentation is here.

Or just use the json parameter with the data in a dict (this also sets the Content-Type: application/json header for you):

payload = {'param1': 1, 'param2': 2}
r = requests.post('https://domain.tld', json=payload)
I'm writing a Pythonic web API wrapper with a class like this:
import httplib2
import urllib

class apiWrapper:
    def __init__(self):
        self.http = httplib2.Http()

    def _http(self, url, method, data):
        '''
        I'm using this wrapper around the http object
        all the time inside the class
        '''
        params = urllib.urlencode(data)
        # httplib2's request() takes the method before the body
        response, content = self.http.request(url, method, params)
As you can see, I'm using the _http() method to simplify interaction with the httplib2.Http() object. This method is called quite often inside the class, and I'm wondering about the best way to interact with this object:
create the object in __init__ and then reuse it when the _http() method is called (as shown in the code above)
or create the httplib2.Http() object inside the method for every call of _http() (as shown in the code sample below)
import httplib2
import urllib

class apiWrapper:
    def __init__(self):
        pass  # no shared Http object here

    def _http(self, url, method, data):
        '''
        I'm using this wrapper around the http object
        all the time inside the class
        '''
        http = httplib2.Http()  # a new Http object on every call
        params = urllib.urlencode(data)
        response, content = http.request(url, method, params)
Supplying 'connection': 'close' in your headers should, according to the docs, close the connection after a response is received:

h = httplib2.Http()
headers = {'connection': 'close'}
resp, content = h.request(url, headers=headers)
You should keep the Http object if you want to reuse connections. It seems httplib2 is capable of reusing connections the way you use it in your first example, so this looks like a good approach.
At the same time, from a shallow inspection of the httplib2 code, it seems that httplib2 has no support for cleaning up unused connections, or even for noticing when a server has decided to close a connection it no longer wants. If that is indeed the case, it looks like a bug in httplib2 to me, so I would rather use the standard library (httplib) instead.