Is there any way to perform PUT request in tornado httpclient?
For example, is there any way to replace urllib with the Requests library?
Or maybe subclass my own client and inject the construction from this answer:
import urllib2
opener = urllib2.build_opener(urllib2.HTTPHandler)
request = urllib2.Request('http://example.org', data='your_put_data')
request.add_header('Content-Type', 'your/contenttype')
request.get_method = lambda: 'PUT'  # override the HTTP verb; urllib2 only picks GET or POST on its own
url = opener.open(request)
Any painless patches, hacks, suggestions?
I want this construction to work properly:
response = yield gen.Task(http_client.fetch, opt.site_url + '/api/user/', method="PUT", body=urlencode(pdata))
For now it isn't sending the body.
Nope, Tornado doesn't use urllib (presumably because it blocks). The trick to using httpclient for anything more complicated than a basic GET is to create an HTTPRequest.
Untested, but should work:
from tornado.httpclient import HTTPRequest
request = HTTPRequest(opt.site_url + '/api/user/', method="PUT", body=urlencode(pdata))
response = yield gen.Task(http_client.fetch, request)
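A fuller sketch of the same idea inside a coroutine, in case it helps; it is untested, and opt.site_url and pdata are the names from the question (on Python 3, urlencode lives in urllib.parse instead):
from urllib import urlencode  # urllib.parse.urlencode on Python 3

from tornado import gen
from tornado.httpclient import AsyncHTTPClient, HTTPRequest

@gen.coroutine
def put_user(pdata):
    # Build the request explicitly so the method and body can be set.
    request = HTTPRequest(opt.site_url + '/api/user/', method="PUT",
                          body=urlencode(pdata))
    http_client = AsyncHTTPClient()
    response = yield http_client.fetch(request)
    raise gen.Return(response)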
Using Python and Requests, I am trying to upload a file to an API using multipart/form-data POST requests with a custom authentication mechanism (Just adding an 'Authentication-Token' header).
Here is the code I use:
import requests
from requests.auth import AuthBase
class MyAuth(AuthBase):
    def __init__(self, token):
        self.token = token

    def __call__(self, r):
        r.headers['Authentication-Token'] = self.token
        return r

token = "175a5607d2e79109539b490f0f8ffe60"
url = "https://my-ap.com/files"

r = requests.post(url,
                  files={'file': open('qsmflkj.jpg', 'rb')},
                  data={'id': 151},
                  auth=MyAuth(token))
Unfortunately, this request causes the following exception on the server (the backend uses the Spring framework):
org.apache.tomcat.util.http.fileupload.FileUploadException: the request was rejected because no multipart boundary was found
I have tried a lot of things, like adding the header myself, but this seems to be the "right" way to do it, and it fails. I know that the API is used by various mobile clients, so it must be possible to upload pictures there. What could I do differently to be able to use this API from Python?
In the end, I never managed to do this with Requests, but I succeeded with urllib2 and the poster library:
from poster.encode import multipart_encode
from poster.streaminghttp import register_openers
import urllib2

# Register poster's streaming handlers with urllib2.
register_openers()

token = "175a5607d2e79109539b490f0f8ffe60"

with open('qsmflkj.jpg', 'rb') as f:
    # multipart_encode builds the body generator and the matching
    # multipart headers (including the boundary).
    datagen, headers = multipart_encode({"file": f})
    headers['Authentication-Token'] = token
    request = urllib2.Request("https://myserver.com/files",
                              datagen, headers)
    response = urllib2.urlopen(request)
I would like to authenticate to linkedin.com and get some content.
I use the requests Python module and do something like this:
import requests
from BeautifulSoup import BeautifulSoup

client = requests.Session()

HOMEPAGE_URL = 'https://www.linkedin.com'
LOGIN_URL = HOMEPAGE_URL + '/uas/login-submit'

html = client.get(HOMEPAGE_URL).content
soup = BeautifulSoup(html)
csrf = soup.find(id="loginCsrfParam-login")['value']

login_information = {
    'session_key': 'my_login',
    'session_password': 'my_password',
    'loginCsrfParam': csrf,
}

client.post(LOGIN_URL, data=login_information)
content = client.get(HOMEPAGE_URL + '/vsearch/c').content
And I got the content, all right.
But now I want to use the Tornado framework to do the same work.
I get loginCsrfParam in a similar way and make a POST request:
login_information = {
    'session_key': 'my_login',
    'session_password': 'my_password',
    'loginCsrfParam': csrf,
}
body = urllib.urlencode(login_information)

http_client.fetch(LOGIN_URL,
                  handle_request_post,
                  method='POST',
                  headers=None,
                  body=body)
And after the response arrives:
http_client.fetch(HOMEPAGE_URL + '/vsearch/c',
                  handle_request_get_content,
                  method='GET')
But I simply get the login page back.
What's wrong?
Tornado's AsyncHTTPClient doesn't have any concept of a session; each request is independent. It looks like requests.Session is transferring something from the login request to the vsearch request, probably cookies. You'll need to handle the Set-Cookie header from the login request and transfer the cookies to any following requests (perhaps using the http.cookiejar module).
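A minimal sketch of that approach, assuming Python 3 and Tornado's coroutine interface; LOGIN_URL, HOMEPAGE_URL, and body are the names from the question, and error handling is omitted:
from http.cookies import SimpleCookie

from tornado import gen
from tornado.httpclient import AsyncHTTPClient

@gen.coroutine
def login_and_search():
    client = AsyncHTTPClient()
    login_response = yield client.fetch(LOGIN_URL, method='POST', body=body)
    # Collect every Set-Cookie header from the login response.
    jar = SimpleCookie()
    for header in login_response.headers.get_list('Set-Cookie'):
        jar.load(header)
    # Serialize the stored cookies into a single Cookie header.
    cookie_header = '; '.join('%s=%s' % (name, morsel.value)
                              for name, morsel in jar.items())
    search_response = yield client.fetch(HOMEPAGE_URL + '/vsearch/c',
                                         headers={'Cookie': cookie_header})
    raise gen.Return(search_response.body)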
I need to accomplish a login task in my own project. Luckily, I found that someone has already done it.
Here is the related code:
import re, urllib, urllib2, cookielib

class Login():
    cj = cookielib.LWPCookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))

    def __init__(self, name='', password='', domain=''):
        self.name = name
        self.password = password
        self.domain = domain
        urllib2.install_opener(self.opener)

    def login(self):
        params = {'domain': self.domain, 'email': self.name, 'password': self.password}
        req = urllib2.Request(
            website_url,
            urllib.urlencode(params)
        )
        self.openrate = self.opener.open(req)
        print self.openrate.geturl()
        info = self.openrate.read()
I've tested the code, and it works great (judging by info).
Now I want to port it to Python 3 and use the requests library instead of urllib2.
My thoughts:
since the original code uses an opener, I think its equivalent in requests is requests.Session, though I'm not sure
Am I supposed to pass in a jar = cookiejar.CookieJar() when making the request? I'm not sure about that either.
I've tried something like
import requests
from http import cookiejar
from urllib.parse import urlencode
jar = cookiejar.CookieJar()
s = requests.Session()
s.post(
    website_url,
    data=urlencode(params),
    allow_redirects=True,
    cookies=jar
)
Also, following the answer in Putting a `Cookie` in a `CookieJar`, I tried making the same request again, but none of these worked.
That's why I'm here asking for help.
Will someone show me the right way to do this? Thank you~
An opener and a Session are not entirely analogous, but for your particular use-case they match perfectly.
You do not need to pass a CookieJar when using a Session: Requests will automatically create one, attach it to the Session, and then persist the cookies to the Session for you.
You don't need to urlencode the data: requests will do that for you.
allow_redirects is True by default, you don't need to pass that parameter.
Putting all of that together, your code should look like this:
import requests
s = requests.Session()
s.post(website_url, data=params)
Any future requests made using the Session you just created will automatically have cookies applied to them if they are appropriate.
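For reference, a minimal sketch of the full port along these lines; it is untested, website_url stays the same undefined placeholder as in the original, and the field names are carried over unchanged:
import requests

class Login:
    def __init__(self, name='', password='', domain=''):
        # The Session replaces the opener: it keeps cookies across requests.
        self.session = requests.Session()
        self.params = {'domain': domain, 'email': name, 'password': password}

    def login(self):
        # requests form-encodes the dict itself, so urlencode is not needed.
        response = self.session.post(website_url, data=self.params)
        print(response.url)   # the equivalent of self.openrate.geturl()
        return response.text  # the equivalent of self.openrate.read()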
I need to set the timeout on urllib2.Request().
I do not use urllib2.urlopen(), since I am using the data parameter of Request. How can I set the timeout?
Although urlopen does accept a data param for POST, you can call urlopen on a Request object like this:
import urllib2
request = urllib2.Request('http://www.example.com', data)
response = urllib2.urlopen(request, timeout=4)
content = response.read()
Still, you can avoid urlopen by building an opener and proceeding like this:
request = urllib2.Request('http://example.com')
opener = urllib2.build_opener()  # any opener works; build_opener() gives the default handlers
response = opener.open(request, timeout=4)
response_result = response.read()
this works too :)
Why not use the awesome requests? You'll save yourself a lot of time.
If you are worried about deployment, just copy it into your project.
E.g. with requests:
>>> requests.post('http://github.com', data={your data here}, timeout=10)
I need to have a proxy that acts as an intermediary to fetch images. An example would be: my server requests domain1.com/?url=domain2.com/image.png, and the domain1.com server responds with the data found at domain2.com/image.png.
Essentially I want to pass to the proxy the URL I want fetched, and have the proxy server respond with that resource.
Any suggestions on where to start on this?
I need something very easy to use or implement as I'm very much a beginner at all of this.
Most solutions I have found in Python and/or Django have the proxy act as a "translator", i.e. domain1.com/image.png translates to domain2.com/image.png, which is obviously not the same.
I currently have the following code, but fetching images results in garbled data:
import httplib2
from django.conf.urls.defaults import *
from django.http import HttpResponse

def proxy(request, url):
    conn = httplib2.Http()
    if request.method == "GET":
        url = request.GET['url']
        resp, content = conn.request(url, request.method)
        return HttpResponse(content)
Old question, but for future googlers, I think this is what you want:
import urllib2
from django.http import HttpResponse

# proxies the google logo
def test(request):
    url = "http://www.google.com/logos/classicplus.png"
    req = urllib2.Request(url)
    response = urllib2.urlopen(req)
    return HttpResponse(response.read(), mimetype="image/png")
A very simple Django proxy view with requests and StreamingHttpResponse:
import requests
from django.http import StreamingHttpResponse

def my_proxy_view(request):
    url = request.GET['url']
    response = requests.get(url, stream=True)
    return StreamingHttpResponse(
        response.raw,
        content_type=response.headers.get('content-type'),
        status=response.status_code,
        reason=response.reason)
The advantage of this approach is that you don't need to load the complete file into memory before streaming the content to the client.
As you can see, it forwards some response headers. Depending on your needs, you may want to forward the request headers as well; for example:
response = requests.get(url, stream=True,
                        headers={'user-agent': request.headers.get('user-agent')})
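In case it helps, a minimal sketch of wiring such a view into a urlconf; the 'proxy/' path is my own choice and assumes a modern Django project:
from django.urls import path

from .views import my_proxy_view

urlpatterns = [
    # Called as /proxy/?url=<remote-url>
    path('proxy/', my_proxy_view),
]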
If you need something more complete than my previous answer, you can use this class:
import requests
from django.http import StreamingHttpResponse

class ProxyHttpResponse(StreamingHttpResponse):
    def __init__(self, url, headers=None, **kwargs):
        upstream = requests.get(url, stream=True, headers=headers)
        kwargs.setdefault('content_type', upstream.headers.get('content-type'))
        kwargs.setdefault('status', upstream.status_code)
        kwargs.setdefault('reason', upstream.reason)
        super().__init__(upstream.raw, **kwargs)
        for name, value in upstream.headers.items():
            self[name] = value
You can use this class like so:
def my_proxy_view(request):
    url = request.GET['url']
    return ProxyHttpResponse(url, headers=request.headers)
The advantage of this version is that you can reuse it in multiple views. Also, it forwards all headers, and you can easily extend it to add or exclude some other headers.
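As one possible extension along those lines, a sketch that excludes hop-by-hop headers, which generally should not be forwarded; the class name and header list are my own choices:
class FilteredProxyHttpResponse(ProxyHttpResponse):
    # Headers that only make sense on the upstream connection.
    HOP_BY_HOP = ('Connection', 'Keep-Alive', 'Transfer-Encoding', 'Upgrade')

    def __init__(self, url, headers=None, **kwargs):
        super().__init__(url, headers=headers, **kwargs)
        for name in self.HOP_BY_HOP:
            if name in self:
                del self[name]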
If the file you're fetching and returning is an image, you'll need to change the mimetype of your HttpResponse object.
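For example, with the httplib2-based view from the question, a one-line sketch (httplib2's resp is dict-like with lowercase keys; the 'image/png' fallback is my own assumption):
return HttpResponse(content, content_type=resp.get('content-type', 'image/png'))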
Use mechanize; it allows you to choose a proxy and act like a browser, making it easy to change the user agent, go back and forth in history, and handle authentication or cookies.