Setting the timeout on a urllib2.Request() call - Python

I need to set the timeout on urllib2.Request().
I do not use urllib2.urlopen() since I am using the data parameter of Request. How can I set this?

Although urlopen does accept a data param for POST, you can call urlopen on a Request object like this:
import urllib2

request = urllib2.Request('http://www.example.com', data)  # data is your POST payload
response = urllib2.urlopen(request, timeout=4)  # timeout in seconds
content = response.read()

Still, you can avoid calling urlopen directly and use an opener instead:
import urllib2

opener = urllib2.build_opener()
request = urllib2.Request('http://example.com')
response = opener.open(request, timeout=4)
response_result = response.read()
this works too :)
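If the timeout fires, urlopen raises an exception. A minimal sketch of handling it, assuming the standard-library behavior that connect timeouts surface wrapped in urllib2.URLError and read timeouts as socket.timeout:
import socket
import urllib2

request = urllib2.Request('http://www.example.com', data)  # data as above
try:
    response = urllib2.urlopen(request, timeout=4)
    content = response.read()
except urllib2.URLError as e:
    # a connect timeout is wrapped in URLError, with e.reason set to the socket error
    print 'request failed:', e.reason
except socket.timeout:
    # read() after a successful connect can raise socket.timeout directly
    print 'request timed out'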

Why not use the awesome requests? You'll save yourself a lot of time.
If you are worried about deployment, just copy it into your project.
Example with requests:
>>> import requests
>>> requests.post('http://github.com', data={'key': 'value'}, timeout=10)

Related

Trouble getting data from REST http service using requests package

This works fine, I can get data returned:
r = urllib2.Request("http://myServer.com:12345/myAction")
data = json.dumps(q)  # q is a Python dict
r.add_data(data)
r = urllib2.urlopen(r)
But doing the same with the requests package fails:
r = requests.get("http://myServer.com:12345/myAction", data=q)
r.text  # this returns a message saying the method is not allowed
It works if I make it a POST request: r = requests.post("http://myServer.com:12345/myAction", data=json.dumps(q))
But why?
According to the urllib2.urlopen documentation:
the HTTP request will be a POST instead of a GET when the data parameter is provided.
So r = urllib2.urlopen(r) is also making a POST request, which is why your requests.get does not work but requests.post does.
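To make the equivalence concrete, a small sketch (same server URL as above; q is assumed to be your dict):
import json, urllib2, requests

# urllib2 switches the method to POST because data is attached
r = urllib2.Request("http://myServer.com:12345/myAction", json.dumps(q))
response = urllib2.urlopen(r)

# requests makes the method explicit
response = requests.post("http://myServer.com:12345/myAction", data=json.dumps(q))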
Set up a session:
import requests

session = requests.Session()
r = session.get("http://myServer.com:12345/myAction", data=q)
print r.content  # or use r.raw

Is it possible to "refresh" a connection created with urllib2.urlopen?

I am fetching data from a URL using urllib2.urlopen:
from urllib2 import urlopen
...
conn = urlopen(url)
data = conn.read()
conn.close()
Suppose the data did not "come out" as I had expected.
What would be the best method for me to read it again?
I am currently repeating the whole process (open, read, close).
Is there a better way (some sort of connection-refresh perhaps)?
When you call urlopen on a URL, Python makes an HTTP GET request and returns the response; each of these request-response pairs is by nature a separate connection. You have to repeat the process for every URL you want to request, although you don't really have to close your urlopen response.
No, repeating the process is the only way to get new data.
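A minimal sketch of that idea; the looks_right check below is a hypothetical placeholder for whatever validation you do on the payload:
from urllib2 import urlopen

def fetch(url):
    # one complete request-response cycle
    conn = urlopen(url)
    try:
        return conn.read()
    finally:
        conn.close()

data = fetch(url)          # first attempt
if not looks_right(data):  # hypothetical check that the data "came out" right
    data = fetch(url)      # "refreshing" just means requesting again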
You should close the urllib response after use, so the data is refreshed the next time you open the URL:
import json, urllib

while 1:
    url = 'http://project/JsonVanner.php'
    response = urllib.urlopen(url)
    data = json.loads(response.read())
    for x in data:
        print x['Etat']
        if x['Etat'] == 'OFF':
            print('vanne fermer')  # French: valve closed
            print((int(x['IDVanne']) * 10) + 0)
        else:
            print('vanne ouverte')  # French: valve open
            print((int(x['IDVanne']) * 10) + 1)
    response.close()

Can't post to proxy form

I want to use urllib2 through a proxy http site, post to the form component having name="what", click submit, and return the resulting webpage as a string. I know many have asked this question before, see here for example. However, I couldn't get their solutions to work for my example code below:
url = "http://anonymouse.org/anonwww.html"
posturl = "www.google.ca"
values = {'what':posturl}
data = urllib.urlencode(values)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
html = response.read()
print html
piggybacking on Christian's answer:
requests is a very good library for this stuff... however, urllib2 also suffices:
import urllib2

def get_anon_content(url):
    # route the request through anonymouse.org's CGI gateway
    anon_url = 'http://anonymouse.org/cgi-bin/anon-www.cgi/%s' % url
    req = urllib2.Request(anon_url)
    response = urllib2.urlopen(req)
    content = response.read()
    return content

url = 'http://www.google.ca'
print get_anon_content(url)
In your case you can just use this URL:
http://anonymouse.org/cgi-bin/anon-www.cgi/http://www.google.ca
It's the same thing as using anonymouse, except you don't have to go to the site first; you just use the URL.
And next time, make it easier on yourself and use requests: you can get the same effect as urllib2 with like 4 lines (see the sketch below), so check that out.
And good luck :)
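For illustration, a minimal requests version of the same fetch, assuming the same anonymouse.org gateway URL as above (the timeout value is an arbitrary choice):
import requests

# fetch a page through the anonymouse.org gateway
anon_url = 'http://anonymouse.org/cgi-bin/anon-www.cgi/http://www.google.ca'
r = requests.get(anon_url, timeout=10)
print r.text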

using python urlopen for a url query

Using urlopen also for url queries seems obvious. What I tried is:
import urllib2
query='http://www.onvista.de/aktien/snapshot.html?ID_OSI=86627'
f = urllib2.urlopen(query)
s = f.read()
f.close()
However, for this specific URL query it fails with HTTP Error 403: Forbidden.
When entering this query in my browser, it works.
Also when using http://www.httpquery.com/ to submit the query, it works.
Do you have suggestions how to use Python right to grab the correct response?
Looks like it requires cookies (which you can handle with urllib2), but an easier way, if you're doing this, is to use requests:
import requests
session = requests.session()
r = session.get('http://www.onvista.de/aktien/snapshot.html?ID_OSI=86627')
This is generally a much easier and less-stressful method of retrieving URLs in Python.
requests will automatically store and re-use cookies for you. Creating a session is slightly overkill here, but is useful for when you need to submit data to login pages etc..., or re-use cookies across a site... etc...
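A short sketch of that point; the login URL and credentials are illustrative assumptions:
import requests

session = requests.session()
# cookies set by this response are stored on the session...
session.post('http://example.com/login', data={'user': 'me', 'pass': 'secret'})
# ...and sent automatically on subsequent requests
r = session.get('http://example.com/private')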
Using urllib2, it's something like:
import urllib2, cookielib

cookies = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies))
data = opener.open('http://www.onvista.de/aktien/snapshot.html?ID_OSI=86627').read()
It appears that the urllib2 default user agent is banned by the host. You can simply supply your own user agent string:
import urllib2
url = 'http://www.onvista.de/aktien/snapshot.html?ID_OSI=86627'
request = urllib2.Request(url, headers={"User-Agent" : "MyUserAgent"})
contents = urllib2.urlopen(request).read()
print contents

How to send a POST request using django?

I don't want to use an HTML form; I have to make the POST request from Django code itself, just like urllib2 sends a GET request.
Here's how you'd write the accepted answer's example using python-requests:
import requests

post_data = {'name': 'Gladys'}
response = requests.post('http://example.com', data=post_data)
content = response.content
Much more intuitive. See the Quickstart for simpler examples.
In Python 2, a combination of functions from urllib2 and urllib will do the trick. Here is how I post data using the two:
import urllib, urllib2

post_data = [('name', 'Gladys')]  # a sequence of two-element tuples
result = urllib2.urlopen('http://example.com', urllib.urlencode(post_data))
content = result.read()
urlopen() is the function you use for opening URLs.
urlencode() converts the arguments to a percent-encoded string.
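For illustration (the values are made up):
>>> import urllib
>>> urllib.urlencode([('name', 'Gladys'), ('city', 'New York')])
'name=Gladys&city=New+York'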
The only thing you should look at now:
https://requests.readthedocs.io/en/master/
You can use urllib2 in Django; after all, it's still Python. To send a POST with urllib2, you can pass the data parameter (from the urllib2 documentation):
urllib2.urlopen(url[, data][, timeout])
[..] the HTTP request will be a POST instead of a GET when the data parameter is provided
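A minimal sketch of doing that inside a Django view; the view name and target URL are illustrative assumptions:
import urllib, urllib2
from django.http import HttpResponse

def notify(request):
    # data present -> urllib2 issues a POST
    data = urllib.urlencode({'name': 'Gladys'})
    result = urllib2.urlopen('http://example.com', data, 10)  # 10s timeout
    return HttpResponse(result.read())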
Pay attention: when you're using requests and you make a POST request passing your dictionary in the data parameter, like this:
payload = {'param1': 1, 'param2': 2}
r = requests.post('https://domain.tld', data=payload)
you are passing the parameters form-encoded.
If you want to send a POST request with a JSON body (the most popular type in server-to-server integration), you need to provide a string in the data parameter. For JSON, import the json lib and do it like this:
import json

payload = {'param1': 1, 'param2': 2}
r = requests.post('https://domain.tld', data=json.dumps(payload))
The documentation is here.
OR:
Just use the json parameter with the data provided as a dict:
payload = {'param1': 1, 'param2': 2}
r = requests.post('https://domain.tld', json=payload)
