How to use the same cookies in multiple requests in Python?

I am using this code:
import urllib2

def req(url, postfields):
    proxy_support = urllib2.ProxyHandler({"http": "127.0.0.1:8118"})
    opener = urllib2.build_opener(proxy_support)
    opener.addheaders = [('User-agent', 'Mozilla/5.0')]
    return opener.open(url).read()
to make a simple HTTP GET request (using Tor as the proxy).
Now I would like to know how to make multiple requests using the same cookies.
For example:
req('http://loginpage', 'postfields')
source = req('http://pageforloggedinonly', 0)
#do stuff with source
req('http://anotherpageforloggedinonly', 'StuffFromSource')
I know that my function req doesn't support POST (yet), but I have sent postfields using httplib, so I guess I can figure that out by myself. What I don't understand is how to use cookies. The examples I've seen all make a single request only; I want to reuse the cookies from the first login request in the succeeding requests, or save/load the cookies from a file (like curl does), which would make everything easier.
The code I posted is only to illustrate what I am trying to achieve; I think I will use httplib(2) for the final app.
UPDATE:
cookielib.LWPCookieJar worked fine; here's a sample I did for testing:
import urllib2, cookielib, os
def request(url, postfields, cookie):
    urlopen = urllib2.urlopen
    cj = cookielib.LWPCookieJar()
    Request = urllib2.Request
    if os.path.isfile(cookie):
        cj.load(cookie)
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
    urllib2.install_opener(opener)
    txheaders = {'User-agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}
    req = Request(url, postfields, txheaders)
    handle = urlopen(req)
    cj.save(cookie)
    return handle.read()
print request('http://google.com', None, 'cookie.txt')

The cookielib module is what you need to do this. There's a nice tutorial with some code samples.
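To make that concrete, here is a minimal sketch of the idea, assuming urllib2 as in the question (the URLs and form field names are placeholders): build one opener with an HTTPCookieProcessor and reuse it, so the cookies set by the login response are sent automatically on the follow-up requests.
import urllib, urllib2, cookielib

cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
opener.addheaders = [('User-agent', 'Mozilla/5.0')]

# POST the login form; the session cookies end up in cj
login_data = urllib.urlencode({'user': 'name', 'pass': 'secret'})
opener.open('http://loginpage', login_data)

# reuse the same opener: the jar is shared, so the session persists
source = opener.open('http://pageforloggedinonly').read()
Swap CookieJar for LWPCookieJar plus load()/save() calls if you want the file-based behaviour from the update above.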

Related

Send back a cookie in a request

I've gone through different responses to my question but still haven't managed to get it working :(.
I'm logging into a site using Python & mechanize; my code looks like this:
import mechanize, cookielib

br = mechanize.Browser()
# Cookie Jar
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)
...
r = br.open('http://...')
html = r.read()
form = br.forms().next()
br.form = form
br.submit()
Submitting the form is not a problem; the problem is that when I call br.open() again to perform a GET request, Python doesn't send back the PHPSESSID cookie (I checked this in Wireshark). Any ideas?
Thanks!
import os, cookielib, urllib2

ckjar = cookielib.MozillaCookieJar(os.path.join(r'C:\Documents and Settings\tom\Application Data\Mozilla\Firefox\Profiles\h5m61j1i.default', 'cookies.txt'))
ckjar.load()  # MozillaCookieJar does not load the file automatically

req = urllib2.Request(url, postdata, header)  # url, postdata and header defined elsewhere
req.add_header('User-Agent',
               'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)')

opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(ckjar))
f = opener.open(req)
htm = f.read()
f.close()

Check the request headers with urllib2

I am trying to fetch a page with Python, using a cookie jar.
import cookielib, urllib2

jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
opener.addheaders = [('User-agent', 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)')]
response = opener.open('http://www.example.com/')
print response.info()
Using the above, I can get the response headers; but other than Wireshark, can I see the request headers, i.e. what urllib2 was actually sending?
No, though the docs indicate that there isn't much added. You could set up an HTTP server in Python, send the request to it first, pull out the headers, and then check those. But if you have Wireshark, it's less work to just use that.
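If you do want to try the local-server route, here is a rough one-shot sketch (assuming Python 2; the port number is arbitrary) that just prints whatever request headers arrive:
import BaseHTTPServer

class EchoHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        print self.headers          # the raw request headers the client sent
        self.send_response(200)
        self.end_headers()

# serve exactly one request, then return
BaseHTTPServer.HTTPServer(('localhost', 8080), EchoHandler).handle_request()
Point opener.open('http://localhost:8080/') at it from another process and the User-agent header you set should show up in the output.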

Python urllib2 not returning HTTPS page

When I try to post data from an HTTP page to an HTTPS page, urllib2 does not return the desired HTTPS page; instead the website asks me to enable cookies.
To get the first HTTP page:
import urllib2

proxyHandler = urllib2.ProxyHandler({'http': "http://proxy:port" })
opener = urllib2.build_opener(proxyHandler)
opener.addheaders = [('User-agent', 'Mozilla/5.0 (Windows NT 6.1; rv:8.0) Gecko/20100101 Firefox/8.0')]
urllib2.install_opener(opener)
resp = urllib2.urlopen(url)
content = resp.read()
When I extract data from the above page and post it to the second HTTPS page, urllib2 returns status 200, but the page asks me to enable cookies.
I've checked the post data; it's fine. I'm getting cookies from the website, but I'm not sure whether they are being sent with the next request, since I read in the Python docs that urllib2 handles cookies automatically.
To get the second HTTPS page:
resp = urllib2.urlopen(url, data=postData)
content = resp.read()
I also tried setting the proxy handler to the following, as suggested in a reply to a similar problem somewhere on Stack Overflow, but got the same result:
proxyHandler = urllib2.ProxyHandler({'https': "http://proxy:port" })
urllib2 "handles" cookies in responses but it doesn't not automatically store them and resend them with later requests. You'll need to use the cooklib module for that.
There are some examples in the documentation that show how it works with urllib2.
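As a rough sketch of how that fits your case (assuming the same proxy as above; url2 and the field names are placeholders, not from the question): build one opener that carries both the proxy handler and a cookie processor, so cookies from the first HTTP page are replayed on the HTTPS POST.
import urllib2, cookielib

cj = cookielib.CookieJar()
proxyHandler = urllib2.ProxyHandler({'http': "http://proxy:port"})
opener = urllib2.build_opener(proxyHandler, urllib2.HTTPCookieProcessor(cj))
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
urllib2.install_opener(opener)

# first (HTTP) page: cookies from the response are stored in cj
content = urllib2.urlopen(url).read()

# second (HTTPS) page: the stored cookies are sent along with the POST
content = urllib2.urlopen(url2, data=postData).read()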

Login with Python - Megaupload

I'm trying to fix a program which can log in to my MU account and retrieve some data.
I don't know what I am doing wrong. Here's the code:
#!/usr/bin/env python
import urllib, urllib2, cookielib
username = 'username'
password = 'password'
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
login_data = urllib.urlencode({'username' : username, 'password' : password})
opener.open('http://megaupload.com/index.php?c=login', login_data)
resp = opener.open('http://www.megaupload.com/index.php?c=filemanager')
print resp.read()
Thanks for any answer!
You can simulate filling in the form.
For that you can use the mechanize library, which is based on the Perl module WWW::Mechanize.
#!/usr/bin/env python
import urllib, urllib2, cookielib, mechanize
username = 'username'
password = 'password'
br = mechanize.Browser()
cj = cookielib.CookieJar()
br.set_cookiejar(cj)
br.set_handle_robots(False)
br.addheaders = [('User-agent', 'Mozilla/5.0 (Windows; U; Windows NT 6.1; fr; rv:1.9.2) Gecko/20100115 Firefox/3.6')]
br.open('http://www.megaupload.com/?c=login')
br.select_form('loginfrm')
br.form['username'] = username
br.form['password'] = password
br.submit()
resp = br.open('http://www.megaupload.com/index.php?c=filemanager')
print resp.read()
See Use mechanize to log into megaupload
Okay, I just implemented it myself and it seems you just forgot one value. That's why I always use TamperData or something similar to check what my browser is actually sending to the server; it's way easier and shorter than going through the HTML.
Anyway, just add 'redir': 1 to your dict and it'll work:
import http.cookiejar
import urllib.request, urllib.parse

if __name__ == '__main__':
    username, password = 'username', 'password'  # as in the question
    cj = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
    login_data = urllib.parse.urlencode({'username': username, 'password': password,
                                         'login': 1, 'redir': 1}).encode("UTF-8")  # POST data must be bytes in Python 3
    response = opener.open("http://www.megaupload.com/?c=login", login_data)
    with open("test.txt", "w") as file:
        file.write(response.read().decode("UTF-8"))  # so we can compare the resulting html easily
Although I must say I'll have a look at mechanize and co. now; I do something like this often enough that it could be quite worthwhile. I can't stress enough, though, that the most important help is still a browser plugin that lets you check the sent data ;)
You might have more luck with mechanize or twill, which are designed to streamline these kinds of processes. Otherwise, I think your opener is missing at least one important component: something to process cookies. Here's a bit of code I have lying around from the last time I did this:
# build opener with HTTPCookieProcessor
import cookielib, urllib2

cookie_jar = cookielib.MozillaCookieJar('tasks.cookies')
o = urllib2.build_opener(
    urllib2.HTTPRedirectHandler(),
    urllib2.HTTPHandler(debuglevel=0),
    urllib2.HTTPSHandler(debuglevel=0),
    urllib2.HTTPCookieProcessor(cookie_jar)
)
My guess is to add the c=login name/value pair to login_data rather than including it directly on the URL.
You're probably also breaking a TOS/EULA, but I can't say I care that much.
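To illustrate that suggestion (the field names and URL handling are only a guess based on the question and the other answer; username and password are defined as in the question), the POST body would carry c=login alongside the credentials, sent through a cookie-aware opener like o above:
import urllib

login_data = urllib.urlencode({
    'c': 'login',             # moved off the URL into the POST body
    'redir': 1,
    'username': username,
    'password': password,
})
resp = o.open('http://www.megaupload.com/index.php', login_data)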

How can I change a user agent string programmatically?

I would like to write a program that changes my user agent string.
How can I do this in Python?
I assume you mean a User-Agent string in an HTTP request? This is just an HTTP header that gets sent along with your request.
Using Python's urllib2:
import urllib2
url = 'http://foo.com/'
# add a header to define a custom User-Agent
headers = { 'User-Agent' : 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)' }
req = urllib2.Request(url, None, headers)  # data=None keeps this a GET request
response = urllib2.urlopen(req).read()
In urllib, it's done like this:
import urllib
class AppURLopener(urllib.FancyURLopener):
version = "MyStrangeUserAgent"
urllib._urlopener = AppURLopener()
and then just use urllib.urlopen normally. In urllib2, create the request object with req = urllib2.Request(...), passing headers=somedict to set all the headers you want (including the user agent), and then call urllib2.urlopen(req).
Other ways of sending HTTP requests have other ways of specifying headers, of course.
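For instance, with the lower-level httplib module (assuming Python 2), the headers are just a dict passed to request():
import httplib

conn = httplib.HTTPConnection('example.com')
conn.request('GET', '/', headers={'User-Agent': 'MyStrangeUserAgent'})
resp = conn.getresponse()
print resp.status, resp.reason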
In Python you can use urllib to download web pages and set the version attribute to change the user agent.
There is a very good example on http://wolfprojects.altervista.org/changeua.php
Here is an example copied from that page:
>>> from urllib import FancyURLopener
>>> class MyOpener(FancyURLopener):
...     version = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11'
>>> myopener = MyOpener()
>>> page = myopener.open('http://www.google.com/search?q=python')
>>> page.read()
[…]Results <b>1</b> - <b>10</b> of about <b>81,800,000</b> for <b>python</b>[…]
urllib2 is nice because it's built in, but I tend to use mechanize when I have the choice. It extends a lot of urllib2's functionality (though much of it has been added to Python in recent years). Anyhow, if that's what you're using, here's an example from their docs on how you'd change the user-agent string:
import mechanize
cookies = mechanize.CookieJar()
opener = mechanize.build_opener(mechanize.HTTPCookieProcessor(cookies))
opener.addheaders = [("User-agent", "Mozilla/5.0 (compatible; MyProgram/0.1)"),
("From", "responsible.person#example.com")]
Best of luck.
As mentioned in the answers above, the User-Agent field in the HTTP request header can be changed using built-in modules in Python such as urllib2. At the same time, it is also important to analyze what exactly the web server sees. A recent post on user agent detection gives sample code and output describing what the web server sees when a programmatic request is sent.
If you want to change the user agent string you send when opening web pages, google around for a Firefox plugin. ;) For example, I found this one. Or you could write a proxy server in Python, which changes all your requests independent of the browser.
My point is, changing the string is going to be the easy part; your first question should be, where do I need to change it? If you already know that (at the browser? proxy server? on the router between you and the web servers you're hitting?), we can probably be more helpful. Or, if you're just doing this inside a script, go with any of the urllib answers. ;)
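If the proxy route is the one you need, here is a very rough sketch of the idea (assuming Python 2, plain HTTP GETs only, no HTTPS support; the port and the replacement string are made up): a tiny forwarding proxy that swaps in its own User-Agent before passing each request on.
import BaseHTTPServer, urllib2

NEW_AGENT = 'MyCustomAgent/1.0'   # whatever you want remote servers to see

class RewritingProxy(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        # when a browser talks to a proxy, self.path holds the absolute URL
        req = urllib2.Request(self.path, headers={'User-Agent': NEW_AGENT})
        resp = urllib2.urlopen(req)
        self.send_response(resp.getcode())
        self.send_header('Content-Type', resp.info().gettype())
        self.end_headers()
        self.wfile.write(resp.read())

BaseHTTPServer.HTTPServer(('localhost', 8888), RewritingProxy).serve_forever()
Point the browser's HTTP proxy setting at localhost:8888 and every outgoing GET gets the replacement User-Agent.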
Updated for Python 3.2 (py3k):
import urllib.request
headers = { 'User-Agent' : 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)' }
url = 'http://www.google.com'
request = urllib.request.Request(url, None, headers)  # data=None keeps this a GET request
response = urllib.request.urlopen(request).read()
