reading cookies file created by curl - python

I have the following cookie saved by curl (in test.txt, tab-separated, this editor doesn't preserve tabs):
# Netscape HTTP Cookie File
# http://curl.haxx.se/rfc/cookie_spec.html
# This file was generated by libcurl! Edit at your own risk.
#HttpOnly_my-example.com FALSE / FALSE 0 _rails-root_session test
I'm trying to read it with the following code:
import sys

if sys.version_info < (3,):
    from cookielib import Cookie, MozillaCookieJar
else:
    from http.cookiejar import Cookie, MozillaCookieJar

def load_cookies_from_mozilla(filename):
    ns_cookiejar = MozillaCookieJar()
    ns_cookiejar.load(filename, ignore_discard=True)
    return ns_cookiejar

cookies = load_cookies_from_mozilla("test.txt")
print(len(cookies))
It outputs 0 (unable to read the cookie).
If I manually modify my cookie to the following line (removing the HttpOnly flag and changing the 0 to an empty string for the expiration time; again, tab-separated):
my-example.com FALSE / FALSE _rails-root_session test
then it outputs 1 (successfully read the cookie).
What needs to change in my Python code to read the original cookie line? And preferably, how can I save it back in the same format (with the HttpOnly flag and with 0 instead of an empty string for a never-expiring cookie)?
Thanks.

This appears to be an open bug: https://bugs.python.org/issue2190.
This bug report contains a link to a workaround: https://gerrit.googlesource.com/git-repo/+/master/subcmds/sync.py#995
In that linked code, the developer creates a temporary cookies file, removes the "#HttpOnly_" prefixes, and then creates a cookiejar with that temporary file.
tmpcookiefile = tempfile.NamedTemporaryFile()
tmpcookiefile.write("# HTTP Cookie File")
try:
    with open(cookiefile) as f:
        for line in f:
            if line.startswith("#HttpOnly_"):
                line = line[len("#HttpOnly_"):]
            tmpcookiefile.write(line)
    tmpcookiefile.flush()

    cookiejar = cookielib.MozillaCookieJar(tmpcookiefile.name)
    try:
        cookiejar.load()
    except cookielib.LoadError:
        cookiejar = cookielib.CookieJar()
finally:
    tmpcookiefile.close()
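For Python 3, the same workaround can be written as follows. This is a minimal sketch assuming the question's test.txt; the function name load_curl_cookies is my own:

import tempfile
from http.cookiejar import MozillaCookieJar

def load_curl_cookies(cookiefile):
    # Copy the curl file, stripping the "#HttpOnly_" prefix that trips up MozillaCookieJar.
    with tempfile.NamedTemporaryFile(mode="w+", suffix=".txt", delete=False) as tmp:
        tmp.write("# Netscape HTTP Cookie File\n")
        with open(cookiefile) as f:
            for line in f:
                if line.startswith("#HttpOnly_"):
                    line = line[len("#HttpOnly_"):]
                tmp.write(line)
        tmpname = tmp.name
    jar = MozillaCookieJar()
    # ignore_expires keeps session cookies whose expiry column is 0
    jar.load(tmpname, ignore_discard=True, ignore_expires=True)
    return jar

print(len(load_curl_cookies("test.txt")))  # should now print 1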

I tested your code and modified it, and now it works.
First, in the cookie file you have to remove the '#' before your cookie line; a leading '#' marks the line as a comment, so the data after it is ignored.
Second, the 0 in the cookie is the expiry time: 0 means expired now, so you can change the 0 to an empty string or a later time, but I suggest you use the argument ignore_expires=True instead. The official docs say:
ignore_discard: save even cookies set to be discarded.
ignore_expires: save even cookies that have expired. The file is overwritten if it already exists.
and the resulting code is:
import sys

if sys.version_info < (3,):
    from cookielib import Cookie, MozillaCookieJar
else:
    from http.cookiejar import Cookie, MozillaCookieJar

def load_cookies_from_mozilla(filename):
    ns_cookiejar = MozillaCookieJar()
    ns_cookiejar.load(filename, ignore_discard=True, ignore_expires=True)
    return ns_cookiejar

cookies = load_cookies_from_mozilla("test.txt")
print(len(cookies))
See this link for more detail:
Using cookies.txt file with Python Requests

Related

How to use existing cookie file in Python request? [duplicate]

I'm trying to access an authenticated site using a cookies.txt file (generated with a Chrome extension) with Python Requests:
import requests, cookielib
cj = cookielib.MozillaCookieJar('cookies.txt')
cj.load()
r = requests.get(url, cookies=cj)
It doesn't throw any error or exception, but yields the login screen, incorrectly. However, I know that my cookie file is valid, because I can successfully retrieve my content using it with wget. Any idea what I'm doing wrong?
Edit:
I'm tracing cookielib.MozillaCookieJar._really_load and can verify that the cookies are correctly parsed (i.e. they have the correct values for the domain, path, secure, etc. tokens). But as the transaction is still resulting in the login form, it seems that wget must be doing something additional (as the exact same cookies.txt file works for it).
MozillaCookieJar inherits from FileCookieJar which has the following docstring in its constructor:
Cookies are NOT loaded from the named file until either the .load() or
.revert() method is called.
You need to call the .load() method then.
Also, as Jermaine Xu noted, the first line of the file needs to contain either the # Netscape HTTP Cookie File or # HTTP Cookie File string. Files generated by the plugin you use do not contain such a string, so you have to insert it yourself. I raised an appropriate bug at http://code.google.com/p/cookie-txt-export/issues/detail?id=5
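If your export lacks that first line, here's a quick sketch of my own to prepend it (the filename cookies.txt is assumed):

with open('cookies.txt') as f:
    content = f.read()
# MozillaCookieJar checks for this magic header before parsing
if not content.startswith('# Netscape HTTP Cookie File') and not content.startswith('# HTTP Cookie File'):
    with open('cookies.txt', 'w') as f:
        f.write('# Netscape HTTP Cookie File\n' + content)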
EDIT
Session cookies are saved with 0 in the 5th column. If you don't pass ignore_expires=True to the load() method, all such cookies are discarded when loading from a file.
File session_cookie.txt:
# Netscape HTTP Cookie File
.domain.com TRUE / FALSE 0 name value
Python script:
import cookielib
cj = cookielib.MozillaCookieJar('session_cookie.txt')
cj.load()
print len(cj)
Output:
0
EDIT 2
Although we managed to get cookies into the jar above, they are subsequently discarded by cookielib because they still have a 0 value in the expires attribute. To prevent this, we have to set the expiry to some future time, like so:
import time

for cookie in cj:
    # set cookie expire date to 14 days from now
    cookie.expires = time.time() + 14 * 24 * 3600
EDIT 3
I checked both wget and curl, and both use a 0 expiry time to denote session cookies, which means it's the de facto standard. However, Python's implementation uses an empty string for the same purpose, hence the problem raised in the question. I think Python's behavior in this regard should be in line with what wget and curl do, and that's why I raised the bug at http://bugs.python.org/issue17164
I'll note that replacing 0s with empty strings in the 5th column of the input file and passing ignore_discard=True to load() is the alternate way of solving the problem (no need to change expiry time in this case).
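A sketch of that alternate way, done programmatically (Python 3 here; the preprocessing step is my own, assuming a tab-separated cookies.txt):

import tempfile
from http.cookiejar import MozillaCookieJar

# Rewrite "0" in the expiry column to an empty string so the loader
# treats those lines as session cookies instead of expired ones.
with open('cookies.txt') as src, tempfile.NamedTemporaryFile(
        mode='w', suffix='.txt', delete=False) as dst:
    for line in src:
        fields = line.rstrip('\n').split('\t')
        if len(fields) == 7 and fields[4] == '0':
            fields[4] = ''
        dst.write('\t'.join(fields) + '\n')
    tmpname = dst.name

cj = MozillaCookieJar()
cj.load(tmpname, ignore_discard=True)
print(len(cj))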
I tried taking into account everything that Piotr Dobrogost had valiantly figured out about MozillaCookieJar but to no avail. I got fed up and just parsed the damn cookies.txt myself and now all is well:
import re
import requests

def parseCookieFile(cookiefile):
    """Parse a cookies.txt file and return a dictionary of key value pairs
    compatible with requests."""
    cookies = {}
    with open(cookiefile, 'r') as fp:
        for line in fp:
            if not re.match(r'^\#', line):
                lineFields = line.strip().split('\t')
                cookies[lineFields[5]] = lineFields[6]
    return cookies

cookies = parseCookieFile('cookies.txt')

import pprint
pprint.pprint(cookies)

r = requests.get('https://example.com', cookies=cookies)
This worked for me:
from http.cookiejar import MozillaCookieJar
from pathlib import Path
import requests
cookies = Path('/Users/name/cookies.txt')
jar = MozillaCookieJar(cookies)
jar.load()
requests.get('https://path.to.site.com', cookies=jar)
<Response [200]>
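If you need the cookies across several requests, the same jar can be attached to a Session; as far as I know, requests accepts any cookielib-compatible jar there. A sketch, reusing the path above:

import requests
from http.cookiejar import MozillaCookieJar

s = requests.Session()
jar = MozillaCookieJar('/Users/name/cookies.txt')
jar.load(ignore_discard=True, ignore_expires=True)
s.cookies = jar  # every request on this session now sends (and updates) these cookies
r = s.get('https://path.to.site.com')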
I tried editing Tristan's answer to add some info, but it seems the SO edit queue is full, so I am writing this answer instead, since I have struggled quite a bit with using existing cookies with Python Requests.
First, get the cookies from Chrome. The easiest way is to use an extension called 'Get cookies.txt':
https://chrome.google.com/webstore/detail/get-cookiestxt/bgaddhkoddajcdgocldbbfleckgcbcid/related
After downloading those cookies, use the below code to make sure that you are able to parse the file without any issues.
import re, requests, pprint

def parseCookieFile(cookiefile):
    """Parse a cookies.txt file and return a dictionary of key value pairs
    compatible with requests."""
    cookies = {}
    with open(cookiefile, 'r') as fp:
        for line in fp:
            if not re.match(r'^\#', line):
                lineFields = re.findall(r'[^\s]+', line)  # capturing anything but empty space
                try:
                    cookies[lineFields[5]] = lineFields[6]
                except Exception as e:
                    print(e)
    return cookies

cookies = parseCookieFile('cookies.txt')  # replace the filename
pprint.pprint(cookies)
Next, use those cookies with Python Requests:
x = requests.get('your__url', verify=False, cookies=cookies)
print (x.content)
This should save you from trawling through different SO posts and trying the cookielib and other methods that never worked for me.
I finally found a way to make it work (I got the idea by looking at curl's verbose output): instead of loading my cookies from a file, I simply created a dict with the required name/value pairs:
cd = {'v1': 'n1', 'v2': 'n2'}
r = requests.get(url, cookies=cd)
and it worked (although it doesn't explain why the previous method didn't). Thanks for all the help, it's really appreciated.

Writing to NamedTemporaryFile fails silently; converting Curl cookie jar to Requests cookies

I'm trying to take the Netscape HTTP Cookie File that Curl spits out and convert it to a Cookiejar that the Requests library can work with. I have netscapeCookieString in my Python script as a variable, which looks like:
# Netscape HTTP Cookie File
# https://curl.haxx.se/docs/http-cookies.html
# This file was generated by libcurl! Edit at your own risk.
.miami.edu TRUE / TRUE 0 PS_LASTSITE https://canelink.miami.edu/psc/PUMI2J/
Since I don't want to parse the cookie file myself, I'd like to use cookielib. Sadly, this means I have to write to disk since cookielib.MozillaCookieJar() won't take a string as input: it has to take a file.
So I'm using NamedTemporaryFile (couldn't get SpooledTemporaryFile to work; again would like to do all of this in memory if possible).
tempCookieFile = tempfile.NamedTemporaryFile()

# now take the contents of the cookie string and put it into this in-memory file
# that cookielib will read from. There are a couple quirks though.
for line in netscapeCookieString.splitlines():
    # cookielib doesn't know how to handle HttpOnly cookies correctly,
    # so we have to do some pre-processing to make sure they make it into
    # the cookiejar. Basically this just removes the HttpOnly prefix, which is honestly
    # an abuse of the RFC in the first place. Note: HttpOnly actually refers to
    # cookies that JavaScript can't access, as in only the HTTP protocol can
    # access them; it has nothing to do with http vs https. It's purely
    # to protect against XSS a bit better. These cookies may actually end up
    # being the most critical of all cookies in a given set.
    # https://stackoverflow.com/a/53384267/2611730
    if line.startswith("#HttpOnly_"):
        # this is actually how the curl library removes the HttpOnly prefix, by length
        line = line[len("#HttpOnly_"):]
    tempCookieFile.write(line)
tempCookieFile.flush()

# another thing that cookielib doesn't handle very well is
# session cookies, which have 0 in the expires param,
# so we have to make sure they don't get expired when they're
# read in by cookielib
# https://stackoverflow.com/a/14759698/2611730
print tempCookieFile.read()
cookieJar = cookielib.MozillaCookieJar(tempCookieFile.name)
cookieJar.load(ignore_expires=True)
pprint.pprint(cookieJar)
But here's the kicker, this doesn't work!
print tempCookieFile.read() prints an empty line.
Thus, pprint.pprint(cookieJar) prints an empty cookie jar.
I was easily able to reproduce this on my Mac:
>>> import tempfile
>>> tempCookieFile = tempfile.NamedTemporaryFile()
>>> tempCookieFile.write("hey")
>>> tempCookieFile.flush()
>>> print tempCookieFile.read()
>>>
How can I actually write to a NamedTemporaryFile?
After you write to the file, the file pointer is positioned after the written data (in your case, at the end of the file), so reading returns an empty string (there is no more data after the end of the file). Just seek to 0 before reading:
>>> import tempfile
>>> tempCookieFile = tempfile.NamedTemporaryFile()
>>> tempCookieFile.write("hey")
>>> tempCookieFile.seek(0)
>>> print(tempCookieFile.read())
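Note that on Python 3 the same demo needs one more tweak: NamedTemporaryFile opens in binary mode ("w+b") by default, so writing a str fails unless you pass a text mode. A sketch:

import tempfile

with tempfile.NamedTemporaryFile(mode="w+") as f:  # text mode so str writes work
    f.write("hey")
    f.flush()
    f.seek(0)        # rewind before reading back
    print(f.read())  # -> hey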

Should I switch from "urllib.request.urlretrieve(..)" to "urllib.request.urlopen(..)"?

1. Deprecation problem
In Python 3.7, I download a big file from a URL using the urllib.request.urlretrieve(..) function. In the documentation (https://docs.python.org/3/library/urllib.request.html) I read the following just above the urllib.request.urlretrieve(..) docs:
Legacy interface
The following functions and classes are ported from the Python 2 module urllib (as opposed to urllib2). They might become deprecated at some point in the future.
2. Searching an alternative
To keep my code future-proof, I'm on the lookout for an alternative. The official Python docs don't mention a specific one, but it looks like urllib.request.urlopen(..) is the most straightforward candidate. It's at the top of the docs page.
Unfortunately, the alternatives - like urlopen(..) - don't provide the reporthook argument. This argument is a callable you pass to the urlretrieve(..) function. In turn, urlretrieve(..) calls it regularly with the following arguments:
block nr.
block size
total file size
I use it to update a progressbar. That's why I miss the reporthook argument in alternatives.
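For context, this is roughly how I hook a progress display into urlretrieve(..) (the callback body and the URL are my own placeholders):

import sys
import urllib.request

def reporthook(blocknum, blocksize, totalsize):
    # urlretrieve calls this after each block; print a simple percentage.
    if totalsize > 0:
        percent = min(blocknum * blocksize * 100.0 / totalsize, 100.0)
        sys.stderr.write("\r%5.1f%%" % percent)

urllib.request.urlretrieve("https://example.com/big.zip", "big.zip", reporthook)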
3. urlretrieve(..) vs urlopen(..)
I discovered that urlretrieve(..) simply uses urlopen(..). See the request.py code file in the Python 3.7 installation (Python37/Lib/urllib/request.py):
_url_tempfiles = []

def urlretrieve(url, filename=None, reporthook=None, data=None):
    """
    Retrieve a URL into a temporary location on disk.

    Requires a URL argument. If a filename is passed, it is used as
    the temporary file location. The reporthook argument should be
    a callable that accepts a block number, a read size, and the
    total file size of the URL target. The data argument should be
    valid URL encoded data.

    If a filename is passed and the URL points to a local resource,
    the result is a copy from local file to new file.

    Returns a tuple containing the path to the newly created
    data file as well as the resulting HTTPMessage object.
    """
    url_type, path = splittype(url)

    with contextlib.closing(urlopen(url, data)) as fp:
        headers = fp.info()

        # Just return the local path and the "headers" for file://
        # URLs. No sense in performing a copy unless requested.
        if url_type == "file" and not filename:
            return os.path.normpath(path), headers

        # Handle temporary file setup.
        if filename:
            tfp = open(filename, 'wb')
        else:
            tfp = tempfile.NamedTemporaryFile(delete=False)
            filename = tfp.name
            _url_tempfiles.append(filename)

        with tfp:
            result = filename, headers
            bs = 1024*8
            size = -1
            read = 0
            blocknum = 0
            if "content-length" in headers:
                size = int(headers["Content-Length"])

            if reporthook:
                reporthook(blocknum, bs, size)

            while True:
                block = fp.read(bs)
                if not block:
                    break
                read += len(block)
                tfp.write(block)
                blocknum += 1
                if reporthook:
                    reporthook(blocknum, bs, size)

    if size >= 0 and read < size:
        raise ContentTooShortError(
            "retrieval incomplete: got only %i out of %i bytes"
            % (read, size), result)

    return result
4. Conclusion
From all this, I see three possible decisions:
I keep my code unchanged. Let's hope the urlretrieve(..) function won't get deprecated anytime soon.
I write myself a replacement function behaving like urlretrieve(..) on the outside and using urlopen(..) on the inside. Actually, such a function would be a copy-paste of the code above. It feels unclean to do that, compared to using the official urlretrieve(..).
I write myself a replacement function behaving like urlretrieve(..) on the outside and using something entirely different on the inside. But hey, why would I do that? urlopen(..) is not deprecated, so why not use it?
What decision would you take?
The following example uses urllib.request.urlopen to download a zip file containing Oceania's crop production data from the FAO statistical database. In that example, it is necessary to define a minimal header, otherwise FAOSTAT throws an Error 403: Forbidden.
import shutil
import urllib.request
import tempfile

# Create a request object with URL and headers
url = "http://fenixservices.fao.org/faostat/static/bulkdownloads/Production_Crops_Livestock_E_Oceania.zip"
header = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) '}
req = urllib.request.Request(url=url, headers=header)

# Define the destination file
dest_file = tempfile.gettempdir() + '/' + 'crop.zip'
print(f"File located at: {dest_file}")

# Create an http response object
with urllib.request.urlopen(req) as response:
    # Create a file object
    with open(dest_file, "wb") as f:
        # Copy the binary content of the response to the file
        shutil.copyfileobj(response, f)
Based on https://stackoverflow.com/a/48691447/2641825 for the request part and https://stackoverflow.com/a/66591873/2641825 for the header part; see also urllib's documentation at https://docs.python.org/3/howto/urllib2.html
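To address the reporthook part of the question as well: option 2 doesn't have to be a full copy-paste. A minimal sketch of a urlopen(..)-based replacement that keeps the reporthook contract could look like this (the function name and default block size are my own):

import urllib.request

def download(url, filename, reporthook=None, blocksize=8192):
    # Stream the response to disk, invoking reporthook the way urlretrieve does.
    with urllib.request.urlopen(url) as response, open(filename, "wb") as out:
        size = int(response.headers.get("Content-Length", -1))
        blocknum = 0
        if reporthook:
            reporthook(blocknum, blocksize, size)
        while True:
            block = response.read(blocksize)
            if not block:
                break
            out.write(block)
            blocknum += 1
            if reporthook:
                reporthook(blocknum, blocksize, size)
    return filename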


Python FileCookieJar.save() issue

I have a problem while trying to save cookies to a file using FileCookieJar's save method. Here is my code:
#!/usr/bin/python
import httplib, cookielib, urllib2, json, time
from datetime import date

class FoN:
    def __init__(self):
        self.cookiefile = "cookies.txt"
        self.cj = cookielib.FileCookieJar(self.cookiefile)

    def login(self, login, password):
        js = json.JSONEncoder().encode({"login": login, "password": password})
        req = urllib2.Request("http://www.example.com/user/login", js)
        res = urllib2.urlopen(req)
        self.cj.extract_cookies(res, req)
        self.cj.save(self.cookiefile, ignore_discard=True)
        f.write("Login: " + login + ", result: " + str(res.read().count("true")) + "\n")
        time.sleep(2)
        return res
So it fails at self.cj.save(self.cookiefile, ignore_discard=True), raising a NotImplementedError exception, which agrees with the documentation. But my question is: how do I save cookies to the file then? I even tried wrapping the call in a try clause, but that didn't help at all.
The base FileCookieJar does not implement .save. To get saving, you should use one of the subclasses like MozillaCookieJar or LWPCookieJar.
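A minimal sketch of the fix, keeping the asker's cookies.txt and Python 2 spelling to match the question:

import cookielib

cj = cookielib.MozillaCookieJar("cookies.txt")  # implements .save(), unlike FileCookieJar
# ... extract_cookies / make requests as before ...
cj.save(ignore_discard=True)  # writes a Netscape-format cookies.txt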
I have written some sample code to demo:
auto handle cookies: cookies in memory
auto handle cookies: cookies in a file
supports two formats:
LWP
Mozilla
two kinds of operation:
save to file
load from file
code:
code:
#!/usr/bin/python
# -*- coding: utf-8 -*-
"""
Function: Cookie handling in Python: auto-handle cookies, save them to a cookie file, load them from a cookie file
http://www.crifan.com/python_auto_handle_cookie_and_save_to_from_cookie_file
Version: 2013-01-15
Author: Crifan
Contact: admin (at) crifan.com
"""

import os
import cookielib
import urllib2

def pythonAutoHandleCookie():
    """
    Demo how to auto handle cookies in Python:
    cookies in memory
    cookies in file:
        save cookies to file
            LWP format
            Mozilla format
        load cookies from file
    """
    print "1. Demo how to auto handle cookie (in memory)"
    cookieJarInMemory = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookieJarInMemory))
    urllib2.install_opener(opener)
    print "after init, cookieJarInMemory=", cookieJarInMemory  # empty jar at this point

    #!!! following urllib2 calls will auto handle cookies
    demoUrl = "http://www.google.com/"
    response = urllib2.urlopen(demoUrl)
    #here, we already got response cookies
    print "after urllib2.urlopen, cookieJarInMemory=", cookieJarInMemory
    # (prints the cookies now held in the jar)

    print "2. Demo how to auto handle cookie in file, LWP format"
    cookieFilenameLWP = "localCookiesLWP.txt"
    cookieJarFileLWP = cookielib.LWPCookieJar(cookieFilenameLWP)
    #will create (and save to) new cookie file
    cookieJarFileLWP.save()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookieJarFileLWP))
    urllib2.install_opener(opener)
    #!!! following urllib2 calls will auto handle cookies
    demoUrl = "http://www.google.com/"
    response = urllib2.urlopen(demoUrl)
    #update cookies, save cookies into file
    cookieJarFileLWP.save()
    #for demo, print cookies in file
    print "LWP cookies:"
    print open(cookieFilenameLWP).read(os.path.getsize(cookieFilenameLWP))
    # #LWP-Cookies-2.0
    # Set-Cookie3: PREF="ID=34c1415b570a93ae:FF=0:NW=1:TM=1358236121:LM=1358236121:S=gEVVojW4x37ht5n-"; path="/"; domain=".google.com"; path_spec; domain_dot; expires="2015-01-15 07:48:41Z"; version=0
    # Set-Cookie3: NID="67=JI_uEwUm5GDrQ_vCwAp2z_YGU7MdLm5CLMa4CNLF7RQuTDMzrrk1EjRddGcnpoFbht81LaV9spxZQQInf0mPS6lDrvcRqBBL5NOTmy8SwOzA6HWC3iTIo4-o3fO1Udkv"; path="/"; domain=".google.com.hk"; path_spec; domain_dot; expires="2013-07-17 07:48:41Z"; HttpOnly=None; version=0
    # Set-Cookie3: PREF="ID=8f7e4efca89bdb1b:U=f85a4afa4db021aa:FF=2:LD=zh-CN:NW=1:TM=1358236121:LM=1358236121:S=2WR59hDWutdnUJtF"; path="/"; domain=".google.com.hk"; path_spec; domain_dot; expires="2015-01-15 07:48:41Z"; version=0

    print "3. Demo how to auto handle cookie in file, Mozilla Format"
    cookieFilenameMozilla = "localCookiesMozilla.txt"
    cookieJarFileMozilla = cookielib.MozillaCookieJar(cookieFilenameMozilla)
    #will create (and save to) new cookie file
    cookieJarFileMozilla.save()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookieJarFileMozilla))
    urllib2.install_opener(opener)
    #!!! following urllib2 calls will auto handle cookies
    demoUrl = "http://www.google.com/"
    response = urllib2.urlopen(demoUrl)
    #update cookies, save cookies into file
    cookieJarFileMozilla.save()
    #for demo, print cookies in file
    print "Mozilla cookies:"
    print open(cookieFilenameMozilla).read(os.path.getsize(cookieFilenameMozilla))
    # # Netscape HTTP Cookie File
    # # http://www.netscape.com/newsref/std/cookie_spec.html
    # # This is a generated file! Do not edit.
    # .google.com TRUE / FALSE 1421308121 PREF ID=0e05040dd979207c:FF=0:NW=1:TM=1358236121:LM=1358236121:S=jcFid2XgXMIhPUPl
    # .google.com.hk TRUE / FALSE 1374047321 NID 67=klMI_Z5ZPWDjUYrWSUHIE_kYI77_ziJaL0kWRoUGThagME86LKY7H-MNa2wAMI_GklIwYcD8t82qPinxzLd4GLDbmWT0OVLCXhRj0wQDC57dTNAsTs4lhVR7Yjvj2tfn
    # .google.com.hk TRUE / FALSE 1421308121 PREF ID=028f8b736db06a9a:U=6ba6d080847c8de6:FF=2:LD=zh-CN:NW=1:TM=1358236121:LM=1358236121:S=_1BcC5v3G0ZojVz8

    print "4. read cookies from file"
    parseAndSavedCookieFile = "parsedAndSavedCookies.txt"
    parsedCookieJarFile = cookielib.MozillaCookieJar(parseAndSavedCookieFile)
    #parsedCookieJarFile = cookielib.MozillaCookieJar(cookieFilenameMozilla)
    print parsedCookieJarFile  # empty jar before loading
    parsedCookieJarFile.load(cookieFilenameMozilla)
    print parsedCookieJarFile  # (prints the cookies loaded from the file)

if __name__ == "__main__":
    pythonAutoHandleCookie()
