log into website (specifically netflix) with python

I am trying to log into Netflix with Python. It would work perfectly, but I can't get it to detect whether or not the login failed. The code looks like this:
#this is not purely my code! Thanks to Ori for the code
import urllib
username = raw_input('Enter your email: ')
password = raw_input('Enter your password: ')
params = urllib.urlencode(
    {'email': username,
     'password': password})
f = urllib.urlopen("https://signup.netflix.com/Login", params)
if "The login information you entered does not match an account in our records. Remember, your email address is not case-sensitive, but passwords are." in f.read():
    success = False
    print "Either your username or password was incorrect."
else:
    success = True
    print "You are now logged into netflix as", username
raw_input('Press enter to exit the program')
As always, many thanks!!

First, I'll just share some verbiage I noticed on the Netflix site under Limitations on Use:
Any unauthorized use of the Netflix service or its contents will terminate the limited license granted by us and will result in the cancellation of your membership.
In short, I'm not sure what your script does after this, but some activities could jeopardize your relationship with Netflix. I did not read the whole ToS, but you should.
That said, there are plenty of legitimate reasons to scrape html information, and I do it all the time. So my first bet with this specific problem is you're using the wrong detection string... Just send a bogus email/password and print the response... Perhaps you made an assumption about what it looks like when you log in with a browser, but the browser is sending info that gets further into the process.
I wish I could offer specifics on what to do next, but I would rather not risk my relationship with 'flix to give a better answer to the question... so I'll just share a few observations I gleaned from scraping oodles of other websites that made it kind of hard to use web robots...
First, log in to your account with Firefox, and be sure to have the Live HTTP Headers add-on enabled and in capture mode... what you see when you log in live is invaluable to your scripting efforts... for instance, this was from a session while I logged in...
POST /Login HTTP/1.1
Host: signup.netflix.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.16) Gecko/20110319 Firefox/3.6.16
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Referer: https://signup.netflix.com/Login?country=1&rdirfdc=true
--->Insert lots of private stuff here
Content-Type: application/x-www-form-urlencoded
Content-Length: 168
authURL=sOmELoNgTeXtStRiNg&nextpage=&SubmitButton=true&country=1&email=EmAiLAdDrEsS%40sOmEMaIlProvider.com&password=UnEnCoDeDpAsSwOrD
Pay particular attention to the stuff below the "Content-Length" field and all the parameters that come after it.
Now log back out, and pull up the login site page again... chances are, you will see some of those fields hidden as state information in <input type="hidden"> tags... some web apps keep state by feeding you fields and then they use javascript to resubmit that same information in your login POST. I usually use lxml to parse the pages I receive... if you try it, keep in mind that lxml prefers utf-8, so I include code that automagically converts when it sees other encodings...
import chardet
from urllib2 import urlopen

# req is a urllib2.Request and data the encoded POST body, built as above
response = urlopen(req, data)
# info is from the HTTP headers... like server version
info = response.info().dict
# page is the HTML response
page = response.read()
encoding = chardet.detect(page)['encoding']
if encoding != 'utf-8':
    page = page.decode(encoding, 'replace').encode('utf-8')
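For instance, a minimal sketch of harvesting those hidden fields with lxml might look like this (the XPath and the expectation of an authURL field are my assumptions; check the actual page source):

import urllib2
from lxml import html

# Fetch the login page first so we can harvest any hidden state fields
login_page = urllib2.urlopen("https://signup.netflix.com/Login").read()
tree = html.fromstring(login_page)

# Collect every <input type="hidden"> name/value pair
hidden = {}
for inp in tree.xpath('//form//input[@type="hidden"]'):
    hidden[inp.get('name')] = inp.get('value', '')

print hidden  # expect to see something like authURL in here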
BTW, Michael Foord has a very good reference on urllib2 and many of the assorted issues.
So, in summary:
Using your existing script, dump the results from a known bogus login to be sure you're parsing for the right info... I'm pretty sure you made a bad assumption above
It also looks like you aren't submitting enough parameters in the POST. Experience tells me you need to set authURL in addition to email and password... if possible, I try to mimic what the browser sends (see the sketch after this list)...
Occasionally, it matters whether you have set your user-agent string and referring webpage. I always set these when I scrape so I don't waste cycles debugging.
When all else fails, look at the info stored in the cookies they send
Sometimes websites base64 encode form submission data. I don't know whether Netflix does
Some websites are very protective of their intellectual property, and programmatically reading/archiving the information is considered a theft of their IP. Again, read the ToS... I don't know how Netflix views what you want to do.
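To illustrate the authURL and header points, here is a rough sketch, not a working login: the authURL value must be scraped from a hidden input on the live page, and every value below is a placeholder of mine:

import urllib
import urllib2

# Placeholders -- harvest authURL from a hidden input on the live login page
auth_url = 'VALUE_FROM_HIDDEN_INPUT'
username = 'you@example.com'
password = 'hunter2'

params = urllib.urlencode({
    'authURL': auth_url,
    'nextpage': '',
    'SubmitButton': 'true',
    'country': '1',
    'email': username,
    'password': password,
})
req = urllib2.Request("https://signup.netflix.com/Login", params)
# Mimic what the browser sent in the Live HTTP Headers capture above
req.add_header('User-Agent', 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; '
                             'rv:1.9.2.16) Gecko/20110319 Firefox/3.6.16')
req.add_header('Referer', 'https://signup.netflix.com/Login?country=1&rdirfdc=true')
response = urllib2.urlopen(req)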
I am providing this for informational purposes and under no circumstances endorse, or condone the violation of Netflix terms of service... nor can I confirm whether your proposed activity would... I'm just saying it might :-). Talk to a lawyer that specializes in e-discovery if you need an official ruling. Feet first. Don't eat yellow snow... etc...

Related

AppEngine Python urlfetch() fails with 416 error, same query succeeds in a browser

I'm dusting off an app that worked a few months ago. I've made no changes. Here's the code in question:
result = urlfetch.fetch(
    url=url,
    deadline=TWENTY_SECONDS)
if result.status_code != 200:  # pragma: no cover
    logging.error('urlfetch failed.')
    logging.error('result.status_code = %s' % result.status_code)
    logging.error('url =')
    logging.error(url)
Here's the output:
WARNING 2015-04-20 01:13:46,473 urlfetch_stub.py:118] No ssl package found. urlfetch will not be able to validate SSL certificates.
ERROR 2015-04-20 01:13:46,932 adminhandlers.py:84] urlfetch failed. url =
ERROR 2015-04-20 01:13:46,933 adminhandlers.py:85] http://www.stubhub.com/listingCatalog/select/?q=%2Bevent_date%3A%5BNOW%20TO%20NOW%2B1DAY%5D%0D%0A%2BancestorGeoDescriptions:%22New%20York%20Metro%22%0D%0A%2BstubhubDocumentType%3Aevent&version=2.2&start=0&rows=1&wt=json&fl=name_primary+event_date_time_local+venue_name+act_primary+ancestorGenreDescriptions+description
When I use a different url, e.g., "http://www.google.com/", the fetch succeeds.
When I paste the url string from the output into Chrome I get this response, which is the one I'm looking for:
{"responseHeader":{"status":0,"QTime":19,"params":{"fl":"name_primary event_date_time_local venue_name act_primary ancestorGenreDescriptions description","start":"0","q":"+event_date:[NOW TO NOW+1DAY]\r\n+ancestorGeoDescriptions:\"New York Metro\"\r\n+stubhubDocumentType:event +allowedViewingDomain:stubhub.com","wt":"json","version":"2.2","rows":"1"}},"response":{"numFound":26,"start":0,"docs":[{"act_primary":"Waka Flocka Flame","description":"Waka Flocka Flame Tickets (18+ Event)","event_date_time_local":"2015-04-20T20:00:00Z","name_primary":"Webster Hall","venue_name":"Webster Hall","ancestorGenreDescriptions":["All tickets","Concert tickets","Artists T - Z","Waka Flocka Flame Tickets"]}]}}
I hope I'm missing something simple. Any suggestions?
Update May 30, 2015
Anzel's suggestion of Apr 23 was correct. I need to add a user agent header. The one supplied by the AppEngine dev server is
AppEngine-Google; (+http://code.google.com/appengine)
The one supplied by hosted AppEngine is
AppEngine-Google; (+http://code.google.com/appengine; appid: s~MY_APP_ID)
The one supplied by requests.get() in pure Python (no AppEngine) on MacOS is
python-requests/2.2.1 CPython/2.7.6 Darwin/14.3.0
When I switch in the Chrome user agent header all is well in pure Python. Stubhub must have changed this since I last tried it. Curious that they would require an interactive user agent for a service that emits JSON, but I'm happy they offer the service at all.
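For reference, the pure-Python version that now works is just a header swap; a sketch (the User-Agent value is illustrative, and the query string is elided here):

import requests

url = 'http://www.stubhub.com/listingCatalog/select/?...'  # full query as in the log above

# An illustrative Chrome User-Agent string; substitute a current one
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) '
                         'AppleWebKit/537.36 (KHTML, like Gecko) '
                         'Chrome/42.0.2311.90 Safari/537.36'}
result = requests.get(url, headers=headers)
print(result.status_code)  # 200 once Stubhub accepts the user agent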
When I add that header in AppEngine, though, AppEngine prepends it to its own user-agent header. Stubhub then turns down the request.
So I've made some progress, but have not yet solved my problem.
FYI:
In AppEngine I supply the user agent like this:
result = urlfetch.fetch(
    url=url,
    headers={'user-agent': USER_AGENT_STRING},
)
This is a useful site for determining the user agent string your code or browser is sending:
http://myhttp.info/
I don't have the privileges yet to post comments, so here goes.
Look at the way you are entering the URL into the variable url. Is it already encoded, as the error message suggests? I would make sure the URL is a regular, non-encoded one and test that; perhaps the library is re-encoding it, causing problems. If you could give us more of the surrounding code, that may help in the diagnosis.
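For example, a sketch of building the query from plain, unencoded values so that exactly one layer of encoding is applied (parameter names taken from the URL above):

import urllib

# urlencode applies exactly one layer of percent-encoding
params = urllib.urlencode({
    'q': '+event_date:[NOW TO NOW+1DAY]\r\n'
         '+ancestorGeoDescriptions:"New York Metro"\r\n'
         '+stubhubDocumentType:event',
    'version': '2.2',
    'start': 0,
    'rows': 1,
    'wt': 'json',
    'fl': 'name_primary event_date_time_local venue_name '
          'act_primary ancestorGenreDescriptions description',
})
url = 'http://www.stubhub.com/listingCatalog/select/?' + params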

Python 10054 error when trying to log in to airline website using Requests Library

I'm learning Python, and as my first project I want to log in to several airline websites and scrape my frequent flyer mile info. I have successfully been able to log in and scrape American Airlines and United, but I am unable to do it on Delta, US Airways, and British Airways.
The methodology I have been using is watching network traffic in Fiddler2, Chrome, or Firebug. Wireshark seems too complicated at the moment.
For my script to work with the American and United scraping, all I did was watch the traffic in Fiddler2, copy the FORM DATA and REQUEST HEADER DATA, and then use the third-party Python Requests library to access the data. Very simple. Very easy. The other airline websites are giving me a lot of trouble.
Let's talk about British Airways specifically. Below are pictures of the FORM DATA and REQUEST HEADER DATA that I took from Fiddler when I logged into my dummy BA account. I have also included the test script that I have been using. I wrote two different versions: one using the Requests library and one using urllib. They both produce the same error, but I thought I would provide both to make it easier for somebody to help me if they don't have the Requests library installed. Use the one you like.
Basically, when I make a requests.post I am getting a
10054, 'An existing connection was forcibly closed by the remote host' error.
I have no idea what is going on. I've been searching for 3 days and come up with nothing. I hope somebody can help me. The below code uses my dummy BA account info. username: python_noob password: p4ssword. Feel free to use and test it.
Here are some pictures of the Fiddler2 data:
http://i.imgur.com/iOL91.jpg?1
http://i.imgur.com/meLHL.jpg?1
import requests
import urllib.request

def get_BA_login_using_requests():
    url_loginSubmit1 = 'https://www.britishairways.com/travel/loginr/public/en_us'
    url_viewaccount1 = 'https://www.britishairways.com/travel/viewaccount/public/en_us?eId=106011'
    url_viewaccount2 = 'https://www.britishairways.com/travel/viewaccount/execclub/_gf/en_us?eId=106011'
    form_data = {
        'Directional_Login': '',
        'eId': '109001',
        'password': 'p4ssword',
        'membershipNumber': 'python_noob',
    }
    request_headers = {
        'Cache-Control': 'max-age=0',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
        'Accept-Encoding': 'gzip,deflate,sdch',
        'Accept-Language': 'en-US,en;q=0.8',
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11',
        'Cookie': 'BIGipServerba.com-port80=997762723.20480.0000; v1st=EDAB42A278BE913B; BASessionA=kDtBQWGclJymXtlsTXyYtykDLLsy3KQKvd3wMrbygd7JZZPJfJz2!-1893405604!clx42al01-wl01.baplc.com!7001!-1!-407095676!clx43al01-wl01.baplc.com!7001!-1; BIGipServerba.com-port81=997762723.20736.0000; BA_COUNTRY_CHOICE_COOKIE=us; Allow_BA_Cookies=accepted; BA_COUNTRY_CHOICE_COOKIE=US; opvsreferrer=functional/home/home_us.jsp; realreferrer=; __utma=28787695.2144676753.1356203603.1356203603.1356203603.1; __utmb=28787695.1.10.1356203603; __utmc=28787695; __utmz=28787695.1356203603.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); fsr.s={"v":-2,"rid":"d464cf7-82608645-1f31-3926-49807","ru":"http://www.britishairways.com/travel/globalgateway.jsp/global/public/en_","r":"www.britishairways.com","st":"","to":3,"c":"http://www.britishairways.com/travel/home/public/en_us","pv":1,"lc":{"d0":{"v":1,"s":false}},"cd":0}',
        'Content-Length': '78',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Origin': 'https://www.britishairways.com',
        'Referer': 'https://www.britishairways.com/travel/loginr/public/en_us',
        'Connection': 'keep-alive',
        'Host': 'www.britishairways.com',
    }
    print('Trying to login to British Airways using Requests Library (takes about 1 minute for error to occur)')
    try:
        r1 = requests.post(url_loginSubmit1, data=form_data, headers=request_headers)
        print('it worked')
    except Exception as e:
        msg = "An exception of type {0} occurred, these were the arguments:\n{1!r}"
        print(msg.format(type(e).__name__, e.args))
    return

def get_BA_login_using_urllib():
    """Tries to request the URL. Returns True if the request was successful; False otherwise.

    https://www.britishairways.com/travel/loginr/public/en_us
    response -- After the function has finished, will possibly contain the response to the request.
    """
    response = None
    print('Trying to login to British Airways using urllib Library (takes about 1 minute for error to occur)')
    # Create request to URL.
    req = urllib.request.Request("https://www.britishairways.com/travel/loginr/public/en_us")
    # Set request headers.
    req.add_header("Connection", "keep-alive")
    req.add_header("Cache-Control", "max-age=0")
    req.add_header("Origin", "https://www.britishairways.com")
    req.add_header("User-Agent", "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11")
    req.add_header("Content-Type", "application/x-www-form-urlencoded")
    req.add_header("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8")
    req.add_header("Referer", "https://www.britishairways.com/travel/home/public/en_us")
    req.add_header("Accept-Encoding", "gzip,deflate,sdch")
    req.add_header("Accept-Language", "en-US,en;q=0.8")
    req.add_header("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.3")
    req.add_header("Cookie", 'BIGipServerba.com-port80=997762723.20480.0000; v1st=EDAB42A278BE913B; BIGipServerba.com-port81=997762723.20736.0000; BA_COUNTRY_CHOICE_COOKIE=us; Allow_BA_Cookies=accepted; BA_COUNTRY_CHOICE_COOKIE=US; BAAUTHKEY=BA4760A2434L; BA_ENROLMENT_APPLICATION_COOKIE=1356219482491AT; BASessionA=wKG4QWGSTggNGnsLTnrgQnMxGMyzvspGLCYpjdSZgv2pSgYN1YRn!-1893405604!clx42al01-wl01.baplc.com!7001!-1!-407095676!clx43al01-wl01.baplc.com!7001!-1; HOME_AD_DISPLAY=1; previousCountryInfo=us; opvsreferrer=functional/home/home_us.jsp; realreferrer=; __utma=28787695.2144676753.1356203603.1356216924.1356219076.6; __utmb=28787695.15.10.1356219076; __utmc=28787695; __utmz=28787695.1356203603.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); fsr.s={"v":-2,"rid":"d464cf7-82608645-1f31-3926-49807","ru":"http://www.britishairways.com/travel/globalgateway.jsp/global/public/en_","r":"www.britishairways.com","st":"","to":5,"c":"https://www.britishairways.com/travel/home/public/en_us","pv":31,"lc":{"d0":{"v":31,"s":true}},"cd":0,"f":1356219889982,"sd":0}')
    # Set request body.
    body = b"Directional_Login=&eId=109001&password=p4ssword&membershipNumber=python_noob"
    # Get response to request.
    try:
        response = urllib.request.urlopen(req, body)
        print('it worked')
    except Exception as e:
        msg = "An exception of type {0} occurred, these were the arguments:\n{1!r}"
        print(msg.format(type(e).__name__, e.args))
    return

def main():
    get_BA_login_using_urllib()
    print()
    get_BA_login_using_requests()
    return

main()
Offhand, I'd say you managed to create a malformed or illegal request, and the server (or even proxy) on the other side simply refuses to process it.
Do use the requests library. It's excellent. Urllib is quite outdated (and, well, not fun to use at all.)
Get rid of nearly all of the custom headers. In particular Content-Length, Keep-Alive, Connection and Cookie. The first three you should let the requests library take care of, as they're part of the HTTP 1.1 protocol. With regards to the Cookie: that, too, will be handled by the requests library, depending on how you use sessions. (You might want to consult the documentation there.) Without having any previous cookies, you'll probably get something like a 401 when you try to access the site, or you'll be (transparently) redirected to a login-page. Doing the login will set the correct cookies, after which you should be able to re-try the original request.
If you use a dict for the post-data, you won't need the Content-Type header either. You might want to experiment with using unicode-values in said dict. I've found that that sometimes made a difference.
In other words: try to remove as much as you can, and then build it up from there. Doing things like this typically should not cost more than a handful of lines. Now, scraping a web page, that's another matter: try 'beautifulsoup' for that.
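A stripped-down starting point along those lines might look like this sketch (URL and form fields taken from the question; cookies and the hop-by-hop headers left to Requests):

import requests

login_url = 'https://www.britishairways.com/travel/loginr/public/en_us'

# A Session carries cookies, Content-Length, and keep-alive for us
session = requests.Session()
session.headers.update({'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) '
                                      'AppleWebKit/537.11 (KHTML, like Gecko) '
                                      'Chrome/23.0.1271.97 Safari/537.11'})

# Visit the login page first to pick up the initial cookies
session.get(login_url)

form_data = {
    'Directional_Login': '',
    'eId': '109001',
    'password': 'p4ssword',
    'membershipNumber': 'python_noob',
}
r = session.post(login_url, data=form_data)
print(r.status_code)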
P.S.: Don't ever post cookie-data on public forums: they might contain personal or otherwise sensitive data that shady characters might be able to abuse.
It seems there is a bug in the Windows versions of Python 3.3 that is the cause of my problem. I used the answer from here
HTTPS request results in reset connection in Windows with Python 3
to make progress with the urllib version of my script. I would like to use Requests, so I need to figure out how to do the SSL downgrade workaround with that module. I will make that a separate thread. If anybody has an answer to that, you can post it here as well. thx.
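For what it's worth, the usual Requests workaround is a transport adapter that pins the SSL version; this sketch adapts the idea from the linked question's answers and is untested against BA:

import ssl
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.poolmanager import PoolManager

class TLSv1Adapter(HTTPAdapter):
    """Force TLSv1 instead of the buggy default handshake."""
    def init_poolmanager(self, connections, maxsize, block=False):
        self.poolmanager = PoolManager(num_pools=connections,
                                       maxsize=maxsize,
                                       block=block,
                                       ssl_version=ssl.PROTOCOL_TLSv1)

session = requests.Session()
session.mount('https://', TLSv1Adapter())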

Accepting and Sending Cookies with Mechanize

I need to fill in a login form on a webpage that requires cookies and get some information about the resultant page. Since this needs to be done at very weird hours of the night, I'd like to automate the process and am therefore using mechanize (any other suggestions are welcome - note that I have to run my script on a school server on which I cannot install new software; mechanize is pure Python, so I can get around this problem).
The problem is that the page that hosts the login form requires that I be able to accept and send cookies. Ideally, I'd like to accept and send all the cookies the server sends me, rather than hard-code my own.
So, I set out to write my script with mechanize, but I seem to be handling cookies wrong. Since I can't find helpful documentation anywhere (please point it out if I'm blind), I am asking here.
Here is my mechanize script:
import mechanize as mech
br = mech.Browser()
br.set_handle_robots(False)
print "No Robots"
br.set_handle_redirect(True)
br.open("some internal uOttawa website")
br.select_form(nr=0)
br.form['j_username'] = 'my username'
print "Login: ************"
br.form['j_password'] = 'my password'
print "Password: ************"
response = br.submit()
print response.read()
This prints the following
No Robots
Login: ************
Password: ************
<html>
<body>
<img src="/idp/images/uottawa-logo-dark.png" />
<h3>ERROR</h3>
<p>
An error occurred while processing your request. Please contact your helpdesk or
user ID office for assistance.
</p>
<p>
This service requires cookies. Please ensure that they are enabled and try your
request again.
</p>
<p>
Use of your browser's back button may cause specific errors that can be resolved by
going back to your desired resource and trying to login again.
</p>
<p>
If you think you were sent here in error,
please contact technical support
</p>
</body>
</html>
This is indeed the page that I would get if I disabled cookies on my Chrome browser and attempted the same thing.
I've tried adding a cookie jar as follows, with no luck.
import cookielib

br = mech.Browser()
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)
I took a look at multiple mechanize documentation sources. One of them mentions:
A common mistake is to use mechanize.urlopen(), and the .extract_cookies() and
.add_cookie_header() methods on a cookie object themselves.
If you use mechanize.urlopen() (or OpenerDirector.open()),
the module handles extraction and adding of cookies by itself,
so you should not call .extract_cookies() or .add_cookie_header().
This seems to say that my first method should work, but it doesn't.
I'd appreciate any help with this - it's confusing, and there seems to be a severe lack of documentation.
I came across the exact same message while authenticating to a Shibboleth website with Mechanize, just because I made the same mistake as you. And it looks like I figured it out.
Short answer
The link you need to open is:
br.open("https://web30.uottawa.ca/Shibboleth.sso/Login?target=https://web30.uottawa.ca/hr/web/post-register")
Instead of:
br.open("https://idp.uottawa.ca/idp/login.jsp?actionUrl=%2Fidp%2FAuthn%2FUserPassword")
Why?
Shibboleth: Connect easily and securely to a variety of services with
one simple login.
The Shibboleth login itself is useless if you don't tell it which service you want to log in to. Let's analyse the HTTP headers and compare the cookies you get for both queries.
1. Opening https://idp.uottawa.ca/idp/login.jsp?actionUrl=%2Fidp%2FAuthn%2FUserPassword
Cookie: JSESSIONID=C2D4A19B2994BFA287A328F71A281C49; _ga=GA1.2.1233451770.1401374115; arp_scroll_position=-1; tools-resize=tools-resize-small; lang-prev-page=en; __utma=251309913.1233451770.1401374115.1401375882.1401375882.1; __utmb=251309913.14.9.1401376471057; __utmz=251309913.1401375882.1.1.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=(not%20provided); lang=en
2. Opening https://web30.uottawa.ca/Shibboleth.sso/Login?target=https://web30.uottawa.ca/hr/web/post-register
Cookie: JSESSIONID=8D6BEA53823CC1C3045B2CE3B1D61DB0; _idp_authn_lc_key=fc18251e-e5aa-4f77-bb17-5e893d8d3a43; _ga=GA1.2.1233451770.1401374115; arp_scroll_position=-1; tools-resize=tools-resize-small; lang-prev-page=en; __utma=251309913.1233451770.1401374115.1401375882.1401375882.1; __utmb=251309913.16.9.1401378064938; __utmz=251309913.1401375882.1.1.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=(not%20provided); lang=en
What's the difference? You got one more cookie: _idp_authn_lc_key=1c21128c-2fd7-45d2-adac-df9db4d0a9ad;. I suppose it is the cookie saying "I want to log in there".
During the authentication process, the IdP will set a cookie named
_idp_authn_lc_key. This cookie contains only information necessary to identify the current authentication process (which usually spans
multiple requests/responses) and is deleted after the authentication
process completes.
Source: https://wiki.shibboleth.net/confluence/display/SHIB2/IdPCookieUsage
How did I find that link? I dug around the site and found that https://web30.uottawa.ca/hr/web/en/user/registration redirects to the login form with the following link:
<a href="https://web30.uottawa.ca/Shibboleth.sso/Login?target=https://web30.uottawa.ca/hr/web/post-register"
class="button standard"><span>Create your account using infoweb</span></a>
So that was not a problem with Mechanize, but rather that Shibboleth is a little hard to understand at first glance. You will find more information on the Shibboleth authentication flow here.
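Putting it together, the question's script should work with only the opening URL swapped; a sketch:

import mechanize as mech

br = mech.Browser()
br.set_handle_robots(False)
br.set_handle_redirect(True)
# Hit the Shibboleth SP login URL (which sets _idp_authn_lc_key)
# instead of the bare IdP login page
br.open("https://web30.uottawa.ca/Shibboleth.sso/Login?target=https://web30.uottawa.ca/hr/web/post-register")
br.select_form(nr=0)
br.form['j_username'] = 'my username'
br.form['j_password'] = 'my password'
response = br.submit()
print response.read()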
The website you're submitting your form data to probably needs a CSRF token (a token provided in the form page whose download you're skipping).
Try using Requests:
http://docs.python-requests.org/en/latest/user/quickstart/#cookies
Look for the cookies and/or hidden form fields and then fire away.
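Roughly, that pattern with Requests looks like this sketch (the csrf_token field name is hypothetical; inspect the real form for its hidden fields):

import requests
from lxml import html

session = requests.Session()  # keeps cookies across requests

# Download the form page first instead of skipping it
page = session.get('https://idp.uottawa.ca/idp/login.jsp')
tree = html.fromstring(page.content)

# 'csrf_token' is a hypothetical field name -- check the real form
token = tree.xpath('//input[@name="csrf_token"]/@value')

session.post('https://idp.uottawa.ca/idp/Authn/UserPassword', data={
    'j_username': 'my username',
    'j_password': 'my password',
    'csrf_token': token[0] if token else '',
})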

How can I pass my ID and my password to a website in Python using Google App Engine?

Here is a piece of code that I use to fetch a web page HTML source (code) by its URL using Google App Engine:
from google.appengine.api import urlfetch

url = "http://www.google.com/"
result = urlfetch.fetch(url)
if result.status_code == 200:
    print "content-type: text/plain"
    print
    print result.content
Everything is fine here, but sometimes I need to get the HTML source of a page from a site where I am registered, and I can only get access to that page if I first pass my ID and password. (It can be any site, actually: any mail-account-providing site like Yahoo, https://login.yahoo.com/config/mail?.src=ym&.intl=us, or any other site where users get free accounts by first registering there.)
Can I somehow do it in Python (trough "Google App Engine")?
You can check for an HTTP status code of 401, "authorization required", and provide the kind of HTTP authorization (basic, digest, whatever) that the site is asking for -- see e.g. here for more details (there's not much that's GAE specific here -- it's a matter of learning HTTP details and obeying them!-).
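For the simple case where the site asks for HTTP Basic auth, a sketch with urlfetch might look like this (URL and credentials are placeholders):

import base64
from google.appengine.api import urlfetch

# Placeholder credentials and URL, for illustration only
auth = base64.b64encode("my_id:my_password")
result = urlfetch.fetch(
    url="http://www.example.com/protected/page",
    headers={"Authorization": "Basic " + auth})
if result.status_code == 200:
    print result.content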
As Alex said, you can check for the status code and see what type of authorization it wants, but you cannot generalize this: some sites will not give any hint, or will only allow login through a non-standard form. In those cases you may have to automate the login process using forms. For that you can use a library like twill (http://twill.idyll.org/)
or code a specific form submission for each site.

cookielib and form authentication woes in Python

InstaMapper is a GPS tracking service that updates the device's position more frequently when the device is being tracked live on the InstaMapper webpage. I'd like to have this happen all the time so I thought I'd write a python script to login to my account and access the page periodically.
import urllib2, urllib, cookielib
cj = cookielib.LWPCookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
urllib2.install_opener(opener)
params = urllib.urlencode(dict(username_hb='user', password_hb='hunter2'))
opener.open('http://www.instamapper.com/fe?action=login', params)
if 'id' not in [cookie.name for cookie in cj]:
    raise ValueError("Login failed")
# try secured page
resp = opener.open('http://www.instamapper.com/fe?page=track&device_key=abc')
print resp.read()
resp.close()
The ValueError is raised each time. If I remove this and read the response, the page thinks I have disabled cookies and blocks access to that page. Why isn't cj grabbing the InstaMapper cookie?
Are there better ways to make the tracking service think I'm viewing my account constantly?
action=login is part of the parameters, and should be treated accordingly:
params = urllib.urlencode(dict(action='login', username_hb='user', password_hb='hunter2'))
opener.open('http://www.instamapper.com/fe', params)
(Also, this particular username/password combination is invalid; I assume you actually use a valid username and password in your real code, otherwise the login fails correctly.)
Have you looked at whether there is a cookie specifically designed to foil your attempts? I suggest using Wireshark or another inspector to see if there is a cookie that changes (via JavaScript, etc.) when you manually log in.
(Ethical note: You may be violating the terms of service and incurring much more cost to the company than you are paying for. I used to run a service like this, and every additional/unplanned location update cost between $0.01 and $0.05, though I'm sure it's come down.)
