Python client for multipart form with CAS

I am trying to write a Python script to POST a multipart form to a site that requires authentication through CAS.
There are two approaches that both solve part of the problem:
The Python requests library works well for submitting multipart forms.
There is caslib, with a login function. It returns an OpenerDirector that can presumably be used for further requests.
Unfortunately, I can't figure out how to get a complete solution out of what I have so far.
These are just some ideas from a couple of hours of research; I am open to just about any solution that works.
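For reference, the kind of multipart POST that requests makes easy looks roughly like this (the URL and field names below are placeholders, not the real form):
import requests

# Hypothetical target and field names -- the real ones depend on the site's form.
upload_url = 'https://example.com/submit.php'
with open('lab3.zip', 'rb') as f:
    r = requests.post(upload_url,
                      data={'labnum': '3'},
                      files={'file': ('lab3.zip', f, 'application/zip')})
print r.status_code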
Thanks for the help.

I accepted J.F. Sebastian's answer because I think it was closest to what I'd asked, but I actually wound up getting it to work by using mechanize, a Python library for web browser automation.
import argparse
import mechanize
import re
import sys
# (SENSITIVE!) Authentication info
username = r'username'
password = r'password'
# Command line arguments
parser = argparse.ArgumentParser(description='Submit lab to CS 235 site (Winter 2013)')
parser.add_argument('lab_num', help='Lab submission number')
parser.add_argument('file_name', help='Submission file (zip)')
args = parser.parse_args()
# Go to login site
br = mechanize.Browser()
br.open('https://cas.byu.edu/cas/login?service=https%3a%2f%2fbeta.cs.byu.edu%2f~sub235%2fsubmit.php')
# Login and forward to submission site
br.form = br.forms().next()
br['username'] = username
br['password'] = password
br.submit()
# Submit
br.form = br.forms().next()
br['labnum'] = list(args.lab_num)
br.add_file(open(args.file_name, 'rb'), 'application/zip', args.file_name)
r = br.submit()
for s in re.findall('<h4>(.+?)</?h4>', r.read()):
print s
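Invoked from the command line as, for example, python submit_lab.py 3 lab3.zip (the script name here is hypothetical).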

You could use poster to prepare the multipart/form-data. Try passing poster's opener to caslib, then use the opener caslib returns to make requests (not tested):
import urllib2
import caslib
import poster.encode
import poster.streaminghttp
opener = poster.streaminghttp.register_openers()
r, opener = caslib.login_to_cas_service(login_url, username, password,
                                        opener=opener)
params = {'file': open("test.txt", "rb"), 'name': 'upload test'}
datagen, headers = poster.encode.multipart_encode(params)
response = opener.open(urllib2.Request(upload_url, datagen, headers))
print response.read()

You could write an authentication handler for Requests using caslib. Then you could do something like:
auth = CasAuthentication("url", "login", "password")
response = requests.get("http://example.com/cas_service", auth=auth)
Or if you're making tons of requests against the website:
s = requests.session()
s.auth = auth
s.post('http://casservice.com/endpoint', data={'key': 'value'}, files={'filename': open('/path/to/file', 'rb')})
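The handler itself isn't shown above; a minimal sketch of what such a CasAuthentication class might look like follows (the form field names and token handling are assumptions and vary between CAS deployments):
import requests
from requests.auth import AuthBase

class CasAuthentication(AuthBase):
    """Hypothetical CAS hook: log in once, then attach the resulting
    session cookies to every outgoing request."""
    def __init__(self, login_url, username, password):
        self.login_url = login_url
        self.username = username
        self.password = password
        self._cookie_header = None

    def __call__(self, request):
        # Called by Requests for every request that uses this auth object.
        if self._cookie_header is None:
            self._cookie_header = self._login()
        request.headers['Cookie'] = self._cookie_header
        return request

    def _login(self):
        # Real CAS logins usually also need the hidden 'lt'/'execution'
        # fields scraped from the login page; that step is omitted here.
        s = requests.session()
        s.post(self.login_url, data={'username': self.username,
                                     'password': self.password})
        return '; '.join('%s=%s' % (k, v) for k, v in s.cookies.items())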

Related

Scrape data from a page that requires a login

I am new to Python and web scraping, and I am trying to write a very basic script that will get data from a webpage that can only be accessed after logging in. I have looked at a bunch of different examples, but none of them fix the issue. This is what I have so far:
from bs4 import BeautifulSoup
import urllib, urllib2, cookielib
username = 'name'
password = 'pass'
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
login_data = urllib.urlencode({'username' : username, 'password' : password})
opener.open('WebpageWithLoginForm')
resp = opener.open('WebpageIWantToAccess')
soup = BeautifulSoup(resp, 'html.parser')
print soup.prettify()
As of right now, when I print the page it just prints the contents of the page as if I were not logged in. I think the issue has something to do with the way I am setting the cookies, but I am really not sure because I do not fully understand what is happening with the cookie processor and its libraries.
Thank you!
Current Code:
import requests
import sys
EMAIL = 'usr'
PASSWORD = 'pass'
URL = 'https://connect.lehigh.edu/app/login'
def main():
    # Start a session so we can have persistent cookies
    session = requests.session(config={'verbose': sys.stderr})
    # This is the form data that the page sends when logging in
    login_data = {
        'username': EMAIL,
        'password': PASSWORD,
        'LOGIN': 'login',
    }
    # Authenticate
    r = session.post(URL, data=login_data)
    # Try accessing a page that requires you to be logged in
    r = session.get('https://lewisweb.cc.lehigh.edu/PROD/bwskfshd.P_CrseSchdDetl')

if __name__ == '__main__':
    main()
You can use the requests module.
Take a look at this answer that I've linked below:
https://stackoverflow.com/a/8316989/6464893
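As a rough sketch of that approach (URLs and form field names here are placeholders; note that, unlike the urllib2 attempt above, the login data is actually POSTed to the login form):
import requests

# Placeholder URLs and field names -- inspect the site's login form for the real ones.
login_url = 'https://example.com/login'
protected_url = 'https://example.com/protected-page'
login_data = {'username': 'name', 'password': 'pass'}

session = requests.session()
session.post(login_url, data=login_data)   # POST the form; cookies stick to the session
resp = session.get(protected_url)          # reuse the same session for later requests
print resp.text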

urllib2, python, garbage response when opening specific site

So I have been looking around and have managed to cobble together some code that lets me log in to the website http://forums.somethingawful.com.
It works; I can see from the response that it works.
But when I use the same urllib2 opener I created for that login to visit http://forums.somethingawful.com/attachment.php?attachmentid=300 (which I need to be logged in to view), I get a response of "ÿØÿà".
EDIT: http://i.imgur.com/PmWl1s4.png
I have included a screenshot of what the target page looks like when logged in, if this is any more help.
Any ideas why?
"""
# Script to log in to website and store cookies.
# run as: python web_login.py USERNAME PASSWORD
#
# sources of code include:
#
# http://stackoverflow.com/questions/2954381/python-form-post-using-urllib2-also-question-on-saving-using-cookies
# http://stackoverflow.com/questions/301924/python-urllib-urllib2-httplib-confusion
# http://www.voidspace.org.uk/python/articles/cookielib.shtml
#
# mashed together by Martin Chorley
#
# Licensed under a Creative Commons Attribution ShareAlike 3.0 Unported License.
# http://creativecommons.org/licenses/by-sa/3.0/
"""
import urllib, urllib2
import cookielib
import sys
import urlparse
from BeautifulSoup import BeautifulSoup as bs
class WebLogin(object):

    def __init__(self, username, password):
        # url for website we want to log in to
        self.base_url = 'http://forums.somethingawful.com/'
        # login action we want to post data to
        # could be /login or /account/login or something similar
        self.login_action = '/account.php?'
        # file for storing cookies
        self.cookie_file = 'login.cookies'
        # user provided username and password
        self.username = username
        self.password = password
        # set up a cookie jar to store cookies
        self.cj = cookielib.MozillaCookieJar(self.cookie_file)
        # set up opener to handle cookies, redirects etc
        self.opener = urllib2.build_opener(
            urllib2.HTTPRedirectHandler(),
            urllib2.HTTPHandler(debuglevel=0),
            urllib2.HTTPSHandler(debuglevel=0),
            urllib2.HTTPCookieProcessor(self.cj)
        )
        # pretend we're a web browser and not a python script
        self.opener.addheaders = [('User-agent',
                                   ('Chrome/16.0.912.77'))
                                  ]
        # open the front page of the website to set and save initial cookies
        response = self.opener.open(self.base_url)
        self.cj.save()
        # try and log in to the site
        response = self.login()
        response2 = self.opener.open("http://forums.somethingawful.com/attachment.php?attachmentid=300")
        print response2.read() + "LLLLLL"

    # method to do login
    def login(self):
        # parameters for login action
        # may be different for different websites
        # check html source of website for specifics
        login_data = urllib.urlencode({
            'action': 'login',
            'username': 'username',
            'password': 'password'
        })
        # construct the url
        login_url = self.base_url + self.login_action
        # then open it
        response = self.opener.open(login_url, login_data)
        # save the cookies and return the response
        self.cj.save()
        return response

if __name__ == "__main__":
    username = "username"
    password = "password"
    # initialise and login to the website
    test = WebLogin(username, password)
Try this instead:
import urllib2,cookielib
def login(username, password):
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookielib.CookieJar()))
    url1 = "http://forums.somethingawful.com/attachment.php?attachmentid=300"
    url2 = "http://forums.somethingawful.com/account.php?action=loginform"
    data = "&username="+username+"&password="+password
    socket = opener.open(url1)
    socket = opener.open(url2, data)
    return socket.read()
P.S.: I wrote it as a standalone function; you can integrate it into your class if it works for you. In addition, the call to opener.open(url1) might be redundant; I would need a valid username/password pair in order to verify that...
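As for the "garbage" itself: "ÿØÿà" is the byte sequence 0xFF 0xD8 0xFF 0xE0, the signature at the start of a JPEG file, so the response is most likely the raw bytes of the image attachment rather than HTML. If that is the case, a minimal way to handle it is to write the bytes to disk instead of printing them (the file name here is arbitrary):
# Assuming `opener` is the cookie-aware opener you already logged in with:
response = opener.open("http://forums.somethingawful.com/attachment.php?attachmentid=300")
with open("attachment.jpg", "wb") as f:
    f.write(response.read())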

403 error while calling Reddit API

I'm trying to access data saved by the user, but it keeps returning a 403 error. This is the API endpoint:
http://www.reddit.com/dev/api#GET_user_{username}_saved
I'm thoroughly confused about what to send in my headers to make this request work, and the reddit documentation makes no mention of it at all. Help?
I'm using the Python requests library to do this.
Referring to line 686 in reddit's code in listingcontroller.py (here):
if (where in ('saved', 'hidden') and not
    ((c.user_is_loggedin and c.user._id == vuser._id) or
     c.user_is_admin)):
    return self.abort403()
you can clearly see that you must be logged in as that user or be an admin in order to get the saved or hidden data; otherwise you get a 403 error.
As @zenpoy already mentioned (and as you already know), you have to be logged in. Therefore, you should save the cookie that you get in response to a valid call to api/login. I've written some code which logs a user in and retrieves all saved things:
import urllib
import urllib2
import cookielib
import json
login_url = 'https://ssl.reddit.com/api/login/'
saved_url = 'https://ssl.reddit.com/user/<username>/saved.json'
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
def login(username, passwd):
    values = {'user': username,
              'api_type': 'json',
              'passwd': passwd}
    data = urllib.urlencode(values)
    response = opener.open(login_url, data).read()
    print json.loads(response)

def retrieve_saved(username):
    url = saved_url.replace('<username>', username)
    response = opener.open(url).read()
    print json.loads(response)

login(<username>, <passwd>)
retrieve_saved(<username>)

How to authenticate a site with Python using urllib2?

After much reading here on Stack Overflow as well as elsewhere on the web, I'm still struggling to get things to work.
My challenge: to get access to a restricted part of a website for which I'm a member using Python and urllib2.
From what I've read the code should be like this:
mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
url = 'http://www.domain.com'
mgr.add_password(None, url, 'username', 'password')
handler = urllib2.HTTPBasicAuthHandler(mgr)
opener = urllib2.build_opener(handler)
urllib2.install_opener(opener)
try:
    response = urllib2.urlopen('http://www.domain.com/restrictedpage')
    page = response.read()
    print response.geturl()
except IOError, e:
    print e
The print doesn't show "http://www.domain.com/restrictedpage" but "http://www.domain.com/login", so my credentials aren't being stored/processed and I'm being redirected.
How can I get this to work? I've been trying for days and keep hitting the same dead ends. I've tried all the examples I could find to no avail.
My main question is: what's needed to authenticate to a website using Python and urllib2?
Quick question: what am I doing wrong?
First check manually what is really happening when you authenticate successfully (instructions for Chrome):
Open developer tools in Chrome (Ctrl + Shift + I)
Click the Network tab
Do the authentication manually (go to the page, type the username and password, and submit)
Check the POST request in the Network tab of the developer tools
Check the Request Headers, Query String Parameters, and Form Data. There you will find everything you need to include in your own POST.
Then install the "Advanced Rest Client (ARC)" Chrome extension
Use ARC to construct a valid POST for authentication.
Now you know what to have in your headers and form data. Here's some sample code using Requests that worked for me on one particular site:
import requests
USERNAME = 'user' # put correct usename here
PASSWORD = 'password' # put correct password here
LOGINURL = 'https://login.example.com/'
DATAURL = 'https://data.example.com/secure_data.html'
session = requests.session()
req_headers = {
    'Content-Type': 'application/x-www-form-urlencoded'
}
formdata = {
    'UserName': USERNAME,
    'Password': PASSWORD,
    'LoginButton': 'Login'
}
# Authenticate
r = session.post(LOGINURL, data=formdata, headers=req_headers, allow_redirects=False)
print r.headers
print r.status_code
print r.text
# Read data
r2 = session.get(DATAURL)
print "___________DATA____________"
print r2.headers
print r2.status_code
print r2.text
For HTTP Basic Auth, you can refer to this: http://www.voidspace.org.uk/python/articles/authentication.shtml
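If the site really does use HTTP Basic Auth (rather than a login form), Requests handles it in one call; a minimal sketch with a placeholder URL and credentials:
import requests

# Basic Auth credentials are passed as an (user, password) tuple.
r = requests.get('http://www.domain.com/restrictedpage', auth=('username', 'password'))
print r.status_code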

pass session cookies in http header with python urllib2?

I'm trying to write a simple script to log into Wikipedia and perform some actions on my user page, using the Mediawiki api. However, I never seem to get past the first login request (from this page: https://en.wikipedia.org/wiki/Wikipedia:Creating_a_bot#Logging_in). I don't think the session cookie that I set is being sent. This is my code so far:
import Cookie, urllib, urllib2, xml.etree.ElementTree
url = 'https://en.wikipedia.org/w/api.php?action=login&format=xml'
username = 'user'
password = 'password'
user_data = [('lgname', username), ('lgpassword', password)]
#Login step 1
#Make the POST request
request = urllib2.Request(url)
data = urllib.urlencode(user_data)
login_raw_data1 = urllib2.urlopen(request, data).read()
#Parse the XML for the login information
login_data1 = xml.etree.ElementTree.fromstring(login_raw_data1)
login_tag = login_data1.find('login')
token = login_tag.attrib['token']
cookieprefix = login_tag.attrib['cookieprefix']
sessionid = login_tag.attrib['sessionid']
#Set the cookies
cookie = Cookie.SimpleCookie()
cookie[cookieprefix + '_session'] = sessionid
#Login step 2
request = urllib2.Request(url)
session_cookie_header = cookieprefix+'_session='+sessionid+'; path=/; domain=.wikipedia.org; HttpOnly'
request.add_header('Set-Cookie', session_cookie_header)
user_data.append(('lgtoken', token))
data = urllib.urlencode(user_data)
login_raw_data2 = urllib2.urlopen(request, data).read()
I think the problem is somewhere in the request.add_header('Set-Cookie', session_cookie_header) line, but I don't know for sure. How do I use these Python libraries to send cookies in the header with every request (which is necessary for a lot of API functions)?
The latest version of requests has support for sessions (as well as being really simple to use and generally great):
with requests.session() as s:
    s.post(url, data=user_data)
    r = s.get(url_2)
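Applied to the two-step MediaWiki login above, a rough sketch might look like this (format=json is assumed here to simplify parsing; the cookie handling the question does by hand is taken care of by the session):
import json
import requests

url = 'https://en.wikipedia.org/w/api.php?action=login&format=json'
user_data = {'lgname': 'user', 'lgpassword': 'password'}

with requests.session() as s:
    # Step 1: the API answers "NeedToken" and sets the session cookie.
    r1 = s.post(url, data=user_data)
    token = json.loads(r1.content)['login']['token']
    # Step 2: repeat the request with the token; the cookie goes along automatically.
    user_data['lgtoken'] = token
    r2 = s.post(url, data=user_data)
    print json.loads(r2.content)['login']['result']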
