How to "log in" to a website using Python's Requests module? - python

I am trying to post a request to log in to a website using the Requests module in Python, but it's not really working. I'm new to this, so I can't figure out whether I should send my username and password as cookies or use some type of HTTP authorization thing I found (??).
from pyquery import PyQuery
import requests
url = 'http://www.locationary.com/home/index2.jsp'
So now, I think I'm supposed to use "post" and cookies....
ck = {'inUserName': 'USERNAME/EMAIL', 'inUserPass': 'PASSWORD'}
r = requests.post(url, cookies=ck)
content = r.text
q = PyQuery(content)
title = q("title").text()
print(title)
I have a feeling that I'm doing the cookies thing wrong...I don't know.
If it doesn't log in correctly, the title of the home page should come out to "Locationary.com" and if it does, it should be "Home Page."
If you could maybe explain a few things about requests and cookies to me and help me out with this, I would greatly appreciate it. :D
Thanks.
EDIT: It still didn't really work. Okay, so this is what the home page HTML says before you log in:
</td><td><img src="http://www.locationary.com/img/LocationaryImgs/icons/txt_email.gif"> </td>
<td><input class="Data_Entry_Field_Login" type="text" name="inUserName" id="inUserName" size="25"></td>
<td><img src="http://www.locationary.com/img/LocationaryImgs/icons/txt_password.gif"> </td>
<td><input class="Data_Entry_Field_Login" type="password" name="inUserPass" id="inUserPass"></td>
So I think I'm doing it right, but the output is still "Locationary.com"
2nd EDIT:
I want to be able to stay logged in for a long time and whenever I request a page under that domain, I want the content to show up as if I were logged in.

I know you've found another solution, but for those like me who find this question, looking for the same thing, it can be achieved with requests as follows:
Firstly, as Marcus did, check the source of the login form to get three pieces of information: the URL that the form posts to, and the name attributes of the username and password fields. In his example, they are inUserName and inUserPass.
Once you've got that, you can use a requests.Session() instance to make a post request to the login URL with your login details as a payload. Making requests from a session instance is essentially the same as using requests normally; it simply adds persistence, allowing you to store and reuse cookies etc.
Assuming your login attempt was successful, you can simply use the session instance to make further requests to the site. The cookie that identifies you will be used to authorise the requests.
Example
import requests
# Fill in your details here to be posted to the login form.
payload = {
    'inUserName': 'username',
    'inUserPass': 'password'
}
# Use 'with' to ensure the session context is closed after use.
with requests.Session() as s:
    p = s.post('LOGIN_URL', data=payload)
    # Print the HTML returned, or something more intelligent, to check whether the login succeeded.
    print(p.text)

    # An authorised request.
    r = s.get('A protected web page url')
    print(r.text)
    # etc...
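If, as the second edit above asks, you want to stay logged in across runs of your script, one option is to pickle the session's cookies to disk and load them next time. This only works for as long as the server keeps the session cookie valid. A minimal sketch reusing the payload and placeholder URLs from above; 'cookies.pkl' is just a hypothetical file name:
import pickle
import requests

with requests.Session() as s:
    s.post('LOGIN_URL', data=payload)
    # Save the cookie jar so a later run can reuse the logged-in state.
    with open('cookies.pkl', 'wb') as f:
        pickle.dump(s.cookies, f)

# In a later run, restore the cookies before making requests.
with requests.Session() as s:
    with open('cookies.pkl', 'rb') as f:
        s.cookies.update(pickle.load(f))
    r = s.get('A protected web page url')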

If the information you want is on the page you are directed to immediately after login...
Let's call your ck variable payload instead, like in the python-requests docs:
payload = {'inUserName': 'USERNAME/EMAIL', 'inUserPass': 'PASSWORD'}
url = 'http://www.locationary.com/home/index2.jsp'
requests.post(url, data=payload)
Otherwise...
See https://stackoverflow.com/a/17633072/111362 below.

Let me try to make it simple. Suppose the URL of the site is http://example.com/, and suppose you need to sign in by filling in a username and password. We go to the login page, say http://example.com/login.php, view its source code, and search for the action URL. It will be in a form tag, something like
<form name="loginform" method="post" action="userinfo.php">
Now take userinfo.php and make it an absolute URL, which will be 'http://example.com/userinfo.php'. Now run a simple Python script:
import requests
url = 'http://example.com/userinfo.php'
values = {'username': 'user',
          'password': 'pass'}
r = requests.post(url, data=values)
print(r.content)
I hope this helps someone somewhere someday.

The requests.Session() solution also works for logging in to a form with CSRF protection (as used in Flask-WTF forms). Check whether a csrf_token is required as a hidden field and add it to the payload along with the username and password:
import requests
from bs4 import BeautifulSoup
payload = {
    'email': 'email@example.com',
    'password': 'passw0rd'
}
with requests.Session() as sess:
    res = sess.get(server_name + '/signin')
    signin = BeautifulSoup(res.text, 'html.parser')
    payload['csrf_token'] = signin.find('input', id='csrf_token')['value']
    res = sess.post(server_name + '/auth/login', data=payload)
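As a rough sanity check that the login worked, you can inspect the page the POST returns; what to look for varies by site, so the 'logout' marker below is only an assumption to adapt:
# Hypothetical success check: replace 'logout' with something that only
# appears on the signed-in version of the page (a logout link, the user's name).
if 'logout' in res.text.lower():
    print('Logged in')
else:
    print('Login failed')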

Find out the names of the inputs used on the website's form for usernames <...name=username.../> and passwords <...name=password../> and replace them in the script below. Also replace the URL to point at the desired site to log in to.
login.py
#!/usr/bin/env python
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
payload = {'username': 'user@email.com', 'password': 'blahblahsecretpassw0rd'}
url = 'https://website.com/login.html'
requests.post(url, data=payload, verify=False)
The use of disable_warnings(InsecureRequestWarning) will silence the warnings the script would otherwise print when logging in to sites with unverified SSL certificates.
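Note that verify=False turns certificate checking off entirely. If you can obtain the site's CA certificate, a safer alternative is to point requests at it instead; a sketch, where 'ca.pem' is a hypothetical path to that certificate:
# Verify against a specific CA bundle instead of disabling verification.
requests.post(url, data=payload, verify='ca.pem')  # 'ca.pem' is a placeholder path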
Extra:
To run this script from the command line on a UNIX-based system, place it in a directory, e.g. home/scripts, and add this directory to your path in ~/.bash_profile or a similar file used by the terminal.
# Custom scripts
export CUSTOM_SCRIPTS=home/scripts
export PATH=$CUSTOM_SCRIPTS:$PATH
Then create a link to this Python script inside home/scripts and make the script executable:
ln -s ~/home/scripts/login.py ~/home/scripts/login
chmod +x ~/home/scripts/login.py
Close your terminal, start a new one, and run login.

Some pages may require more than a login and password. There may even be hidden fields. The most reliable way is to use the browser's inspect tool and look at the Network tab while logging in, to see what data is actually being passed on.
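As a starting point for that, you can also copy every hidden input from the login form into your payload automatically, so tokens are sent back unchanged. A minimal sketch with BeautifulSoup; LOGIN_URL and the username/password field names are placeholders to replace with the real ones:
import requests
from bs4 import BeautifulSoup

with requests.Session() as s:
    # Fetch the login page and copy every hidden field into the payload.
    soup = BeautifulSoup(s.get('LOGIN_URL').text, 'html.parser')
    payload = {i['name']: i.get('value', '')
               for i in soup.select('input[type=hidden]') if i.get('name')}
    payload['username'] = 'user'      # replace with the form's real field names
    payload['password'] = 'password'
    s.post('LOGIN_URL', data=payload)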

Related

How to scrape a WordPress-based page? Or rather, how to log in to a page requesting a username and password

I'm not quite sure how to explain my issue. I'm trying to scrape a schedule page (of my school) to make it easier to read. Unfortunately, I couldn't figure out how to pass the credentials to the login prompt with Python.
url = "https://www.diltheyschule.de/vertretungsplan/"
or rather this one, since it contains the actual data:
url = "https://www.diltheyschule.de/vertretungsplan/f1/subst_001.htm"
I do know the password and username.
The login prompt looks like a standard browser authentication dialog (screenshot not included here). As you might have guessed, I want to pass the password and username to this prompt.
This code doesn't work for me; it returns an unauthorized error.
import requests
session = requests.Session()
r = session.post("https://www.diltheyschule.de/vertretungsplan/",data={"log":"xxx","pwd":"xxx"})
#or
r = session.post("https://www.diltheyschule.de/vertretungsplan/f1/subst_001.htm",data={"log":"xxx","pwd":"xxx"})
print(r.content)
output
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>401 Unauthorized</title>
</head><body>
<h1>Unauthorized</h1>
<p>This server could not verify that you
are authorized to access the document
requested. Either you supplied the wrong
credentials (e.g., bad password), or your
browser doesn't understand how to supply
the credentials required.</p>
<hr>
<address>Apache Server at www.diltheyschule.de Port 443</address>
</body></html>
Probably essential information:
the goal is to scrape 'https://www.diltheyschule.de/vertretungsplan/f1/subst_001.htm'
passing pwd and log to the prompt, most likely without GUI support (e.g. Selenium)
This directory is secured by HTTP basic authentication. This is the easiest authentication method; you log in by sending the appropriate headers.
Also, are you sure you want to use the POST method just to view an .html page?
Please try this:
import requests
session = requests.Session()
r = session.get("https://www.diltheyschule.de/vertretungsplan/f1/subst_001.htm",auth=requests.auth.HTTPBasicAuth('user', 'pass'))
print(r.content)
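If you plan to fetch several pages, you can also set the credentials on the session once; every request made from that session will then send them. A short sketch:
import requests

session = requests.Session()
# Credentials attached to the session are sent with every request it makes.
session.auth = ('user', 'pass')
r = session.get("https://www.diltheyschule.de/vertretungsplan/f1/subst_001.htm")
print(r.content)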

Remote login to decoupled website with python and requests

I am trying to log in to the website www.seek.com.au, to test the possibility of a remote login using Python's Requests module. The site's front end is built with React, so I don't see any form action component on www.seek.com.au/sign-in.
When I run the code below, I see response code 200, indicating success, but I doubt it is actually successful. The main concern is which URL to use when there is no action element in the login form.
import requests
payload = {'email': <username>, 'password': <password>}
url = 'https://www.seek.com.au'
with requests.Session() as s:
    response_op = s.post(url, data=payload)
    # print the response status code
    print(response_op.status_code)
    print(response_op.text)
When I examine the output data (response_op.text), I see the words 'Sign in' and 'Register' in the output, which indicates the login failed. If it were successful, the user's first name would be shown in their place. What am I doing wrong here?
P.S.: I am not trying to scrape data from this website; I am trying to log in to a similar website.
Try this code:
import requests
payload={"email": "test#test.com", "password": "passwordtest", "rememberMe": True}
url = "https://www.seek.com.au:443/userapi/login"
with requests.Session() as s:
response_op = s.post(url, json=payload)
# print the response status code
print(response_op.status_code)
print(response_op.text)
You were sending the request to the wrong URL.
Hope this helps.

Logging in to site with Python

I'm trying to use Python to scrape a website, but I have to log in first before I can get to the page with the data on it.
The URL for the login page is:
https://tunein.com/account/login/?returnTo=https://amplifier.tunein.com/sessions/new&source=amplifier
I have read numerous threads which seem to answer the question, but I'm struggling to relate it to my own situation.
The code I have (from a response in this thread) is:
import requests
# Fill in your details here to be posted to the login form.
payload = {
    'Username': 'user',
    'Password': 'password'
}
# Use 'with' to ensure the session context is closed after use.
with requests.Session() as s:
    p = s.post('https://tunein.com/account/login/?returnTo=https://amplifier.tunein.com/sessions/new&source=amplifier', data=payload)
    # Print the HTML returned, or something more intelligent, to check whether the login succeeded.
    print(p.text)
I have looked at the source code to see what the name of the form fields are, hence the 'Username' and 'Password' attributes in the payload variable.
When I run the script, p.text just returns the HTML of the same page, so it obviously isn't logging in correctly. Any suggestions? Is there a better way to do it?
Edit:
The "Form Data" headers once I log in are:
Username:user
Password:pass
Remember:true
Remember:false
btnLogin:Sign In
ReturnTo:https://amplifier.tunein.com/sessions/new
Source:amplifier
Does this mean I have to add all of these to my payload variable?
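For reference, a payload mirroring those captured fields would look like the sketch below; whether every field is actually required is an assumption to test. Note the list value, which makes requests send the Remember key twice, as the browser did:
payload = {
    'Username': 'user',
    'Password': 'password',
    'Remember': ['true', 'false'],  # submitted twice in the captured request
    'btnLogin': 'Sign In',
    'ReturnTo': 'https://amplifier.tunein.com/sessions/new',
    'Source': 'amplifier'
}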

scraping data from webpage with python 3, need to log in first

I checked this question but it only has one answer and it's a little over my head (just started with Python). I'm using Python 3.
I'm trying to scrape data from this page, but if you have a BP account, the page is a lot different/more useful. I need my program to log me in before I have BeautifulSoup get the data for me.
So far I have
from bs4 import BeautifulSoup
from requests import session

username = 'myUsername'
password = 'myPassword'

payload = {'action': 'Log in',
           'Username: ': username,
           'Password: ': password}

# The next few lines are pretty much copied from a different StackOverflow
# question. I don't really understand what they're doing, and obviously these
# are where the problem is.
with session() as c:
    c.post('https://www.baseballprospectus.com/manageprofile.php', data=payload)
    response = c.get('http://www.baseballprospectus.com/sortable/index.php?cid=1820315')
    soup = BeautifulSoup(response.content, "lxml")
    for row in soup.find_all('tr')[7:]:
        cells = row.find_all('td')
        name = cells[1].text
        print(name)
The script does work; it just pulls the data from the site without being logged in, so it's not the data I want.
Conceptually, there is no problem with your code. You're using a session object to send a login request, then with the same session you're sending a request for the desired page. This means that the cookies set by the login request should be kept for the second request. If you want to read more about the workings of the Session object, here's the relevant Requests documentation.
Since I don't have a valid login for Baseball Prospectus, I'll have to guess that something is wrong with the data you're sending to the login page. A quick inspection using the 'Network' tab in Chrome's Developer Tools shows that the login page, manageprofile.php, accepts four POST parameters:
username: myUsername
password: myPassword
action: muffinklezmer
nocache: some long number, e.g. 2417395155
However, you're sending a different set of parameters and specifying a different value for the 'action' parameter. Note that the parameter names have to match the original request exactly; otherwise manageprofile.php will not accept the login.
Try replacing the payload dictionary with this version:
payload = {
    'action': 'muffinklezmer',
    'username': username,
    'password': password}
If this doesn't work, try adding the 'nocache' parameter too, e.g.:
'nocache': '1437955145'
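Putting both suggestions together, the payload would then look like this; the nocache value is just the example number from above and appears to be generated per request, so it may need to be fresh:
payload = {
    'action': 'muffinklezmer',
    'username': username,
    'password': password,
    'nocache': '1437955145'}  # example value; likely generated per request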

Python: Login on a website

I'm trying to log in to a website and do automated clean-up jobs.
The site where I need to login is : http://site.com/Account/LogOn
I tried various code that I found on Stack, like Login to website using python (but I'm stuck on this line
session = requests.session(config={'verbose': sys.stderr})
where my JetBrains IDE doesn't like 'verbose', telling me that I need to do something but not explaining exactly what).
I also tried this: Browser simulation - Python, but no luck with that either.
Can anyone help me? All answers will be appreciated. Thanks in advance.
PS: I started learning Python 2 weeks ago, so please elaborate your answer for my "pro" level of understanding :)
-------------------------UPDATE:-----------------------------
I managed to log in, but when I try to move to another page and push a button, it says Please log in!
I use this code:
url = 'http://site.com/Account/LogOn'
values = {'UserName': 'user',
          'Password': 'pass'}
data = urllib.urlencode(values)
cookies = cookielib.CookieJar()
opener = urllib2.build_opener(
    urllib2.HTTPRedirectHandler(),
    urllib2.HTTPHandler(debuglevel=0),
    urllib2.HTTPSHandler(debuglevel=0),
    urllib2.HTTPCookieProcessor(cookies))
response = opener.open(url, data)
the_page = response.read()
http_headers = response.info()
print response
After I log in I need to switch a menu value, which looks like this in HTML:
<select id="menu_uid" name="menu_uid" onchange="swapTool()" style="font-size:8pt;width:120px;">
<option value="1" selected>MyProfile</option>
...
<option value="6" >DeleteTree</option>
but I can also do it directly if I form a URL like this:
http://site.com/Account/management.html?Category=6&deltreeid=6&do=Delete+Tree
So, how can I build this URL and submit it? Thanks again!
Save yourself a lot of headache and use requests:
import requests

url = 'http://site.com/Account/LogOn'
values = {'UserName': 'user',
          'Password': 'pass'}
r = requests.post(url, data=values)
# Now you have logged in
params = {'Category': 6, 'deltreeid': 6, 'do': 'Delete Tree'}
url = 'http://site.com/Account/management.html'
# sending the login cookies as well; note params=, not data=, so the values go in the query string
result = requests.get(url, params=params, cookies=r.cookies)
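A slightly more robust variant of the same idea is to use a Session, so the login cookies (and any set later) are carried automatically instead of being passed by hand; a sketch:
import requests

with requests.Session() as s:
    # The session stores the login cookie and resends it automatically.
    s.post('http://site.com/Account/LogOn', data=values)
    params = {'Category': 6, 'deltreeid': 6, 'do': 'Delete Tree'}
    result = s.get('http://site.com/Account/management.html', params=params)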
Well, first things first:
it sends a POST request to /Account/LogOn.
The fields are called UserName and Password.
Then you can use Python's httplib to do HTTP requests:
http://docs.python.org/2/library/httplib.html
(There is an example at the end on how to do a POST.)
Then you will get a response, probably containing a session cookie within an HTTP header. You need to store that cookie in a variable and send it in all subsequent requests to be authenticated.
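A rough sketch of that manual flow, using http.client (the Python 3 name for httplib); the paths match the question above, but the cookie handling is deliberately simplistic and assumes the server sets a single session cookie:
import http.client
import urllib.parse

conn = http.client.HTTPConnection('site.com')
body = urllib.parse.urlencode({'UserName': 'user', 'Password': 'pass'})
headers = {'Content-Type': 'application/x-www-form-urlencoded'}
conn.request('POST', '/Account/LogOn', body, headers)
resp = conn.getresponse()
resp.read()  # the body must be read before reusing the connection
# Keep the 'name=value' part of the session cookie and send it back.
cookie = resp.getheader('Set-Cookie', '').split(';')[0]
conn.request('GET', '/Account/management.html', headers={'Cookie': cookie})
print(conn.getresponse().read())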
