Python: Remain logged in to a website using MechanicalSoup

I am a newbie in Python. I want to write a script that runs through thousands of URLs and saves the responses. Credentials are required to access these URLs, so I wrote a basic script that goes to a URL and prints the response. When I go through multiple URLs, the website returns an error saying that multiple users are logged in. I would like to log in once and open the other URLs in the same session. Is there any way to do that using MechanicalSoup?
Here is my script:
import mechanicalsoup

browser = mechanicalsoup.StatefulBrowser()
browser.open("mywebsite1")
browser.select_form('form[action="/login"]')
browser.get_current_form().print_summary()  # inspect the form fields
browser["userId"] = "myusername"
browser["password"] = "mypassword"
response = browser.submit_selected()
browser.launch_browser()  # debug helper: shows the current page in a real browser
print(response.text)
response = browser.open("mywebsite2")  # reuse the same browser so the session cookies are kept
print(response.text)
# ...... and so on for all the URLs
How can I save my session? Thanks in advance for your help

Just save & load session.cookies:
def save_cookies(browser):
    return browser.session.cookies.get_dict()

def load_cookies(browser, cookies):
    from requests.utils import cookiejar_from_dict
    browser.session.cookies = cookiejar_from_dict(cookies)

browser = mechanicalsoup.StatefulBrowser()
browser.open("http://www.google.com")
cookies = save_cookies(browser)
load_cookies(browser, cookies)
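If the script is run more than once, the cookies can also be persisted to disk so the login survives between runs. A minimal sketch, assuming the login flow from the question and a hypothetical cookies.json file:

import json
import os
import mechanicalsoup
from requests.utils import cookiejar_from_dict

COOKIE_FILE = "cookies.json"  # hypothetical path, not from the original answer

def save_cookies_to_file(browser):
    with open(COOKIE_FILE, "w") as f:
        json.dump(browser.session.cookies.get_dict(), f)

def load_cookies_from_file(browser):
    if os.path.exists(COOKIE_FILE):
        with open(COOKIE_FILE) as f:
            browser.session.cookies = cookiejar_from_dict(json.load(f))

browser = mechanicalsoup.StatefulBrowser()
load_cookies_from_file(browser)        # reuse the previous session if one was saved
response = browser.open("http://mywebsite1")  # placeholder URL from the question
print(response.text)
save_cookies_to_file(browser)          # keep the session for the next run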

Related

Authenticating on ADFS with Python script

I need to parse a site that is hidden behind an ADFS service, and I am struggling with the authentication to it.
Is there any option to get in?
From what I can see, most solutions are for backend applications or for "system users" (with app_id and app_secret). In my case I can't use those; I only have a login and password.
An example of the problem:
in Chrome I open www.example.com and it redirects me to https://login.microsoftonline.com/ and then to https://federation-sts.example.com/adfs/ls/?blabla with a login and password form.
How can I get access to it with python3?
ADFS uses complicated redirection and CSRF protection techniques, so it is better to use a browser automation tool to perform the authentication and parse the webpage afterwards. I recommend the Selenium toolkit with Python bindings. Here is a working example:
import time
from selenium import webdriver

def MS_login(usrname, passwd):  # call this with username and password
    driver = webdriver.Edge()   # change to your browser (Firefox, Chrome, ... are supported)
    driver.delete_all_cookies() # clean up the prior login sessions
    driver.get('https://login.microsoftonline.com/')  # change the URL to your website
    time.sleep(5)  # wait for redirection and rendering
    driver.find_element_by_xpath("//input[@name='loginfmt']").send_keys(usrname)
    driver.find_element_by_xpath("//input[@type='submit']").click()
    time.sleep(5)
    driver.find_element_by_xpath("//input[@name='passwd']").send_keys(passwd)
    driver.find_element_by_xpath("//input[@name='KMSI' and @type='checkbox']").click()
    driver.find_element_by_xpath("//input[@type='submit']").click()
    time.sleep(5)
    driver.find_element_by_xpath("//input[@type='submit']").click()
    # successfully logged in
    # parse the site ...
    return driver  # let the caller close the browser with driver.close() when done
This script calls Microsoft Edge to open the website. It injects the username and password into the correct DOM elements and then lets the browser handle the rest. It has been tested on the webpage "https://login.microsoftonline.com"; you may need to modify it to suit your website.
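Once MS_login returns, the authenticated pages can be parsed straight from the driver, for example with BeautifulSoup. A minimal sketch, assuming the function above and hypothetical credentials and URL:

from bs4 import BeautifulSoup

driver = MS_login("user@example.com", "secret")       # hypothetical credentials
driver.get("https://www.example.com/protected/page")  # hypothetical protected URL
soup = BeautifulSoup(driver.page_source, "html.parser")
print(soup.title.string)  # or whatever parsing you need
driver.close()            # close the browser when finished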
To answer your question "how to get in with Python": I am assuming you want to perform some web scraping on pages secured by Azure AD authentication.
In that kind of scenario, you have to do the following steps.
For this script we will only need to import the following:
import requests
from lxml import html
First, we would like to create our session object. This object will allow us to persist the login session across all our requests.
session_requests = requests.session()
Second, we extract the CSRF token from the web page; this token is used during login. In this example we use lxml and XPath, but a regular expression or any other method that extracts this data would work just as well.
login_url = "https://bitbucket.org/account/signin/?next=/"
result = session_requests.get(login_url)
tree = html.fromstring(result.text)
authenticity_token = list(set(tree.xpath("//input[@name='csrfmiddlewaretoken']/@value")))[0]
Next, we perform the login phase. In this phase we send a POST request to the login URL, with the payload dictionary (shown below) as the data. We also add a referer header pointing at the same URL.
result = session_requests.post(
    login_url,
    data=payload,
    headers=dict(referer=login_url)
)
The payload is a dictionary object of the user name, password, and CSRF token:
payload = {
    "username": "<USER NAME>",
    "password": "<PASSWORD>",
    "csrfmiddlewaretoken": "<CSRF_TOKEN>"
}
Note: this is just an example.
Step 2: scrape the content.
Now that we were able to log in successfully, we perform the actual scraping:
url = 'https://bitbucket.org/dashboard/overview'
result = session_requests.get(
    url,
    headers=dict(referer=url)
)
In other words, you need to get the request payload details from Azure AD, create a session object, log in with it, and then finally do the scraping.
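Putting the steps above together in runnable order (the payload has to be built before the POST that uses it), a minimal end-to-end sketch with the Bitbucket URLs from the example and placeholder credentials:

import requests
from lxml import html

login_url = "https://bitbucket.org/account/signin/?next=/"
session_requests = requests.session()

# Step 1: fetch the login page and extract the CSRF token
result = session_requests.get(login_url)
tree = html.fromstring(result.text)
authenticity_token = list(set(tree.xpath("//input[@name='csrfmiddlewaretoken']/@value")))[0]

# Step 2: log in, sending the token back along with the credentials
payload = {
    "username": "<USER NAME>",  # placeholder
    "password": "<PASSWORD>",   # placeholder
    "csrfmiddlewaretoken": authenticity_token,
}
result = session_requests.post(login_url, data=payload, headers=dict(referer=login_url))

# Step 3: request the protected page with the now-authenticated session
url = "https://bitbucket.org/dashboard/overview"
result = session_requests.get(url, headers=dict(referer=url))
print(result.text)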
This is a very good example of web scraping a secured website.
Hope it helps.

Download encrypted webpage

Hi, I have researched this but I cannot find any answer to this question. I need to download a subdirectory of a web page into a string so I can search it. I know how to do that; the only problem is that the site is encrypted and requires a login to access the directory. I know I need to send the cookies with the download request, but I am unsure how to do this. I am coding in Python. Feel free to ask for more info.
# Python 2 code: urllib2 and cookielib became urllib.request and http.cookiejar in Python 3
import urllib
import urllib2
import cookielib
import time
# All your cookie related things are done by this.
cookie_jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie_jar))
urllib2.install_opener(opener)
#POST Parameters for login page.
request_body_params = {'your_parameter_name': 'its_value', 'another_parameter_name': 'its_value'}
data_encoding = urllib.urlencode(request_body_params)
url_main = 'https://your_site.com/login'
main_request = urllib2.Request(url_main, data_encoding)
#Any headers required goes here.
main_request.add_header('Accept-encoding', 'gzip')
# This is the response of login. You don't want to read this.
main_response = urllib2.urlopen(main_request)
# You want data from this link.
url_results = 'https://your_site.com/sub_directory'
results_response = urllib2.urlopen(url_results)
print results_response.read()
To check the POST parameters, open the site in a browser and go to the 'Network' tab of the developer tools. As you log in in the browser, network logs will be generated; click on the login request and check its POST parameters and headers.
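For anyone on Python 3, where urllib2 and cookielib no longer exist, here is a minimal equivalent sketch using the requests library (with the same hypothetical parameter names and URLs as above):

import requests

# a Session keeps cookies across requests automatically
session = requests.Session()

# POST parameters for the login page (hypothetical names, as above)
request_body_params = {'your_parameter_name': 'its_value', 'another_parameter_name': 'its_value'}

# log in; the session stores whatever cookies the server sets
session.post('https://your_site.com/login', data=request_body_params)

# fetch the protected subdirectory with the same session
results_response = session.get('https://your_site.com/sub_directory')
print(results_response.text)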

Using python requests module to login on an Wordpress based website

I'm trying to log in to a WordPress-based website using Python's requests module and beautifulsoup4. It seems like the code fails to log in successfully. Also, there is no CSRF token on the website. How do I successfully log in to the website?
import requests
import bs4 as bs

with requests.session() as c:
    link = "https://gpldl.com/sign-in/"  # link of the webpage to be logged in to
    initial = c.get(link)  # passing the GET request
    login_data = {"log": "*****", "pwd": "******"}  # login data from any account on the site; replace the stars with username and password
    page_login = c.post(link, data=login_data)  # posting the login data to the link
    print(page_login)  # checking the status of the requested page
    page = c.get("https://gpldl.com/my-gpldl-account/")  # requesting source code of the logged-in page
    good_data = bs.BeautifulSoup(page.content, "lxml")  # parsing it with BS4
    print(good_data.title)  # printing this gives the title that is shown to a logged-out account
You are sending your POST request to the wrong URL; the correct one should be https://gpldl.com/wp-login.php. Also, there are 5 parameters for the payload: log, pwd, rememberme, redirect_to, and redirect_to_automatic.
So it should be:
login_data = {"log": "*****", "pwd": "******",
              "rememberme": "forever",
              "redirect_to": "https://gpldl.com/my-gpldl-account/",
              "redirect_to_automatic": "1"
              }
page_login = c.post('https://gpldl.com/wp-login.php', data=login_data)
Edit:
You can use the Chrome dev tools to find all of this info while logging in.
As to the rememberme key, I would suggest you do exactly the same thing a browser does. Also add some headers to your request, especially User-Agent, because some websites simply don't welcome logins made this way.
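A minimal sketch of the same login with a User-Agent header set on the session (the header value is just an example string, not something the site requires verbatim):

import requests

with requests.session() as c:
    c.headers.update({"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"})  # example UA
    login_data = {"log": "*****", "pwd": "******",
                  "rememberme": "forever",
                  "redirect_to": "https://gpldl.com/my-gpldl-account/",
                  "redirect_to_automatic": "1"}
    page_login = c.post("https://gpldl.com/wp-login.php", data=login_data)
    page = c.get("https://gpldl.com/my-gpldl-account/")
    print(page.status_code)  # inspect the response to confirm the login worked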

Using Python to request draftkings.com info that requires login?

I'm trying to get contest data from the URL: "https://www.draftkings.com/contest/gamecenter/32947401"
If you go to this URL and aren't logged in, it just redirects you to the lobby. If you're logged in, it actually shows you the contest results.
Here are some things I tried:
- First, I used Chrome's dev networking tools to watch requests while I manually logged in.
- I then tried copying the cookie that I thought contained the authentication info; it was of the form:
'ajs_anonymous_id=%123123123123123, mlc=true; optimizelyEndUserId'
- I then stored that cookie as an environment variable and ran this code:
HEADERS = {'cookie': os.environ['MY_COOKIE']}
requests.get(draft_kings_url, headers=HEADERS)
No luck; this just gave me the lobby.
- I then tried requests' built-in:
HTTPBasicAuth
HTTPDigestAuth
No luck here either.
I'm no Python expert by far, and I've pretty much exhausted what I know and the search results I've found. Any ideas?
The tool that you want is Selenium. Something along the lines of:
from selenium import webdriver

browser = webdriver.Firefox()
browser.get("https://www.draftkings.com/contest/gamecenter/32947401")
username = browser.find_element_by_id("user")
username.send_keys("username")
password = browser.find_element_by_id("password")
password.send_keys("top_secret")
login = browser.find_element_by_name("login")
login.click()
Use Fiddler to see the exact request that is made when you log in. Then use the Session class in the requests package.
import requests
session = requests.Session()
session.get('YOUR_URL_LOGIN_PAGE')
This will save all the cookies from your URL in your session variable (like when you use a browser).
Then make a POST request to the login URL with the appropriate data.
You don't have to manually pass cookie data, as it is generated automatically when you first visit a website. However, you can set some headers explicitly, like User-Agent, with:
session.headers.update({'header_name': 'header_value'})
HTTPBasicAuth and HTTPDigestAuth might not work, depending on the website.
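Putting that together, here is a minimal sketch of the flow (the login URL and form field names are placeholders; take the real ones from the request captured in Fiddler):

import requests

session = requests.Session()
session.headers.update({'User-Agent': 'Mozilla/5.0'})  # example header

# visiting the login page first collects the initial cookies
session.get('YOUR_URL_LOGIN_PAGE')

# field names are placeholders; copy the real ones from the captured login request
session.post('YOUR_URL_LOGIN_PAGE', data={'username': '...', 'password': '...'})

# the authenticated session can now fetch the protected page
response = session.get('https://www.draftkings.com/contest/gamecenter/32947401')
print(response.text)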

Download a CSV file with python requests from an authenticated URL

For one of my projects I need to fetch a CSV file from a website's authenticated URL. Unfortunately there is no API to do it, which is why I decided to use requests with a session to fetch it.
The login page is:
http://extranet.ffbb.com/fbi/identification.do
I checked the HTML code of the page for the login and password field names. The login field is "identificationBean.identifiant" and the password field is "identificationBean.mdp".
So I try to connect and display the returned HTML with the following code, but it returns the HTML of a wrong login/password page (as if I had typed a wrong login/password in my browser), and I am sure my credentials are right.
import requests

login = "my_login"
password = "my_password"

with requests.Session() as session:
    data = {
        'identificationBean.identifiant': '{}'.format(login),
        'identificationBean.mdp': '{}'.format(password)
    }
    url = 'http://extranet.ffbb.com/fbi/identification.do'
    response = session.post(url, data=data)
    print(response.text)
Thank you for your help
