Intercept when url changes before the page is completely loaded - python

Is it possible to catch the event when the url is changed inside my browser using selenium?
Here is my scenario:
I load my website test.com
After all the static files are loaded, when executing one of the js files, I am redirected (not sure how) to another page redirect-one.test.com/blah
My browser gets the url redirect-one.test.com/blah and gets a 307 response to go to redirect-two.test.com/blahblah
Here my browser receives a final 302 to go to final.test.com/
The page of final.test.com/ is loaded and at the end of this, selenium enables me to search for elements and so on...
I'd like to be able to intercept (and time the moment it happens) each time I am redirected.
After that, I still need to do some other steps for which selenium is more suitable:
Enter my username and password
Test some functionalities
Log out
Here is a sample of how I tried to intercept the first redirect:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.support.ui import WebDriverWait

def url_contains(url):
    def check_contains_url(driver):
        return url in driver.current_url
    return check_contains_url

driver = webdriver.Remote(
    command_executor='http://127.0.0.1:4444/wd/hub',
    desired_capabilities=DesiredCapabilities.FIREFOX)

driver.get("http://test.com/")

try:
    url = "redirect-one.test.com"
    first_redirect = WebDriverWait(driver, 20).until(url_contains(url))
    print("found first redirect")
finally:
    print("move on to the next redirect....")
Is this even possible using selenium?
I cannot change the behavior of the website and the reason it is built like this is because of an SSO mechanism I cannot bypass.
I realize I specified python but I am open to tools in other languages.

Selenium is not the tool for this. The redirects are handled internally by the browser, and Selenium does not give you a way to inspect them.
You can perform the checks using urllib2 or, if you prefer a sane interface, requests.
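For instance, here is a minimal sketch with requests. It starts from the hypothetical redirect-one.test.com address from the question, since the initial jump away from test.com is triggered by javascript that requests will not execute; from there, requests follows the rest of the chain for you and records every intermediate response in response.history, so each hop can be inspected and timed:
import requests

# Start from the first HTTP-level redirect (the jump away from test.com
# itself is triggered by javascript, which requests will not execute).
response = requests.get("http://redirect-one.test.com/blah", allow_redirects=True)

# response.history holds every intermediate 3xx response, in order.
for hop in response.history:
    print(hop.status_code, hop.url, hop.elapsed.total_seconds())

# The final response after all redirects.
print(response.status_code, response.url)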

Related

Automation on the site using seleniumrequests

I am trying to automate some processes on the site. At first I tried plain requests, but I got a captcha in response. Now I'm using seleniumrequests, and here's the problem: when I log in using selenium tools only, everything works fine, but I can't add coupons on the site and confirm them.
from seleniumrequests import Firefox
driver = Firefox()
user = '000000'
password = '000000'
driver.get("https://1xstavka.ru/")
driver.find_element_by_id('curLoginForm').click()
driver.find_element_by_id('auth_id_email').send_keys(user)
driver.find_element_by_id('auth-form-password').send_keys(password)
driver.find_element_by_class_name('auth-button__text').click()
But if you use:
from seleniumrequests import Firefox
driver = Firefox()
driver.request('GET', 'https://1xstavka.ru')
The window opens for a second and immediately closes, a 200 response is received, but there are no cookies. It's the same with the POST requests with which I'm trying to automate the process: after the POST request the response is 200, but nothing happens on the site.
driver.request('POST', 'https://1xstavka.ru/user/auth', json=json)
Please tell me what is wrong, or how I can solve this problem.
I am unable to access the URL given in the question, but for the captcha/coupons I created a loop with an interim stop. The script pauses, which gives me the chance to enter the captcha manually, and then the loop continues.
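A rough sketch of that idea (the coupon values and the selenium steps inside the loop are placeholders, not the site's real workflow):
from seleniumrequests import Firefox

driver = Firefox()
driver.get("https://1xstavka.ru/")
# ... log in with selenium as in the question ...

coupons = ["coupon1", "coupon2"]  # placeholder values

for coupon in coupons:
    # ... fill in the coupon with the usual selenium clicks/send_keys ...
    # Pause here so the captcha can be solved by hand in the open browser
    # window; press Enter in the terminal to let the loop continue.
    input("Solve the captcha in the browser, then press Enter to continue...")
    # ... confirm the coupon, then move on to the next one ...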

Unable to programmatically log in to a website

So I am trying to log in programmatically (Python) to https://www.datacamp.com/users/sign_in using my email & password.
I have tried 2 methods of logging in: one using the requests library and another using selenium (code below). Both times I hit a [403] issue.
Could someone please help me log in to it programmatically?
Thank you!
Using Requests library.
import requests
r = requests.get("https://www.datacamp.com/users/sign_in")
r  # gives <Response [403]>
Using Selenium webdriver.
driver = webdriver.Chrome(executable_path=driver_path, options=option)
driver.get("https://www.datacamp.com/users/sign_in")
driver.find_element_by_id("user_email") # there is supposed to be form element with id=user_email for inputting email
Implicit wait at least should have worked, like this:
from selenium import webdriver
driver = webdriver.Chrome(executable_path='/snap/bin/chromium.chromedriver')
driver.implicitly_wait(10)
url = "https://www.datacamp.com/users/sign_in"
driver.get(url)
driver.find_element_by_id("user_email").send_keys("test#dsfdfs.com")
driver.find_element_by_css_selector("#new_user>button[type=button]").click()
BUT
The real issue is that the site uses anti-scraping software.
If you open the Console and look at the request itself, you'll see that the site blocks your connection even before you try to log in.
Here is a similar question with different solutions: Can a website detect when you are using Selenium with chromedriver?
Not all answers will work for you, try different approaches suggested.
With Firefox you'll have the same issue (I've already checked).
You have to add a wait after driver.get("https://www.datacamp.com/users/sign_in") and before driver.find_element_by_id("user_email") to let the page load.
Try something like WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, 'user_email')))
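For reference, here is that explicit wait as a minimal, self-contained sketch with the imports it needs (the anti-scraping block described in the other answer may still return a 403):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.datacamp.com/users/sign_in")

# Block for up to 10 seconds until the email field is present in the DOM.
email_field = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "user_email"))
)
email_field.send_keys("you@example.com")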

Failed to use selenium to automatically click the link in a website

I want to use selenium to automatically log in to a website (https://www.cypress.com/) and download some materials.
I can open the website successfully using selenium, but when I use selenium to click the "Log in" button, it shows this:
Access Denied
Here is my code:
from time import sleep
from selenium import webdriver

class Cypress():
    def extractData(self):
        browser = webdriver.Chrome(executable_path=r"C:\chromedriver.exe")
        browser.get("https://www.cypress.com/")
        sleep(5)
        element = browser.find_element_by_link_text("Log in")
        sleep(1)
        element.click()

if __name__ == "__main__":
    a = Cypress()
    a.extractData()
Can anyone give me some idea?
The website is protected by Akamai (CDN, bot-management services, or whatever is loaded there).
I took a quick glance and it seems like the Akamai service worker is up. I don't see any sensor-data protection, but selenium is simply detected as a webdriver (among plenty of other things) and flagged. Try to log in using requests, or ask the website owner to give you API access for your project.
The Akamai cookies are set, so the protection surely is too; the 301 you got is the bot protection stopping you from automating something on a protected endpoint.
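If you do try the requests route, the rough shape would be something like the sketch below. The login URL and form fields here are purely hypothetical placeholders; you would need to copy the real ones from the login request visible in the browser's network tab, and the Akamai protection may still reject the session:
import requests

session = requests.Session()
session.headers.update({"User-Agent": "Mozilla/5.0"})

# Visit the home page first so the session picks up whatever cookies it sets.
session.get("https://www.cypress.com/")

# Hypothetical endpoint and field names -- inspect the real login request
# in the browser's developer tools and substitute the actual values.
response = session.post(
    "https://www.cypress.com/user/login",  # placeholder URL
    data={"name": "your_username", "pass": "your_password"},
)
print(response.status_code)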

Selenium Firefox browser is stuck after downloading pdf

Was hoping someone could help me understand what's going on:
I'm using Selenium with the Firefox browser to download a pdf (I need Selenium to log in to the corresponding website):
le = browser.find_elements_by_xpath('//*[@title="Download PDF"]')
time.sleep(5)
if le:
    pdf_link = le[0].get_attribute("href")
    browser.get(pdf_link)
The code does download the pdf, but after that just stays idle.
This seems to be related to the following browser settings:
fp.set_preference("pdfjs.disabled", True)
fp.set_preference("browser.helperApps.neverAsk.saveToDisk", "application/pdf")
If I disable the first, it doesn't hang, but opens pdf instead of downloading it. If I disable the second, a "Save As" pop-up window shows up. Could someone explain how to handle this?
For me, the best way to solve this was to let Firefox render the PDF in the browser via pdf.js and then send a subsequent fetch via the Python requests library with the selenium cookies attached. More explanation below:
There are several ways to render a PDF via Firefox + Selenium. If you're using the most recent version of Firefox, it'll most likely render the PDF via pdf.js so you can view it inline. This isn't ideal because now we can't download the file.
You can disable pdf.js via Selenium options but this will likely lead to the issue in this question where the browser gets stuck. This might be because of an unknown MIME-Type but I'm not totally sure. (There's another StackOverflow answer that says this is also due to Firefox versions.)
However, we can bypass this by passing Selenium's cookie session to requests.session().
Here's a toy example:
import requests
from selenium import webdriver
pdf_url = "/url/to/some/file.pdf"
# setup driver with options
driver = webdriver.Firefox(options=options)  # pass in whatever options/profile you configured
# do whatever you need to do to auth/login/click/etc.
# navigate to the PDF URL in case the PDF link issues a
# redirect because requests.session() does not persist cookies
driver.get(pdf_url)
# get the URL from Selenium
current_pdf_url = driver.current_url
# create a requests session
session = requests.session()
# add Selenium's cookies to requests
selenium_cookies = driver.get_cookies()
for cookie in selenium_cookies:
    session.cookies.set(cookie["name"], cookie["value"])
# Note: If headers are also important, you'll need to use
# something like seleniumwire to get the headers from Selenium
# Finally, re-send the request with requests.session
pdf_response = session.get(current_pdf_url)
# access the bytes response from the session
pdf_bytes = pdf_response.content
I highly recommend using seleniumwire over regular selenium because it extends Python Selenium to let you return headers, wait for requests to finish, use proxies, and much more.
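As a rough sketch of that seleniumwire variant (assuming the pdf_url setup and login steps from the example above, and that the package is installed as selenium-wire), the headers of the request the browser actually sent for the PDF can be copied into the requests session alongside the cookies:
import requests
from seleniumwire import webdriver  # pip install selenium-wire

driver = webdriver.Firefox()
# ... log in and navigate exactly as in the example above ...
driver.get(pdf_url)

session = requests.session()
for cookie in driver.get_cookies():
    session.cookies.set(cookie["name"], cookie["value"])

# Find the request the browser made for the PDF and reuse its headers.
headers = {}
for request in driver.requests:
    if request.url == driver.current_url:
        headers = dict(request.headers)
        break

pdf_bytes = session.get(driver.current_url, headers=headers).content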

Submit form that renders dynamically with Scrapy?

I'm trying to submit a dynamically generated user login form using Scrapy and then parse the HTML on the page that corresponds to a successful login.
I was wondering how I could do that with Scrapy or a combination of Scrapy and Selenium. Selenium makes it possible to find the element on the DOM, but I was wondering if it would be possible to "give control back" to Scrapy after getting the full HTML in order to allow it to carry out the form submission and save the necessary cookies, session data etc. in order to scrape the page.
Basically, the only reason I thought Selenium was necessary was because I needed the page to render from the Javascript before Scrapy looks for the <form> element. Are there any alternatives to this, however?
Thank you!
Edit: This question is similar to this one, but unfortunately the accepted answer deals with the Requests library instead of Selenium or Scrapy. Though that scenario may be possible in some cases (watch this to learn more), as alecxe points out, Selenium may be required if "parts of the page [such as forms] are loaded via API calls and inserted into the page with the help of javascript code being executed in the browser".
Scrapy is not actually a great fit for the coursera site since it is extremely asynchronous. Parts of the page are loaded via API calls and inserted into the page with the help of javascript code being executed in the browser. Scrapy is not a browser and cannot handle it.
Which raises the point - why not use the publicly available Coursera API?
Aside from what is documented, there are other endpoints that you can see called in the browser developer tools; you need to be authenticated to be able to use them. For example, if you are logged in, you can see the list of courses you've taken: behind it there is a call to the memberships.v1 endpoint.
For the sake of an example, let's start selenium, log in and grab the cookies with get_cookies(). Then, let's yield a Request to the memberships.v1 endpoint to get the list of archived courses, providing the cookies we've got from selenium:
import json

import scrapy
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

LOGIN = 'email'
PASSWORD = 'password'

class CourseraSpider(scrapy.Spider):
    name = "courseraSpider"
    allowed_domains = ["coursera.org"]

    def start_requests(self):
        self.driver = webdriver.Chrome()
        self.driver.maximize_window()
        self.driver.get('https://www.coursera.org/login')

        form = WebDriverWait(self.driver, 10).until(EC.presence_of_element_located((By.XPATH, "//div[@data-js='login-body']//div[@data-js='facebook-button-divider']/following-sibling::form")))

        email = WebDriverWait(form, 10).until(EC.visibility_of_element_located((By.ID, 'user-modal-email')))
        email.send_keys(LOGIN)

        password = form.find_element_by_name('password')
        password.send_keys(PASSWORD)

        login = form.find_element_by_xpath('//button[. = "Log In"]')
        login.click()

        WebDriverWait(self.driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//h2[. = 'My Courses']")))

        self.driver.get('https://www.coursera.org/')
        cookies = self.driver.get_cookies()
        self.driver.close()

        courses_url = 'https://www.coursera.org/api/memberships.v1'
        params = {
            'fields': 'courseId,enrolledTimestamp,grade,id,lastAccessedTimestamp,role,v1SessionId,vc,vcMembershipId,courses.v1(display,partnerIds,photoUrl,specializations,startDate,v1Details),partners.v1(homeLink,name),v1Details.v1(sessionIds),v1Sessions.v1(active,dbEndDate,durationString,hasSigTrack,startDay,startMonth,startYear),specializations.v1(logo,name,partnerIds,shortName)&includes=courseId,vcMembershipId,courses.v1(partnerIds,specializations,v1Details),v1Details.v1(sessionIds),specializations.v1(partnerIds)',
            'q': 'me',
            'showHidden': 'false',
            'filter': 'archived'
        }
        params = '&'.join(key + '=' + value for key, value in params.items())

        yield scrapy.Request(courses_url + '?' + params, cookies=cookies)

    def parse(self, response):
        data = json.loads(response.body)
        for course in data['linked']['courses.v1']:
            print(course['name'])
For me, it prints:
Algorithms, Part I
Computing for Data Analysis
Pattern-Oriented Software Architectures for Concurrent and Networked Software
Computer Networks
Which proves that we can give Scrapy the cookies from selenium and successfully extract the data from the "for logged in users only" pages.
Additionally, make sure you don't violate the rules from the Terms of Use, specifically:
In addition, as a condition of accessing the Sites, you agree not to
... (c) use any high-volume, automated or electronic means to access
the Sites (including without limitation, robots, spiders, scripts or
web-scraping tools);
