Python Selenium: Looking for ways to print page load time

I am pretty new to using Python with Selenium web testing.
I am creating a handful of test cases for my website and I would like to see how long it takes for specific pages to load. I was wondering if there is a way to print the page load time after or during the test.
Here is a basic example of what one of my test cases looks like:
import time
import unittest
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("some URL")
driver.implicitly_wait(10)
element = driver.find_element_by_name("username")
element.send_keys("User")
element = driver.find_element_by_name("password")
element.send_keys("Pass")
element.submit()
time.sleep(2)
driver.close()
In this example I would like to see how long it took for the page to load after submitting my log in information.

I have found a way around this by running my tests as Python unit tests. I now record my steps with the Selenium IDE, export them to a Python file, and modify the file as needed. After the test runs, it shows the elapsed time by default.
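If you'd rather print the timing from the script itself instead of relying on the unittest output, the browser can report it via the Navigation Timing API (`window.performance.timing`), which Selenium exposes through `execute_script`. A sketch against the login flow in the question, using the same old `find_element_by_name` API; `timed_login` and `load_time_ms` are hypothetical helpers, and "some URL" is still a placeholder:

```python
import time

def load_time_ms(timing):
    # Duration from navigation start to the load event, in milliseconds,
    # computed from the browser's Navigation Timing data.
    return timing["loadEventEnd"] - timing["navigationStart"]

def timed_login(url, user, pwd):
    # Imported here so load_time_ms stays usable without a browser.
    from selenium import webdriver
    driver = webdriver.Firefox()
    try:
        driver.get(url)
        driver.implicitly_wait(10)
        driver.find_element_by_name("username").send_keys(user)
        element = driver.find_element_by_name("password")
        element.send_keys(pwd)
        start = time.time()
        element.submit()  # blocks until the next page's load event by default
        print("wall clock after submit: %.2fs" % (time.time() - start))
        timing = driver.execute_script("return window.performance.timing")
        print("browser-reported load: %d ms" % load_time_ms(timing))
    finally:
        driver.quit()
```

Usage would be `timed_login("some URL", "User", "Pass")`; the wall-clock figure and the browser-reported figure usually differ slightly, since the former includes WebDriver round-trips.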

Related

How can I web-scrape Aviator game results?

I want to get the latest result from the Aviator game each time it crashes. I'm trying to do it with Python and Selenium, but I can't get it to work; the website takes some time to load, which complicates the process since the classes are not loaded from the beginning.
this is the website i'm trying to scrape: https://estrelabet.com/ptb/bet/main
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

url = 'https://estrelabet.com/ptb/bet/main'
options = Options()
options.headless = True
navegador = webdriver.Chrome(options=options)
navegador.get(url)
navegador.find_element()  # stuck here - no locator works, the IDs are dynamic
navegador.quit()
This is what I've done so far. I want to get all the elements in the results (payout) block and then read each result individually.
I tried to extract the data using Selenium, but it was impossible since the IDs and elements were dynamic. I was able to extract the data using an OCR library called Tesseract. I share the code I used for this purpose; I hope it helps you:
AviatorScraping github
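The OCR route can be sketched roughly like this. Everything browser-facing here is an assumption: `pytesseract`, Pillow, and the `.payouts-block` selector are placeholders (the real class names are dynamic, and the actual code lives in the linked repo); only the `parse_payouts` helper is concrete:

```python
import re

def parse_payouts(ocr_text):
    # Pull "1.57x"-style multipliers out of raw OCR output.
    return [float(m) for m in re.findall(r"(\d+\.\d+)x", ocr_text)]

def scrape_payouts(url):
    # Heavy imports kept here so parse_payouts stays testable on its own.
    from io import BytesIO
    import pytesseract
    from PIL import Image
    from selenium import webdriver
    driver = webdriver.Chrome()
    try:
        driver.get(url)
        driver.implicitly_wait(15)  # the page is slow to render
        # Selector is a guess - swap in whatever container you can locate.
        block = driver.find_element_by_css_selector(".payouts-block")
        img = Image.open(BytesIO(block.screenshot_as_png))
        return parse_payouts(pytesseract.image_to_string(img))
    finally:
        driver.quit()
```

Screenshotting one element and OCR'ing it sidesteps the dynamic IDs entirely, at the cost of depending on Tesseract reading the font correctly.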

Selenium get(url) showing wrong page

I am trying to web-scrape a dynamically loaded page with Selenium. I can copy and paste the URL below into a normal Chrome browser and it works perfectly fine, but when I use Selenium it returns the wrong page of horse races, for a different day. It seems to work the first time you run the code, but it retains some sort of memory; you cannot run it again with a different date, as it just returns the original date.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
url = "https://www.tab.com.au/racing/meetings/2021-06-11"
driver = webdriver.Chrome('xxxxxxxx')
driver.get(url)
Has anyone ever come across something like this with Selenium?

Selenium download entire html

I have been trying to use Selenium to scrape an entire web page. I expect at least a handful of the pages are SPAs built with Angular, React, or Vue, so that is why I am using Selenium.
I need to download the entire page (if some content isn't loaded because of lazy loading from not scrolling down, that is fine). I have tried setting a time.sleep() delay, but that has not worked. After I get the page I hash it and store it in a DB, to compare later and check whether the content has changed. Currently the hash is different every time, and that is because Selenium is not downloading the entire page; each time a different partial amount is missing. I have confirmed this on several web pages, not just a singular one.
I also have probably 1000+ web pages to go through by hand just getting all the links, so I do not have time to find an element on each one to make sure it is loaded.
How long this process takes is not important. If it takes 1+ hours so be it, speed is not important only accuracy.
If you have an alternative idea please also share.
My driver declaration
from selenium import webdriver
from selenium.common.exceptions import WebDriverException

driverPath = '/usr/lib/chromium-browser/chromedriver'

def create_web_driver():
    options = webdriver.ChromeOptions()
    options.add_argument('headless')
    # set the window size
    options.add_argument('window-size=1200x600')
    # try to initialize the driver
    try:
        driver = webdriver.Chrome(executable_path=driverPath, chrome_options=options)
    except WebDriverException:
        print("failed to start driver at path: " + driverPath)
        raise
    return driver
My URL call, with my timeout = 20:
import time
import hashlib

driver.get(url)
time.sleep(timeout)
content = driver.page_source
content = content.encode('utf-8')
hashed_content = hashlib.sha512(content).hexdigest()
# ^ getting a different hash here every time, since the same URL is not producing the same page source
As the Application Under Test (AUT) is based on Angular, React, or Vue, Selenium seems to be the perfect choice.
Now, as you are fine with the fact that some content isn't loaded from lazy loading because of not scrolling, the use case is feasible. But the requirement ...do not have time to find an element on them to make sure it is loaded... can't really be compensated for by inducing time.sleep(), as time.sleep() has certain drawbacks. You can find a detailed discussion in How to sleep webdriver in python for milliseconds. It is also worth mentioning that the state of the HTML DOM will be different for each of the 1000-odd web pages.
Solution
A couple of viable solutions:
A potential solution is to induce WebDriverWait and ensure that some HTML elements are loaded, as per the discussion How can I make sure if some HTML elements are loaded for Selenium + Python?, validating at least either of the following:
Page Title
Page Heading
Another solution is to tweak the capability pageLoadStrategy. You can set the pageLoadStrategy for all the 1000-odd web pages to a common point, assigning one of the following values:
normal (full page load)
eager (interactive)
none
You can find a detailed discussion in How to make Selenium not wait till full page load, which has a slow script?
If you implement pageLoadStrategy, the page_source method will be triggered at the same tripping point, and you would possibly see identical hashed_content.
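A minimal sketch of wiring pageLoadStrategy into the create_web_driver() factory from the question; caps_with_strategy is a hypothetical helper, and "eager" is just one of the three values above:

```python
driverPath = '/usr/lib/chromium-browser/chromedriver'  # path from the question

def caps_with_strategy(strategy):
    # Valid values: "normal" (full page load), "eager" (DOMContentLoaded
    # fired) and "none" (return right after the initial HTML arrives).
    if strategy not in ("normal", "eager", "none"):
        raise ValueError("unknown pageLoadStrategy: %r" % strategy)
    return {"browserName": "chrome", "pageLoadStrategy": strategy}

def create_web_driver(strategy="eager"):
    # Imported here so caps_with_strategy stays usable without a browser.
    from selenium import webdriver
    return webdriver.Chrome(executable_path=driverPath,
                            desired_capabilities=caps_with_strategy(strategy))
```

With a fixed strategy, driver.get() returns at the same readiness point for every page, which is the precondition for comparable hashes.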
In my experience time.sleep() does not work well with dynamic loading times.
If the page is javascript-heavy you have to use the WebDriverWait clause.
Something like this:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.get(url)
element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, "[my-attribute='my-value']")))
Change 10 to whatever timeout you want, and By.CSS_SELECTOR and its value to whatever locator type you want to use as a reference.
You can also wrap the WebDriverWait in a try/except with the TimeoutException exception (from the submodule selenium.common.exceptions) in case you want to set a hard limit.
You could probably put it inside a while loop if you truly want it to check forever until the page is loaded, because I couldn't find any reference in the docs about waiting "forever", but you'll have to experiment with it.

Cannot find element from a jump out window. How can I switch to a new jump out window?

I'm trying to automate our system with Python2.7, Selenium-webdriver, and Sikuli. I have a problem on login. Every time I open our system, the first page is an empty page, and it will jump to another page automatically; the new page is the main login page, so Python is always trying to find the element from the first page. The first page sometimes shows:
your session has timeout
I set a really large number for session timeout, but it doesn't work.
Here is my code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time

driver = webdriver.Chrome()
driver.get('http://172.16.1.186:8080/C******E/servlet/LOGON')
# time.sleep(15)
bankid = driver.find_element_by_id("idBANK")
bankid.send_keys("01")
empid = driver.find_element_by_id("idEMPLOYEE")
empid.send_keys("200010")
pwdid = driver.find_element_by_id("idPASSWORD")
pwdid.send_keys("C******e1")
elem = driver.find_element_by_id("maint")
elem.send_keys(Keys.RETURN)
First of all, I can't see any Sikuli usage in your example. If you were using Sikuli, it wouldn't matter how the other page was launched as you'd be interacting with whatever is visible on your screen at that time.
In Selenium, if you have multiple windows you have to switch your driver to the correct one. A quick way to get a list of the available windows is something like this:
for handle in driver.window_handles:
    driver.switch_to_window(handle)
    print "Switched to handle:", handle
    element = driver.find_element_by_tag_name("title")
    print element.get_attribute("value")

Selenium takes a long time to find an element. Is there something I can do?

I have been trying to write a selenium script to login to my Quora account.
This is the script I have written.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import getpass
import time

email = raw_input("email: ")
password = getpass.getpass("Password: ")
driver = webdriver.Firefox()
driver.get("http://www.quora.com")
#time.sleep(5)
Form = driver.find_element_by_xpath("//div[@class='form_inputs']")
Form.find_element_by_name("email").send_keys(email)
#time.sleep(4)
Form.find_element_by_name("password").send_keys(password)
#time.sleep(4)
Form.find_element_by_xpath("//input[@value='Login']").click()
The statement
Form = driver.find_element_by_xpath("//div[@class='form_inputs']")
takes very long to find the element. In fact, all the find_element statements take very long to do their job. (This could be because of some JavaScript snippet meant to increase the load on Selenium, but I could not understand much from the page source.)
Is there any way I could do it faster? Similar scripts have worked well for me in Facebook and Google.
EDIT:
Removed the time.sleep() calls. It still takes around 6-8 minutes to find the element.
The reason it is taking a while is that you are performing time.sleep().
You should not do this; it's bad practice. You should be using WebDriver waits. I would personally go with implicit waits for your scenario.
Please see the documentation
This is something I've seen asked here on SO multiple times; see:
Is Selenium slow, or is my code wrong?
Unable to login to quora using selenium webdriver in python
I've been able to reproduce the slow code execution using Firefox, but the following code works without any delays using Chrome or PhantomJS driver:
import getpass
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
email = raw_input("email: ")
password = getpass.getpass("Password: ")
driver = webdriver.Chrome()
driver.get("http://www.quora.com")
form = driver.find_element_by_class_name('regular_login')
form.find_element_by_name('email').send_keys(email)
form.find_element_by_name('password').send_keys(password + Keys.RETURN)
FYI, for Firefox, it helps to overcome the issue if you fire up Firefox with disabled javascript:
firefox_profile = webdriver.FirefoxProfile()
firefox_profile.set_preference("browser.download.folderList",2)
firefox_profile.set_preference("javascript.enabled", False)
driver = webdriver.Firefox(firefox_profile=firefox_profile)
driver.get('http://www.quora.com/')
But, as you would see - you'll quickly get a different set of problems.
