Selenium with Python Random Timeout without error message - python

I am following the example here (under the Python tab): https://www.selenium.dev/documentation/en/
Code here:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.expected_conditions import presence_of_element_located

# This example requires Selenium WebDriver 3.13 or newer
with webdriver.Firefox() as driver:
    wait = WebDriverWait(driver, 10)
    driver.get("https://google.com/ncr")
    driver.find_element_by_name("q").send_keys("cheese" + Keys.RETURN)
    first_result = wait.until(presence_of_element_located((By.CSS_SELECTOR, "h3>div")))
    print(first_result.get_attribute("textContent"))
I've run this code and got it to work, displaying the first result "Show More". However, other times when I run this code it doesn't work, and gives a random timeout error without a message:
Traceback (most recent call last):
  File "c:\Users\User\PyCharm_Projects\Project\sample_test.py", line 12, in <module>
    first_result = wait.until(presence_of_element_located((By.CSS_SELECTOR, "h3>div")))
  File "C:\Program Files\Python38\lib\site-packages\selenium\webdriver\support\wait.py", line 80, in until
    raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
My question is: what is causing the timeout error if it doesn't happen every time? I've tried locating the elements with other methods (XPath, link text). I've looked at the following other examples, but nothing they posted fixed this problem:
Tried, didn't work
- Selenium random timeout exceptions without any message
Non-applicable solutions
- Instagram search bar with selenium
- Selenium Timeoutexception Error
- Random TimeoutException even after using ui.WebDriverWait() chrome selenium python
I am on Python 3.8, Firefox 68.6.0, and here are the relevant packages from 'pip freeze':
- beautifulsoup4==4.8.2
- requests==2.22.0
- selenium==3.141.0
- urllib3==1.25.8
- webencodings==0.5.1
Thank you!
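As an aside, the empty `Message:` at the end of the traceback is expected: `WebDriverWait.until` only attaches a message to the `TimeoutException` if you pass one (`wait.until(condition, message="...")`). The polling behaviour can be illustrated with a simplified sketch (this is a rough model for explanation, not Selenium's actual implementation):

```python
import time

class TimeoutException(Exception):
    """Stand-in for selenium.common.exceptions.TimeoutException."""
    pass

def until(condition, timeout=10, poll=0.5, message=""):
    """Poll `condition` until it returns a truthy value or `timeout` expires."""
    end = time.time() + timeout
    while True:
        value = condition()
        if value:
            return value
        if time.time() > end:
            # With no explicit message, the exception text stays empty --
            # which is why the traceback above ends in a bare "Message:".
            raise TimeoutException(message)
        time.sleep(poll)
```

So the bare `Message:` is not itself a symptom of anything; it just means no custom message was supplied to the wait.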

I have made some customized actions for cases like this, for instance:
import random
from time import sleep

from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def findXpath(xpath, driver, waitTime=10):
    actionDone = False
    count = 0
    while not actionDone:
        if count == 3:
            raise Exception("Cannot find element %s after retrying 3 times.\n" % xpath)
        try:
            element = WebDriverWait(driver, waitTime).until(
                EC.presence_of_element_located((By.XPATH, xpath)))
            actionDone = True
        except TimeoutException:
            count += 1
            sleep(random.randint(1, 5) * 0.1)
    return element
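The same retry idea can be written as a generic helper, decoupled from Selenium so any flaky action can be wrapped. This is a sketch, not part of the answer above; in real Selenium code you would catch `TimeoutException` rather than the broad `Exception`:

```python
import time

def retry(action, attempts=3, delay=0.1):
    """Call `action()` until it succeeds, retrying up to `attempts` times."""
    last_error = None
    for _ in range(attempts):
        try:
            return action()
        except Exception as error:  # with Selenium, catch TimeoutException instead
            last_error = error
            time.sleep(delay)
    raise RuntimeError("gave up after %d attempts" % attempts) from last_error
```

Usage would look like `retry(lambda: findXpath("//h3/div", driver))`, keeping the retry policy in one place instead of duplicating the loop.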

Please refer to the solution below; I have executed this on Chrome and Firefox a couple of times and it works fine. A TimeoutException is thrown when a command does not complete in enough time.
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium import webdriver

# utilise chrome driver to open specified webpage
driver = webdriver.Chrome(executable_path=r"chromedriver.exe")
driver.maximize_window()
driver.get("https://google.com/ncr")
WebDriverWait(driver, 20).until(
    EC.visibility_of_element_located((By.NAME, "q"))).send_keys("cheese" + Keys.RETURN)
first_result = WebDriverWait(driver, 20).until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, "h3>div")))
print(first_result.get_attribute("textContent"))
Output
Show more

Related

Unable to click radio button even after using explicit wait on selenium

I am trying to select the 'Female' radio button on the webpage:
import time
import selenium.common.exceptions
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
driver = webdriver.Chrome(executable_path=r"C:\Drivers\chrome\chromedriver.exe")
driver.get("https://fs2.formsite.com/meherpavan/form2/index.html?1537702596407")
wait = WebDriverWait(driver, 60)
element = wait.until(EC.element_to_be_clickable((By.ID, "RESULT_RadioButton-7_1")))
driver.execute_script("arguments[0].click();",element)
#element.click()
#driver.find_element_by_id("RESULT_RadioButton-7_1").click()
print(driver.find_element_by_id("RESULT_RadioButton-7_0").is_selected())
print(driver.find_element_by_id("RESULT_RadioButton-7_1").is_selected())
Error:
C:\Users\kkumaraguru\PycharmProjects\pythonProject\venv\Scripts\python.exe C:/Users/kkumaraguru/PycharmProjects/SeleniumProject/RadioButtons.py
Traceback (most recent call last):
  File "C:\Users\kkumaraguru\PycharmProjects\SeleniumProject\RadioButtons.py", line 14, in <module>
    element = wait.until(EC.element_to_be_clickable((By.ID, "RESULT_RadioButton-7_1")))
  File "C:\Users\kkumaraguru\PycharmProjects\pythonProject\venv\lib\site-packages\selenium\webdriver\support\wait.py", line 80, in until
    raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:

Process finished with exit code 1
It seems that it times out waiting for an element with ID RESULT_RadioButton-7_1 to be present on the page. I'd suggest opening the page yourself to make sure such an element is present. You can do this using JavaScript in the browser's console: document.getElementById("RESULT_RadioButton-7_1"). If this doesn't work, then try to debug through the code and check what HTML Selenium is looking at, to make sure it is what you expect.
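The same presence check can be done from Python without raising an exception: the plural `find_elements*` methods return an empty list when nothing matches, instead of throwing `NoSuchElementException`. A small sketch of that idea; the `FakeDriver` class is purely illustrative (with a real browser you would pass the actual webdriver instance):

```python
def element_exists(driver, element_id):
    """True if at least one element matches; find_elements returns [] rather than raising."""
    return len(driver.find_elements_by_id(element_id)) > 0

class FakeDriver:
    """Minimal stand-in for a WebDriver, for demonstration only."""
    def __init__(self, known_ids):
        self.known_ids = known_ids

    def find_elements_by_id(self, element_id):
        # A real driver would query the DOM; here we consult a fixed set of IDs.
        return ["<element>"] if element_id in self.known_ids else []
```

This lets you branch on presence (`if element_exists(driver, "RESULT_RadioButton-7_1"): ...`) before committing to a long explicit wait.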
You can do that using a JS intervention. Also make sure to maximize the window size, like below:
driver = webdriver.Chrome(driver_path)
driver.maximize_window()
driver.implicitly_wait(30)
driver.get("https://fs2.formsite.com/meherpavan/form2/index.html?1537702596407")
#time.sleep(5)
element = driver.find_element(By.ID, "RESULT_RadioButton-7_1")
driver.execute_script("arguments[0].click();", element)

Bookmakers scraping with selenium

I'm trying to understand how to scrape this betting website: https://www.betaland.it/
I'm trying to scrape all the table rows that contain the 1X2 odds information for the Italian "Serie A".
The code I have written is this:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.expected_conditions import presence_of_element_located
import time
import sys

url = 'https://www.betaland.it/sport/calcio/italia/serie-a-OIA-scommesse-sportive-online'

# absolute path
chrome_driver_path = '/Users/39340/PycharmProjects/pythonProject/chromedriver'
chrome_options = Options()
chrome_options.add_argument('--headless')

# note: don't name this variable "webdriver" -- that shadows the imported module
chrome = webdriver.Chrome(
    executable_path=chrome_driver_path, options=chrome_options
)

with chrome as driver:
    # timeout
    wait = WebDriverWait(driver, 10)

    # retrieve the data
    driver.get(url)

    # wait
    wait.until(presence_of_element_located((By.ID, 'prematch-container-events-1-33')))

    # results
    results = driver.find_elements_by_class_name('simple-row')
    print(results)
    for quote in results:
        quoteArr = quote.text
        print(quoteArr)
        print()

    driver.close()
And the error that I have is:
Traceback (most recent call last):
  File "C:\Users\39340\PycharmProjects\pythonProject\main.py", line 41, in <module>
    wait.until(presence_of_element_located((By.ID, 'prematch-container-events-1-33')))
  File "C:\Users\39340\PycharmProjects\pythonProject\venv\lib\site-packages\selenium\webdriver\support\wait.py", line 80, in until
    raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
P.S.: if you try to access the bookmaker you have to use an Italian IP address. Italian bookmakers are available only from Italy.
It's basically a timeout error, which means the time given to load the page or find the element (as in this case) was insufficient. So first, try increasing the wait time from 10 to 15, or even 30.
Second, you can use other element identifiers such as XPath or CSS selectors instead of the ID, and adjust the wait time as described in the first point.
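The second suggestion, trying several locator strategies in turn, can be wrapped in a small helper. This is a sketch under the assumption that the driver exposes Selenium's `find_element(by, value)` and raises when nothing matches; the `FakeDriver` below exists only so the example is self-contained:

```python
def find_with_fallbacks(driver, locators):
    """Try each (by, value) locator in order and return the first element found."""
    for by, value in locators:
        try:
            return driver.find_element(by, value)
        except Exception:  # NoSuchElementException with a real Selenium driver
            continue
    raise LookupError("none of the locators matched: %r" % (locators,))

class FakeDriver:
    """Illustrative stand-in: knows one element, addressable by CSS selector only."""
    def find_element(self, by, value):
        if by == "css selector" and value == ".simple-row":
            return "row-element"
        raise KeyError("no such element")
```

With a real driver you would pass locators such as `[("id", "prematch-container-events-1-33"), ("css selector", ".simple-row")]`, so a brittle ID lookup can fall back to a class-based one.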

I keep getting the error message NoSuchElementException when trying to use selenium to log into my university's webpage

I'm pretty new to python and StackOverflow so please bear with me.
I'm trying to write a script in python and use selenium to log myself into my university's website but I keep getting the same error NoSuchElementException.
The full text of the error:
Exception has occurred: NoSuchElementException
Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="username"]"}
  (Session info: chrome=86.0.4240.183)
  File "C:\Users\User\Desktop\Python\Assignment6\nsuokSelenium.py", line 9, in <module>
    browser.find_element_by_id('username').send_keys(bb_username)
I have my log in information in a separate script called credential.py that I'm calling with
from credentials import bb_username, bb_password
My Code
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import TimeoutException
from credentials import bb_password, bb_username

browser = webdriver.Chrome()
browser.get('https://bb.nsuok.edu')
browser.find_element_by_id('username').send_keys(bb_username)
browser.find_element_by_id('password').send_keys(bb_password)
browser.find_element_by_name('submit').click()
try:
    WebDriverWait(browser, 1).until(EC.url_matches('https://bb.nsuok.edu/ultra'))
except TimeoutException:  # Selenium raises TimeoutException, not the built-in TimeoutError
    print('took too long')
WebDriverWait(browser, 10).until(EC.url_matches('https://bb.nsuok.edu/ultra'))
browser.find_element_by_name('Courses').click()
WebDriverWait(browser, 10).until(EC.url_matches('https://bb.nsuok.edu/ultra/course'))
browser.find_element_by_name('Organizations').click()
WebDriverWait(browser, 10).until(EC.url_matches('https://bb.nsuok.edu/ultra/logout'))
The error is showing up here
browser.find_element_by_id('username').send_keys(bb_username)
Could it be an issue with PATH?
What Justin Ezequiel said is correct. You need to add waits in your code for the page to load properly, because, depending on internet speed, some pages load faster than others (obviously).
With that in mind, I was able to identify the elements on the page for you. I added some comments in the code as well.
MAIN PROGRAM - For Reference
from selenium import webdriver
from selenium.webdriver.chrome.webdriver import WebDriver as ChromeDriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait as DriverWait
from selenium.webdriver.support import expected_conditions as DriverConditions
from selenium.common.exceptions import WebDriverException
from selenium import webdriver
from selenium.webdriver.chrome.webdriver import WebDriver as ChromeDriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait as DriverWait
from selenium.webdriver.support import expected_conditions as DriverConditions
from selenium.common.exceptions import WebDriverException

def get_chrome_driver():
    """This sets up our Chrome Driver and returns it as an object"""
    path_to_chrome = r"F:\Selenium_Drivers\Windows_Chrome85_Driver\chromedriver.exe"
    chrome_options = webdriver.ChromeOptions()
    # Browser is displayed in a custom window size
    chrome_options.add_argument("window-size=1500,1000")
    return webdriver.Chrome(executable_path=path_to_chrome,
                            options=chrome_options)

def wait_displayed(driver: ChromeDriver, xpath: str, timeout: int = 5):
    try:
        DriverWait(driver, timeout).until(
            DriverConditions.presence_of_element_located(locator=(By.XPATH, xpath))
        )
    except WebDriverException:
        raise WebDriverException(f'Timeout: Failed to find {xpath}')

def enter_information(driver: ChromeDriver, xpath: str, text: str):
    driver.find_element(By.XPATH, xpath).send_keys(text)
    if len(driver.find_element(By.XPATH, xpath).get_attribute('value')) != len(text):
        raise Exception(f'Failed to populate our Textbox.\nXPATH: {xpath}')

# Gets our chrome driver and opens our site
chrome_driver = get_chrome_driver()
chrome_driver.get("https://logon.nsuok.edu/cas/login")

# Waits until our elements are loaded onto the page
wait_displayed(chrome_driver, "//form//input[@id='username']")
wait_displayed(chrome_driver, "//form//input[@id='password']")
wait_displayed(chrome_driver, "//form//input[contains(@class, 'btn-submit')]")

# Inputs our Username and Password
enter_information(chrome_driver, "//form//input[@id='username']", "MyUserNameHere")
enter_information(chrome_driver, "//form//input[@id='password']", "MyPasswordHere")

# Clicks Login
chrome_driver.find_element(By.XPATH, "//form//input[contains(@class, 'btn-submit')]").click()
chrome_driver.quit()
chrome_driver.service.stop()
You may need to wait for the element. Try something like the following:
element = WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.ID, "username"))
)
element.clear()
element.send_keys(bb_username)
element = WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.ID, "password"))
)
element.clear()
element.send_keys(bb_password)

Python/Dynamic Parsing: <div id="root"> can't parse anything inside

I've been wanting to parse information from a particular website, and I have been having problems with the dynamic aspect. When the site is requested in Python and parsed with BeautifulSoup, etc., everything inside <div id="root"> is missing.
According to the answer to this similar question -- Why isn't the html code inside div is being parsed? -- I tried to use a headless browser. I ended up trying to use selenium and splinter with the '--headless' options enabled for chrome.
I don't know whether the headless browser I chose is just the wrong one for this particular website's setup, or if it's my code, so please give me suggestions if you have any.
Notes: running on Ubuntu 20.04.1 LTS and Python 3.8.3. If you want to suggest different headless browser programs, go ahead, but they need to be compatible with Linux, Mac, etc. and Python.
Below is a look at my most recent code. I've tried various ways to ".find" the button I want to click. Here I tried to use the XPath of the element I want, which I got through Inspect:
from bs4 import BeautifulSoup
from splinter import Browser
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.select import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.ChromeOptions()
options.add_argument('--headless')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
options.add_argument('--ignore-certificate-errors')

with Browser('chrome', options=options) as browser:
    browser.visit("http://gnomad.broadinstitute.org/region/16-2087388-2087428?dataset=gnomad_r2_1")
    print(browser.title)
    browser.find_by_xpath('//*[@id="root"]/div/div/div[2]/div/div[3]/section/div[2]/button').first.click()
The error message this gave me was:
  File "etc/anaconda3/lib/python3.8/site-packages/splinter/element_list.py", line 42, in __getitem__
    return self._container[index]
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "practice3.py", line 20, in <module>
    browser.find_by_xpath('//*[@id="root"]/div/div/div[2]/div/div[3]/section/div[2]/button').first.click()
  File "etc/anaconda3/lib/python3.8/site-packages/splinter/element_list.py", line 57, in first
    return self[0]
  File "etc/anaconda3/lib/python3.8/site-packages/splinter/element_list.py", line 44, in __getitem__
    raise ElementDoesNotExist(
splinter.exceptions.ElementDoesNotExist: no elements could be found with xpath "//*[@id="root"]/div/div/div[2]/div/div[3]/section/div[2]/button"
Thanks!
Your problem seems to be that you don't wait for the elements to fully load. I set up the environment for your piece of code, printed the source of the website, and ran the response through an HTML beautifier:
https://www.freeformatter.com/html-formatter.html#ad-output
There I found that a div you want to access is in the state
<div class="StatusMessage-xgxrme-0 daewTb">Loading region...</div>
which implies that the site is not fully loaded yet. To fix this, you can simply wait for the website to load, which Selenium can do:
from selenium.webdriver.support.ui import WebDriverWait

WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="root"]/div/div/div[2]/div/div[3]/section/div[2]/button')))
This will wait for the element to be loaded and clickable.
Here's the code snippet I tested on
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.select import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.ChromeOptions()
options.add_argument('--headless')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
options.add_argument('--ignore-certificate-errors')

with webdriver.Chrome("<path-to-driver>", options=options) as browser:
    browser.get("http://gnomad.broadinstitute.org/region/16-2087388-2087428?dataset=gnomad_r2_1")
    WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="root"]/div/div/div[2]/div/div[3]/section/div[2]/button')))
    print(browser.title)
    print(browser.page_source)
    b = browser.find_element_by_xpath('//*[@id="root"]/div/div/div[2]/div/div[3]/section/div[2]/button')
    browser.execute_script("arguments[0].click()", b)
Simply replace the <path-to-driver> with the path to your chrome webdriver.
The last bit (clicking via execute_script) is because a plain .click() on the button raised selenium.common.exceptions.ElementClickInterceptedException ("element click intercepted: Element is not clickable"); executing the click through JavaScript works around that.
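That workaround can be captured in a small helper: try the native click first and fall back to a JavaScript click only when the native one fails. A sketch with illustrative stand-in objects so it is self-contained; in real code you would catch ElementClickInterceptedException specifically and pass the actual driver and element:

```python
def safe_click(driver, element):
    """Try a native click; fall back to a JavaScript click if the native one fails."""
    try:
        element.click()
    except Exception:  # ElementClickInterceptedException with real Selenium
        driver.execute_script("arguments[0].click();", element)

class FakeElement:
    """Stand-in whose native click is always intercepted (demonstration only)."""
    def click(self):
        raise RuntimeError("element click intercepted")

class FakeDriver:
    """Stand-in that records scripts executed via JS (demonstration only)."""
    def __init__(self):
        self.scripts = []

    def execute_script(self, script, *args):
        self.scripts.append(script)
```

The JS fallback bypasses Selenium's visibility and overlap checks, so use it deliberately: it can "click" elements a real user could not reach.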

Try - except clause, NoSuchElementExcepts error

Actually, this is my second question on the same subject. In the original question I included so many functions that they acted as a distraction, so in this post I have deleted all the unnecessary functions and focused on my problem.
What I want to do is the following:
1. open a URL in the Firefox browser (using Selenium)
2. click into a page
3. click every thumbnail in a loop until the loop hits the NoSuchElementException error
4. stop the loop when it hits the error
Here is my code.
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
import time
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# 1. opening Firefox and going to a URL
driver = webdriver.Firefox()
url = "https://www.amazon.com/Kraft-Original-Macaroni-Microwaveable-Packets/dp/B005ECO3H0"
driver.get(url)
action = ActionChains(driver)
time.sleep(5)

# 2. going to the main images page
driver.find_element_by_css_selector('#landingImage').click()
time.sleep(2)

# 3. parsing the imgs and clicking them
n = 0
for i in range(1, 10):
    try:
        driver.find_element_by_css_selector(f'#ivImage_{n}').click()
        element = WebDriverWait(driver, 20, 1).until(
            EC.presence_of_element_located((By.CLASS_NAME, "fullscreen"))
        )
        n = n + 1
    except NoSuchElementException:
        break

driver.close()
and the error stacktrace is like this:
Exception has occurred: NoSuchElementException
Message: Unable to locate element: #ivImage_6
  File "C:\Users\Administrator\Desktop\pythonworkspace\except_test.py", line 25, in <module>
    driver.find_element_by_css_selector(f'#ivImage_{n}').click()
FYI, all the thumbnail images have IDs of the form ivImage_<number>.
I don't know why my try-except statement is not working.
Am I missing something?
