Python Selenium wait for innerHTML

I'm trying to wait for the innerHTML of an element to load. Here is a generic version of my code:
element = WebDriverWait(driver, 120).until(EC.presence_of_element_located((By.XPATH, XPATH)))
element = element.get_attribute('innerHTML')
The element is a tr tag inside a table. This code sits inside a loop that is supposed to run 25 times per page over thousands of AJAX pages. After a certain number of runs, I keep receiving this error:
selenium.common.exceptions.StaleElementReferenceException: Message: {"errorMessage":"Element is no longer attached to the DOM"...
Every time, the error stems from the second line of the code above. This leads me to believe the element is being located, but its innerHTML is not loading quickly enough, which triggers the error. I've tried many ways to get around this without success.
How can I make my code wait for the innerHTML to load after the element's presence has been confirmed?

Since you are using Python, you could also write the wait condition as a lambda:
innerHTML = WebDriverWait(driver, 5).until(lambda driver: driver.find_element_by_xpath(XPATH).get_attribute("innerHTML"))
or, if you know what text to expect:
WebDriverWait(driver, 5).until(lambda driver: driver.find_element_by_xpath(XPATH).get_attribute("innerHTML") == "expected text")
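If the table rows are re-rendered by the AJAX updates between the moment the element is found and the moment get_attribute is called, no wait on its own will prevent the stale reference; re-locating the element inside a small retry loop is one common workaround. A minimal sketch under that assumption (inner_html_with_retry is a hypothetical helper name; driver and XPATH are assumed to be defined as in the question):

from selenium.common.exceptions import StaleElementReferenceException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def inner_html_with_retry(driver, xpath, timeout=120, attempts=3):
    # Locate the element and read its innerHTML, re-locating it if an AJAX
    # refresh replaces the row between the two calls.
    for _ in range(attempts):
        try:
            element = WebDriverWait(driver, timeout).until(
                EC.presence_of_element_located((By.XPATH, xpath))
            )
            return element.get_attribute("innerHTML")
        except StaleElementReferenceException:
            continue  # the row went stale; find its fresh copy on the next pass
    raise StaleElementReferenceException("Element kept going stale: " + xpath)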

Related

How do I click on an item on my google search page with Selenium Python?

Good day! I'm facing a seemingly simple problem, but it has been a while and I'm asking for your help. I work with Selenium in Python and need to click through about 20 items on a Google search results page for a random query. I'll give an example of the elements below; the point is that once the existing elements are expanded, Google generates new elements of the same kind.
Problem:
I cannot click on the element. I need to click the existing elements and then the newly generated ones in this block (a screenshot of the expanded block was attached to the original post).
I tried to click by XPath, having collected all the elements:
xpath = '//*[@id="qmCCY_adG4Sj3QP025p4__16"]/div/div/div[1]/div[4]'
all_elements = driver.find_element(By.XPATH, value=xpath)
for element in all_elements:
    element.click()
    sleep(2)
Important note!
The id in the XPath is constantly changing; it is regenerated on Google's side.
I also tried to click by class name:
class="r21Kzd"
and by CSS selector:
#qmCCY_adG4Sj3QP025p4__16 > div > div > div > div.wWOJcd > div.r21Kzd
Errors
This is what I get when I try to click using the XPath:
Message: no such element: Unable to locate element: {"method":"xpath","selector"://*[@id="vU-CY7u3C8PIrgTuuJH4CQ_9"]/div/div[1]/div[4]}
In the other cases the story is almost the same: the driver does not find the element and cannot click on it. Below is a screenshot of the tag I need to click.
(screenshot of the tags on the Google search page)
Thanks for the help!
In case iDjcJe IX9Lgd wwB5gf are fixed class name values of that element, all you need is to use CSS_SELECTOR instead of CLASS_NAME with the correct CSS selector syntax.
So, instead of driver.find_element(By.CLASS_NAME, "iDjcJe IX9Lgd wwB5gf") try using this:
driver.find_element(By.CSS_SELECTOR, ".iDjcJe.IX9Lgd.wwB5gf")
(dots before each class name, no spaces between them)
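Since Google appends new blocks after each click, a loop that re-queries the page on every iteration tends to be more robust than collecting the list once. A minimal sketch under the assumption that the r21Kzd class from the question marks the clickable headers (Google's class names change frequently, so treat them as placeholders; driver is assumed to be set up already):

from time import sleep
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

clicked = 0
while clicked < 20:
    # Re-query on every pass: expanding one block makes Google append new
    # blocks and may invalidate previously found elements.
    headers = WebDriverWait(driver, 10).until(
        EC.presence_of_all_elements_located((By.CSS_SELECTOR, "div.r21Kzd"))
    )
    if clicked >= len(headers):
        break  # no new blocks appeared
    headers[clicked].click()
    clicked += 1
    sleep(2)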

Selenium Python Error: "stale element reference: element is not attached to the page document"

I'm using Selenium to get some information from a site, but the links that I need to click on require double-clicking. Here's the code (a screenshot of the relevant HTML was attached to the original post):
from time import sleep

from selenium import webdriver
from selenium.webdriver import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

PATH = r"C:\Program Files (x86)\msedgedriver.exe"
driver = webdriver.Edge(PATH)
action = ActionChains(driver)

# search_by_XPATH is the poster's own helper for locating an element by XPath
frame = search_by_XPATH('//*[@id="frmMain"]/table/tbody/tr/td[2]/div[2]').get_attribute('id').replace("_Div", "_Frame")
WebDriverWait(driver, 20).until(EC.frame_to_be_available_and_switch_to_it((By.ID, frame)))
WebDriverWait(driver, 20).until(EC.frame_to_be_available_and_switch_to_it((By.ID, "fraTree_Frame")))
rows = driver.find_elements(By.XPATH, '//*[@id="ctl00_treChart_RadTreeView"]/ul/li/div/span[2]')
for row in rows:
    # row.click()
    action.double_click(row).perform()
    sleep(1)
I'm accessing different pages to get the links, and although each page has the same content the "id" of the frame changes with each new page I access. So that's why I use the 'get_attribute' function to get the id of the frame of the page.
I'm using action chains right now to perform the double click, but every time I do, I get the stale element reference: element is not attached to the page document error. I know it's not an issue with the XPath I'm using, because I can perform an element.click() on the element just fine and it receives the click. Nothing changes after the first click except that the element gets highlighted; its XPath doesn't change and it's still on the page. I also tried doing action.click().perform() on the element, but that produces the same error.
I've tried clicking on other parts of the element that could receive the click (the div, the span2, etc.) but I always get the error. The span2 is the part that holds the text of the link (although it's not an 'a' tag, so it's not really a link in that sense).
I don't understand why the element would be able to receive the click for the .click() function but then be unable to receive the double click. I'm using the double_click() function on another element previously in my code and that works as expected.
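One common way to avoid the stale reference in a loop like this is to re-locate the row just before interacting with it and to build a fresh ActionChains for each double-click, since a reused chain can hold on to element references from earlier iterations. A minimal sketch under those assumptions, using the XPath from the question (driver is assumed to already be switched into the right frame):

from time import sleep
from selenium.webdriver import ActionChains
from selenium.webdriver.common.by import By

row_xpath = '//*[@id="ctl00_treChart_RadTreeView"]/ul/li/div/span[2]'
count = len(driver.find_elements(By.XPATH, row_xpath))
for i in range(count):
    # Re-find the row each time so we never act on a reference the page
    # may have replaced since the last iteration.
    row = driver.find_elements(By.XPATH, row_xpath)[i]
    ActionChains(driver).double_click(row).perform()
    sleep(1)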

Message: no such element: Unable to locate element python selenium

I have this error in Selenium Python:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"/html/body/div[24]/div[1]/div/div/div[3]/div[1]/div/button[1]"}
I want to click the element with this XPath:
driver.find_element_by_xpath("/html/body/div[24]/div[1]/div/div/div[3]/div[1]/div/button[1]").click()
If it doesn't exist, I want to click another element with a different XPath instead, like:
driver.find_element_by_xpath("/html/body/div[24]/div[1]/div/div/div[3]/div[1]/div/button[2]").click()
In short: if element 1 doesn't exist to click, click element 2. How can I do this in Selenium Python? Thanks.
NoSuchElementException generally occurs because of one of the reasons below:
Element locator is incorrect
Element is not rendered
Page is not rendered or loaded
Element is inside a frame
Have a look, debug which of these applies to your case, and write your code based on that. :)
This is not the exact solution, but it shows the idea:
if elem1.is_displayed():
    elem1.click()
else:
    elem2.click()
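Note that is_displayed() still requires the element to exist; if it may be missing entirely, find_elements (plural) returns an empty list instead of raising, which makes the fallback easy to express. A short sketch along those lines, using the XPaths from the question:

from selenium.webdriver.common.by import By

button1 = driver.find_elements(By.XPATH, "/html/body/div[24]/div[1]/div/div/div[3]/div[1]/div/button[1]")
if button1 and button1[0].is_displayed():
    button1[0].click()
else:
    driver.find_element(By.XPATH, "/html/body/div[24]/div[1]/div/div/div[3]/div[1]/div/button[2]").click()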
I'd suggest using time.sleep(10) and a nested try block to catch the errors while still accomplishing what you want. For example:
import time
from selenium.common.exceptions import NoSuchElementException

time.sleep(10)
try:
    driver.find_element_by_xpath("/html/body/div[24]/div[1]/div/div/div[3]/div[1]/div/button[1]").click()
except NoSuchElementException:
    print("button1 could not be found... Now trying to retrieve button2")
    try:
        driver.find_element_by_xpath("/html/body/div[24]/div[1]/div/div/div[3]/div[1]/div/button[2]").click()
    except NoSuchElementException:
        print("Neither button1 nor button2 was found...")
Note that time.sleep(10) pauses execution for 10 seconds before this block runs, which gives the page time to load so the elements can be located more easily.
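A fixed sleep always costs the full 10 seconds; an explicit wait returns as soon as the first button appears and only falls back after the timeout. A sketch of that variant, assuming the same XPaths (and the newer find_element(By.XPATH, ...) style):

from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

try:
    button = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.XPATH, "/html/body/div[24]/div[1]/div/div/div[3]/div[1]/div/button[1]"))
    )
except TimeoutException:
    # First button never became clickable; fall back to the second one.
    button = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.XPATH, "/html/body/div[24]/div[1]/div/div/div[3]/div[1]/div/button[2]"))
    )
button.click()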

Python selenium presence_of_element_located with only href

I am trying to click on a link in a forum using Selenium, but I need to wait until the page loads, so I thought the best way was to use WebDriverWait. This is the code I used to test it:
driver.get("https://testocolo.forumcommunity.net")
# First click, working
driver.find_element_by_xpath('//a[@href="/?f=9087616"]').click()
try:
    element = WebDriverWait(driver, 2).until(
        EC.presence_of_element_located(By.XPATH, '//a[@href="/?t=61904616"]')
    )
    element.click()
except:
    print("NO")
This is the element (an anchor whose link text is "Brotha"; its HTML was included in the original post).
The try/except block ends up printing "NO" every time. Before that I tried locating by LINK_TEXT with 'Brotha', but neither way works. Where am I going wrong?
An XPath you can try:
//a[contains(@title,'discussione inviata il')]
or
//*[text()='Brotha']
The next thing to check is whether that element is inside an iframe. Otherwise, try waiting for it to be clickable:
WebDriverWait(driver, 30).until(
    EC.element_to_be_clickable((By.XPATH, "//*[text()='Brotha']")))
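It is also worth noting that in the snippet from the question the locator is passed to presence_of_element_located as two separate arguments; the expected conditions take a single (By, value) tuple, so that call would raise a TypeError which the bare except then reports as "NO". A corrected sketch with the href from the question:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

try:
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.XPATH, '//a[@href="/?t=61904616"]'))  # note the tuple
    )
    element.click()
except Exception as exc:
    print("NO:", exc)  # printing the exception shows why the wait actually failed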

WebDriverWait on find_elements_by_xpath

I am trying to figure out how WebDriverWait works with find_elements_by_xpath. How does it know that all matching elements have loaded, or does it just wait until the page is loaded?
I can understand it for a specific element with find_element_by_xpath, but I'm not sure how it behaves with find_elements_by_xpath.
For example:
elements = WebDriverWait(driver, 5).until(lambda driver: driver.find_elements_by_xpath("//table[#id='%s']/tbody/tr" % myid))
The wait condition you've presented becomes truthy as soon as there is at least one element matching the XPath expression, because find_elements_* returns a non-empty list at that point. In other words, it is roughly equivalent to:
expression = "//table[@id='%s']/tbody/tr" % myid
wait.until(EC.presence_of_element_located((By.XPATH, expression)))
WebDriver isn't waiting for the page to be loaded; it can't, since the page's contents could be continually changing. Instead, it simply executes the find_elements_* call on each poll, and once that call succeeds with a non-empty result the WebDriverWait(...).until call returns the elements found. It is no different from find_element_by_xpath, except that more than one element may be returned.
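If you prefer a built-in expected condition over a lambda, presence_of_all_elements_located does the same thing: it returns the list once at least one matching element is present. A short sketch, assuming driver and myid are defined as in the question:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

rows = WebDriverWait(driver, 5).until(
    EC.presence_of_all_elements_located((By.XPATH, "//table[@id='%s']/tbody/tr" % myid))
)
# Like the lambda version, this returns as soon as the first row exists;
# it does not guarantee that every row has finished loading.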
