Splinter Python ElementDoesNotExist for is_visible()

In my code I have the following line:
browser.find_by_css(business_role_expand).is_visible(1000)
According to the documentation, this should wait a maximum of 1000 seconds for the element specified by the CSS to load and become visible; if it doesn't, it should return False. However, instead I get this error:
splinter.exceptions.ElementDoesNotExist: no elements could be found with css "div.panel:nth-child(4) > div:nth-child(1) > a:nth-child(1)"
Can anyone advise me? I don't understand why this happens. I'm using Firefox driver.

This error...
splinter.exceptions.ElementDoesNotExist: no elements could be found with css "div.panel:nth-child(4) > div:nth-child(1) > a:nth-child(1)"
...implies that no element exists within the DOM tree that can be identified by the CSS selector:
div.panel:nth-child(4) > div:nth-child(1) > a:nth-child(1)
Since the element itself doesn't exist, there is no question of waiting for its presence, visibility or interactability, even with wait_time set.
Solution
Try to construct a locator strategy which identifies the element uniquely within the HTML DOM.
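Chaining find_by_css(...).is_visible(...) raises because the empty ElementList is touched before any waiting can happen. One way to get the intended "wait, then return False" behaviour is to check the list yourself before indexing into it. A minimal polling sketch, assuming splinter's find_by_css returns a list-like ElementList whose elements expose a visible property (attribute names may vary across splinter versions):

```python
import time

def wait_for_visible(browser, css, timeout=10.0, poll=0.5):
    """Poll until the element matched by `css` exists and is visible.

    Returns True if it becomes visible within `timeout` seconds and
    False otherwise; it never raises ElementDoesNotExist because the
    (possibly empty) result list is checked before it is touched.
    """
    deadline = time.monotonic() + timeout
    while True:
        elements = browser.find_by_css(css)  # an ElementList; may be empty
        if len(elements) > 0 and elements[0].visible:
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll)
```

With such a helper, wait_for_visible(browser, business_role_expand, timeout=10) would return False instead of raising when the element never appears.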

It's impossible to say without being able to see the page, but a couple of scenarios come to mind.
Your locator is off somehow. We can't help fix it without a link to the page or the relevant HTML.
Your locator is correct but the page needs to be scrolled or otherwise manipulated to cause the element to appear. Again, we can't help without seeing the page.

Related

Selenium explicit wait accepts EC.visibility_of_element_located xpath element incorrectly

I am currently running selenium pytest automation and am running into an issue with explicit wait. I have a WebDriverWait call like this:
WebDriverWait(driver, 10).until(EC.visibility_of_element_located((
    By.XPATH, '//div[@class="ant-card ant-card-bordered" and //*[contains(text(), "Access Points")]]')))
where this specific xpath element does not exist on the page.
Image of page where xpath element does not exist
As such I expect a failed test case. The bizarre thing is that it does work as expected using a docker image and successfully produces a timeout exception, but when run on a local web browser it passes through the explicit wait without any timeout exception. I believe there is something weird with the XPath, because when I edit the text() condition to a string that does not exist on the page it properly fails. Note that when searching for the XPath with Chrome's inspect tools the element is not found, as seen in the image above.
As I see in the attached image, the selected div does not contain the text "Access Points". Remove that condition from the XPath, or find another div that actually contains the text.
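One XPath subtlety may explain the inconsistent behaviour: inside a predicate, a location path that starts with // is evaluated from the document root, not relative to the candidate node, so //div[... and //*[contains(text(), "Access Points")]] matches the div as long as the text exists anywhere on the page. Prefixing the inner path with . makes it relative. A small illustration with lxml, using hypothetical markup rather than the asker's real page:

```python
from lxml import etree

html = etree.HTML("""
<body>
  <h1>Access Points</h1>
  <div class="ant-card ant-card-bordered"><span>Other content</span></div>
</body>
""")

# // inside the predicate searches the whole document, so the div
# matches even though it does not itself contain "Access Points":
absolute = html.xpath(
    '//div[@class="ant-card ant-card-bordered"'
    ' and //*[contains(text(), "Access Points")]]')

# .// restricts the search to the div's own subtree:
relative = html.xpath(
    '//div[@class="ant-card ant-card-bordered"'
    ' and .//*[contains(text(), "Access Points")]]')

print(len(absolute), len(relative))
```

If the relative form is what was intended, replacing // with .// in the predicate should make the wait fail consistently whenever the card itself lacks the text.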

Second iteration of find_elements_by_xpath gives error in selenium python

I'm trying to find all of my subjects in the dashboard of my college website.
I'm using selenium to do it.
The site is a little slow, so first I wait:
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//span[@class='multiline']")))
then I find all the elements with
course = driver.find_elements_by_xpath("//span[@class='multiline']")
After that I try to traverse it in a for loop. The 0th element of "course" works fine and I'm able to click it and go to the webpage, but when the loop runs the second time, that is for the 1st element of "course", it gives me the error selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
So I tried adding a little wait time using two methods, but it still gives me the error:
driver.implicitly_wait(20)
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//span[@class='multiline']")))
the loop
for i in course[1:]:
    # driver.implicitly_wait(20)
    # WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//span[@class='multiline']")))
    print(i)
    i.click()
    driver.implicitly_wait(2)
    driver.back()
a snippet of the website
Thanks in advance
Answering my own question after extensive research
A common technique used for simulating a tabbed UI in a web app is to prepare DIVs for each tab but only attach one at a time, storing the rest in variables. In this case my code held a reference to an element that was no longer attached to the DOM (that is, it no longer had document.documentElement as an ancestor). WebDriver throws a stale element exception in this case because, even though the element may still exist, the reference is lost. You should discard the current reference you hold and replace it, possibly by locating the element again once it is attached to the DOM:
for i in range(len(course)):
    # find all the elements again on every iteration: once we leave
    # the page the old references are lost, so we must re-locate them
    course = driver.find_elements_by_xpath("//span[@class='multiline']")
    print(course[i].text)
    course[i].click()
    driver.implicitly_wait(2)
    driver.back()
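The same fix can be wrapped in a small helper so the re-location happens in one place. A sketch, keeping the question's Selenium 3 style find_elements_by_xpath (Selenium 4 renames it to find_elements(By.XPATH, ...)); it assumes the list has the same length and order after each driver.back():

```python
def visit_all(driver, xpath):
    """Click through every element matched by `xpath`, re-locating
    the list on each iteration because driver.back() invalidates
    previously found element references (they become stale)."""
    count = len(driver.find_elements_by_xpath(xpath))
    visited = []
    for i in range(count):
        items = driver.find_elements_by_xpath(xpath)  # fresh references
        visited.append(items[i].text)
        items[i].click()
        driver.back()
    return visited
```

If the page can change between visits, re-checking len(items) before indexing would be safer still.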

Selenium Stale Element driver.get(url) inside Loop

I want to iterate through a set of URLs using Selenium. From time to time I get 'element is not attached to the page document'. Reading a couple of other questions indicated that it's because I am changing the page that is being looked at. But I am not satisfied with that argument, since:
for url in urlList:
    driver.get(url)
    WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.XPATH, '//div/div')))
    # ^ WebDriverWait should have taken care of it
    myString = driver.find_element_by_xpath('//div/div').get_attribute("innerHTML")
    # ^ Error occurs here
    # Then I call this function to go through other elements given other conditions not shown
    if myString:
        getMoreElements(driver)
But if I add a delay like this:
for url in urlList:
    driver.get(url)
    time.sleep(5)  # <<< IT WORKS, BUT WHY?
    element = WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.XPATH, '//div/div')))
    myString = driver.find_element_by_xpath('//div/div').get_attribute("innerHTML")  # Error occurred here
I feel I am hiding the problem by adding the delay there. I have implicitly_wait set to 30s and set_page_load_timeout to 90s, which should have been sufficient. So why do I still need to add what looks like a useless time.sleep?
Did you try the XPath //div/div manually in the dev tools to see how many divs are found on the page? I think there will be many, so the explicit wait below is very easy to satisfy; probably within a second of browser.get() Selenium can find such a div, and your wait ends.
WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.XPATH, '//div/div')))
Consider the following possibility: because the wait is satisfied so early, the page has not finished loading, and more //div/div nodes are still being rendered at the moment you ask Selenium to find one and interact with it.
Now think about the chance that the first div Selenium found is then removed or moved to another DOM node. Is that chance high or low? I think it's very high, because div is a very common tag in today's web pages, and such a relaxed XPath matches many divs, any of which can cause the 'stale element' issue.
To resolve your issue, wait for some specific element with a stricter locator, rather than such a hasty XPath that matches a very common and frequently occurring element.
What you observe as element is not attached to the page document is quite possible.
Analysis:
In your code, while iterating over urlList, we open a URL and then wait for the WebElement with XPath //div/div with the ExpectedConditions clause set to presence_of_element_located, which does not necessarily mean that the element is visible or clickable.
Hence, when you next try driver.find_element_by_xpath('//div/div').get_attribute("innerHTML"), the element found by the previous search may no longer be attached.
Solution:
The solution would be to change the ExpectedConditions clause from presence_of_element_located to element_to_be_clickable, which checks that the element is visible and enabled so that you can click it.
Code Block:
Your optimized code block may look like:
for url in urlList:
    driver.get(url)
    WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.XPATH, '//div/div')))
    myString = driver.find_element_by_xpath('//div/div').get_attribute("innerHTML")
Your other solution:
Your other solution works because time.sleep(5) papers over the timing issue instead of letting Selenium handle it, which is not a best practice.
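Complementary to the stricter-locator advice above, the lookup and the attribute read can also be treated as one retryable unit, so a reference that goes stale between the two steps is simply re-located. A sketch in the question's Selenium 3 style; the ImportError fallback exists only so the snippet can run without Selenium installed:

```python
try:
    from selenium.common.exceptions import StaleElementReferenceException
except ImportError:  # lets the sketch run without selenium installed
    class StaleElementReferenceException(Exception):
        pass

def read_inner_html(driver, xpath, attempts=3):
    """Find the element and read innerHTML as one unit; if the
    reference goes stale between lookup and read, re-locate it
    instead of reusing the dead handle."""
    for attempt in range(attempts):
        try:
            element = driver.find_element_by_xpath(xpath)
            return element.get_attribute("innerHTML")
        except StaleElementReferenceException:
            if attempt == attempts - 1:
                raise
```

This does not replace a good locator, but it makes the loop robust against a page that re-renders nodes while it finishes loading.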

Selenium iframe within iframe

I have the following Python code:
iframe = driver.find_element_by_name("iframe_name")
driver.switch_to_frame(iframe)
elem = driver.find_element_by_xpath("/html/body/iframe")
It is able to find the first iframe element and then switch to it; however, once it is in, when I try to access the second iframe element (by XPath, since it does not have a name or id) I keep getting a "no such element" error.
Can someone please help. I am trying to access the interior iframe so that I can get the src attribute within it.
Possible solutions.
Try using an explicit wait for the element by XPath ("/html/body/iframe"), as oftentimes the driver will not wait until switching to the frame has completed.
Make sure that your XPath ("/html/body/iframe") is working. Also try identifying the element by tag name if there is only one iframe inside the iframe.
Hope that helps.
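The two suggestions can be sketched in the question's Python style. The manual polling loop stands in for an explicit WebDriverWait so the sketch carries no Selenium dependency; iframe_name and the /html/body/iframe lookup mirror the question's code and are assumptions about the page under test:

```python
import time

def src_of_inner_iframe(driver, outer_name="iframe_name", timeout=10.0):
    """Switch into the outer iframe, poll until the inner iframe
    exists, read its src, then restore the top-level context."""
    driver.switch_to.frame(driver.find_element_by_name(outer_name))
    deadline = time.monotonic() + timeout
    src = None
    while time.monotonic() < deadline:
        # the plural find_elements_* returns an empty list instead of
        # raising NoSuchElementException while the frame is still loading
        frames = driver.find_elements_by_xpath("/html/body/iframe")
        if frames:
            src = frames[0].get_attribute("src")
            break
        time.sleep(0.5)
    driver.switch_to.default_content()  # leave the frame either way
    return src
```

Returning None on timeout (rather than raising) keeps the caller's control flow simple; raising instead would also be reasonable.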

How to take elements by Selenium which created by JS script

I'm trying automated testing with Selenium (Python bindings); specifically, I want to log in on tokmonet.zendesk.com.
I created a script which locates the email field, password field and sign-in button by id.
But when I run the script, it fails with
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: {"method":"id","selector":"user_email"}
Inspecting the page with Firebug, I can see these elements, but Firefinder could not find them.
So, I perform
html_source = driver.page_source
print(html_source)
and get only
<html xmlns="http://www.w3.org/1999/xhtml"><head></head><body></body></html>
When I check the page source, it contains only JS scripts and no markup.
Please advise: how can I handle these elements?
I see that the elements you are trying to use to log in are inside an iframe on tokmonet.zendesk.com, which is why you are not able to get them. To handle such a situation, switch to the iframe first and then get the elements. Here's how to do it in Java:
driver.switchTo().frame(driver.findElement(By.tagName("iframe")));
new WebDriverWait(driver, 20)
        .until(ExpectedConditions.presenceOfElementLocated(By.id("user_email")))
        .sendKeys("username");
// Similarly you can get the other elements too
You can implement it similarly in other languages. Hope this helps.
You need to switch to the IFRAME, then send_keys() to the element which you can find by ID. Don't forget to switch back to the default content if you need to access elements outside the IFRAME.
driver.switch_to.frame(driver.find_element_by_tag_name("iframe"))
driver.find_element_by_id("user_email").send_keys("username")
driver.find_element_by_id("user_password").send_keys("password")
# do whatever else
driver.switch_to.default_content()
