Selenium iframe within iframe - python

I have the following python code :
iframe = driver.find_element_by_name("iframe_name")
driver.switch_to_frame(iframe)
elem = driver.find_element_by_xpath("/html/body/iframe")
It is able to find the first iframe element and switch to it. However, once inside it, when I try to access the second iframe element (by XPath, since it has no name or id), I keep getting a "no such element" error.
Can someone please help? I am trying to access the inner iframe so that I can get the src attribute from it.

Possible solutions.
Try an explicit wait for the element located by XPath ("/html/body/iframe"); the driver will often look for the element before the switch into the frame has completed.
Make sure that your XPath ("/html/body/iframe") actually matches. Also try identifying the element by tag name if there is only one iframe inside the outer iframe.
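A minimal sketch combining both suggestions (the frame name and XPath are taken from the question; the Chrome driver and the explicit waits are assumptions):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)
# driver.get(...)  # navigate to the page first

# Wait for the outer iframe and switch into it in one step.
wait.until(EC.frame_to_be_available_and_switch_to_it((By.NAME, "iframe_name")))

# Wait for the inner iframe to be present, then read its src attribute.
inner = wait.until(EC.presence_of_element_located((By.XPATH, "/html/body/iframe")))
print(inner.get_attribute("src"))

# Switch back to the top-level document when done.
driver.switch_to.default_content()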
Hope that helps.

Related

How to prevent "Stale Element Reference" errors in selenium

driver = webdriver.Chrome(service=s)
url = "https://fourminutebooks.com/book-summaries/"
driver.get(url)
page_tabs = driver.find_elements(By.CSS_SELECTOR, "a[class='post_title w4pl_post_title']")
#html = driver.find_elements(By.CSS_SELECTOR, "header[class='entry-header page-header']")
length_page_tabs = len(page_tabs)
in_length = len(page_tabs)
for i in range(length_page_tabs):
    ran = random.randint(0, in_length)
    page_tabs[ran].click()
    driver.execute_script("window.history.go(-1)")
    time.sleep(10)
    #need to get page source of html and then open it to a new file, extract what I want and add it to the email
I am trying to click one of the links, get the HTML code, email it to myself, and then go back a page and repeat. However, after clicking the first random link, the code stops working and instead I get this error:
You have to be very careful when you store a collection of elements in a variable and then iterate over it to perform actions.
page_tabs = driver.find_elements...
All the elements in this case are cached, and any browser action that navigates to another page, refreshes the page, etc. will make all of these cached references stale. This means they become out of date and it is no longer possible to interact with them.
So, to avoid stale element reference errors, you have to avoid page reloads, or simply re-locate the elements every time the page state has changed.
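As a rough sketch of the re-locate approach for the loop above (the CSS selector is the one from the question; iterating by index instead of picking a random link is a simplification, and driver is assumed to already be on the listing page):
from selenium.webdriver.common.by import By

selector = "a[class='post_title w4pl_post_title']"  # selector from the question
count = len(driver.find_elements(By.CSS_SELECTOR, selector))
for i in range(count):
    # Re-locate the links on every pass; the references from the previous
    # pass went stale when we navigated away and back.
    page_tabs = driver.find_elements(By.CSS_SELECTOR, selector)
    page_tabs[i].click()
    # ... grab the page source, email it, etc. ...
    driver.back()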
StaleElementReferenceException
StaleElementReferenceException is a type of WebDriverException which is thrown when a reference to an element has gone stale, i.e. the element no longer appears in the HTML DOM of the page.
Some of the possible causes of StaleElementReferenceException include:
You are no longer on the same page, or the page may have refreshed since the element was last located.
The element may have been removed and re-added to the DOM tree since it was located, for example when it is relocated. This typically happens with a JavaScript framework when values are updated and the node is rebuilt.
Element may have been inside an iframe or another context which was refreshed.
This use case
In your use case, you have created a list of WebElements, i.e. page_tabs, using the locator strategy:
page_tabs = driver.find_elements(By.CSS_SELECTOR, "a[class='post_title w4pl_post_title']")
Next, within the loop, whenever you invoke click() on page_tabs[ran] you are redirected to a new page, where the elements in the list page_tabs become stale and new elements are loaded.
When you then invoke driver.execute_script("window.history.go(-1)") you move back to the main page, where the elements of page_tabs were present, and they are loaded again. At this point the list page_tabs still holds the WebElements of the previous search, which have now become stale. Hence, during the second iteration, you face a StaleElementReferenceException.
Solution
In your use case, to avoid the StaleElementReferenceException, since the desired elements are <a> tags, instead of saving the elements you can store their href attributes in a list and invoke get(href) as follows:
driver.get("https://fourminutebooks.com/book-summaries/")
hrefs = [my_elem.get_attribute("href") for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "a[class='post_title w4pl_post_title']")))]
for href in hrefs:
driver.get(href)
print("Placeholder to perform the desired operations on the respective page")
driver.quit()
References
You can find a couple of relevant detailed discussions in:
StaleElementException when iterating with Python
Message: stale element reference: element is not attached to the page document in Python
StaleElementReferenceException: Message: stale element reference: element is not attached to the page document with Selenium and Python
Use driver.execute_script and JavaScript. JavaScript is never stale because it evaluates right away. In other words, if you select an element with Python and interact with it later, there's a decent chance it won't be there anymore. The only way you can be sure it's still there is to evaluate it as you interact with it, and the only way to do that is to stay in the browser context.
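A rough sketch of that idea for the loop above (the selector is the one from the question; iterating by index instead of picking a random link is a simplification):
selector = "a[class='post_title w4pl_post_title']"  # selector from the question
count = driver.execute_script(
    "return document.querySelectorAll(arguments[0]).length;", selector)
for i in range(count):
    # The link is located and clicked inside the browser in a single call,
    # so no Python-side reference is held long enough to go stale.
    driver.execute_script(
        "document.querySelectorAll(arguments[0])[arguments[1]].click();",
        selector, i)
    # ... grab the page source, email it, etc. ...
    driver.back()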

How do I click a link hidden by an element in selenium?

I am trying to access a link that is only available when I select a filter button element.
(Screenshots: "Filter Button" and "Desired Link")
I have tried to access the element using a CSS selector, since the link contains "include-out-of-stock".
driver.get("https://www.target.com/c/young-adult/-/N-qh1tf?Nao=0")
#Selects the filter button
link = driver.find_element(By.ID, "filterButton")
link.click()
#The code that is giving me issues. It doesn't find the desired link even though it's in the HTML inspector
element = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "a[href*='include-out-of-stock']")))
element.click()
However, the element is seemingly not found, as I encounter a TimeoutException. I also played around with XPath to see if it would work, but I still hit the same issue. Is the element not interactable since it's not directly on the webpage? Could I just not be accessing the element right?
Your locator is incorrect: the button has no id attribute; it is identified by data-test='filtersButton'.
A simple XPath for that button would be:
//button[@data-test='filtersButton']
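A short sketch of how that XPath could be combined with the link selector from the question (the explicit waits are an assumption, and driver is assumed to already be on the page):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)

# Open the filter panel via its data-test attribute instead of a non-existent id.
wait.until(EC.element_to_be_clickable(
    (By.XPATH, "//button[@data-test='filtersButton']"))).click()

# The link only becomes available once the panel is open, so wait for it afterwards.
wait.until(EC.element_to_be_clickable(
    (By.CSS_SELECTOR, "a[href*='include-out-of-stock']"))).click()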

Second iteration of find_elements_by_xpath gives error in selenium python

I'm trying to find all my subjects on the dashboard of my college website.
I'm using selenium to do it.
The site is a little slow, so first I wait:
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//span[@class='multiline']")))
then I find all the elements with
course = driver.find_elements_by_xpath("//span[@class='multiline']")
After that, in a for loop, I try to traverse it. The 0th element of course works fine and I'm able to click it and go to the webpage, but when the loop runs for the second time, i.e. for the 1st element of course, it gives me the error selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
So I tried adding a little wait time using two methods, but it still gives me the error:
driver.implicitly_wait(20)
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//span[@class='multiline']")))
the loop
for i in course[1::]:
    #driver.implicitly_wait(20)
    #WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//span[@class='multiline']")))
    print(i)
    i.click()
    driver.implicitly_wait(2)
    driver.back()
a snippet of the website
Thanks in advance
Answering my own question after extensive research:
A common technique used for simulating a tabbed UI in a web app is to prepare DIVs for each tab, but only attach one at a time, storing the rest in variables. In this case my code held a reference to an element that is no longer attached to the DOM (that is, one that no longer has document.documentElement as an ancestor). If WebDriver throws a stale element exception in this case, even though the element still exists, the reference is lost. You should discard the current reference you hold and replace it, possibly by locating the element again once it is re-attached to the DOM:
for i in range(len(course)):
    # Here you need to find all the elements again, because once we leave
    # the page the references are lost and we need to locate them again.
    course = driver.find_elements_by_xpath("//span[@class='multiline']")
    print(course[i].text)
    course[i].click()
    driver.implicitly_wait(2)
    driver.back()

An element is not visually searchable in Chrome inspection

I am building a web scraper with Selenium, but the element I want to click can be found in the Chrome inspector (like when you hit Ctrl+F and type it in, it shows up in the collapsed DOM structure) yet can't be seen visually.
So I can't proceed, since I need to click it with my Python code.
http://bitly.kr/Nypl88
is the link, and the element I would like to click with Selenium has an id called "cns_Tab21". When you search for it, exactly one result is found, but it can't be seen in the DOM tree.
Thank you for reading this post and for your answer in advance.
Your element is hidden inside an iframe.
To get to it you will need to switch to the iframe first:
driver.get("https://finance.naver.com/item/coinfo.nhn?code=255440");
WebElement iframe = driver.findElement(By.id("coinfo_cp"));
driver.switchTo().frame(iframe);
WebElement tabElement = driver.findElement(By.id("cns_Tab21"));
Edit: added a Python implementation:
driver = webdriver.Chrome()
driver.get("https://finance.naver.com/item/coinfo.nhn?code=255440")
iframe = driver.find_element_by_id("coinfo_cp")
driver.switch_to.frame(iframe)
tabElement = driver.find_element_by_id("cns_Tab21")
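If you then need to interact with the tab and later with elements outside the iframe, a possible continuation (assuming clicking the tab is the goal):
tabElement.click()
# ... work with elements inside the iframe here ...

# Switch back to the top-level document before touching anything outside the iframe.
driver.switch_to.default_content()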

How to get elements with Selenium which are created by a JS script

I am trying to automate testing with Selenium (Python bindings); specifically, I want to log in on tokmonet.zendesk.com.
I created a script which locates the email field, password field and sign-in button by id.
But when I run the script it fails with:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: {"method":"id","selector":"user_email"}
Inspecting the page with Firebug I can see these elements, but Firefinder cannot find them.
So, I perform
html_source = driver.page_source
print(html_source)
and got only:
<html xmlns="http://www.w3.org/1999/xhtml"><head></head><body></body></html>
When I check the page source it contains only JS scripts and no markup.
Please advise how I can handle these elements.
I see that the elements you are trying to use to log in on tokmonet.zendesk.com are inside an iframe, and so you are not able to get them. To handle such a situation, try switching to the iframe first and then getting the elements. Here's how to do it in Java:
driver.switchTo().frame(driver.findElement(By.tagName("iframe")));
(new WebDriverWait(driver, 20))
.until(ExpectedConditions.presenceOfElementLocated(By.id("user_email"))).sendKeys("username");
//Similarly you can get other elements too
You can implement it similarly in other languages. Hope this helps.
You need to switch to the IFRAME, then send_keys() to the element which you can find by ID. Don't forget to switch back to the default content if you need to access elements outside the IFRAME.
driver.switch_to.frame(driver.find_element_by_tag_name("iframe"))
driver.find_element_by_id("user_email").send_keys("username")
driver.find_element_by_id("user_password").send_keys("password")
# do whatever else
driver.switch_to.default_content()
