NoSuchElementException error when using "find_element_by_link_text" - Python

Selenium fails to find the element by link text:
time.sleep(3)
driver = self.driver
driver.implicitly_wait(10)
findHeaderLearn = driver.find_element_by_link_text('Learn')
findHeaderLearn.click()
pageTitle = driver.title
driver.back()
return pageTitle
I get this error:
raise exception_class(message, screen, stacktrace)
NoSuchElementException: Message: u'Unable to locate element: {"method":"link text","selector":"Learn"}' ; Stacktrace:
I have read extensively around the web but can't find any hints as to why it can't locate the element.
I added implicitly_wait(10) to make sure the element is visible, but it didn't solve the problem.
Any other ideas?
Here is the HTML code:
<div class="l-wrap">
<h1 id="site-logo">
<div id="nav-global">
<h2 class="head">
<ul class="global-nav">
<li class="global-nav-item">
<li class="global-nav-item">
<a class="global-nav-link" href="/learn/">Learn</a> ======> I'm trying to find this element
</li>
<li class="global-nav-item">
<li class="global-nav-item">
<li class="global-nav-item global-nav-item-last buy-menu">
<li class="global-nav-item global-nav-addl">
</ul>
</div>
<a class="to-bottom" href="#l-footer">Jump to Bottom of Page</a>
</div>

Try a different locator, ideally a CSS selector or XPath. Don't use find_element_by_link_text.
CSS Selector:
findHeaderLearn = driver.find_element_by_css_selector("#nav-global a[href*='learn']")
XPath:
findHeaderLearn = driver.find_element_by_xpath(".//*[@id='nav-global']//a[contains(@href, 'learn')]")
# findHeaderLearn = driver.find_element_by_xpath(".//*[@id='nav-global']//a[text()='Learn']")

find_element_by_link_text may be less reliable than a CSS selector or XPath. Also verify that the element is not stale before performing any actions on it.
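The suggested locator logic can be sanity-checked without a browser. Below is a minimal sketch using only the standard library against a cleaned-up, well-formed copy of the nav markup from the question (the markup trimming and variable names are illustrative; ElementTree has no contains(), so the href filter is done in Python instead):

```python
import xml.etree.ElementTree as ET

# Simplified, well-formed copy of the nav markup from the question.
NAV_HTML = """
<div id="nav-global">
  <ul class="global-nav">
    <li class="global-nav-item">
      <a class="global-nav-link" href="/learn/">Learn</a>
    </li>
  </ul>
</div>
"""

nav = ET.fromstring(NAV_HTML)  # the root element here is the #nav-global div
# Rough equivalent of a[href*='learn']: ElementTree lacks contains(),
# so filter on the href attribute in Python.
links = [a for a in nav.findall(".//a") if "learn" in a.get("href", "")]
print(links[0].text)  # Learn
```

If this prints the expected link text, the selector idea is sound and any remaining failure in Selenium is likely timing or frame related.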

Related

Find subdivs within selenium in python (selenium.webdriver.firefox.webelement)

I use selenium to access all divs which contain roster information:
# this returns a list of selenium.webdriver.firefox.webelement.FirefoxWebElement objects
divs = driver.find_elements_by_class_name('appointment-template')
Each of these divs looks like this:
<div class="appointment-template" id="appointment-3500508">
<p class="title">Mentoruur</p>
<p class="time">11:15<span class="time-end"> - 12:15</span></p>
<ul class="facility">
<li onclick=""
title="HIC_Online">ONLINE
</li>
</ul>
<ul class="docent">
<li onclick=""
title="HANSJE">HANSJE
</li>
</ul>
<ul class="group">
<li onclick=""
title="ASD123">ASD123
</li>
</ul>
The next thing I want to do is access values like the docent name and time values that lie within this div:
for div in divs:
    print(div.find_element_by_class_name('title'))
    print(div.find_element_by_class_name('time'))
This does not seem to work:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: .title
How can I use selenium to get the values like:
Mentoruur
11:15 - 12:15
Hansje
To get Mentoruur, try the CSS selector below:
div.appointment-template p.title
Use it like this:
title = driver.find_element(By.CSS_SELECTOR, "div.appointment-template p.title").text
print(title)
To get the time:
div.appointment-template p.time
Code:
time = driver.find_element(By.CSS_SELECTOR, "div.appointment-template p.time").text
print(time)
The same approach works for the others.
In order to locate an element inside another element, it's better to use this technique:
for div in divs:
    print(div.find_element_by_xpath('.//p[@class="title"]').text)
    print(div.find_element_by_xpath('.//p[@class="time"]').text)
The dot (.) in front of the XPath expression means "from here". This is what we need when searching inside a specific parent element. Note the .text at the end: without it you print the element object itself, not its value.
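The same dot-relative idea can be demonstrated without a browser using the standard library (the markup below is a trimmed copy of the snippet from the question; note that .text alone stops at the nested span, so itertext() is used to recover the full time string):

```python
import xml.etree.ElementTree as ET

ROSTER = """
<div class="appointment-template" id="appointment-3500508">
  <p class="title">Mentoruur</p>
  <p class="time">11:15<span class="time-end"> - 12:15</span></p>
</div>
"""

div = ET.fromstring(ROSTER)
# The leading dot scopes the search to this div, just like in Selenium.
title = div.find('.//p[@class="title"]').text
# .text stops at the first child element; itertext() includes the span too.
time_text = "".join(div.find('.//p[@class="time"]').itertext())
print(title)      # Mentoruur
print(time_text)  # 11:15 - 12:15
```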

Python webscraping with Selenium chrome driver

I'm trying to get the number of publications of an Instagram account, which sits in a span tag, using Python Selenium with the Chrome driver. This is part of the HTML code:
<!doctype html>
<html lang="fr" class="js logged-in client-root js-focus-visible sDN5V">
<head>…</head>
<body>
<div id="react-root">
<form enctype="multipart/form-data" method="POST" role="presentation">…</form>
<section class="_9eogI E3X2T">
<div></div>
<main class="SCxLW o64aR" role="main">
<div class="v9tJq AAaSh VfzDr">
<header class="HVbuG">…</header>
<div class="-vDIg">…</div>
<div class="_4bSq7">…</div>
<ul class="_3dEHb">
<li class="LH36I">
<span class="_81NM2">
<span class="g47SY 10XF2">6 588</span>
"publications"
</span>
</li>
THE PYTHON CODE
def get_publications_number(self, user):
    self.nav_user(user)
    sleep(16)
    publication = self.driver.find_element_by_xpath('//div[contains(id,"react-root")]/section/main/div/ul/li[1]/span/span')
THE ERROR MESSAGE
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element:
{"method":"xpath","selector":"//div[contains(id,"react-root")]/section/main/div/ul/li[1]/span/span"}
(Session info: chrome=80.0.3987.149)
IMPORTANT:
This XPath is pasted from the Chrome element inspector, so I don't think that's the problem. When I use self.driver.find_elements_by_xpath() (with an 's') there is no error, and if I do:
for value in publication:
print(value.text)
there is no error either, but nothing is printed.
SO THE QUESTION IS:
Why am I getting this error while the Xpath exists?
Try
'//div[@id="react-root"]//ul/li//span[contains(., "publications")]/span'
Explanation:
//div[@id="react-root"] << find the element which has the id "react-root"
//ul/li << inside the found react root, find elements anywhere (//) that are li children of a ul element
//span[contains(., "publications")] << in the found li elements, find span elements anywhere whose text contains "publications"
/span << get the span children of the found span
One more thing: find_element_by_xpath returns the first element that matches. If there is more than one 'publications' match, you can collect them all with the XPath above simply by using find_elements_by_xpath instead of find_element_by_xpath.
Recently I found this page, which is quite a good read to start mastering XPath; check it out if you want to know more.
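Wrapped in a helper, the lookup might look like the sketch below. The function name is hypothetical; it uses the Selenium 4 string-locator form driver.find_element("xpath", …), so it works with any object exposing that method:

```python
def get_publication_count(driver):
    # Hypothetical helper: find the inner span whose enclosing span
    # contains the word "publications", and return its text.
    element = driver.find_element(
        "xpath",
        '//div[@id="react-root"]//ul/li//span[contains(., "publications")]/span',
    )
    return element.text
```

Called as get_publication_count(self.driver), this should return the string "6 588" for the markup in the question.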
//div[contains(@id,"react-root")]/section/main/div/ul/li[1]/span/span
Use this XPath; it might work. I think you made a small error there: contains() needs the attribute axis, @id, not a bare id.

Unable to locate element Selenium webdriver || Python

<div class="container-fluid ">
<div class="navbar-header">
<span id="problem_hide_search" class="nav navbar-left">
<span id="ca660735dba5d3003d7e5478dc9619b2_title" class="list_search_title navbar-text " style="float: left; display:inherit">Go to</span>
<div style="float: left; display:inherit">
<div class="input-group" style="width: 300px;">
<span class="input-group-addon input-group-select">
<label class="sr-only" for="ca660735dba5d3003d7e5478dc9619b2_text">Search</label>
<input id="ca660735dba5d3003d7e5478dc9619b2_text" class="form-control" name="ca660735dba5d3003d7e5478dc9619b2_text" style="width: 150px;" placeholder="Search"/>
</div>
</div>
<script data-comment="widget search load event">addLoadEvent(function () { new GlideWidgetSearch('ca660735dba5d3003d7e5478dc9619b2', 'problem', 'true'); });</script>
I am trying to locate the Search box by switching into the iframe and selecting it with
search_box = driver.find_element_by_xpath('//*[@id="ca660735dba5d3003d7e5478dc9619b2_text"]')
But I get the error: Message: no such element: Unable to locate element:
even though I find one matching node.
As you mentioned in your question that you are trying to locate the Search box by switching into the iframe, as per best practice you should:
Induce WebDriverWait for the frame to be available to switch, as follows (note that the expected condition takes a locator tuple):
WebDriverWait(driver, 10).until(EC.frame_to_be_available_and_switch_to_it((By.ID, "id_of_iframe")))
Here you will find a detailed discussion: How can I select a html element no matter what frame it is in in selenium?
While you look for an element within an <iframe> tag, induce WebDriverWait with the proper expected_conditions. Given that you intend to send text to the element, you can use the following line of code:
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//div[@class='navbar-header']//input[@class='form-control' and contains(@id,'_text')]"))).send_keys("hello")
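Putting the two steps together, a driver-agnostic sketch might look like this (the helper name and flow are illustrative; in real code you would keep the WebDriverWait calls shown above rather than bare find_element):

```python
def search_in_frame(driver, frame_ref, text):
    # Illustrative flow: switch into the iframe, type into the search
    # box, then switch back to the top-level document.
    driver.switch_to.frame(frame_ref)
    box = driver.find_element(
        "xpath",
        "//div[@class='navbar-header']//input[contains(@id, '_text')]",
    )
    box.send_keys(text)
    driver.switch_to.default_content()
    return box
```

The key point is the order: the switch into the frame must happen before the find_element call, and switching back afterwards keeps later lookups working against the main document.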

Why did changing my xpath make my selenium click work consistently?

I am running a series of selenium tests with python. I have a navigation on the page I'm testing that has this structure:
<ul>
<li class="has-sub">
<a href="#">
<span> First nav </span>
</a>
<ul style="display:block">
<li>
<a href="#">
<span> First subnav </span>
</a>
</li>
<li>...</li>
<li>...</li>
<li>...</li>
</ul>
</li>
<li>...</li>
</ul>
Now I am clicking on the first subnav, i.e. the first span: clicking on First nav to open up the list, then on the first subnav. I use a WebDriverWait to wait for the element to be visible and click on it via its XPath,
//span[1]
I often got timeout exceptions waiting for the subnav span to be visible after clicking on the nav, which made me think something was wrong with clicking on the first nav to open up the list. So I changed the xpath of the first nav (//span[1]) to
//li[@class='has-sub']/descendant::span[text()='First subnav']
and now I never get timeout exceptions when waiting for the subnav span to be visible. So it seems it now always clicks the nav span first to open the list. Does anyone have any idea why that is?
Here is my python code as well:
inside LoadCommPage class:
def click_element(self, by, locator):
    try:
        WebDriverWait(self.driver, 10).until(EC.visibility_of_element_located((by, locator)))
        print "pressing element " + str(locator)
        self.driver.find_element(by, locator).click()
    except TimeoutException:
        print "no clickable element in 10 sec"
        print self.traceback.format_exc()
        self.driver.close()
inside the main test (load_comm_page is an instance of LoadCommPage, where click_element is defined):
load_comm_page.click_element(*LoadCommPageLocators.sys_ops_tab)
And another class for the locators:
class LoadCommPageLocators(object):
    firstnav_tab = (By.XPATH, "//li[@class='has-sub']/descendant::span[text()='First nav']")
XPath indexes begin at one, not zero, so the XPath
//span[1]
looks for the first span element in the HTML, whereas
//span[2]
looks for the second span.
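The one-based indexing is easy to verify without Selenium, since xml.etree.ElementTree in the standard library accepts the same positional predicate syntax (the markup below is a simplified stand-in for the nav structure from the question):

```python
import xml.etree.ElementTree as ET

NAV = """
<ul>
  <li><span>First nav</span></li>
  <li><span>First subnav</span></li>
</ul>
"""
root = ET.fromstring(NAV)

# Predicates are 1-based: li[1] is the first li, li[2] the second.
print(root.find("li[1]/span").text)  # First nav
print(root.find("li[2]/span").text)  # First subnav
```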

How to traverse through inner div tags in Selenium using Python?

I have a html code as below:
<div class="abc">
<div class="xyz">
<p> xyz </p>
</div>
<div class="foo">
<p>foo</p>
<a class="btn btn-lg btn-success" href="www.google.com" role="button" name="click" id="click">Click me</a>
</div>
</div>
How can I grab the Click me button and click it using Selenium? I used driver.find_element_by_id() but it did not work and gave an error.
I think I might have to traverse through the div tags to reach the button. I tried the code below to grab the div class but was not able to proceed.
def test(self):
    self.driver.get("sample site address")
    elem = self.driver.find_element_by_class_name("abc")
    # need to get to the button and click it?
ERROR:
raise exception_class(message, screen, stacktrace)
InvalidSelectorException: Message: u'The given selector abc is either invalid or does not result in a WebElement. The following error occurred:\nInvalidSelectorError: Compound class names not permitted' ; Stacktrace:
at FirefoxDriver.annotateInvalidSelectorError_ (file:///tmp/tmpXXYwwK/extensions/fxdriver@googlecode.com/components/driver_component.js:8879)
at FirefoxDriver.prototype.findElementInternal_ (file:///tmp/tmpXXYwwK/extensions/fxdriver@googlecode.com/components/driver_component.js:8910)
at FirefoxDriver.prototype.findChildElement (file:///tmp/tmpXXYwwK/extensions/fxdriver@googlecode.com/components/driver_component.js:8917)
at DelayedCommand.prototype.executeInternal_/h (file:///tmp/tmpXXYwwK/extensions/fxdriver@googlecode.com/components/command_processor.js:10884)
at DelayedCommand.prototype.executeInternal_ (file:///tmp/tmpXXYwwK/extensions/fxdriver@googlecode.com/components/command_processor.js:10889)
at DelayedCommand.prototype.execute/< (file:///tmp/tmpXXYwwK/extensions/fxdriver@googlecode.com/components/command_processor.js:10831)
-----
Since Click me is a link, you can find it by link text and then click on it:
driver.find_element_by_link_text("Click me").click()
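As a sketch, the click can be wrapped in a small helper (the name is illustrative; it uses the Selenium 4 string-locator form, where "link text" matches an anchor's exact visible text, and works with any driver-like object):

```python
def click_link(driver, link_text):
    # "link text" matches the full visible text of an <a> element.
    link = driver.find_element("link text", link_text)
    link.click()
    return link
```

Usage would be click_link(driver, "Click me"); no traversal through the surrounding divs is needed, since link-text lookup searches the whole document.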
