I am trying to search for an element within a sub-element with Selenium (Version 2.28.0), but Selenium does not seem to limit its search to the sub-element. Am I doing this wrong, or is there a way to use element.find_element to search within a sub-element?
As an example, I created a simple test web page with this code:
<!DOCTYPE html>
<html>
<body>
<div class=div title=div1>
<h1>My First Heading</h1>
<p class='test'>My first paragraph.</p>
</div>
<div class=div title=div2>
<h1>My Second Heading</h1>
<p class='test'>My second paragraph.</p>
</div>
<div class=div title=div3>
<h1>My Third Heading</h1>
<p class='test'>My third paragraph.</p>
</div>
</body>
</html>
My python (Version 2.6) code looks like this:
from selenium import webdriver
driver = webdriver.Firefox()
# Open the test page with this instance of Firefox
# element2 gets the second division as a web element
element2 = driver.find_element_by_xpath("//div[@title='div2']")
# Search second division for a paragraph with a class of 'test' and print the content
print element2.find_element_by_xpath("//p[@class='test']").text
# expected output: "My second paragraph."
# actual output: "My first paragraph."
If I run:
print element2.get_attribute('innerHTML')
It returns the html from the second division. So selenium is not limiting its search to element2.
I would like to be able to find a sub-element of element2. This post suggests my code should work (Selenium WebDriver access a sub element), but that poster's problem turned out to be caused by a time-out issue.
Can anyone help me understand what is happening here?
If you start an XPath expression with //, it searches from the root of the document, regardless of which element you call it on. To search relative to a particular element, start the expression with . instead:
element2 = driver.find_element_by_xpath("//div[@title='div2']")
element2.find_element_by_xpath(".//p[@class='test']").text
Use the following:
element2 = driver.find_element_by_css_selector("div[title='div2']")
element2.find_element_by_css_selector("p.test").text
Please let me know if you have any problems.
I guess we need to use the By class from selenium.webdriver.common.by when using driver.find_element.
So the code would be:
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
element2 = driver.find_element(By.XPATH, "//div[@title='div2']")
element2.find_element(By.XPATH, ".//p[@class='test']").text
This is how you search for an element or tag within a CSS child element, and I believe it works for multi-level nesting as well:
Sample HTML:
<li class="meta-item">
<span class="label">Posted:</span>
<time class="value" datetime="2019-03-22T09:46:24+01:00" pubdate="pubdate">22.03.2019 u 09:46</time>
</li>
This is how you would get the pubdate tag's datetime value, for example:
published = driver.find_element_by_css_selector('li>time').get_attribute('datetime')
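In the spirit of the original question, you can also scope the search to the parent li first; a minimal sketch, assuming the page with this markup is already loaded in driver:
meta_item = driver.find_element_by_css_selector('li.meta-item')
# the search below is limited to descendants of meta_item
published = meta_item.find_element_by_css_selector('time.value').get_attribute('datetime')
print(published)  # 2019-03-22T09:46:24+01:00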
Chrome WebDriver:
element = driver.find_element_by_id("ParentElement")
localElement = element.find_element_by_id("ChildElement")
print(localElement.text)
Find the child of any element:
parent = browser.find_element(by=By.XPATH, value='XPath of the parent')
children = parent.find_elements(by=By.TAG_NAME, value='tag name of the child')
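For example, applied to the test page from the first question (a minimal sketch; browser is assumed to be an already-started WebDriver with that page loaded):
from selenium.webdriver.common.by import By

parent = browser.find_element(by=By.XPATH, value="//div[@title='div2']")
children = parent.find_elements(by=By.TAG_NAME, value="p")
for child in children:
    print(child.text)  # My second paragraph.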
Related
I'm trying to get the content from a tag, but it raises NoSuchElementException, even though getting it from another tag at the same level succeeds.
This is the link to website: https://soundcloud.com/pastlivesonotherplanets/sets/spell-jars-from-mars
This is the HTML code that I'm accessing:
<div class="fullHero__tracksSummary">
<div class="playlistTrackCount sc-font">
<div class="genericTrackCount sc-font large m-active" title="16 tracks">
<div class="genericTrackCount__title">16</div>
<div class="genericTrackCount__subtitle"> Tracks </div>
<div class="genericTrackCount__duration sc-type-small sc-text-body sc-type-light sc-text-secondary">56:07</div>
</div>
</div>
</div>
I'm trying to get the playlist's duration with this code:
try:
    tmp = driver.find_element_by_xpath("//div[@class='fullHero__tracksSummary']")
    duration = tmp.find_element_by_class_name("genericTrackCount__duration sc-type-small sc-text-body sc-type-light sc-text-secondary").get_attribute('textContent')
    print(duration)
except:
    print("None")
It raises NoSuchElementException even though the other two tags were retrieved successfully.
What is the problem and how can I fix it?
Thank you for your time.
I think you can try directly using the XPath //div[contains(@class, 'duration')] OR
//div[contains(@class, 'playlistTrackCount')]/descendant::div[contains(@class, 'duration')]
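A rough sketch of how the first of those XPaths might be used, assuming driver already has the playlist page loaded:
duration = driver.find_element_by_xpath("//div[contains(@class, 'duration')]").get_attribute('textContent')
print(duration)  # 56:07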
Without looking at the page, you probably need to wait for the element to load.
You can use either time.sleep(5), 5 being the number of seconds to wait, or WebDriverWait(driver, 20) with an expected condition,
so your code would look like:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

WebDriverWait(driver, 20).until(
    EC.visibility_of_element_located((By.CLASS_NAME, 'genericTrackCount__duration'))
).text
Also, maybe the get_attribute('textContent') call is failing; you can just use .text instead.
I am struggling with a problem in Selenium using Python.
This is a dummy draft of what I have.
<body>
<button info="content1" aria-label="1">"Click 1"</button>
<button info="content1" aria-label="2">"Click 2"</button>
<button info="content2" aria-label="2">"Click 2"</button>
<button info="content2" aria-label="4">"Click 4"</button>
</body>
My target is to select the button that has info="content1" and aria-label="2"
I have already tried
element=driver.find_element_by_css_selector('button[info="content1"] and button[aria-label="2"]')
But it doesn't work and raises a NoSuchElementException.
Would you please help me?
Simply put the two bracketed attribute selectors next to each other with no and:
element = driver.find_element_by_css_selector('button[info="content1"][aria-label="2"]')
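For completeness, a minimal sketch that also clicks the matched button, assuming the markup above is on the page already loaded in driver:
element = driver.find_element_by_css_selector('button[info="content1"][aria-label="2"]')
print(element.text)  # the "Click 2" button from the content1 group
element.click()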
I have the following HTML page. I want to get all the links inside a specific div. Here is my HTML code:
<div class="rec_view">
<a href='www.xyz.com/firstlink.html'>
<img src='imga.png'>
</a>
<a href='www.xyz.com/seclink.html'>
<img src='imgb.png'>
</a>
<a href='www.xyz.com/thrdlink.html'>
<img src='imgc.png'>
</a>
</div>
I want to get all the links that are present in the rec_view div. So the links that I want are:
www.xyz.com/firstlink.html
www.xyz.com/seclink.html
www.xyz.com/thrdlink.html
Here is the Python code I tried:
from selenium import webdriver
webpage = r"https://www.testurl.com/page/123/"
driver = webdriver.Chrome("C:\chromedriver_win32\chromedriver.exe")
driver.get(webpage)
element = driver.find_element_by_css_selector("div[class='rec_view']>a")
link = element.get_attribute("href")
print(link)
How can I get those links using selenium on Python?
As per the HTML you have shared, to get the list of all the links present in the rec_view div you can use the following code block:
from selenium import webdriver
driver = webdriver.Chrome(executable_path=r'C:\chromedriver_win32\chromedriver.exe')
driver.get('https://www.testurl.com/page/123/')
elements = driver.find_elements_by_css_selector("div.rec_view a")
for element in elements:
    print(element.get_attribute("href"))
Note: As you need to collect all of the href attributes from the div tag, you need find_elements_* instead of find_element_*. Additionally, > refers to an immediate <a> child node, whereas you need to traverse all <a> descendant nodes, so the desired css_selector is div.rec_view a.
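If you prefer XPath, an equivalent sketch under the same assumptions would be:
elements = driver.find_elements_by_xpath("//div[@class='rec_view']//a")
for element in elements:
    print(element.get_attribute("href"))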
What is wrong in the code below?
import os
import time
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("http://x.x.x.x/html/load.jsp")
elm1 = driver.find_element_by_link_text("load")
time.sleep(10)
elm1.click()
time.sleep(30)
driver.close()
The page source is
<body>
<div class="formcenterdiv">
<form class="form" action="../load" method="post">
<header class="formheader">Loader</header>
<div align="center"><button class="formbutton">load</button></div>
</form>
</div>
</body></html>
I want to click on the load button. When I run the above code I get this error:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: load
As the documentation says, find_element_by_link_text only works on <a> tags:
Use this when you know link text used within an anchor tag. With this
strategy, the first element with the link text value matching the
location will be returned. If no element has a matching link text
attribute, a NoSuchElementException will be raised.
The solution is to use a different selector like find_element_by_class_name:
elm1 = driver.find_element_by_class_name('formbutton')
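If you would rather keep locating the button by its visible text, a hedged alternative is an XPath on the button text (assuming the text is exactly load):
elm1 = driver.find_element_by_xpath("//button[text()='load']")
elm1.click()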
Did you try using XPath?
As the answer above said, find_element_by_link_text works on <a> tags only.
The code below might help you out:
driver.find_element_by_xpath("/html/body/div/form/div/button")
How do I find an element like this?
element = driver.find_element_by_id("id","class","class")
I'm trying to click an ad.
Going direct with XPath will not work:
/html/body/div/div[1]/div[1]/div/a/img
Traceback (most recent call last):
File "a.py", line 14, in <module>
element = driver.find_element_by_id("/html/body/div/div[1]/div[1]/div/a/img")
The HTML is as follows:
</head>
<body scroll="no">
<div id="widget" class="widget">
<div class="plug">
<div class="thumbBorder">
<div class="thumb">
<div class="ton" style="display: block;">
<div class="title_bg"> </div>
<a class="title q" target="_blank" href="//prwidgets.com/t/ghxa/g0us/7433c239e19107a4301ad9959d2d37440/aHR0cDovL3RyaXBsZXh2aWQuY29tLw==">Kiss N Tell</a>
</div>
<a class="q" target="_blank" href="//prwidgets.com/t/ghxa/g0us/7433c239e19107a4301ad9959d2d37440/aHR0cDovL3RyaXBsZXh2aWQuY29tLw=="> <img title="Title" src="//prstatics.com/prplugs/0/747604/160x120.jpg"
find_element_by_id in the Selenium Python bindings accepts one parameter, which is the value of the id attribute, such as:
login_form = driver.find_element_by_id('loginForm')
Please refer to the doc here
In addition to that, you can use:
driver.find_element(By.ID, 'your ID')
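That form needs the By import; a minimal sketch using the id from the snippet above:
from selenium.webdriver.common.by import By

widget = driver.find_element(By.ID, 'widget')  # the <div id="widget"> from the HTML above
print(widget.get_attribute('class'))  # widget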
In this case you can try XPath and an axis, i.e. following-sibling:
element = driver.find_element_by_xpath("//a[class='q']/following-sibling::img[1]")
element.click()
N.B. I have assumed there is no other a with class name q in the whole HTML document.
This may not work for you but when there is not an easy ID or NAME to grab, I go into the browser (I will refer to Firefox) right click on the element, select 'Inspect Element', then right click on the highlighted area in the inspection window and select 'Copy Unique Selector'. Then you can paste this into your code and use:
selector = 'pasted string here'
element = driver.find_element_by_css_selector(selector)
element.click()
EDIT: using the selector provided by @James below:
selector = 'div.plug:nth-child(1) > div:nth-child(1) > div:nth-child(1) > a:nth-child(2) > img:nth-child(1)'
element = driver.find_element_by_css_selector(selector)
element.click()
This usually works quite well for me.
EDIT: Add a real example. Try this and see if it works.
# open google search page and click the "About" link
from selenium import webdriver
driver = webdriver.Firefox()
driver.maximize_window()
driver.get('https://www.google.com/ncr')
# got the selector below using Firefox 'Inspect Element -> Copy Unique Selector'
about_selector = 'a._Gs:nth-child(3)'
about = driver.find_element_by_css_selector(about_selector)
about.click()