My html code looks like:
<li>
    <div class="level1">
        <div id="li_hw2" class="toggle open"></div>
        <ul style="" mw="220">
            <li>
                <div class="level2">
                ...
            </li>
        </ul>
I am currently on the element with the id = "li_hw2", which was found by
level_1_elem = self.driver.find_element(By.ID, "li_hw2")
Now I want to go from level_1_elem to class="level2". Is it possible to go to the parent li and then to level2? Maybe with XPath?
Hint: It is necessary to go via the parent li and not directly to the element level2 with
self.driver.find_element(By.CLASS_NAME, "level2")
The best-suited locator for your use case is XPath, since you want to traverse upwards as well as downwards in the HTML DOM.
level_1_elem = self.driver.find_element(By.XPATH, "//div[@id='li_hw2']")
and then, using the level_1_elem web element, you can do the following:
To go directly to the following-sibling:
level_1_elem.find_element(By.XPATH, "./following-sibling::ul/descendant::div[@class='level2']")
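Putting the two steps together, a minimal sketch (assuming the Selenium 4 find_element(By.XPATH, ...) API and the sibling structure shown in the question's HTML):

from selenium.webdriver.common.by import By

# locate the toggle div by its id, then hop to the sibling <ul>
# and down to the nested div with class "level2"
level_1_elem = self.driver.find_element(By.ID, "li_hw2")
level_2_elem = level_1_elem.find_element(
    By.XPATH, "./following-sibling::ul/descendant::div[@class='level2']"
)
print(level_2_elem.get_attribute("class"))  # expected: "level2"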
Are you sure about the HTML? I think the <ul> should group all the <li> elements; if that's the case then it's easy, if not I really don't get that HTML.
//div[@class="level1"]/parent::li/parent::ul/li/div[@class="level2"]
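If the <ul> does group the <li> elements as assumed here, that expression can be fed straight into a single find_element call (a sketch using the Selenium 4 By API):

level_2_elem = self.driver.find_element(
    By.XPATH,
    "//div[@class='level1']/parent::li/parent::ul/li/div[@class='level2']"
)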
I have a web page something like:
<div class='product-pod-padding'>
    <a class='header product-pod--ie-fix' href='link1'/>
    <div> SKU#1</div>
</div>
<div class='product-pod-padding'>
    <a class='header product-pod--ie-fix' href='link2'/>
    <div> SKU#2</div>
</div>
<div class='product-pod-padding'>
    <a class='header product-pod--ie-fix' href='link3'/>
    <div> SKU#3</div>
</div>
When I tried to loop through the products with the following code, it gave the expected outcome:
products = driver.find_elements_by_xpath("//div[@class='product-pod-padding']")
for index, product in enumerate(products):
    print(product.text)
SKU#1
SKU#2
SKU#3
However, if I try to locate the href of each product, it only returns the first item's link:
products = driver.find_elements_by_xpath("//div[@class='product-pod-padding']")
for index, product in enumerate(products):
    print(index)
    print(product.text)
    url = product.find_element_by_xpath("//a[@class='header product-pod--ie-fix']").get_attribute('href')
    print(url)
SKU#1
link1
SKU#2
link1
SKU#3
link1
What should I do to get the correct links?
This should make your code functional:
[...]
products = driver.find_elements_by_xpath("//div[@class='product-pod-padding']")
for index, product in enumerate(products):
    print(index)
    print(product.text)
    url = product.find_element_by_xpath(".//a[@class='header product-pod--ie-fix']").get_attribute('href')
    print(url)
[..]
The crux here is the dot in front of the XPath, which means searching within the element only.
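To see the scoping difference side by side, a small sketch (using the same older find_element_by_xpath API as the question, against the question's HTML):

for product in driver.find_elements_by_xpath("//div[@class='product-pod-padding']"):
    # absolute XPath: evaluated from the document root, so it is always the first <a> on the page
    first_on_page = product.find_element_by_xpath("//a[@class='header product-pod--ie-fix']")
    # relative XPath (leading dot): evaluated only inside this product <div>
    inside_this_product = product.find_element_by_xpath(".//a[@class='header product-pod--ie-fix']")
    print(first_on_page.get_attribute('href'), inside_this_product.get_attribute('href'))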
You need to use a relative XPath in order to locate a node inside another node.
//a[@class='header product-pod--ie-fix'] will always return the first match from the beginning of the DOM.
You need to put a dot . at the front of the XPath locator:
".//a[@class='header product-pod--ie-fix']"
This will retrieve the desired element inside the parent element:
url = product.find_element_by_xpath(".//a[@class='header product-pod--ie-fix']").get_attribute('href')
So, your entire code could be as follows:
products = driver.find_elements_by_xpath("//div[@class='product-pod-padding']")
for index, product in enumerate(products):
    print(index)
    url = product.find_element_by_xpath(".//a[@class='header product-pod--ie-fix']").get_attribute('href')
    print(url)
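If you are on Selenium 4, the find_element_by_* / find_elements_by_* helpers are deprecated in favour of find_element(By...) / find_elements(By...); the relative-XPath rule stays the same. A sketch of the equivalent loop that also collects the links into a list:

from selenium.webdriver.common.by import By

products = driver.find_elements(By.XPATH, "//div[@class='product-pod-padding']")
urls = []
for index, product in enumerate(products):
    # the leading dot keeps the search scoped to the current product card
    url = product.find_element(By.XPATH, ".//a[@class='header product-pod--ie-fix']").get_attribute('href')
    urls.append(url)
    print(index, url)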
I have this HTML code:
<ul class="select-items">
    <li class="select-items-item" data-label="<first>">Number 1</li>
</ul>
and I want to add a new <li> like <li class="select-items-item" data-label="<second>">Number 2</li>.
What I tried so far:
I click the list using the XPath (that works fine because I can see the list open),
addAttribute on the element (that is the tricky part, not working),
add JavaScript code (not working either):
liItem = """
    var ul = document.getElementByXpath('//*[@class="select-items"]');
    var li = document.createElement("li");
    li.appendChild(document.createTextNode("Element 4"));
    ul.appendChild(li);
"""
driver.execute_script(liItem)
Any suggestions?
To add the following <li> i.e.
<li class="select-items-item" data-label="<second>">Number 2</li>
You can use the following line of code:
scriptTxt = """
    var ul = document.getElementsByClassName('select-items').item(0);
    var li = document.createElement('li');
    li.setAttribute('class', 'select-items-item');
    li.setAttribute('data-label', '<second>');
    li.textContent = 'Number 2';
    ul.appendChild(li);
"""
driver.execute_script(scriptTxt)
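As a quick sanity check you can read the injected node back right after the script runs (a sketch; the locator assumes the class and data-label attributes that the script above sets):

# the new <li> is a regular DOM node once the script has executed,
# so it can be located like any other element
new_li = driver.find_element_by_xpath("//ul[@class='select-items']/li[@data-label='<second>']")
print(new_li.text)  # expected: Number 2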
References
You can find a couple of relevant detailed discussions in:
Unable to find_element_by_id when element is created via execute_script
I have the following HTML page. I want to get all the links inside a specific div. Here is my HTML code:
<div class="rec_view">
    <a href='www.xyz.com/firstlink.html'>
        <img src='imga.png'>
    </a>
    <a href='www.xyz.com/seclink.html'>
        <img src='imgb.png'>
    </a>
    <a href='www.xyz.com/thrdlink.html'>
        <img src='imgc.png'>
    </a>
</div>
I want to get all the links that are present in the rec_view div. So the links that I want are:
www.xyz.com/firstlink.html
www.xyz.com/seclink.html
www.xyz.com/thrdlink.html
Here is the Python code which I tried with
from selenium import webdriver
webpage = r"https://www.testurl.com/page/123/"
driver = webdriver.Chrome("C:\chromedriver_win32\chromedriver.exe")
driver.get(webpage)
element = driver.find_element_by_css_selector("div[class='rec_view']>a")
link = element.get_attribute("href")
print(link)
How can I get those links using Selenium in Python?
As per the HTML you have shared, to get the list of all the links present within the rec_view div you can use the following code block:
from selenium import webdriver
driver = webdriver.Chrome(executable_path=r'C:\chromedriver_win32\chromedriver.exe')
driver.get('https://www.testurl.com/page/123/')
elements = driver.find_elements_by_css_selector("div.rec_view a")
for element in elements:
    print(element.get_attribute("href"))
Note: As you need to collect all the href attributes from the div tag, instead of find_element_* you need to use find_elements_*. Additionally, > refers to an immediate <a> child node, whereas you need to traverse all the <a> child nodes, so the desired css_selector will be div.rec_view a.
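For comparison, the XPath equivalent of the same descendant search, plus a compact way to collect the links into a list (a sketch against the HTML shown in the question):

# CSS descendant form, as used above
links = [a.get_attribute("href")
         for a in driver.find_elements_by_css_selector("div.rec_view a")]

# XPath descendant form: every <a> anywhere under the rec_view div
links_xpath = [a.get_attribute("href")
               for a in driver.find_elements_by_xpath("//div[@class='rec_view']//a")]

print(links)                 # the three hrefs from the rec_view div
print(links == links_xpath)  # both selectors match the same elements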
The HTML is like this:
<body>
    <div class="div_a">
        <ul class="ul">
            <li>li</li>
            <li>li</li>
        </ul>
    </div>
    <div class="div_b">
        <a>link</a>
        <ul>
            <li>div_b li</li>
        </ul>
    </div>
</body>
Trying to get div_a's li:
node = page.xpath("//div[@class='div_a']")
li1 = node.xpath("//li")
but li1 got all the li elements in the page, not only div_a's. I cannot figure out what the issue is.
Your XPath - //li - is actually taking elements from the root element, hence you get all li elements. If you want to take only the elements inside node, you should give a relative XPath. Example -
li1 = node.xpath(".//li")
The . above means the current element, which would be the div element with the class attribute 'div_a'.
Fix your second XPath to be relative instead of absolute as Anand suggests, or simply use a single XPath to get the li elements in the first place:
li1 = page.xpath("//div[@class='div_a']//li")
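A minimal, self-contained reproduction with lxml that shows both fixes (a sketch; it assumes page was built with lxml.html.fromstring, and note that xpath() returns a list, so the element itself is taken with [0]):

from lxml import html

page = html.fromstring("""
<body>
    <div class="div_a"><ul class="ul"><li>li</li><li>li</li></ul></div>
    <div class="div_b"><a>link</a><ul><li>div_b li</li></ul></div>
</body>
""")

node = page.xpath("//div[@class='div_a']")[0]        # xpath() returns a list
print(len(node.xpath("//li")))                       # 3 -- absolute: searches the whole document
print(len(node.xpath(".//li")))                      # 2 -- relative: searches inside div_a only
print(len(page.xpath("//div[@class='div_a']//li")))  # 2 -- single-expression alternative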
I'm trying to parse an HTML file. There are many nested divs in this HTML. I want to get all the child divs, but not the grandchildren etc.
Here is a pattern:
<div class='main_div'>
    <div class='child_1'>
        <div class='grandchild_1'></div>
    </div>
    <div class='child_2'>
    ...
    ...
    </div>
So the command I'm looking for would return 2 elements: the divs whose classes are 'child_1' and 'child_2'.
Is it possible?
I've tried to use main_div.find_elements_by_tag_name('div') but it returned all nested divs in the div.
Here is a way to find the direct div children of the div with class name "main_div":
driver.find_elements_by_xpath('//div[@class="main_div"]/div')
The key here is the use of a single slash, which makes the search inside "main_div" non-recursive, finding only direct div children.
Or, with a CSS selector:
driver.find_elements_by_css_selector("div.main_div > div")
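A quick way to see the difference between the recursive and the direct-child search, using the class names from the question's pattern (a sketch with the same older find_elements_by_* API used above):

main_div = driver.find_element_by_xpath('//div[@class="main_div"]')

all_divs = main_div.find_elements_by_tag_name('div')    # children and grandchildren
direct_divs = main_div.find_elements_by_xpath('./div')  # direct children only

print(len(all_divs), len(direct_divs))                  # e.g. 3 vs 2 for the pattern above
print([d.get_attribute('class') for d in direct_divs])  # ['child_1', 'child_2']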