So, this won't be a long description, but I am trying to have XPath click on all of the elements (more specifically, text elements) on a page. I really don't know where to start, and all of the other questions about clicking everything on a page are based on a class, not on text matched with XPath.
Here is some of my code:
browser.find_element_by_xpath("//*[text()='sample']").click()
I really don't know how I would go about to make it click all of the "sample" texts throughout the whole page.
Thanks in advance!
Well, let's say that you have lots of divs or spans that contain text. Let's focus on divs:
<div class="some class name" visibility ="visible" some other attribute> Text here </div>
Now when you open developer tools (F12), go to the Elements panel, and search for //div[contains(@class,'some class name')], and there is more than one match, you can store all of them in a list, just like below:
driver.find_elements(By.XPATH, "//div[contains(@class, 'some class name')]")
This will give you a list of div web elements.
div_list = driver.find_elements(By.XPATH, "//div[contains(@class, 'some class name')]")
Now you have a python list and you can manipulate this list as per your requirement.
for div_text in div_list:
    print(div_text.text)
The same way, you can try spans or other web elements.
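Since the original question was about clicking rather than printing, here is a minimal sketch of the same idea applied to clicks; the 'sample' text and the re-query inside the loop (to sidestep stale element references after a click) are assumptions, not something tested against your page:
from selenium.webdriver.common.by import By

# Re-query the locator on each pass, in case a click re-renders the page and stales the handles.
elements = driver.find_elements(By.XPATH, "//*[text()='sample']")
for index in range(len(elements)):
    fresh = driver.find_elements(By.XPATH, "//*[text()='sample']")
    if index < len(fresh):
        fresh[index].click()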
You just need to use that XPath to get a list of elements instead, like this:
my_elements = browser.find_elements_by_xpath("//*[text()='sample']")
for element in my_elements:
    element.click()
That loop may not work as is (you could maybe add a wait for each element), but that's the idea.
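A sketch of that idea with an explicit wait folded in; the 'sample' locator is carried over from the question, and the particular wait condition is an assumption about what "may not work as is" means in your case:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(browser, 10)
# Wait until at least one matching element is clickable before looping over them.
wait.until(EC.element_to_be_clickable((By.XPATH, "//*[text()='sample']")))
for element in browser.find_elements(By.XPATH, "//*[text()='sample']"):
    element.click()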
These two elements have exactly the same attributes except for the text in the pseudo-element. Is there any way I can click on the "Practical" element? I've tried the following to no avail:
driver.find_element(By.XPATH, "//div[contains(text(),'Practical')]").click()
driver.find_element(By.XPATH, "//div[#class='v-tab']")[1].click()
Pseudo-elements are not real elements, so that ::before content effectively behaves as part of the div element's text content.
I can't give you a tested answer since you didn't share a link to the page you are working on, but I can make a suggestion.
I'd try this:
driver.find_element(By.XPATH, "//div[#class='v-tab'][contains(.,'Practical')]")].click()
In case the v-tab class name and the Practical text content are unique enough, it should work. Otherwise you will need to find a more unique locator.
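If the text turns out not to be unique, a fallback is to collect all the tabs and pick one by position; this is a sketch of the indexing idea from the question, fixed to use find_elements (plural), and the index 1 is an assumption about which tab is "Practical":
from selenium.webdriver.common.by import By

# find_element (singular) returns one element and cannot be indexed;
# find_elements returns a list, so [1] picks the second matching tab.
tabs = driver.find_elements(By.XPATH, "//div[@class='v-tab']")
tabs[1].click()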
I can't seem to find an example of this.
What I am trying to do is search a specific div element on the page for text that has the potential to change.
So it'd be like this
<div id="coolId">
<div>This</div>
<div>Can</div>
<div>Change depending on the iteration of the page</div>
</div>
In my case, the div coolId will always be present, but the text within its inner divs and child elements will change depending on which iteration of the page is loaded. I need to search for the presence of certain terms within this coolId div, and this div only, because I know it will always be there, and I'd like to scope the search as tightly as possible so as not to contaminate the results with text from other places on the page.
In my head, I sort of see it like this (using the above example):
"//div[#id='coolId', contains(text(), 'Change depending on the iteration of the page')]"
Or something to this effect.
Does anyone know how to do this?
I'm not completely sure you can build a correct XPath based on all three inner elements' texts.
What you clearly can do is locate the outer div with id = coolId based on one of the inner texts that will be unique, and then extract all the inner text from it.
the_total_text = driver.find_element_by_xpath("//div[@id and contains(., 'Change depending on the iteration of the page')]").text
This will give you
the_total_text = This Can Change depending on the iteration of the page
You should try:
div_element_with_needed_text = driver.find_element_by_xpath("//div[@id='coolId']/div[text()[contains(., 'Change depending on the iteration of the page')]]")
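If the goal is just to test whether a term is present anywhere inside coolId (rather than to grab a specific child), a simple sketch is to read the container's text and check it in Python; the search term here is just the one from the example above:
from selenium.webdriver.common.by import By

# Read only the text inside the coolId container, so nothing else on the page can match.
container_text = driver.find_element(By.XPATH, "//div[@id='coolId']").text
if "Change depending on the iteration of the page" in container_text:
    print("term found inside coolId")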
Considering the HTML:
<div id="coolId">
<div>This</div>
<div>Can</div>
<div>Change depending on the iteration of the page</div>
</div>
to retrieve the variable texts with respect to the parent <div id="coolId"> you can use the following solutions:
Extracting This using xpath:
first_child_text = driver.find_element(By.XPATH, "//div[@id='coolId']/div[1]").text
Extracting Can using xpath:
second_child_text = driver.find_element(By.XPATH, "//div[@id='coolId']/div[2]").text
Extracting Change depending on the iteration of the page using xpath:
third_child_text = driver.find_element(By.XPATH, "//div[@id='coolId']/div[3]").text
To extract all the text from the descendants using xpath:
all_child_text = driver.find_element(By.XPATH, "//div[@id='coolId']").text
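If you'd rather get each child's text separately instead of one concatenated string, here is a sketch using find_elements over the direct children (the same coolId structure as above is assumed):
from selenium.webdriver.common.by import By

# One entry per direct child div of coolId, in document order.
child_texts = [child.text for child in driver.find_elements(By.XPATH, "//div[@id='coolId']/div")]
print(child_texts)  # e.g. ['This', 'Can', 'Change depending on the iteration of the page']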
This problem is really driving me crazy! Here's my code:
list_divs = driver.find_elements_by_xpath("//div[@class='myclass']")
print(f'Number of divs found: {len(list_divs)}') #Correct number displayed
for art in list_divs:
    mybtn = art.find_elements_by_xpath('//button')  # There are 2 buttons in each div
    print(f'Number of buttons found = {len(mybtn)}')  # Incorrect number (129 instead of 2)
    mybtn[2].click()  # Wrong button clicked!
The button clicked IS NOT in the art HTML but at the very beginning of the webpage! It seems like Selenium is searching the whole document instead of the web element art...
I've printed the outerHTML of the variable art and it's correct: only the div code, which contains the 2 buttons. So why is find_elements_by_xpath() applied to the web element art searching the whole HTML page instead of just the div?
Totally incomprehensible to me!
Because you are using mybtn = art.find_elements_by_xpath('//button'), and //button ignores your search context since it starts from //. Change it to:
mybtn = art.find_elements_by_xpath('.//button')
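Put back into the loop from the question, the corrected version would look roughly like this; which of the two buttons you actually need is an assumption (note that the original mybtn[2] would be out of range for a two-item list):
for art in list_divs:
    # The leading dot keeps the search scoped to this div only.
    mybtn = art.find_elements_by_xpath('.//button')
    print(f'Number of buttons found = {len(mybtn)}')  # should now be 2
    mybtn[1].click()  # second button of this div; list indexing is zero-based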
I can't post any html code (the page is about 1,000 lines long).
So far, the only way I found around this is to avoid searching within web elements and instead query the entire page for each kind of element I need:
list_divs = driver.find_elements(By.XPATH, "//div[@class='myclass']")
buttons = driver.find_elements(By.XPATH, "//div[@class='myclass']//button")
and then iterate through the lists to access the button I need for each div. It works perfectly like this. I still don't understand how an XPath applied to a given piece of HTML can return something that is not inside that HTML...
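A sketch of that pairing, assuming each div really does contribute exactly two buttons in document order (which is what the page seems to guarantee):
from selenium.webdriver.common.by import By

list_divs = driver.find_elements(By.XPATH, "//div[@class='myclass']")
buttons = driver.find_elements(By.XPATH, "//div[@class='myclass']//button")
for i, div in enumerate(list_divs):
    # Take the pair of buttons that belongs to this div (two per div, in document order).
    first_button, second_button = buttons[2 * i:2 * i + 2]
    second_button.click()  # or whichever of the pair is needed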
I'll make other tests with other webpages to see if the problem comes from Selenium.
Thanks for help!
Quick info: I'm using Mac OS, Python 3.
I have like 800 links that need to be clicked on a page (and many more pages to go, so I need automation).
They were hidden because you only see those links when you hover over them.
I fixed that by injecting a CSS rule (just saying in case it's the reason it's not working).
When I try to find the elements by XPath, it does not click the links afterwards, and it also doesn't find all of them, only ever 4 (even when more are displayed in view).
HTML:
Display
When I click on copy XPath in the inspector it gives me:
//*[#id="tiles"]/li[3]/div[2]/ul/li[2]/a
But it doesn't work when I use it like this:
driver.find_elements_by_xpath('//*[@id="tiles"]/li[3]/div[2]/ul/li[2]/a')
So two questions:
How do I get them all?
How do I get it to click on each of them?
The pattern in the XPath is the same, with the /li[3] index being the only number that changes; for this I created a for loop to generate them all based on the count on the page, which I did successfully.
So if it can be done with the XPaths I generate myself, corresponding to what I get when I copy the XPath in the inspector, then I only need question 2 answered.
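A sketch of that generation loop; deriving the count from the number of li tiles is an assumption about the page structure:
# XPath positions are 1-based, hence range(1, tile_count + 1).
tile_count = len(driver.find_elements_by_xpath('//*[@id="tiles"]/li'))
link_xpaths = [f'//*[@id="tiles"]/li[{i}]/div[2]/ul/li[2]/a' for i in range(1, tile_count + 1)]
for xpath in link_xpaths:
    driver.find_element_by_xpath(xpath).click()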
PS: this is the HTML of the parent of that first snippet:
<li onclick="openPopup(event, 'collect', {item_id: 165214})" class="collect" data-item-id="165214">Display</li>
This XPath,
//a[.="Display"]
will select all a links with anchor text equal to "Display".
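Hooked into Selenium, that could look like the sketch below; the assumption here is that clicking does not navigate away from the page, otherwise the remaining references go stale:
display_links = driver.find_elements_by_xpath('//a[.="Display"]')
for link in display_links:
    link.click()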
As per your question, the HTML you have shared, and your code attempts, there is no necessity to get the <li> tags. Instead we will get the <a> tags in a list. So to answer your first question, How do I get them all, you can use the following line of code:
all_Display = driver.find_elements_by_xpath("//*[@id='tiles']//li/div[2]/ul/li[@class='collect']/a[@title='Display']")
Next, to click on each of them, you have to create a loop to iterate through all the <a> tags as follows:
all_Display = driver.find_elements_by_xpath("//*[@id='tiles']//li/div[2]/ul/li[@class='collect']/a[@title='Display']")
for each_Display in all_Display:
    each_Display.click()
Using an XPath with elements by position is not ideal. Instead use a CSS selector to match the attributes for the targeted elements.
Something like:
all_Display = driver.find_elements_by_css_selector("#tiles li[onclick][data-item-id] a[title]")
You can then click them in a loop if none of them is loading a new page:
for element in all_Display:
    element.click()
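Since you mentioned the links are only visible on hover, a plain .click() may still raise an element-not-interactable error even after the CSS injection. A hedged fallback is to hover first with ActionChains; this is a general Selenium technique, not something confirmed against your specific page:
from selenium.webdriver.common.action_chains import ActionChains

for element in all_Display:
    # Move the mouse over the element so any hover-only styling applies, then click the same element.
    ActionChains(driver).move_to_element(element).click(element).perform()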
Using Python and Selenium, I'm trying to click a link if it contains certain text. In this case, say 14:10, and this would be the DIV I'm after.
<div class="league_check" id="hr_selection_18359391" onclick="HorseRacingBranchWindow.showEvent(18359391);" title="Odds Available"> <span class="race-status"> <img src="/i/none_v.gif" width="12" height="12" onclick="HorseRacingBranchWindow.toggleSelection(18359391); cancelBubble(event);"> </span>14:10 * </div>
I've been watching the browser move manually. I know the DIV has loaded before my code fires but I can't figure out what the heck it is actually doing.
Looked pretty straightforward. I'm not great at XPATH but I usually manage the basics.
justtime = "14:10"
links = Driver.find_elements_by_xpath("//div*[contains(.,justtime)")
As far as I can see no other link on that page contains the text 14:10 but when I loop through links and print it out it's showing basically every link on that page.
I've tried to narrow it down to that class name and the text it contains:
justtime = "14:10"
links = Driver.find_elements_by_xpath("//div[contains(.,justtime) and (contains(#class, 'league_check'))]")
Which doesn't return anything at all. I'm really stumped on this; it's making no sense to me at all.
Currently, your XPath doesn't make use of the justtime Python variable. Instead, it references a child element <justtime>, which doesn't exist within the <div>. An expression of the form contains(., nonExistentElement) always evaluates to True, because nonExistentElement translates to an empty string here. This is probably one of the reasons why your initial XPath returned more elements than expected.
Try to incorporate the value from the justtime variable into your XPath by using string interpolation, and don't forget to enclose the value in quotes so that it is properly recognized as an XPath string literal:
justtime = "14:10"
links = Driver.find_elements_by_xpath("//div[contains(.,'%s')]" % justtime)
You also need to wait for the element to be clickable:
wait = WebDriverWait(driver, 10)
element = wait.until(EC.element_to_be_clickable((By.ID,'someid')))
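Putting the two ideas together, a sketch that waits for the specific race-time div to become clickable and then clicks it; combining the league_check class with the interpolated time is an assumption carried over from the earlier attempts, not a tested locator for your page:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

justtime = "14:10"
# Wait up to 10 seconds for the matching div to become clickable, then click it.
locator = (By.XPATH, "//div[contains(@class, 'league_check') and contains(., '%s')]" % justtime)
WebDriverWait(Driver, 10).until(EC.element_to_be_clickable(locator)).click()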