Should be Simple XPATH? - python

Using Python and Selenium I'm trying to click a link if it contains a given text, in this case say 14:10, and this would be the DIV I'm after.
<div class="league_check" id="hr_selection_18359391" onclick="HorseRacingBranchWindow.showEvent(18359391);" title="Odds Available"> <span class="race-status"> <img src="/i/none_v.gif" width="12" height="12" onclick="HorseRacingBranchWindow.toggleSelection(18359391); cancelBubble(event);"> </span>14:10 * </div>
I've been watching the browser move manually. I know the DIV has loaded before my code fires but I can't figure out what the heck it is actually doing.
Looked pretty straightforward. I'm not great at XPATH but I usually manage the basics.
justtime = "14:10"
links = Driver.find_elements_by_xpath("//div[contains(.,justtime)]")
As far as I can see no other link on that page contains the text 14:10 but when I loop through links and print it out it's showing basically every link on that page.
I've tried to narrow it down to that class name combined with the text:
justtime = "14:10"
links = Driver.find_elements_by_xpath("//div[contains(.,justtime) and (contains(@class, 'league_check'))]")
Which doesn't return anything at all. I'm really stumped on this; it's making no sense to me.

Currently, your XPath doesn't make use of the justtime Python variable. Instead, it references a child element <justtime>, which doesn't exist within the <div>. An expression of the form contains(., nonExistentElement) always evaluates to True, because nonExistentElement translates to an empty string here. This is probably why your initial XPath returned more elements than expected.
Try incorporating the value of the justtime variable into your XPath using string interpolation, and don't forget to enclose the value in quotes so that it is properly recognized as an XPath string literal:
justtime = "14:10"
links = Driver.find_elements_by_xpath("//div[contains(.,'%s')]" % justtime)
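The same interpolation fixes your second attempt too; a minimal sketch combining the text match with the class filter from your question:

justtime = "14:10"
# Interpolate the Python variable and keep the class filter
links = Driver.find_elements_by_xpath(
    "//div[contains(., '%s') and contains(@class, 'league_check')]" % justtime)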

You need to use a wait for the element:
wait = WebDriverWait(driver, 10)
element = wait.until(EC.element_to_be_clickable((By.ID,'someid')))
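Putting the wait together with the XPath from the answer above, a minimal self-contained sketch (the 10-second timeout and the clickable condition are assumptions):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

justtime = "14:10"
# Wait until the race-time div is present and clickable, then click it
wait = WebDriverWait(Driver, 10)
link = wait.until(EC.element_to_be_clickable(
    (By.XPATH, "//div[contains(., '%s') and contains(@class, 'league_check')]" % justtime)))
link.click()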

Related

How would you click all texts on a page with Xpath - Python

So, this won't be a long description, but I am trying to have XPath click on all of the elements (more specifically, text elements) that are on a page. I really don't know where to start, and all of the other questions about clicking everything on a page are based on a class, not on text, using XPath.
Here is some of my code:
browser.find_element_by_xpath("//*[text()='sample']").click()
I really don't know how I would go about to make it click all of the "sample" texts throughout the whole page.
Thanks in advance!
Well, let's say you have lots of divs or spans that contain text. Let's look at divs:
<div class="some class name" visibility="visible" some other attribute> Text here </div>
Now, when you go to developer mode (F12), in the Elements panel, if you try //div[contains(@class, 'some class name')] and there is more than one match, you can store all of them in a list like below:
driver.find_elements(By.XPATH, '//div[contains(@class, "some class name")]')
This will give you a list of div web elements.
div_list = driver.find_elements(By.XPATH, '//div[contains(@class, "some class name")]')
Now you have a python list and you can manipulate this list as per your requirement.
for div_text in div_list:
    print(div_text.text)
You can try the same approach for spans or other web elements; see the sketch below.
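For instance, a hypothetical span variant (the class name is just a placeholder):

from selenium.webdriver.common.by import By

span_list = driver.find_elements(By.XPATH, '//span[contains(@class, "some class name")]')
for span_text in span_list:
    print(span_text.text)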
You just need to use that xpath to define an array of elements instead, like this:
my_elements = browser.find_elements_by_xpath("//*[text()='sample']")
for element in my_elements:
    element.click()
That loop may not work as is (you may need to add a wait for the element), but that's the idea.
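One way to harden it, as a sketch: re-locate the elements on every pass, so that clicks which re-render the page don't leave you holding stale references (the exception handling here is an assumption about how the page behaves):

from selenium.common.exceptions import StaleElementReferenceException

# Count the matches once, then re-find the list before each click,
# since a click may re-render the page and invalidate old references.
count = len(browser.find_elements_by_xpath("//*[text()='sample']"))
for i in range(count):
    elements = browser.find_elements_by_xpath("//*[text()='sample']")
    if i < len(elements):
        try:
            elements[i].click()
        except StaleElementReferenceException:
            pass  # the element went stale between lookup and click; skip it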

Selenium parsing whole document instead of webelement

this problem is really driving me crazy! Here's my code:
list_divs = driver.find_elements_by_xpath("//div[@class='myclass']")
print(f'Number of divs found: {len(list_divs)}')  # Correct number displayed
for art in list_divs:
    mybtn = art.find_elements_by_xpath('//button')  # There are 2 buttons in each div
    print(f'Number of buttons found = {len(mybtn)}')  # Incorrect number (129 instead of 2)
    mybtn[2].click()  # Wrong button clicked!
The button clicked is NOT in the art HTML but at the very beginning of the webpage! It seems like Selenium is parsing the whole document instead of the web element art...
I've printed the outerHTML of the variable art and it's correct: only the div code, which contains 2 buttons. So why is the find_elements_by_xpath() function, applied to the web element art, parsing the whole HTML page instead of just the div?
This is totally incomprehensible to me!
Because you are using mybtn = art.find_elements_by_xpath('//button') where //button ignores your search context since it starts from //. Change it to:
mybtn = art.find_elements_by_xpath('.//button')
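With that one-character fix the original loop behaves as intended; a minimal sketch (which of the two buttons you want is an assumption here):

list_divs = driver.find_elements_by_xpath("//div[@class='myclass']")
for art in list_divs:
    mybtn = art.find_elements_by_xpath('.//button')  # relative: searches inside art only
    print(f'Number of buttons found = {len(mybtn)}')  # now 2 per div
    mybtn[1].click()  # second button of this div (Python lists are 0-indexed)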
I can't post any HTML code (the page is about 1,000 lines long).
So far, the only way I've found to get around this is to avoid searching within web elements and instead parse the entire webpage for each element I need:
list_divs = driver.find_elements(By.XPATH, "//div[@class='myclass']")
buttons = driver.find_elements(By.XPATH, "//div[@class='myclass']//button")
and then iterate through the lists to access the button I need for each div. It works perfectly like this. I still don't understand how an XPath applied to a given piece of HTML can return something that is not inside that HTML...
I'll make other tests with other webpages to see if the problem comes from Selenium.
Thanks for help!

PYTHON - Unable To Find Xpath Using Selenium

I have been struggling with this for a while now.
I have tried various ways of finding the XPath for the following highlighted HTML.
I am trying to grab the dollar value listed under the highlighted Strong tag.
Here is what my last attempt looks like below:
try:
    price = browser.find_element_by_xpath(".//table[@role='presentation']")
    price.find_element_by_xpath(".//tbody")
    price.find_element_by_xpath(".//tr")
    price.find_element_by_xpath(".//td[@align='right']")
    price.find_element_by_xpath(".//strong")
    print(price.get_attribute("text"))
except:
    print("Unable to find element text")
I attempted to access the table and all nested elements but I am still unable to access the highlighted portion. Using .text and get_attribute('text') also does not work.
Is there another way of accessing the nested element?
Or maybe I am not using XPath as it properly should be.
I have also tried the below:
price = browser.find_element_by_xpath("/html/body/div[4]")
UPDATE:
Here is the Full Code of the Site.
The Site I am using here is www.concursolutions.com
I am attempting to automate booking a flight using selenium.
When you reach the end of the process of booking and receive the price I am unable to print out the price based on the HTML.
It may have something to do with the HTML being generated by JavaScript that executes as you proceed.
Looking at the structure of the html, you could use this xpath expression:
//div[@id="gdsfarequote"]/center/table/tbody/tr[14]/td[2]/strong
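Used from Python, that expression would look like this (note that tr[14] pins the fare to a fixed row, which is brittle if the quote layout changes):

price = browser.find_element_by_xpath(
    '//div[@id="gdsfarequote"]/center/table/tbody/tr[14]/td[2]/strong')
print(price.text)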
Making it work
There are a few things keeping your code from working.
price.find_element_by_xpath(...) returns a new element.
Each time, you're not saving it to use with your next query. Thus, when you finally ask it for its text, you're still asking the <table> element—not the <strong> element.
Instead, you'll need to save each found element in order to use it as the scope for the next query:
table = browser.find_element_by_xpath(".//table[@role='presentation']")
tbody = table.find_element_by_xpath(".//tbody")
tr = tbody.find_element_by_xpath(".//tr")
td = tr.find_element_by_xpath(".//td[@align='right']")
strong = td.find_element_by_xpath(".//strong")
find_element_by_* returns the first matching element.
This means your call to tbody.find_element_by_xpath(".//tr") will return the first <tr> element in the <tbody>.
Instead, it looks like you want the third:
tr = tbody.find_element_by_xpath(".//tr[3]")
Note: XPath is 1-indexed.
get_attribute(...) returns HTML element attributes.
Therefore, get_attribute("text") will return the value of the text attribute on the element.
To return the text content of the element, use element.text:
strong.text
Cleaning it up
But even with the code working, there’s more that can be done to improve it.
You often don't need to specify every intermediate element.
Unless there is some ambiguity that needs to be resolved, you can ignore the <tbody> and <td> elements entirely:
table = browser.find_element_by_xpath(".//table[@role='presentation']")
tr = table.find_element_by_xpath(".//tr[3]")
strong = tr.find_element_by_xpath(".//strong")
XPath can be overkill.
If you're just looking for an element by its tag name, you can avoid XPath entirely:
strong = tr.find_element_by_tag_name("strong")
The fare row may change.
Instead of relying on a specific position, you can scope using a text search:
tr = table.find_element_by_xpath(".//tr[contains(text(), 'Base Fare')]")
Other <table> elements may be added to the page.
If the table had some header text, you could use the same text search approach as with the <tr>.
In this case, it would probably be more meaningful to scope to the #gdsfarequote <div> rather than something as ambiguous as a <table>:
farequote = browser.find_element_by_id("gdsfarequote")
tr = farequote.find_element_by_xpath(".//tr[contains(text(), 'Base Fare')]")
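Putting those pieces together, a minimal sketch of the cleaned-up version:

farequote = browser.find_element_by_id("gdsfarequote")
tr = farequote.find_element_by_xpath(".//tr[contains(text(), 'Base Fare')]")
strong = tr.find_element_by_tag_name("strong")
print(strong.text)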
But even better, capybara-py provides a nice wrapper on top of Selenium, helping to make this even simpler and clearer:
fare_quote = page.find("#gdsfarequote")
base_fare_row = fare_quote.find("tr", text="Base Fare")
base_fare = base_fare_row.find("strong").text

Xpath clicking not working at all

Quick info: I'm using Mac OS, Python 3.
I have like 800 links that need to be clicked on a page (and many more pages to go so need automation).
They were hidden because you only see those links when you hover over them.
I fixed that by injecting a CSS rule (just saying, in case it's the reason this isn't working).
When I try to find the elements by XPath, it does not click the links afterwards, and it also doesn't always find all of them, just 4 (even when more are displayed in view).
HTML:
Display
When I click copy XPath in the inspector it gives me:
//*[@id="tiles"]/li[3]/div[2]/ul/li[2]/a
But it doesn't work when I use it like this:
driver.find_elements_by_xpath('//*[@id="tiles"]/li[3]/div[2]/ul/li[2]/a')
So two questions:
How do I get them all?
How do I get it to click on each of them?
The pattern in the XPath is the same, with /li[3] being the only number that changes, so I created a for loop to generate them all based on the count on the page, which worked.
So if it can be done with the XPaths I generate myself, matching what I get when I copy the XPath in the inspector, then I only need question 2 answered.
PS: this is the HTML of the parent of that first element:
<li onclick="openPopup(event, 'collect', {item_id: 165214})" class="collect" data-item-id="165214">Display</li>
This XPath,
//a[.="Display"]
will select all a links with anchor text equal to "Display".
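Used from Python, that gives you every matching link in one call; a minimal sketch:

links = driver.find_elements_by_xpath('//a[.="Display"]')
for link in links:
    link.click()  # may need re-locating if a click changes the page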
As per your question, the HTML you have shared, and your code attempts, there is no need to get the <li> tags. Instead, we will get the <a> tags in a list. So, to answer your first question, How do I get them all, you can use the following line of code:
all_Display = driver.find_elements_by_xpath("//*[@id='tiles']//li/div[2]/ul/li[@class='collect']/a[@title='Display']")
Next, to click on each of them, you have to create a loop to iterate through all the <a> tags as follows:
all_Display = driver.find_elements_by_xpath("//*[@id='tiles']//li/div[2]/ul/li[@class='collect']/a[@title='Display']")
for each_Display in all_Display:
    each_Display.click()
Using an XPath with elements by position is not ideal. Instead use a CSS selector to match the attributes for the targeted elements.
Something like:
all_Display = driver.find_elements_by_css_selector("#tiles li[onclick][data-item-id] a[title]")
You can then click them in a loop if none of them is loading a new page:
for element in all_Display:
    element.click()

selenium get element by css selector

I am trying to get user details from each block as given
driver.get("https://www.facebook.com/public/karim-pathan")
wait = WebDriverWait(driver, 10)
li_link = []
for s in driver.find_elements_by_class_name('clearfix'):
    print s
    print s.find_element_by_css_selector('_8o._8r.lfloat._ohe').get_attribute('href')
    print s.find_element_by_tag_name('img').get_attribute('src')
it says:
unable to find element with css selector
Any hint is appreciated.
Just a guess, based on the assumption that you are not logged in: you are getting the exception because, for some clearfix elements, no element matching ._8o._8r.lfloat._ohe exists, so your code isn't reaching the required elements. Anyhow, if you are trying to fetch the href and img source of the results, you need not iterate over every clearfix element. As suggested by @leo.fcx, your CSS is incorrect; using the CSS he provided, you can achieve the desired result like this:
driver.get("https://www.facebook.com/public/karim-pathan")
for s in driver.find_elements_by_css_selector('._8o._8r.lfloat._ohe'):  # no need to iterate over each clearfix element
    print s.get_attribute('href')
    print s.find_element_by_tag_name('img').get_attribute('src')
P.S. Sorry for any syntax errors; I've never explored the Python bindings :)
Since you are using all of the class names that the element has, adding a . to the beginning of your CSS selector should fix it.
Try this:
s.find_element_by_css_selector('._8o._8r.lfloat._ohe')
instead of:
s.find_element_by_css_selector('_8o._8r.lfloat._ohe')
Adding to what @leo.fcx pointed out about the selector, wait for the search results to become visible:
wait.until(EC.visibility_of_element_located((By.ID, "all_search_results")))
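Putting the corrected selector and the explicit wait together, a minimal sketch (the all_search_results id comes from the answer above; whether it appears for logged-out users is an assumption):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver.get("https://www.facebook.com/public/karim-pathan")
wait = WebDriverWait(driver, 10)
# Wait for the results container before querying inside it
wait.until(EC.visibility_of_element_located((By.ID, "all_search_results")))
for s in driver.find_elements_by_css_selector('._8o._8r.lfloat._ohe'):
    print(s.get_attribute('href'))
    print(s.find_element_by_tag_name('img').get_attribute('src'))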

Categories

Resources