As you can see from the picture, the pagination is a bit different. My idea is to get the position of the current-page span and then grab the link from the a tag that follows it.
current_page= driver.find_element(By.CSS_SELECTOR,"span.current-page")
driver.find_element(By.TAG_NAME,"a").click()
but I don't know how to use the current_page element to find the a tag after it and click on it.
Thanks in advance for your help.
To get the next element from the current-page reference, you can use the following CSS selector as a one-liner, or the XPath option further down.
next_page= driver.find_element(By.CSS_SELECTOR,"span.current-page+a")
print(next_page.get_attribute("href"))
next_page.click()
Or you can use this.
current_page= driver.find_element(By.CSS_SELECTOR,"span.current-page")
current_page.find_element(By.XPATH,"./following::a[1]").click()
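For completeness, here is a minimal sketch that puts the one-liner together with an explicit wait; the URL is a placeholder and the assumption is that the next-page link is present on the page:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/listing")  # placeholder URL

# Wait until the <a> immediately following the current-page span is clickable, then click it.
wait = WebDriverWait(driver, 10)
next_page = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "span.current-page + a")))
print(next_page.get_attribute("href"))
next_page.click()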
Currently I'm trying to scrape data from a website, and for that I'm using Selenium.
Everything was working as it should, until I realised I have to scrape a tooltip text.
I have already found several threads on Stack Overflow that provide an answer, but so far I haven't managed to solve this issue.
After a few hours of frustration I realised the following:
I guess this span has nothing to do with the tooltip, because the tooltip looks like this:
There is actually a span that I can't read. I try to read it like this:
bewertung = driver.find_elements_by_xpath('//span[@class="a-icon-alt"]')
for item in bewertung:
    print(item.text)
So Selenium finds this element, but unfortunately '.text' returns nothing. Why is it always empty?
And what is the span from the first screenshot for? By the way, it is not displayed on the website either.
Since you've mentioned that Selenium finds this element, I assume you have already printed the length of the bewertung list,
something like
print(len(bewertung))
If this list has some elements in it, you could probably use innerText:
bewertung = driver.find_elements_by_xpath('//span[@class="a-icon-alt"]')
for item in bewertung:
    print(item.get_attribute("innerText"))
Note that you are using find_elements, which won't throw any error; if it does not find the element, it simply returns an empty list.
If you use find_element instead, it will throw the exact error.
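To illustrate that difference, here is a small sketch (reusing the driver and locator from the question; nothing else is assumed):
from selenium.common.exceptions import NoSuchElementException

# find_elements: returns an empty list when nothing matches, never raises.
matches = driver.find_elements_by_xpath('//span[@class="a-icon-alt"]')
print(len(matches))  # 0 means the locator matched nothing

# find_element: raises immediately, which tells you exactly what went wrong.
try:
    driver.find_element_by_xpath('//span[@class="a-icon-alt"]')
except NoSuchElementException as exc:
    print("Locator did not match anything:", exc)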
Also, I think your XPath points at a span which does not appear in the UI (sometimes such elements don't appear until some action is triggered).
You can try to use this xpath instead:
//i[@data-hook='average-stars-rating-anywhere']//span[@data-hook='acr-average-stars-rating-text']
Something like this in code:
bewertung = driver.find_elements_by_xpath("//i[@data-hook='average-stars-rating-anywhere']//span[@data-hook='acr-average-stars-rating-text']")
for item in bewertung:
    print(item.text)
I'm new to Selenium Webdriver using Python.
I want to select the tag which contains 'Stats'. I have tried multiple ways but failed, as both tags don't have any id and share the same class name.
Please help me with the code to select the tag which contains 'Stats' using Selenium WebDriver with Python.
Here is a list of a few trials. Even when the result is found, I'm unable to click it, and the error message says that lists cannot be clicked.
driver.find_element_by_class_name("css173kae7").click()
driver.find_elements_by_link_text("stats/dashboard").click()
driver.find_elements_by_xpath("//*[contains(text(), 'Stats')]")
I've attached the image of the Inspect element of the code, please have a look at it.
(Updated) Image: inspect element of the anchor tags from which one needs to be selected
Use this code:
driver.find_elements_by_xpath("//*[contains(text(), 'stats')]")
If you are looking for "stats" in the href attribute, use the code below:
driver.find_elements_by_xpath("//*[contains(@href, 'stats')]")
Keep in mind that contains() is case-sensitive.
If you want a case-insensitive search, see the sketch below.
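A common case-insensitive variant in XPath 1.0 (which has no lower-case() function) lower-cases the text with translate() before comparing:
# Lower-case the element text before comparing, so 'Stats', 'stats' and 'STATS' all match.
elements = driver.find_elements_by_xpath(
    "//*[contains(translate(text(), 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'stats')]"
)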
Based on the image of the HTML you shared, I would try this XPATH locator to select that element:
driver.find_element_by_xpath(".//a[@class='css-1qdedno' and @href='/stats/dashboard']")
Try this XPath:
//nav//a[contains(@href,'/stats/dashboard')]
You need to select the desired element by its index value after you get all the elements in a list:
elements= driver.find_elements_by_xpath("//*[contains(text(), 'stats')]")
Then identify and access the element by its index and call the click() function:
For example if the element is at elements[0], then
elements[0].click()
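Putting it together as a small sketch (clicking only the first match, since calling .click() on the returned list is exactly what produces the "lists cannot be clicked" error):
elements = driver.find_elements_by_xpath("//*[contains(text(), 'Stats')]")
if elements:
    # .click() must be called on a single WebElement, not on the list itself.
    elements[0].click()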
Quick info: I'm using Mac OS, Python 3.
I have about 800 links that need to be clicked on a page (and many more pages to go, so I need automation).
They were hidden because you only see those links when you hover over them.
I fixed that by injecting a CSS rule (just saying, in case that's the reason it's not working).
When I try to find elements by XPath, it does not want to click the links afterwards, and it also doesn't find all of them, only ever 4 (even when more are displayed in view).
HTML:
Display
When I click on copy XPath in the inspector, it gives me:
//*[@id="tiles"]/li[3]/div[2]/ul/li[2]/a
But it doesn't work when I use it like this:
driver.find_elements_by_xpath('//*[@id="tiles"]/li[3]/div[2]/ul/li[2]/a')
So two questions:
How do I get them all?
How do I get it to click on each of them?
The pattern in the XPath is the same, with /li[3] being the only number that changes. For this I created a for loop to build them all based on the count on the page, which worked.
So if it can be done with the XPaths I generated myself, corresponding to what I get when I copy the XPath in the inspector, then I only need question 2 answered.
P.S.: this is the HTML of the parent of that first HTML:
<li onclick="openPopup(event, 'collect', {item_id: 165214})" class="collect" data-item-id="165214">Display</li>
This XPath,
//a[.="Display"]
will select all a links with anchor text equal to "Display".
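In code, that could look like the following sketch (using the same legacy find_elements_by_xpath API as the rest of this thread):
display_links = driver.find_elements_by_xpath('//a[.="Display"]')
print(len(display_links))  # how many "Display" links were found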
As per your question, the HTML you have shared, and your code attempts, there is no need to get the <li> tags. Instead we will get the <a> tags in a list. So, to answer your first question (How do I get them all?), you can use the following line of code:
all_Display = driver.find_elements_by_xpath("//*[@id='tiles']//li/div[2]/ul/li[@class='collect']/a[@title='Display']")
Next, to click on each of them, you have to create a loop to iterate through all the <a> tags as follows:
all_Display = driver.find_elements_by_xpath("//*[@id='tiles']//li/div[2]/ul/li[@class='collect']/a[@title='Display']")
for each_Display in all_Display:
    each_Display.click()
Using an XPath that selects elements by position is not ideal. Instead, use a CSS selector that matches the attributes of the targeted elements.
Something like:
all_Display = driver.find_elements_by_css_selector("#tiles li[onclick][data-item-id] a[title]")
You can then click them in a loop if none of them is loading a new page:
for element in all_Display:
    element.click()
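One caveat worth hedging on: if each click opens a popup (as the onclick handler in the question suggests) or causes the list to re-render, the references collected up front can go stale. A sketch of one workaround is to re-query the elements by index on every pass:
selector = "#tiles li[onclick][data-item-id] a[title]"
count = len(driver.find_elements_by_css_selector(selector))
for i in range(count):
    # Re-query on every pass so a re-rendered DOM does not leave us holding stale references.
    link = driver.find_elements_by_css_selector(selector)[i]
    link.click()
    # If a popup opens, it would need to be closed here before the next click.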
I'm having an issue with looking for an element with a particular text attribute utilizing CSS_Selectors in Selenium. Here is the current line of code I have:
element = driver.find_element(By.CSS_SELECTOR, "li.adTypeItem[text='CLASS']")
I've had trouble using the attribute selector brackets in CSS_Selectors in the past, and clearing this up would really go a long way to better understanding how to use CSS_Selectors in the future.
Please note: I'm not looking for an element with a class, but rather for the actual text that is displayed with that element.
Is this the only place on the page where the text "CLASS" is displayed? If so, you can try:
driver.find_element(By.LINK_TEXT, 'CLASS')
driver.find_element(By.PARTIAL_LINK_TEXT, 'CLASS')
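Note that LINK_TEXT and PARTIAL_LINK_TEXT only match anchor (<a>) elements. If the element really is the <li> from your selector, a hedged fallback is an XPath text match, since CSS selectors cannot target rendered text:
from selenium.webdriver.common.by import By

# Matches an <li class="adTypeItem"> whose visible text is exactly "CLASS".
element = driver.find_element(By.XPATH, "//li[@class='adTypeItem'][normalize-space()='CLASS']")
element.click()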
I am trying to get user details from each block, as given below:
driver.get("https://www.facebook.com/public/karim-pathan")
wait = WebDriverWait(driver, 10)
li_link = []
for s in driver.find_elements_by_class_name('clearfix'):
    print s
    print s.find_element_by_css_selector('_8o._8r.lfloat._ohe').get_attribute('href')
    print s.find_element_by_tag_name('img').get_attribute('src')
it says:
unable to find element with css selector
Any hint is appreciated.
Just a guess, based on the assumption that you are not logged in. You are getting the exception because, for some of the clearfix elements, an element matching ._8o._8r.lfloat._ohe does not exist, so your code isn't reaching the required elements. Anyhow, if you are trying to fetch the href and img source of the results, you don't need to iterate over all the clearfix elements. As @leo.fcx suggested, your CSS selector is incorrect; using the CSS he provided, you can achieve the desired result as:
driver.get("https://www.facebook.com/public/karim-pathan")
for s in driver.find_elements_by_css_selector('._8o._8r.lfloat._ohe'):  # no need to iterate over each clearfix element
    print s.get_attribute('href')
    print s.find_element_by_tag_name('img').get_attribute('src')
P.S. Sorry for any syntax errors; I have never explored the Python bindings :)
Since you are using all the class names that the element has, adding a . to the beginning of your CSS selector should fix it.
Try this:
s.find_element_by_css_selector('._8o._8r.lfloat._ohe')
instead of:
s.find_element_by_css_selector('_8o._8r.lfloat._ohe')
Adding to what @leo.fcx pointed out about the selector, wait for the search results to become visible:
wait.until(EC.visibility_of_element_located((By.ID, "all_search_results")))
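For completeness, a sketch of the whole flow with the required imports, using the modern find_elements(By...) API and Python 3 prints; the page structure (results container id and link classes) is assumed from the question:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.facebook.com/public/karim-pathan")

# Wait for the results container before touching the result links.
wait = WebDriverWait(driver, 10)
wait.until(EC.visibility_of_element_located((By.ID, "all_search_results")))

for s in driver.find_elements(By.CSS_SELECTOR, "._8o._8r.lfloat._ohe"):
    print(s.get_attribute("href"))
    print(s.find_element(By.TAG_NAME, "img").get_attribute("src"))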