How to get element by attribute with Selenium, Xpath and Expected Condition - python

This is what I'm using:
getByAttribute = WebDriverWait(amazonDriver, 10).until(EC.visibility_of_element_located((By.XPATH, "//div[@an-attribute='data-category']")))
The element looks as follows:
<div class='nav-subnav' data-category='drugstore'>
This is present on every Amazon product page.
It times out and does not find the element.

Use @data-category in the predicate to select the element by its attribute:
getByAttribute = WebDriverWait(amazonDriver, 10).until(EC.visibility_of_element_located((By.XPATH, "//div[@data-category]")))
CSS Selector:
getByAttribute = WebDriverWait(amazonDriver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div[data-category]")))
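Selenium needs a live browser, but the attribute-predicate behaviour itself can be sketched offline with the standard library's xml.etree, which supports the same [@attr] and [@attr='value'] XPath predicates on well-formed markup (the fragment below is made up to mirror the Amazon element):

```python
# Browser-free sketch using xml.etree, which supports the same
# attribute-predicate XPath syntax as the locators above.
import xml.etree.ElementTree as ET

html = "<body><div class='nav-subnav' data-category='drugstore'>x</div></body>"
root = ET.fromstring(html)

# Predicate on attribute *presence* - matches //div[@data-category]
assert root.find(".//div[@data-category]") is not None
# Predicate on attribute *value*
assert root.find(".//div[@data-category='drugstore']") is not None
# No element has an attribute literally named 'an-attribute',
# which is why the original locator times out
assert root.find(".//div[@an-attribute='data-category']") is None
```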

Related

selenium exception error in random executions

I've written the below lines of code:
elem = driver.find_elements(By.XPATH, "//span[@class='drop__expand']")
for i in elem:
    i.click()
and I get the below error:
selenium.common.exceptions.ElementClickInterceptedException: Message: Element <span class="drop__expand"> is not clickable at point (112,20) because another element <div class="wrapper"> obscures it
I tried this, without any result:
driver.find_elements(By.XPATH, "//div[@class='wrapper']").click()
elem = driver.find_elements(By.XPATH, "//span[@class='drop__expand']")
for i in elem:
    i.click()
How can I handle this?
There are two functions:
find_elements() with s at the end - to get a list of all matching elements, even if there is only one element or no elements at all.
find_element() without s at the end - to get only the first matching element.
So to click the element you may need the second function (without s):
driver.find_element(By.XPATH, "//div[@class='wrapper']").click()
or you have to get the first element from the list when you use the first function (with s):
driver.find_elements(By.XPATH, "//div[@class='wrapper']")[0].click()
or use a for-loop:
for item in driver.find_elements(By.XPATH, "//div[@class='wrapper']"):
    item.click()
But the wrapper may not be clickable - if it is some popup message then you may need to find a button on this wrapper. But you didn't show the URL for this page, so only you have access to the full HTML to check whether it has a button and to find the XPath for that button. Here I can't help.
You may also try to use JavaScript to click the hidden element(s); maybe that will work.
Something like this:
elem = driver.find_elements(By.XPATH, "//span[@class='drop__expand']")
for i in elem:
    driver.execute_script("arguments[0].click()", i)
But all of this is only a guess, because you didn't show the URL for this page and we can't test it.
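The list-versus-single-element distinction above can be sketched without a browser using the standard library's xml.etree, where findall() plays the role of find_elements() and find() the role of find_element() (the markup is invented for illustration):

```python
# Sketch of find_elements() (list) vs find_element() (single element),
# using xml.etree as a stand-in since Selenium needs a live browser.
import xml.etree.ElementTree as ET

root = ET.fromstring(
    "<body><div class='wrapper'>a</div><div class='wrapper'>b</div></body>"
)

matches = root.findall(".//div[@class='wrapper']")  # list, like find_elements()
first = root.find(".//div[@class='wrapper']")       # one element, like find_element()

assert isinstance(matches, list) and len(matches) == 2
assert first.text == "a"  # same element as matches[0]
```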

Cannot click an element, Selenium Python

With Selenium, I'm trying to click an element, but it's not working for this particular element.
The page is here: page (username/password: admin/admin)
wait2 = WebDriverWait(driver, 10)
element = wait2.until(EC.element_to_be_clickable((By.XPATH, '//*[@id="operate2a5a0448a8bf44a8898ec13e95b152fc"]/div/div[2]')))
element.click()
I tried this on other elements on the same page and had no problem.
No idea why it's not working on this element.
operate2a5a0448a8bf44a8898ec13e95b152fc seems to be a dynamically created id.
The simplest way to access this element is with a text-based XPath locator:
wait2.until(EC.element_to_be_clickable((By.XPATH, "//div[contains(text(),'Entry Registration')]"))).click()
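One way to see that the id is generated is its shape: it ends in what looks like a 32-character hex string. A quick heuristic check (the pattern and length threshold are assumptions for illustration, not anything Selenium provides):

```python
# Heuristic sketch: flag ids ending in a long hex run, which usually
# indicates a generated (and therefore unstable) id.
import re

def looks_generated(element_id):
    return re.search(r"[0-9a-f]{16,}$", element_id) is not None

assert looks_generated("operate2a5a0448a8bf44a8898ec13e95b152fc")
assert not looks_generated("main-navigation")
```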
You can try the below XPath as well:
//img[contains(@src,'registration')]/..
In code:
wait2 = WebDriverWait(driver, 10)
element = wait2.until(EC.element_to_be_clickable((By.XPATH, "//img[contains(@src,'registration')]/..")))
element.click()

Selenium can't locate element inside ::before / ::after

Element to be located
I am trying to locate a span element inside a webpage. I have tried by XPath, but it raises a timeout error. I want to locate the title span element inside a Facebook Marketplace product. url
here is my code:
def title_detector():
    title = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.XPATH, 'path'))).text
    list_data = title.split("ISBN", 1)
Try this XPath: //span[contains(text(),'isbn')]
You can't locate pseudo-elements with XPath, only with a CSS selector.
I see it's Facebook with its ugly class names...
I'm not sure this will work for you, maybe these class names are dynamic, but it worked for me this time.
Anyway, the CSS locator for that span element is .dati1w0a.qt6c0cv9.hv4rvrfc.discj3wi .d2edcug0.hpfvmrgz.qv66sw1b.c1et5uql.lr9zc1uh.a8c37x1j.keod5gw0.nxhoafnm.aigsh9s9.qg6bub1s.fe6kdd0r.mau55g9w.c8b282yb.iv3no6db.o0t2es00.f530mmz5.hnhda86s.oo9gr5id
So, since we are trying to get its ::before, we can do it with the following JavaScript script:
span_locator = ".dati1w0a.qt6c0cv9.hv4rvrfc.discj3wi .d2edcug0.hpfvmrgz.qv66sw1b.c1et5uql.lr9zc1uh.a8c37x1j.keod5gw0.nxhoafnm.aigsh9s9.qg6bub1s.fe6kdd0r.mau55g9w.c8b282yb.iv3no6db.o0t2es00.f530mmz5.hnhda86s.oo9gr5id"
script = "return window.getComputedStyle(document.querySelector('{}'),':before').getPropertyValue('content')".format(span_locator)
print(driver.execute_script(script).strip())
In case the CSS selector above doesn't work because the class names are dynamic there, try to locate that span with some stable CSS locator; it is possible. Just try it several times until you see which class names are stable and which are not.
UPD:
You don't need to locate the pseudo-elements there; it will be enough to catch the span itself. So, something like this will be enough:
span_locator = ".dati1w0a.qt6c0cv9.hv4rvrfc.discj3wi .d2edcug0.hpfvmrgz.qv66sw1b.c1et5uql.lr9zc1uh.a8c37x1j.keod5gw0.nxhoafnm.aigsh9s9.qg6bub1s.fe6kdd0r.mau55g9w.c8b282yb.iv3no6db.o0t2es00.f530mmz5.hnhda86s.oo9gr5id"
def title_detector():
    title = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, span_locator))).text
    title = title.strip()
    list_data = title.split("ISBN", 1)
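The split step at the end can be checked in isolation; with a made-up title string (the real page text isn't shown in the question), split("ISBN", 1) divides the text into at most two parts around the first occurrence of "ISBN":

```python
# Illustrative only: the title text here is invented, not taken from Facebook.
title = "Some Example Book Title ISBN 9780747532743"
list_data = title.split("ISBN", 1)  # maxsplit=1: split at the first "ISBN" only
assert list_data == ["Some Example Book Title ", " 9780747532743"]
```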

Selenium Webdriver Find Element XPath

I am trying to pull the price of the first listing on this website with the following code, but it's returning nothing but blanks. I am navigating to the website, hitting F12, and then copying the XPATH into the line of code below. Any thoughts on why this wouldn't work?
the following gives nothing:
/html/body/div[1]/div/section[6]/div/div[3]/div/div[1]/a/div/div[1]/div/div[2]/div/div[2]/span/sup
While this shows the listing seller successfully:
/html/body/div[1]/div/section[6]/div/div[3]/div/div[1]/a/div/div[1]/div/div[2]/div/div
driver.get('https://swappa.com/mobile/buy/apple-iphone-xs/t-mobile')
pricing = WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.XPATH, '//*[@id="listing_previews"]/div[1]/a/div/div[1]/div/div[2]/div/div[2]/span/sup'))).text
Use get_attribute() and switch to the span:
pricing = WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.XPATH, '//*[@id="listing_previews"]/div[1]/a/div/div[1]/div/div[2]/div/div[2]/span')))
print(pricing.get_attribute('textContent'))
Outputs
$379
The reason you are not getting a value from element.text is that the span element is hidden by its parent tag.
<div class="col-xs-3 col-sm-2 hidden-md hidden-lg text-right">
    <span class="price"><sup>$</sup>379</span>
</div>
Instead of .text you need to use the textContent attribute, which retrieves the value even from hidden nodes.
Use the following code block to retrieve all the product prices.
driver.get('https://swappa.com/mobile/buy/apple-iphone-xs/t-mobile')
allproductdetails=WebDriverWait(driver,10).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR,"div.listing_row")))
for prod in allproductdetails:
    print(prod.find_element_by_css_selector("div.media-body span.price").get_attribute("textContent"))
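The .text vs textContent difference can be sketched offline: Selenium's .text returns only rendered text, while textContent concatenates every descendant text node, visible or hidden. For the markup above, ElementTree's itertext() produces the same concatenation textContent would:

```python
# Browser-free sketch: itertext() concatenates all descendant text
# nodes in document order, like the DOM's textContent property.
import xml.etree.ElementTree as ET

span = ET.fromstring('<span class="price"><sup>$</sup>379</span>')
assert "".join(span.itertext()) == "$379"
```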

'list' object has no attribute 'get_attribute' while iterating through WebElements

I'm trying to use Python and Selenium to scrape multiple links on a web page. I'm using find_elements_by_xpath and I'm able to locate a list of elements but I'm having trouble changing the list that is returned to the actual href links. I know find_element_by_xpath works, but that only works for one element.
Here is my code:
path_to_chromedriver = 'path to chromedriver location'
browser = webdriver.Chrome(executable_path = path_to_chromedriver)
browser.get("file:///path to html file")
all_trails = []
# finds all elements with the class 'text-truncate trail-name', then
# retrieves the a element
# this seems to be giving us just the element objects but not the
# actual links
find_href = browser.find_elements_by_xpath('//div[@class="text truncate trail-name"]/a[1]')
all_trails.append(find_href)
print(all_trails)
This code is returning:
<selenium.webdriver.remote.webelement.WebElement
(session="dd178d79c66b747696c5d3750ea8cb17",
element="0.5700549730549636-1663")>,
<selenium.webdriver.remote.webelement.WebElement
(session="dd178d79c66b747696c5d3750ea8cb17",
element="0.5700549730549636-1664")>,
I expect the all_trails array to be a list of links like: www.google.com, www.yahoo.com, www.bing.com.
I've tried looping through the all_trails list and running the get_attribute('href') method on the list but I get the error:
Does anyone have any idea how to convert the selenium WebElement's to href links?
Any help would be greatly appreciated :)
Let us see what's happening in your code:
Without any visibility into the concerned HTML, it seems the following line returns two WebElements in the list find_href, which is in turn appended to the all_trails list:
find_href = browser.find_elements_by_xpath('//div[@class="text truncate trail-name"]/a[1]')
Hence when we print the list all_trails, both WebElements are printed. Hence no error.
As per the error snapshot you have provided, you are trying to invoke the get_attribute("href") method on a list, which is not supported. Hence you see the error:
'list' object has no attribute 'get_attribute'
Solution:
To get the href attribute, we have to iterate over the list as follows:
find_href = browser.find_elements_by_xpath('//your_xpath')
for my_href in find_href:
    print(my_href.get_attribute("href"))
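The same iterate-and-collect pattern can be sketched without a browser using the standard library's xml.etree, where get() on each element plays the role of get_attribute() (the markup and URLs are invented for illustration):

```python
# Browser-free sketch of collecting href attributes element by element.
import xml.etree.ElementTree as ET

html = (
    "<body>"
    "<div class='text-truncate trail-name'><a href='https://example.com/one'>Link 1</a></div>"
    "<div class='text-truncate trail-name'><a href='https://example.com/two'>Link 2</a></div>"
    "</body>"
)
root = ET.fromstring(html)

all_trails = []
for link in root.findall(".//div[@class='text-truncate trail-name']/a"):
    all_trails.append(link.get("href"))  # call per element, never on the list

assert all_trails == ["https://example.com/one", "https://example.com/two"]
```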
If you have the following HTML:
<div class="text-truncate trail-name">
    <a href="...">Link 1</a>
</div>
<div class="text-truncate trail-name">
    <a href="...">Link 2</a>
</div>
<div class="text-truncate trail-name">
    <a href="...">Link 3</a>
</div>
<div class="text-truncate trail-name">
    <a href="...">Link 4</a>
</div>
Your code should look like:
all_trails = []
all_links = browser.find_elements_by_css_selector(".text-truncate.trail-name>a")
for link in all_links:
    all_trails.append(link.get_attribute("href"))
Where all_trails is a list of links (Link 1, Link 2 and so on).
Hope it helps you!
Use the singular form find_element_by_css_selector instead of find_elements_by_css_selector, which returns a list of WebElements; otherwise you need to loop through each WebElement to use its attributes:
find_href = browser.find_elements_by_xpath('//div[@class="text truncate trail-name"]/a[1]')
for i in find_href:
    all_trails.append(i.get_attribute('href'))
get_attribute works on elements of that list only, not the list itself. For example:
def fetch_img_urls(search_query: str):
    driver.get('https://images.google.com/')
    search = driver.find_element(By.CLASS_NAME, "gLFyf.gsfi")
    search.send_keys(search_query)
    search.send_keys(Keys.RETURN)
    links = []
    try:
        time.sleep(5)
        urls = driver.find_elements(By.CSS_SELECTOR, 'a.VFACy.kGQAp.sMi44c.lNHeqe.WGvvNb')
        for url in urls:
            # print(url.get_attribute("href"))
            links.append(url.get_attribute("href"))
        print(links)
    except Exception as e:
        print(f'error{e}')
    driver.quit()
