selenium exception error in random executions - python

I've written the below lines of code:
elem = driver.find_elements(By.XPATH, "//span[@class='drop__expand']")
for i in elem:
    i.click()
and I get the below error:
selenium.common.exceptions.ElementClickInterceptedException: Message: Element <span class="drop__expand"> is not clickable at point (112,20) because another element <div class="wrapper"> obscures it
I tried this, without any result:
driver.find_elements(By.XPATH, "//div[@class='wrapper']").click()
elem = driver.find_elements(By.XPATH, "//span[@class='drop__expand']")
for i in elem:
    i.click()
How can I handle this?

There are two functions:
find_elements() - with s at the end - returns a list of all matching elements, even if there is only one match or none at all.
find_element() - without s at the end - returns only the first matching element.
So to click the element you may need the second function (without s):
driver.find_element(By.XPATH, "//div[@class='wrapper']").click()
or you have to get the first element from the list when you use the first function (with s):
driver.find_elements(By.XPATH, "//div[@class='wrapper']")[0].click()
or use a for loop:
for item in driver.find_elements(By.XPATH, "//div[@class='wrapper']"):
    item.click()
But the wrapper may not be clickable - if it is some popup message then you may need to find a button on this wrapper. But you didn't show the URL for this page, so only you have access to the full HTML to check whether it has a button and to find the XPath for it. With that, I can't help.
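If the wrapper does contain a dismiss control, one option is to click it before clicking the spans. A minimal browser-free sketch of that idea, written so it works with any driver-like object; the `.//button` locator is an assumption (inspect the real HTML for the actual close control), and `"xpath"` is the string value behind Selenium's `By.XPATH`:

```python
# Hedged sketch: assumes the obscuring <div class="wrapper"> contains some
# dismiss button inside it -- the ".//button" locator is a guess; inspect
# the real page's HTML to find the actual close control.
def dismiss_wrapper(driver):
    # find every wrapper currently in the DOM
    for wrapper in driver.find_elements("xpath", "//div[@class='wrapper']"):
        # look for a button nested anywhere inside this wrapper
        buttons = wrapper.find_elements("xpath", ".//button")
        if buttons:
            buttons[0].click()  # click the first button found in the wrapper
```

After dismissing the wrapper, the original loop over the `drop__expand` spans should no longer be intercepted - assuming the wrapper is the only overlay.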
You may also try to use JavaScript to click the hidden element(s), and maybe it will work.
Something like this:
elem = driver.find_elements(By.XPATH, "//span[@class='drop__expand']")
for i in elem:
    driver.execute_script("arguments[0].click()", i)
But all this is only a guess, because you didn't show the URL for this page and we can't test it.

Related

Locating and storing multiple elements in selenium

I need to find and store the location of some elements so the bot can click on those elements even if the page changes. I have read online that for a single element, storing its location in a variable can help; however, I could not find a way to store the locations of multiple elements in Python. Here is my code:
comment_button = driver.find_elements_by_css_selector("svg[aria-label='Comment']")
for element in comment_button:
    comment_location = element.location
    sleep(2)
for element in comment_location:
    element.click()
this code gives out this error:
line 44, in <module>
element.click()
AttributeError: 'str' object has no attribute 'click'
Is there a way to do this so that when the page refreshes, the script can store the locations and move on to the next location to execute element.click() without any errors?
I have tried implementing ActionChains into my code
comment_button = driver.find_elements_by_css_selector("svg[aria-label='Comment']")
for element in comment_button:
    ac = ActionChains(driver)
    element.click()
    ac.move_to_element(element).move_by_offset(0, 0).click().perform()
    sleep(2)
comment_button = driver.find_element_by_css_selector("svg[aria-label='Comment']")
comment_button.click()
sleep(2)
comment_box = driver.find_element_by_css_selector("textarea[aria-label='Add a comment…']")
comment_box.click()
comment_box = driver.find_element_by_css_selector("textarea[aria-label='Add a comment…']")
comment_box.send_keys("xxxx")
post_button = driver.find_element_by_xpath("//button[#type='submit']")
post_button.click()
sleep(2)
driver.back()
scroll()
However, this method gives out the same error, saying that the page was refreshed and the object cannot be found.
selenium.common.exceptions.StaleElementReferenceException: Message: The element reference of <svg class="_8-yf5 "> is stale; either the element is no longer attached to the DOM, it is not in the current frame context, or the document has been refreshed
Edited:
Assuming the number of such elements does not change after a refresh of the page, you can use the code below:
commentbtns = driver.find_elements_by_css_selector("svg[aria-label='Comment']")
for n in range(1, len(commentbtns)+1):
    Path = "(//*[name()='svg'])["+str(n)+"]"
    time.sleep(2)
    driver.find_element_by_xpath(Path).click()
You can use more sophisticated ways to wait for the element to load properly; however, for simplicity I have used time.sleep.
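The "more sophisticated way" is an explicit wait - in Selenium that is `WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, Path)))`. The underlying idea is simply polling a condition instead of sleeping a fixed time; here is a browser-free sketch of that idea (my own helper, not a Selenium API):

```python
import time

# Browser-free sketch of what an explicit wait does: poll a condition until
# it returns something truthy or the timeout expires.  WebDriverWait works
# the same way, with Selenium's expected_conditions as the condition.
def wait_until(condition, timeout=10.0, poll=0.5):
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result          # hand back whatever the condition produced
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll)           # wait a bit before polling again
```

The advantage over time.sleep(2) is that the loop continues as soon as the condition is satisfied, and it fails loudly with a timeout instead of silently clicking too early.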

How to click in a div?

I'm trying to perform a click on different divs of class topBox. I tried the code below but the click is not performed:
driver.find_element_by_css_selector('#home > div > div.row.topBoxs > div.col-xs-12.col-lg-10 > div > div:nth-child(1) > div').click()
and also :
driver.find_element_by_xpath('//*[#id="home"]/div/div[2]/div[1]/div/div[1]').click()
Below is a snapshot of the code of the box "mes posts" as an example, and of the other boxes.
Try with xpath instead of css_selector.
driver.find_element_by_xpath('xpath_of_your_Div').click()
Here is what you have to do.
# get the number of top boxes first
toolBoxes = len(driver.find_elements_by_css_selector("div.boxs div[class^='topBox ']"))
# now you can click on each of them
for boxNumber in range(toolBoxes):
    # you can use either xpath/css to get the nth box and click()
    driver.find_element_by_xpath("(//div[@class='boxs']/div[starts-with(@class,'topBox ')])[" + str(boxNumber+1) + "]").click()
    # waiting here so that you can see the element is clicked (optional)
    time.sleep(2)
Firstly, check if you are able to locate the element. If not, the following line should throw an error:
the_button = driver.find_element_by_xpath("//div[@class='boxs']/div[1]/div")
Alternatively, please try this XPath as well:
"//div[@class='boxs']/div[1]/div/div/p/span"
Wait for the element till it is clickable:
wait = WebDriverWait(driver, 15)
wait.until(EC.element_to_be_clickable((By.XPATH, my_xpath)))
Then try clicking the element with .click().
Please let me know your observations, which would help troubleshoot.
Try this code with this css selector:
all_boxes = driver.find_elements_by_css_selector("div.content-box span")
for box_i in range(len(all_boxes)):
    driver.find_elements_by_css_selector("div.content-box span")[box_i].click()
If you are getting an error, please let me know, what it is.

Clicking through multiple links using selenium

Trying to break a bigger problem I have into smaller chunks
main question
I am currently inputting a boxer's name into an autocomplete box, selecting the first option that comes up (the boxer's name), then clicking "view more" until I get a list of all the boxer's fights and the "view more" button stops appearing.
I am then trying to create a list of onclick hrefs I would like to click, then iteratively clicking on each and getting the HTML from each page/fight. Ideally, I want to extract the text in particular.
This is the code I have written:
page_link = 'http://beta.compuboxdata.com/fighter'
chromedriver = 'C:\\Users\\User\\Downloads\\chromedriver'
cdriver = webdriver.Chrome(chromedriver)
cdriver.maximize_window()
cdriver.get(page_link)
wait = WebDriverWait(cdriver, 20)
wait.until(EC.visibility_of_element_located((By.ID, 's2id_autogen1'))).send_keys('Deontay Wilder')
wait.until(EC.visibility_of_element_located((By.CLASS_NAME, 'select2-result-label'))).click()
while True:
    try:
        element = wait.until(EC.visibility_of_element_located((By.CLASS_NAME, 'view_more'))).click()
    except TimeoutException:
        break
# fighters = cdriver.find_elements_by_xpath("//div[@class='row row-bottom-margin-5']/div[2]")
links = [x.get_attribute('onclick') for x in wait.until(EC.visibility_of_element_located((By.XPATH, "//*[contains(@onclick, 'get_fight_report')]/a")))]
htmls = []
for link in links:
    cdriver.get(link)
    htmls.append(cdriver.page_source)
Running this however gives me the error message:
ElementClickInterceptedException Traceback (most recent call last)
<ipython-input-229-1ee2547c0362> in <module>
10 while True:
11 try:
---> 12 element = wait.until(EC.visibility_of_element_located((By.CLASS_NAME, 'view_more'))).click()
13 except TimeoutException:
14 break
ElementClickInterceptedException: Message: element click intercepted: Element <a class="view_more" href="javascript:void(0);" onclick="_search('0')"></a> is not clickable at point (274, 774). Other element would receive the click: <div class="col-xs-12 col-sm-12 col-md-12 col-lg-12">...</div>
(Session info: chrome=78.0.3904.108)
UPDATE
I have tried looking at a few answers with similar error messages and tried this
while True:
    try:
        element = cdriver.find_element_by_class_name('view_more')
        webdriver.ActionChains(cdriver).move_to_element(element).click(element).perform()
        # element = wait.until(EC.visibility_of_element_located((By.CLASS_NAME, 'view_more'))).click()
    except TimeoutException:
        break
links = [x.get_attribute('onclick') for x in wait.until(EC.visibility_of_element_located((By.XPATH, "//*[contains(@onclick, 'get_fight_report')]/a")))]
htmls = []
for link in links:
    cdriver.get(link)
    htmls.append(cdriver.page_source)
but this seems to create some sort of infinite loop at the ActionChains point; it seems to be constantly waiting for the "view more" href to appear.
The click function should already scroll the window so the element is in the viewable area, so you shouldn't need that ActionChains (I think...), but the original error shows some other element OVER the "view more" button.
You may need to remove (or hide) this element from the DOM, or if it is an HTML overlay, "dismiss" it. So pinpointing this covering element is key, and then deciding on a strategy to uncover the "view more" button.
Your site http://beta.compuboxdata.com/fighter doesn't seem to be working at the moment, so I can't dig in further.
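One common workaround when the covering element can't easily be identified or removed is to attempt the normal click and fall back to a JavaScript click when it is intercepted. A minimal sketch of that strategy as a helper (the `execute_script("arguments[0].click();", ...)` call is a real Selenium API; in real code you would catch `ElementClickInterceptedException` specifically rather than bare `Exception`):

```python
# Sketch of a click-with-fallback helper: try the normal Selenium click,
# and if another element intercepts it, click via JavaScript instead.
# Replace `except Exception` with
# selenium.common.exceptions.ElementClickInterceptedException in real code.
def safe_click(driver, element):
    try:
        element.click()
    except Exception:
        # the JS click bypasses Selenium's "is this point clickable?" check
        driver.execute_script("arguments[0].click();", element)
```

Note that a JavaScript click bypasses the visibility checks a real user is subject to, so use it only after confirming the overlay is purely cosmetic.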

Finding an Xpath link on IMDB.com for the first drop down item in the search bar

I can't seem to find the right XPath for what I am trying to do.
The element I am trying to click on appears to be:
//*[@id="react-autowhatever-1--item-0"]/span/div[2]
but when I run this code:
wait = WebDriverWait(imdbScrape.driver, 10)
wait.until(ec.element_to_be_clickable((By.XPATH,
    '//*[@id="suggestion-search"]'))).send_keys("the"
    + "fifth element")
searchBar = wait.until(ec.element_to_be_clickable((By.XPATH,
    '//*[@id="react-autowhatever-1--item-0"]/span/div[2]')))
searchBar.location_once_scrolled_into_view
urlToScrape = searchBar.get_attribute("href")
print(urlToScrape)
I get "None" as the result. I am assuming it is because, when I look on the page, I don't see any "href" attribute on that element, but I am wondering how I can get the first suggestion's link address.
Any help would be appreciated.
Thanks,
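The suggestion `<div>` itself carries no href, so one approach - assuming (and this is an assumption about IMDb's markup that you should verify in the inspector) that each suggestion row is wrapped in an `<a>` element - is to walk up from the matched element to that ancestor link:

```python
# Hedged sketch: assumes the autocomplete item <div> sits inside an <a> tag
# (inspect the real IMDb markup to confirm).  "./ancestor::a" is a standard
# XPath axis that walks up from the matched element to the nearest enclosing
# link; "xpath" is the string value behind Selenium's By.XPATH.
def suggestion_href(suggestion_element):
    link = suggestion_element.find_element("xpath", "./ancestor::a")
    return link.get_attribute("href")
```

If the suggestions are plain divs with a JavaScript click handler instead, there is no href to read, and clicking the element and reading driver.current_url afterwards would be the fallback.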

How to get element by attribute with Selenium, Xpath and Expected Condition

This is what I'm using:
getByAttribute = WebDriverWait(amazonDriver, 10).until(EC.visibility_of_element_located((By.XPATH, "//div[@an-attribute='data-category']")))
The element looks as follows:
<div class='nav-subnav' data-category='drugstore'>
This is present on every Amazon products page.
It times out and does not find the element.
Use @data-category to get the element by attribute.
getByAttribute = WebDriverWait(amazonDriver, 10).until(EC.visibility_of_element_located((By.XPATH, "//div[@data-category]")))
CSS Selector:
getByAttribute = WebDriverWait(amazonDriver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div[data-category]")))
