I'm trying to .click() a few elements in a pop-up list on a webpage, but I keep getting a StaleElementReferenceException when I try to move_to_element.
The code is based around a number of clickable elements in a feed. When clicked, these elements result in a pop-up box with more clickable elements that I want to access.
I access the pop-up box with the following code, where popupbox_links is a list of dictionaries holding the coordinates and links for the pop-up boxes:
for coordinate in popupbox_links:
    actions = ActionChains(driver)
    actions.move_to_element(coordinate["Popupbox location"]).perform()
    time.sleep(3)
    popupboxpath = coordinate["Popupbox link"]
    popupboxpath.click()
    time.sleep(3)
This works fine. But once the pop-up box is open, I want to perform the following:
seemore = driver.find_element_by_link_text("See More")
time.sleep(2)
actions.move_to_element(seemore).perform()
time.sleep(2)
seemore.click()
time.sleep(3)

findbuttons = driver.find_elements_by_link_text("Button")
time.sleep(2)
print(findbuttons)

for button in findbuttons:
    time.sleep(2)
    actions.move_to_element(button).perform()
    time.sleep(2)
    button.click()
    time.sleep(randint(1, 5))
The trouble starts at actions.move_to_element, on both "See More" and "Button". Even though print(findbuttons) returns a non-empty list containing the elements I want to click, Selenium seems unable to move_to_element on them and instead throws a StaleElementReferenceException.
To make it more confusing, the script sometimes works, but usually it just crashes.
Any clues on how to solve this? Big thanks in advance.
I'm running the latest Selenium on Python 3.6 with the Chrome WebDriver.
A StaleElementReferenceException says that the element reference has gone stale because something changed in the page after you created the WebElement object. In your case that is most likely triggered by button.click().
The simplest solution is to re-find the element on every iteration, rather than reusing the references collected before the loop.
The following change might work:
findbuttons = driver.find_elements_by_link_text("Button")
time.sleep(2)
print(findbuttons)

for i in range(len(findbuttons)):
    time.sleep(2)
    elem = driver.find_elements_by_link_text("Button")[i]
    actions.move_to_element(elem).perform()
    time.sleep(2)
    elem.click()
    time.sleep(randint(1, 5))
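If re-finding by index still races with the page re-rendering, another common pattern is to retry the find-and-click whenever a StaleElementReferenceException is raised. A minimal sketch along those lines, assuming the same driver and "Button" link text as above (click_fresh is just an illustrative helper name, not part of the original code):

import time
from random import randint

from selenium.common.exceptions import StaleElementReferenceException
from selenium.webdriver.common.action_chains import ActionChains

def click_fresh(driver, link_text, index, attempts=3):
    # Re-find the element on each attempt and click it, retrying if the DOM
    # was re-rendered between the lookup and the click.
    for _ in range(attempts):
        try:
            elem = driver.find_elements_by_link_text(link_text)[index]
            ActionChains(driver).move_to_element(elem).perform()
            elem.click()
            return
        except StaleElementReferenceException:
            time.sleep(1)  # give the page a moment to settle, then retry
    raise StaleElementReferenceException("'%s' #%d kept going stale" % (link_text, index))

for i in range(len(driver.find_elements_by_link_text("Button"))):
    click_fresh(driver, "Button", i)
    time.sleep(randint(1, 5))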
In my Python scraping script I'm in a situation where, while collapsing multiple buttons on a page, a couple of pop-ups randomly appear and the script fails.
These two pop-ups are already handled at the beginning of the script, but the website decides to show them again in a non-systematic way.
This is the relevant part of the script, where it sleeps for 3 seconds between one click and the next:
collapses = driver.find_elements_by_css_selector('.suf-CompetitionMarketGroup suf-CompetitionMarketGroup-collapsed ')
for collapse in collapses:
    collapse.click()
    sleep(3)
These are the two lines of the script where I click on the pop-ups at the beginning:
wait.until(EC.presence_of_element_located((By.XPATH, '/html/body/div[3]/div/div[2]/div[2]'))).click()
driver.find_element(By.XPATH, '/html/body/div[1]/div/div[4]/div[1]/div/div[3]/div[4]/div[3]/div').click()
Do you think there's a way to keep the process running, ready to click these two pop-ups away, without the script failing?
You can try to close the pop-ups every time the click inside the for loop fails:
def try_closing_popups():
    try:
        wait.until(EC.presence_of_element_located((By.XPATH, '/html/body/div[3]/div/div[2]/div[2]'))).click()
        driver.find_element(By.XPATH, '/html/body/div[1]/div/div[4]/div[1]/div/div[3]/div[4]/div[3]/div').click()
    except Exception:
        pass

collapses = driver.find_elements_by_css_selector('.suf-CompetitionMarketGroup suf-CompetitionMarketGroup-collapsed ')
for collapse in collapses:
    try:
        collapse.click()
    except Exception:
        try_closing_popups()
    sleep(3)
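Note that in the version above the click that failed is skipped once the pop-ups are closed. If you also want it retried, a small variation (a sketch reusing the try_closing_popups helper and the collapses list from the code above) could be:

for collapse in collapses:
    try:
        collapse.click()
    except Exception:
        # A pop-up probably got in the way: dismiss it, then retry the click once.
        try_closing_popups()
        try:
            collapse.click()
        except Exception:
            pass  # still failing, skip this element and move on
    sleep(3)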
I am fairly new to Selenium and I've been trying to automate the login for this [website], but for some reason element.click() does not seem to work when I try to click the Login button. I keep getting this error: TypeError: 'str' object is not callable.
Here's my code:
try:
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.XPATH("//li[@class='item-119']/a[contains(text(),'Login')]")))
    )
    print("TEST")
    element.click()
finally:
    driver.quit()
I've also tried using By.CLASS_NAME and By.PARTIAL_LINK_TEXT and I keep getting the same error. I've spent a few hours researching and looking through Stack Overflow, but I haven't been able to solve it. Please let me know where I went wrong.
The TypeError comes from calling By.XPATH as if it were a function: By.XPATH is just the string "xpath", so the locator has to be passed as a tuple, (By.XPATH, "..."). The locator itself also looks wrong; Selenium fails to find such an element.
Also, it's better to use the expected condition for element visibility instead of element presence, since an element can already be present in the DOM while it is still not visible or clickable.
So, please try this:
element = wait.until(EC.visibility_of_element_located((By.XPATH, "//div[@id='header']//a[@class='login-btn']")))
print("TEST")
element.click()
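If the link is visible but momentarily covered or disabled while the page finishes rendering, element_to_be_clickable is a stricter condition you could try instead; a sketch using the same XPath as above:

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

wait = WebDriverWait(driver, 10)
element = wait.until(EC.element_to_be_clickable((By.XPATH, "//div[@id='header']//a[@class='login-btn']")))
element.click()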
You can try with the below XPath:
try:
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.XPATH, "(//a[@class='login-btn'])[2]"))
    ).click()
    print("TEST")
finally:
    driver.quit()
I'm practicing with selenium right now but I can't seem to get it to print the correct URL.
import time
from selenium import webdriver

driver = webdriver.Firefox()

home_page = ''
driver.get(home_page)
time.sleep(15)

for i in range(1, 9):
    listing_page = driver.find_element_by_xpath('//*[@id="m_property_lst_cnt_realtor_more_' + str(i) + '"]').click()
    realtor_url = driver.find_element_by_xpath('//*[@id="lblMediaLinks"]/a').click()
    print(driver.current_url)
    driver.get(home_page)
    time.sleep(5)
I need the URL of the webpage that opens when Selenium clicks the realtor_url element, but it instead prints the URL of the page from the first click (listing_page).
(Note: the webpage opened from realtor_url is a completely different website, if that helps.)
You need to switch to the newly opened window before printing the URL, then close the newly opened window and switch back to the original window.
# do the click that opens another window
WebDriverWait(driver, 10).until(lambda d: len(d.window_handles) == 2)
driver.switch_to.window(driver.window_handles[1])
print(driver.current_url)
driver.close()
driver.switch_to.window(driver.window_handles[0])
A couple of other things...
.click() returns None, so nothing useful is ever stored in the listing_page or realtor_url variables.
sleep() is a bad practice; don't use it. Replace each sleep() with a relevant WebDriverWait.
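Putting that together, the loop from the question could look roughly like this. It keeps the original XPaths, replaces the sleeps with explicit waits, and assumes the realtor link really does open a second window (a sketch, not tested against the actual site):

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

wait = WebDriverWait(driver, 15)
driver.get(home_page)

for i in range(1, 9):
    wait.until(EC.element_to_be_clickable(
        (By.XPATH, '//*[@id="m_property_lst_cnt_realtor_more_' + str(i) + '"]'))).click()
    wait.until(EC.element_to_be_clickable(
        (By.XPATH, '//*[@id="lblMediaLinks"]/a'))).click()

    # The realtor link opens a second window: wait for it, switch, read the URL.
    wait.until(lambda d: len(d.window_handles) == 2)
    driver.switch_to.window(driver.window_handles[1])
    print(driver.current_url)
    driver.close()
    driver.switch_to.window(driver.window_handles[0])

    driver.get(home_page)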
I believe you need to change the "focus" of the driver to the new window in order to print the correct URL. The driver may still be pointing at the original window rather than the one opened by the click.
Try switching the window handle; every new window that is opened gets its own handle.
I hope this helps. See the documentation on window_handles and the existing answers on handling multiple windows in Python.
EDIT:
The below should bring you to the most recently opened window:
driver.switch_to.window(driver.window_handles[-1])
I developed a Selenium script that posts automatic comments in Facebook groups.
It works reasonably well, but the click() method fails if the targeted element is not visible in the browser viewport.
As a workaround I'm using the execute_script("window.scrollTo(x, y);") method, but it's not an ideal solution. The piece of code that must be improved is the following:
text_box = driver.find_element_by_class_name("UFIAddCommentInput")
try:
    driver.execute_script("window.scrollTo(100, 0);")
    text_box.click()
except:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    text_box.click()

element = driver.switch_to.active_element
element.send_keys(frase)
element.send_keys(Keys.RETURN)
It first tries the element at the top of the page and, if the click() fails, it tries again at the bottom.
However, is there a more effective way to scroll to the element returned by find_element_by_class_name?
You can try
text_box.location_once_scrolled_into_view
text_box.click()
to scroll the page down to the required element and then click it.
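Another common option is to ask the browser to scroll the element into view via JavaScript, which avoids guessing coordinates altogether; a sketch using the same text_box element (the {block: 'center'} option assumes a reasonably recent browser):

text_box = driver.find_element_by_class_name("UFIAddCommentInput")
# Scroll the comment box into the middle of the viewport, then click it.
driver.execute_script("arguments[0].scrollIntoView({block: 'center'});", text_box)
text_box.click()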
I have a dynamic page that loads products as the user scrolls down. I want to get the total number of products rendered on the page. Currently I am using the following code to scroll to the bottom until all the products are displayed.
elems = WebDriverWait(self.driver, 30).until(EC.presence_of_all_elements_located((By.CLASS_NAME, "x")))
print(len(elems))
a = len(elems)
self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
time.sleep(4)
elem1 = WebDriverWait(self.driver, 30).until(EC.presence_of_all_elements_located((By.CLASS_NAME, "x")))
b = len(elem1)
while b > a:
    self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(4)
    elem1 = WebDriverWait(self.driver, 30).until(EC.presence_of_all_elements_located((By.CLASS_NAME, "x")))
    a = b
    b = len(elem1)
    print(b)
This works nicely, but is there a better way of doing this?
You can scroll to the bottom with this single line of code:
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
And if you want to keep scrolling down forever, you can try this:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time

driver = webdriver.Firefox()
driver.get("https://twitter.com/BarackObama")

while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(3)
I am not sure about the value in time.sleep(), because loading the data may take longer, or less.
For more information, please check the official documentation.
have fun :)
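If the fixed sleep worries you, one common pattern is to keep scrolling until document.body.scrollHeight stops growing, which adapts to however long the new content takes to load; a rough sketch:

import time
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://twitter.com/BarackObama")

last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(3)  # still a pause, but only as a polling interval
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break  # no new content was loaded, so we are at the bottom
    last_height = new_height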
I think you could condense your code down to this:
prior = 0
while True:
    self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    current = len(WebDriverWait(self.driver, 30).until(EC.presence_of_all_elements_located((By.CLASS_NAME, "x"))))
    if current == prior:
        return current
    prior = current
I did away with the duplicated lines by moving them into the loop, which meant making it a while True: loop and moving the condition check inside (because, unfortunately, Python lacks a do-while loop).
I also dropped the sleep and print statements; on my own page the same number of elements load whether or not I sleep between scrolls. In my case I don't need the count at any point, I just need to know when the list is exhausted (but I added a return value so you can get the final count if you need it). If you want to print every intermediate count, print current right after it is assigned in the loop.
If you have no idea how many elements might be added to the page, but you just want to get all of them, you could loop like this (a sketch follows the list):
scroll down as described above
wait a few seconds
save the size of the page source (xxx.page_source)
if the size of the page source is larger than the last page source size saved, loop back and scroll down some more
I suppose that screenshot size might work fine too, depending upon the page you're loading, but this is working in my current program.
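A sketch of that page-source idea, under the assumption that newly loaded products always increase the length of driver.page_source:

import time

last_size = 0
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(4)  # give the lazy-loaded products time to arrive
    size = len(driver.page_source)
    if size <= last_size:
        break  # the page stopped growing, so everything is loaded
    last_size = size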