I'm learning Selenium, and I've tried to make an automatic clicker for this game: https://orteil.dashnet.org/cookieclicker/
I've made an ActionChain to click on the big cookie on the left side and put it into a loop.
But it clicks only once.
I also tried this loop on the https://clickspeedtest.com/ page, with the same result.
Adding actions.pause(1) and time.sleep(1) inside the loop did not help either.
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
import time
PATH = r"C:\Program Files (x86)\chromedriver.exe"
driver = webdriver.Chrome(PATH)
driver.get("https://orteil.dashnet.org/cookieclicker/")
driver.implicitly_wait(5)
bigCookie = driver.find_element_by_id("bigCookie")
actions = ActionChains(driver)
actions.click(bigCookie)
for i in range(10):
    actions.perform()
When you call methods for actions on the ActionChains object, the actions are stored in a queue in the ActionChains object. When you call perform(), the events are fired in the order they are queued up.
I assume that after the first time you run perform(), the queue stays empty and you probably need to store a new set of actions in the queue. So something like this:
actions = ActionChains(driver)
for i in range(10):
    actions.click(bigCookie)
    actions.perform()
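A related option (my sketch, not part of the answer above; it assumes a Selenium version that provides ActionChains.reset_actions() and reuses the actions, driver and bigCookie variables from the code above) is to keep one chain and clear its queue after every perform(), so clicks don't accumulate across iterations:
actions = ActionChains(driver)
for i in range(10):
    actions.click(bigCookie)
    actions.perform()
    # clear the queued actions so the next iteration queues exactly one click
    actions.reset_actions()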
ActionChains are used in a chain pattern. In other words, actions can be queued up one by one, then performed. When you call perform(), the events are fired in the order they are queued up.
You were almost there. However, to perform the clicks in a loop you need to create the ActionChain, including both events (the move and the click), inside the loop as follows:
from selenium.webdriver.common.by import By

driver.get('https://orteil.dashnet.org/cookieclicker/')
for i in range(10):
    ActionChains(driver).move_to_element(driver.find_element(By.CSS_SELECTOR, "div#bigCookie")).click().perform()
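For a plain click like this one, a simpler sketch (my addition, not part of the answer) is to skip ActionChains entirely and call click() on the element itself:
from selenium.webdriver.common.by import By

driver.get("https://orteil.dashnet.org/cookieclicker/")
big_cookie = driver.find_element(By.ID, "bigCookie")  # locate the big cookie once
for _ in range(10):
    big_cookie.click()  # WebElement.click() fires a fresh click on each iteration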
I have the following try statement that finds a button which resets the current page I am on. In summary, the page reloads:
try:
    reset_button = D.find_element(By.XPATH, "//button[starts-with(@class,'resetBtn rightActionBarBtn ng-star-inserted')]")
    reset_button.click()
    D.implicitly_wait(5)
    ok_reset_botton = D.find_element(By.ID, 'okButton')
    D.implicitly_wait(5)
    print(ok_reset_botton)
    ok_reset_botton.click()
    D.implicitly_wait(5)
    # Trying to reset the current worksheet
except:
    pass

print(D.current_url)
grupao_ab = D.find_element(By.XPATH, '//descendant::div[@class="slicer-restatement"][1]')
D.implicitly_wait(5)
grupao_ab.click()
The weird thing is that every time that try statement gets executed, I get the following error in the log:
selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
Which, according to the log, happens at the following line of code:
grupao_ab.click()
When I took a look at the reason given by Selenium, it says it is because the element is no longer in the DOM. But the element grupao_ab is not even defined on that page, so why is it giving me that error? If any extra information is needed, just comment.
First of all, StaleElementReferenceException means that the web element reference you are trying to access is no longer valid. This normally happens after the page is reloaded, and that is exactly what happens here.
What happened is the following: you clicked the reset button, immediately after that you located the grupao_ab element, and shortly after that you tried to click it. But between the moment you located the grupao_ab element with grupao_ab = D.find_element(By.XPATH,'//descendant::div[@class="slicer-restatement"][1]') and the line where you try to click it, the reload started. So the previously collected web element, which is actually a reference to a physical element in the DOM, no longer points to that element.
What you can do here is: after clicking the reset button, add a short delay so that the refresh actually starts, and after that wait for the grupao_ab element to become clickable. WebDriverWait with expected_conditions explicit waits should be used for that.
Also, you should understand that D.implicitly_wait(5) is not a pause command. It sets the timeout for the find_element and find_elements methods to wait for the presence of the element being searched for. Normally we do not set this timeout at all, since it is better to use WebDriverWait with expected_conditions explicit waits rather than implicitly_wait implicit waits, and you should never mix the two types of waits.
And even if you do want to set implicitly_wait to some value, there is normally no need to set it more than once; the setting applies to the entire driver session.
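For illustration only (a minimal sketch of my own using the question's D driver variable; the two styles are shown side by side purely for comparison, in real code you would pick one):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Implicit wait: set once, applies to every find_element/find_elements call
# for the whole driver session.
D.implicitly_wait(5)

# Explicit wait: create a WebDriverWait once and reuse it to wait for
# specific conditions on specific elements.
wait = WebDriverWait(D, 20)
ok_button = wait.until(EC.element_to_be_clickable((By.ID, "okButton")))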
Please try changing your code as follows:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
import time

wait = WebDriverWait(D, 20)
try:
    wait.until(EC.element_to_be_clickable((By.XPATH, "//button[starts-with(@class,'resetBtn rightActionBarBtn ng-star-inserted')]"))).click()
    ok_reset_botton = wait.until(EC.element_to_be_clickable((By.ID, "okButton")))
    print(ok_reset_botton)
    ok_reset_botton.click()
    time.sleep(0.5)  # a short pause to let the reload start
except:
    pass

print(D.current_url)
# wait for the element on the refreshed page to become clickable
wait.until(EC.element_to_be_clickable((By.XPATH, '//descendant::div[@class="slicer-restatement"][1]'))).click()
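A related refinement (my sketch, not part of the answer, reusing the wait object and imports from the snippet above): instead of the fixed time.sleep(0.5), you can wait for the old element reference to go stale, which signals that the reload has actually started:
# locate the element before triggering the reset, so we have a reference
# that will become stale once the page reloads
old_grupao_ab = D.find_element(By.XPATH, '//descendant::div[@class="slicer-restatement"][1]')
# ... click the reset and ok buttons as above ...
wait.until(EC.staleness_of(old_grupao_ab))  # True once the old reference is detached from the DOM
wait.until(EC.element_to_be_clickable((By.XPATH, '//descendant::div[@class="slicer-restatement"][1]'))).click()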
The automation task requires moving to an open tab, executing a command (a button click) and then moving on to the next tab. This process is repeated for the next 4-5 tabs.
I already have code that automates this process: a for loop that goes through each of the window handles for the opened tabs and automates the button click. But the issue is that on each tab the driver waits for the button click to process and the new page to load before moving on to the next tab. Ideally, I want the driver to click the button and move to the next tab immediately, without waiting for the new page to load.
Is there some method for achieving this? Are there any other options besides Selenium for this sort of automation?
My current code looks something like this:
handles = driver.window_handles
for i in range(4):
    driver.switch_to_window(handles[i])
    driver.find_element_by_id('submit').click()

for i in range(3):
    driver.switch_to_window(driver.window_handles[i+1])
    chain = ActionChains(driver)
    element = driver.find_element_by_name('submit')
    chain.move_to_element_with_offset(element, 0, 0)
    chain.click(element)
    chain.release(element)
    chain.perform()
I used the above code to click the button with ActionChains, but I'm getting a StaleElementReferenceException. The error is triggered at chain.perform():
StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
Using action chains to perform the click will allow Selenium to continue without waiting for the result of the click. Tested on Linux, Python 3.4, chromedriver 2.38:
from selenium.webdriver.common.action_chains import ActionChains
from selenium.common.exceptions import *

chain = ActionChains(driver)
try:
    element = driver.find_element_by_id('submit')
    chain.move_to_element_with_offset(element, 0, 0)
    chain.click(element)
    chain.release(element)
    # Perform the chained actions, including the left-click.
    chain.perform()
except:
    print("Failed to click element")
    raise
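Putting that back into the tab loop from the question could look roughly like this (a sketch on my part, untested against the original pages):
from selenium.webdriver.common.action_chains import ActionChains

for handle in driver.window_handles:
    driver.switch_to.window(handle)  # switch_to.window is the non-deprecated form of switch_to_window
    element = driver.find_element_by_name('submit')  # re-locate the button in each tab
    chain = ActionChains(driver)
    chain.move_to_element_with_offset(element, 0, 0)
    chain.click(element)
    chain.release(element)
    chain.perform()  # fire the click and move on without waiting for the new page to load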
I'm using Python 3.6 and Selenium 3.8.1 with the Chrome browser to simulate users entering an order. The app we use has a particularly frustrating implementation for automation: a loading modal pops up whenever a filter for a product is loading, but it does not truly cover the elements underneath it. Additionally, load time fluctuates wildly, but with an upper bound. If I don't use excessive sleep statements, Selenium will either start clicking wildly before the correct objects are loaded, or click the element but, of course, hit the loading modal. (Fun side note: the loading modal only fills the screen view, so Selenium is also able to interact with items below the fold. :P)
To get around this:
def kill_evil_loading_modal(self):
    # pause for a second to make sure the loader has a chance to pop
    time.sleep(1)
    # pulling locator type and value from another file: ("id", "locator_id")
    loading_modal = ProductsLocators.loading_modal_selector
    # call a function that returns True/False depending on whether the element exists
    check_for_evil = self.is_element_exist(*loading_modal)
    while check_for_evil == True:
        check_for_evil = self.is_element_exist(*loading_modal)
This works great! Where I had a ton of evil time.sleep(x) statements to avoid the loading modal, I'm now catching it and waiting until it's gone to move forward.
If I only had to deal with that two or three times, I would move on. Sadly, this loading modal hits after every click ... so this is what my main script looks like now:
new_quote02_obj.edit_quote_job(**data)
new_quote03_obj.kill_evil_loading_modal()
new_quote03_obj.click_product_dropdown()
new_quote03_obj.kill_evil_loading_modal()
new_quote03_obj.click_product_dropdown_link()
new_quote03_obj.kill_evil_loading_modal()
new_quote03_obj.select_category_dropdown(scenario_data['category_name'])
new_quote03_obj.kill_evil_loading_modal()
new_quote03_obj.select_market_dropdown(scenario_data['local_sales_market'])
new_quote03_obj.kill_evil_loading_modal()
new_quote03_obj.add_products_job(scenario_data['product_list_original'])
new_quote03_obj.kill_evil_loading_modal()
new_quote03_obj.click_done_btn()
new_quote03_obj.kill_evil_loading_modal()
new_quote03_obj.next_btn_page()
How can I refactor to stay DRY?
If you want to wait until the modal disappears and avoid using time.sleep(), you can try an explicit wait:
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait as wait
wait(driver, 10).until_not(EC.visibility_of_element_located(("id", "locator_id")))
or
wait(driver, 10).until(EC.invisibility_of_element_located(("id", "locator_id")))
This should allow you to wait up to 10 seconds (increase the delay if needed) until the element with the specified locator ("id", "locator_id") becomes invisible.
If the modal appears after each click, you can implement your own click method, like:
def click_n_wait(by, value, timeout=10):
    wait(driver, timeout).until(EC.element_to_be_clickable((by, value))).click()
    wait(driver, timeout).until(EC.invisibility_of_element_located(("id", "locator_id")))
and use it as
click_n_wait("id", "button_id")
As you mentioned in your question, a loading modal pops up whenever a filter for a product is loading. Irrespective of whether the loader covers the elements underneath it or not, you can simply wait for the next intended element you want to interact with. Following this approach you can completely get rid of the kill_evil_loading_modal() function, which looks like overhead to me. As a replacement for the kill_evil_loading_modal() function, you can invoke the WebDriverWait() method along with the proper expected_conditions as required, as follows:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# other code
WebDriverWait(driver, 2).until(EC.element_to_be_clickable((By.XPATH, "xpath_of_element_A"))).click()
WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.XPATH, "xpath_of_element_B"))).click()
WebDriverWait(driver, 3).until(EC.element_to_be_clickable((By.XPATH, "xpath_of_element_C"))).click()
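Applied to the flow from the question, that could look something like the sketch below; the XPaths are placeholders of mine, since the real locators live in the poster's page-object files:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
# each step waits for the next element it actually needs, so the loading modal
# is simply outlasted instead of being polled for
wait.until(EC.element_to_be_clickable((By.XPATH, "xpath_of_product_dropdown"))).click()
wait.until(EC.element_to_be_clickable((By.XPATH, "xpath_of_product_dropdown_link"))).click()
wait.until(EC.element_to_be_clickable((By.XPATH, "xpath_of_done_btn"))).click()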
I have a problem using a function to scroll down with the PageDown key via Selenium's ActionChains in Python 3.5 on Ubuntu 16.04 x64.
What I want is for my program to scroll down by pressing PageDown twice, so that it reaches the bottom and the selected element stays visible.
I tried making another function using Keys.END, but it did not work, so I assume it has something to do with the ActionChains not closing, or something like that.
The function looks like this:
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
...
def scrollDown(self):
    body = browser.find_element_by_xpath('/html/body')
    body.click()
    ActionChains(browser).send_keys(Keys.PAGE_DOWN).perform()
and I use it in another file like this:
mod.scrollDown()
The first time I call it, it scrolls down just as if the PageDown key had been pressed, but the next time nothing happens.
It does not matter where I call it; the second (or third...) time it does not execute.
I tried doing it manually and pressed the PageDown key twice; it works as expected.
The console does not return any error, nor does the IDE.
Maybe, if it has to do with the action chains, you can just do it like this:
from selenium.webdriver.common.keys import Keys
body = browser.find_element_by_css_selector('body')
body.send_keys(Keys.PAGE_DOWN)
Hope it works!
I had to click on the body for the Keys.PAGE_DOWN to work but didn't need to use the action chain:
from selenium.webdriver.common.keys import Keys
body = driver.find_element_by_css_selector('body')
body.click()
body.send_keys(Keys.PAGE_DOWN)
from selenium.webdriver.common.keys import Keys
driver.find_element_by_css_selector('body').send_keys(Keys.PAGE_DOWN)
I have a page whose source code is not available, but there is an input box where the cursor is blinking.
Can I write something into the text box without finding the element? I mean, is there some way where send_keys can automatically target the focused input box and type the input into it?
My code obviously does not work:
driver.send_keys("testdata")
Solved it:
from selenium.webdriver.common.action_chains import ActionChains
actions = ActionChains(self.driver)
actions.send_keys('dummydata')
actions.perform()
If you get an error about 'self' in this code:
from selenium.webdriver.common.action_chains import ActionChains
actions = ActionChains(self.driver)
actions.send_keys('dummydata')
actions.perform()
just use:
actions = ActionChains(driver)
I don't have comment rights; that's why I put this as an answer.
Edit: Added this enhancement as a comment on the original answer.
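As a further option (my addition, not from these answers): recent Selenium versions expose the currently focused element through switch_to.active_element, so you can send keys to it without locating it yourself:
# switch_to.active_element returns whatever element currently has focus
focused = driver.switch_to.active_element
focused.send_keys("testdata")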
This worked for me:
driver.find_element_by_tag_name('body').send_keys(' ')
(I used this to send a space character to scroll through a page.)