Click multiple elements at the same time with Selenium (Python)

I am trying to click multiple elements at the same time without any delay in between.
For example, instead of 1 then 2, it should be 1 and 2.
This is the 1st element that I want to click:
WebDriverWait(driver, 1).until(EC.element_to_be_clickable(
    (By.XPATH,
     "//div[contains(@class, 'item-button')]//div[contains(@class, 'button-game')]"))).click()
This is the 2nd element that I want to click (run the first line, then the second line):
WebDriverWait(driver, 1).until(
    EC.frame_to_be_available_and_switch_to_it((By.XPATH, "/html/body/div[4]/div[4]/iframe")))
WebDriverWait(driver, 1.4).until(EC.element_to_be_clickable(
    (By.XPATH, "//*[@id='rc-imageselect']/div[3]/div[2]/div[1]/div[1]/div[4]"))).click()
Basically: click the 1st element, and the 2nd element's first line then second line.
I have tried this, but it did not work:
from threading import Thread

def func1():
    WebDriverWait(driver, 1).until(EC.element_to_be_clickable(
        (By.XPATH,
         "//div[contains(@class, 'item-button')]//div[contains(@class, 'button-game')]"))).click()

def func2():
    WebDriverWait(driver, 1).until(
        EC.frame_to_be_available_and_switch_to_it((By.XPATH, "/html/body/div[4]/div[4]/iframe")))
    WebDriverWait(driver, 1.4).until(EC.element_to_be_clickable(
        (By.XPATH, "//*[@id='rc-imageselect']/div[3]/div[2]/div[1]/div[1]/div[4]"))).click()

if __name__ == '__main__':
    Thread(target=func1).start()
    Thread(target=func2).start()
Use case: I am trying to automate a website and I need to be fast. Sometimes element1 is not shown and the website shows element2 instead, and vice versa. If I do not check element1 and element2 at the same time, I will be too late. The code above starts func1 before func2, not at the same time. Thank you

You can make it simultaneous if you use jQuery:
click_script = """jQuery('%s').click();""" % css_selector
driver.execute_script(click_script)
That's going to click all elements that match that selector at the same time, assuming you can find a common selector between all the elements that you want to click. You may also need to escape quotes before feeding that selector into the execute_script. And you may need to load jQuery if it wasn't already loaded. If there's no common selector between the elements that you want to click simultaneously, then you can use javascript to set a common attribute between them.
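If jQuery isn't already loaded, one way to inject it is with a second execute_script call. This is only a sketch: the ensure_jquery helper name and the CDN URL are my own choices, not part of the answer above.

```python
JQUERY_CDN = "https://code.jquery.com/jquery-3.6.0.min.js"  # assumed CDN URL

def ensure_jquery(driver):
    """Inject jQuery into the page if it is not already present."""
    already_loaded = driver.execute_script("return typeof jQuery !== 'undefined';")
    if not already_loaded:
        # Append a <script> tag pointing at the jQuery CDN.
        driver.execute_script(
            "var s = document.createElement('script');"
            "s.src = arguments[0];"
            "document.head.appendChild(s);",
            JQUERY_CDN)
    return already_loaded
```

Note that the injected script tag loads asynchronously, so you would still need to poll until `typeof jQuery` becomes defined before calling `jQuery(...).click()`.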
You can also try it with JS execution, which might be fast enough:
script = (
    """var simulateClick = function (elem) {
           var evt = new MouseEvent('click', {
               bubbles: true,
               cancelable: true,
               view: window
           });
           var canceled = !elem.dispatchEvent(evt);
       };
       var $elements = document.querySelectorAll('%s');
       var index = 0, length = $elements.length;
       for(; index < length; index++){
           simulateClick($elements[index]);
       }"""
    % css_selector
)
driver.execute_script(script)
As before, you'll need to escape quotes and special characters first.
import re; re.escape(STRING) can be used for that.
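As an alternative stdlib approach for producing a safe JavaScript string literal (re.escape targets regex metacharacters rather than JS quoting), json.dumps can be used; the selector below is just an illustration:

```python
import json

# Hypothetical selector containing a quote character:
css_selector = "a[title='more']"

# json.dumps emits a double-quoted, escaped JS string literal,
# so the selector can be spliced into the script safely:
script = "jQuery(%s).click();" % json.dumps(css_selector)
# -> jQuery("a[title='more']").click();
```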
All of this will be made easier if you use the Selenium Python framework SeleniumBase, which has built-in methods for simultaneous clicking:
self.js_click_all(selector)
self.jquery_click_all(selector)
And each of those above will automatically escape quotes of selectors before running driver.execute_script(SCRIPT), and will also load jQuery if it wasn't already loaded on the current page. If the elements above didn't already have a common selector, you can use self.set_attribute(selector, attribute, value) in order to create a common one before running one of the simultaneous click methods.

Simultaneous clicking is possible with Selenium's ActionChains class.
Reference:
https://github.com/SeleniumHQ/selenium/blob/64447d4b03f6986337d1ca8d8b6476653570bcc1/py/selenium/webdriver/common/actions/pointer_input.py#L24
Here is a code example in which 2 clicks are performed on 2 different elements at the same time:
from selenium.webdriver.common.by import By
from selenium.webdriver.common.actions.mouse_button import MouseButton
from selenium.webdriver.common.action_chains import ActionChains

b1 = driver.find_element(By.ID, 'SomeButtonId')
b2 = driver.find_element(By.ID, 'btnHintId')
location1 = b1.rect
location2 = b2.rect
actions = ActionChains(driver)
actions.w3c_actions.devices = []
new_input = actions.w3c_actions.add_pointer_input('touch', 'finger1')
new_input.create_pointer_move(x=location1['x'] + 1, y=location1['y'] + 2)
new_input.create_pointer_down(MouseButton.LEFT)
new_input.create_pointer_up(MouseButton.LEFT)
new_input2 = actions.w3c_actions.add_pointer_input('touch', 'finger2')
new_input2.create_pointer_move(x=location2['x'] + 1, y=location2['y'] + 2)
new_input2.create_pointer_down(MouseButton.LEFT)
new_input2.create_pointer_up(MouseButton.LEFT)
actions.perform()

You can use ActionChains to perform multiple actions almost immediately.
actions = ActionChains(driver)
actions.move_to_element(element1).click()
actions.move_to_element(element2).click()
actions.perform()
Depending on the use case, you could also use click_and_hold which is also available using ActionChains.

Related

How to get selenium to wait on multiple elements to load

I am using the following code to wait on all 4 elements to be loaded before proceeding with the screen scrape; however, the code is not waiting on all 4, nor is it throwing a timeout error -- it just proceeds, and I get an error on elements that haven't yet been loaded.
What am I missing to get Selenium to wait until all four elements are present before proceeding?
CSSSelector1_toWaitOn = "#div1 table tbody tr td"
CSSSelector2_toWaitOn = "#div2 table tbody tr:nth-child(5) td"
CSSSelector3_toWaitOn = "#div3 table tbody tr:nth-child(5) td"
CSSSelector4_toWaitOn = "#div4 table tbody tr td"
browser.get(url)
browser_delay = 15  # seconds
try:
    WebDriverWait(browser, browser_delay).until(expected_conditions and (
        expected_conditions.presence_of_element_located((By.CSS_SELECTOR, CSSSelector1_toWaitOn)) and
        expected_conditions.presence_of_element_located((By.CSS_SELECTOR, CSSSelector2_toWaitOn)) and
        expected_conditions.presence_of_element_located((By.CSS_SELECTOR, CSSSelector3_toWaitOn)) and
        expected_conditions.presence_of_element_located((By.CSS_SELECTOR, CSSSelector4_toWaitOn))))
except TimeoutException:
    print("Selenium timeout")
WebDriverWait.until() expects a callable object. This is an actual snippet from its source:
while True:
    try:
        value = method(self._driver)
        if value:
            return value
All expected_conditions are callable objects. So in this case you need to compose them; something like the following should work.
class composed_expected_conditions:
    def __init__(self, expected_conditions):
        self.expected_conditions = expected_conditions

    def __call__(self, driver):
        for expected_condition in self.expected_conditions:
            if not expected_condition(driver):
                return False
        return True
And pass it to until():
conditions = [
    expected_conditions.presence_of_element_located((By.CSS_SELECTOR, CSSSelector1_toWaitOn)),
    expected_conditions.presence_of_element_located((By.CSS_SELECTOR, CSSSelector2_toWaitOn)),
    expected_conditions.presence_of_element_located((By.CSS_SELECTOR, CSSSelector3_toWaitOn)),
    expected_conditions.presence_of_element_located((By.CSS_SELECTOR, CSSSelector4_toWaitOn)),
]
WebDriverWait(browser, browser_delay).until(composed_expected_conditions(conditions))
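The composed condition's logic can be exercised without a browser by substituting plain callables for the expected conditions and passing a dummy driver. This self-contained sketch mirrors the class above (the names here are illustrative):

```python
class ComposedConditions:
    """All sub-conditions must pass for the combined condition to pass."""
    def __init__(self, conditions):
        self.conditions = conditions

    def __call__(self, driver):
        # Same semantics as the loop above: fail fast on the first False.
        return all(condition(driver) for condition in self.conditions)

# Stand-ins for expected_conditions: callables that take a driver.
found = {"one": True, "two": False}
conditions = [lambda d, key=key: found[key] for key in ("one", "two")]
combined = ComposedConditions(conditions)
```

Passing `combined` to WebDriverWait.until() would then poll it until every sub-condition holds or the timeout elapses.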
The method presence_of_element_located(locator) only checks that the element is present in the DOM. It does not mean that the element can be interacted with. Furthermore, the search process finds all the elements for a given locator and returns the first one.
Please check that the element is valid, available, and specific. In case there are multiple elements in the list, make sure your locator is specific enough to find the single element.
Rather than trying to combine them all into a single wait, you can have a separate wait for each.
...
try:
    wait = WebDriverWait(browser, browser_delay)
    wait.until(expected_conditions.visibility_of_element_located((By.CSS_SELECTOR, CSSSelector1_toWaitOn)))
    wait.until(expected_conditions.visibility_of_element_located((By.CSS_SELECTOR, CSSSelector2_toWaitOn)))
    wait.until(expected_conditions.visibility_of_element_located((By.CSS_SELECTOR, CSSSelector3_toWaitOn)))
    wait.until(expected_conditions.visibility_of_element_located((By.CSS_SELECTOR, CSSSelector4_toWaitOn)))
except TimeoutException:
    print("Selenium timeout")
Just be aware, there are 3 levels of interaction within Selenium for elements:
present - the element is in the DOM. If you attempt to click on or get text from a present (but not visible) element, an ElementNotInteractable exception will be thrown.
visible - the element is in the DOM and visible (e.g. not invisible, display: none, etc.)
clickable - the element is visible and enabled. For most cases, this is just... is it visible? The special cases would be elements like an INPUT button that is marked as disabled. An element that is styled as disabled (greyed out) using CSS, is not considered disabled.
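The distinction between these levels can be sketched in plain Python with a stub element. The helpers below mirror how EC.visibility_of_element_located and EC.element_to_be_clickable layer their checks; the names are illustrative, not Selenium's internals:

```python
class StubElement:
    """Minimal stand-in for a WebElement, for illustration only."""
    def __init__(self, displayed, enabled):
        self._displayed = displayed
        self._enabled = enabled

    def is_displayed(self):
        return self._displayed

    def is_enabled(self):
        return self._enabled

def is_visible(element):
    # Present (we already hold the element) and displayed.
    return element.is_displayed()

def is_clickable(element):
    # Visible and enabled, the same two checks element_to_be_clickable layers.
    return is_visible(element) and element.is_enabled()
```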

python selenium, can't find elements from page_source while can find from browser

I am trying to find a target element by XPath so that I can click on it, but the code can't find it, although I can find it manually via right-click in the Chrome browser.
Detail: I am using
driver.get('chrome://settings/clearBrowserData')
to get the history pop-up from Chrome, then wait for the element with Selenium,
and next I try to click it by:
driver.find_element_by_css_selector('* /deep/ #clearBrowsingDataConfirm').click()
or by:
driver.find_element_by_xpath(r'//paper-button[@id="clearBrowsingDataConfirm"]').click()
Neither works.
Could you suggest an XPath solution if possible, because I am more familiar with it? Or any other way to clear history in Chrome. Thanks.
Looking into the Chrome Settings page source, it looks like the button you're looking for is hidden in the shadow DOM,
so you need to iterate down several levels of shadow roots.
The algorithm looks like:
Locate the parent WebElement
Locate its shadow root and cast it to a WebElement
Use the WebElement.find_element() function to locate the next WebElement, which is the parent for the next shadow root
Repeat steps 1-3 until you're in the same context as the element you want to interact with
Example code:
from selenium import webdriver

def expand_root_element(element):
    shadow_root = driver.execute_script('return arguments[0].shadowRoot', element)
    return shadow_root

driver = webdriver.Chrome("c:\\apps\\webdriver\\chromedriver.exe")
driver.maximize_window()
driver.get("chrome://settings/clearBrowserData")
settingsUi = driver.find_element_by_tag_name("settings-ui")
settingsUiShadowRoot = expand_root_element(settingsUi)
settingsMain = settingsUiShadowRoot.find_element_by_tag_name("settings-main")
settingsShadowRoot = expand_root_element(settingsMain)
settingsBasicPage = settingsShadowRoot.find_element_by_tag_name("settings-basic-page")
settingsBasicPageShadowroot = expand_root_element(settingsBasicPage)
settingsPrivacyPage = settingsBasicPageShadowroot.find_element_by_tag_name("settings-privacy-page")
settingsPrivacyShadowRoot = expand_root_element(settingsPrivacyPage)
settingsClearBrowsingDataDialog = settingsPrivacyShadowRoot.find_element_by_tag_name(
    "settings-clear-browsing-data-dialog")
settingsClearBrowsingDataDialogShadowRoot = expand_root_element(settingsClearBrowsingDataDialog)
settingsClearBrowsingDataDialogShadowRoot.find_element_by_id("clearBrowsingDataConfirm").click()
I got it to work by doing this:
driver.execute_script("return document.querySelector('body > settings-ui').shadowRoot.querySelector('#main').shadowRoot.querySelector('settings-basic-page').shadowRoot.querySelector('#advancedPage > settings-section:nth-child(1) > settings-privacy-page').shadowRoot.querySelector('settings-clear-browsing-data-dialog').shadowRoot.querySelector('#clearBrowsingDataConfirm').click();")
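A chain like that can also be generated from a list of selectors. This helper is my own sketch (the function name is hypothetical); it produces the same kind of shadowRoot-piercing expression:

```python
def shadow_click_script(selectors):
    """Build a JS snippet that walks nested shadow roots and clicks the last match."""
    expr = "document.querySelector(%r)" % selectors[0]
    for selector in selectors[1:]:
        # Each hop descends into the previous element's shadow root.
        expr += ".shadowRoot.querySelector(%r)" % selector
    return expr + ".click();"

script = shadow_click_script([
    "body > settings-ui", "#main", "settings-basic-page",
    "settings-privacy-page", "settings-clear-browsing-data-dialog",
    "#clearBrowsingDataConfirm",
])
# driver.execute_script(script)
```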

xpath returns more than one result, how to handle in python

I have started using Selenium with Python. I am able to change the message text using find_element_by_id. I want to do the same with find_element_by_xpath, which is not successful, as the XPath has two instances. I want to try this out to learn about XPath.
I want to do web scraping of a page using python in which I need clarity on using Xpath mainly needed for going to next page.
#This code works:
import time
import requests
from selenium import webdriver

driver = webdriver.Chrome()
url = "http://www.seleniumeasy.com/test/basic-first-form-demo.html"
driver.get(url)
eleUserMessage = driver.find_element_by_id("user-message")
eleUserMessage.clear()
eleUserMessage.send_keys("Testing Python")
time.sleep(2)
driver.close()

#This works fine. I wish to do the same with xpath.
#I inspect the input box in Chrome and copy the XPath '//*[@id="user-message"]', which seems to refer to the other box as well.
#I wish to use the xpath method to write text in this box as follows, which does not work.
driver = webdriver.Chrome()
url = "http://www.seleniumeasy.com/test/basic-first-form-demo.html"
driver.get(url)
eleUserMessage = driver.find_elements_by_xpath('//*[@id="user-message"]')
eleUserMessage.clear()
eleUserMessage.send_keys("Test Python")
time.sleep(2)
driver.close()
To elaborate on my comment, you would use a list like this:
eleUserMessage_list = driver.find_elements_by_xpath('//*[#id="user-message"]')
my_desired_element = eleUserMessage_list[0] # or maybe [1]
my_desired_element.clear()
my_desired_element.send_keys("Test Python")
time.sleep(2)
The only real difference between find_elements_by_xpath and find_element_by_xpath is the first option returns a list that needs to be indexed. Once it's indexed, it works the same as if you had run the second option!

YouTube scraping with Selenium: not getting all comments

I am trying to scrape YouTube comments using Selenium with Python. Below is the code, which scrapes just one comment and throws an error:
driver = webdriver.Chrome()
url = "https://www.youtube.com/watch?v=MNltVQqJhRE"
driver.get(url)
wait(driver, 5500)
driver.execute_script("window.scrollTo(0, document.body.scrollHeight + 500);")
driver.implicitly_wait(5000)
#content = driver.find_element_by_xpath('//*[@id="contents"]')
comm = driver.find_element_by_xpath('//div[@class="style-scope ytd-item-section-renderer"]')
comm1 = comm.find_elements_by_xpath('//yt-formatted-string[@id="content-text"]')
#print(comm.text)
for i in range(50):
    print(comm1[i].text, end=' ')
This is the output I am getting. How do I get all the comments on that page? Can anyone help me with this?
Being a sucessful phyton freelancer really mean to me because if I able to make $2000 in month I can really help my family financial, improve my skill, and have a lot of time to refreshing. So thanks Qazi, you really help me :D
Traceback (most recent call last):
  File "C:\Python36\programs\Web scrap\YT_Comm.py", line 19, in <module>
    print(comm1[i].text,end=' ')
IndexError: list index out of range
An IndexError means you’re attempting to access a position in a list that doesn’t exist. You’re iterating over your list of elements (comm1) exactly 50 times, but there are fewer than 50 elements in the list, so eventually you attempt to access an index that doesn’t exist.
Superficially, you can solve your problem by changing your iteration to loop over exactly as many elements as exist in your list—no more and no less:
for element in comm1:
    print(element.text, end=' ')
But that leaves you with the problem of why your list has fewer than 50 elements. The video you’re scraping has over 90 comments. Why doesn’t your list have all of them?
If you take a look at the page in your browser, you'll see that the comments load progressively using the infinite scroll technique: when the user scrolls to the bottom of the document, another "page" of comments is fetched and rendered, increasing the length of the document. To load more comments, you will need to trigger this behavior.
But depending on the number of comments, one fetch may not be enough. In order to trigger the fetch and rendering of all of the content, then, you will need to:
attempt to trigger a fetch of additional content, then
determine whether additional content was fetched, and, if so,
repeat (because there might be even more).
Triggering a fetch
We already know that additional content is fetched by scrolling to the bottom of the content container (the element with id #contents), so let's do that:
driver.execute_script(
    "window.scrollTo(0, document.querySelector('#contents').scrollHeight);")
(Note: Because the content resides in an absolute-positioned element, document.body.scrollHeight will always be 0 and will not trigger a scroll.)
Waiting for the content container
But as with any browser automation, we're in a race with the application: What if the content container hasn't rendered yet? Our scroll would fail.
Selenium provides WebDriverWait() to help you wait for the application to be in a particular state. It also provides, via its expected_conditions module, a set of common states to wait for, such as the presence of an element. We can use both of these to wait for the content container to be present:
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
TIMEOUT_IN_SECONDS = 10
wait = WebDriverWait(driver, TIMEOUT_IN_SECONDS)
wait.until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "#contents")))
Determining whether additional content was fetched
At a high level, we can determine whether additional content was fetched by:
counting the content before we trigger the fetch,
counting the content after we trigger the fetch, then
comparing the two.
Counting the content
Within our container ("#contents"), each piece of content has id "content". To count the content, we can simply fetch each of those elements and use Python's built-in len():
count = len(driver.find_elements_by_css_selector("#contents #content"))
Handling a slow render
But again, we're in a race with the application: What happens if either the fetch or the render of additional content is slow? We won't immediately see it.
We need to give the web application time to do its thing. To do this, we can use WebDriverWait() with a custom condition:
def get_count():
    return len(driver.find_elements_by_css_selector("#contents #content"))

count = get_count()
# ...
wait.until(
    lambda _: get_count() > count)
Handling no additional content
But what if there isn't any additional content? Our wait for the count to increase will timeout.
As long as our timeout is high enough to allow sufficient time for the additional content to appear, we can assume that there is no additional content and ignore the timeout:
try:
    wait.until(
        lambda _: get_count() > count)
except TimeoutException:
    # No additional content appeared. Abort our loop.
    break
Putting it all together
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait

TIMEOUT_IN_SECONDS = 10
wait = WebDriverWait(driver, TIMEOUT_IN_SECONDS)

driver.get(URL)
wait.until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "#contents")))

def get_count():
    return len(driver.find_elements_by_css_selector("#contents #content"))

while True:
    count = get_count()
    driver.execute_script(
        "window.scrollTo(0, document.querySelector('#contents').scrollHeight);")
    try:
        wait.until(
            lambda _: get_count() > count)
    except TimeoutException:
        # No additional content appeared. Abort our loop.
        break

elements = driver.find_elements_by_css_selector("#contents #content")
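Stripped of the Selenium specifics, the loop's control flow can be tested with plain callables standing in for the count and scroll steps. load_all and max_rounds are my own names, and this simplification re-counts immediately instead of waiting via WebDriverWait:

```python
def load_all(get_count, trigger_fetch, max_rounds=50):
    """Trigger fetches until the content count stops growing, then return it."""
    count = get_count()
    for _ in range(max_rounds):
        trigger_fetch()
        new_count = get_count()
        if new_count <= count:
            break  # Nothing new appeared; assume we have everything.
        count = new_count
    return count

# Simulated page: each "scroll" loads ten more items, up to thirty.
items = list(range(10))

def fake_scroll():
    if len(items) < 30:
        items.extend(range(10))
```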
Bonus: Simplifying with capybara-py
With capybara-py, this becomes a bit simpler:
import capybara
from capybara.dsl import page
from capybara.exceptions import ExpectationNotMet

@capybara.register_driver("selenium_chrome")
def init_selenium_chrome_driver(app):
    from capybara.selenium.driver import Driver
    return Driver(app, browser="chrome")

capybara.current_driver = "selenium_chrome"
capybara.default_max_wait_time = 10

page.visit(URL)
contents = page.find("#contents")

elements = []
while True:
    try:
        elements = contents.find_all("#content", minimum=len(elements) + 1)
    except ExpectationNotMet:
        # No additional content appeared. Abort our loop.
        break
    page.execute_script(
        "window.scrollTo(0, arguments[0].scrollHeight);", contents)

Python+Selenium, can't click the 'button' wrapped by span

I am new to Selenium. I am trying to use it to click a 'more' button to expand the review section every time after refreshing the page.
The website is TripAdvisor. The logic of the more button is: as soon as you click on the first more button, it automatically expands all the review sections for you. In other words, you just need to click on the first 'more' button.
All buttons have a similar class name. An example is taLnk.hvrIE6.tr415411081.moreLink.ulBlueLinks. Only the numbers part changes every time.
The full element looks like this:
<span class="taLnk hvrIE6 tr413756996 moreLink ulBlueLinks" onclick=" var options = {
flow: 'CORE_COMBINED',
pid: 39415,
onSuccess: function() { ta.util.cookie.setPIDCookie(2247); ta.call('ta.servlet.Reviews.expandReviews', {type: 'dummy'}, ta.id('review_413756996'), 'review_413756996', '1', 2247);; window.location.hash = 'review_413756996'; }
};
ta.call('ta.registration.RegOverlay.show', {type: 'dummy'}, ta.id('review_413756996'), options);
return false;
">
More </span>
I have tried several ways to get the button clicked, but since it is an onclick event wrapped in a span, I can't successfully get it clicked.
My last version looks like this:
driver = webdriver.Firefox()
driver.get(newurl)
page_source = driver.page_source
soup = BeautifulSoup(page_source)
moreID = soup.find("span", class_=re.compile(r'.*\bmoreLink\b.*'))['class']
moreID = '.'.join(moreID[0:(len(moreID)+1)])
moreButton = 'span.' + moreID
button = driver.find_element_by_css_selector(moreButton)
button.click()
time.sleep(10)
However, I keep getting the error message like this:
WebDriverException: Message: Element is not clickable at point (318.5,
7.100006103515625). Other element would receive the click....
Can you advise me on how to fix the problem? Any help will be appreciated!
WebDriverException: Message: Element is not clickable at point (318.5, 7.100006103515625). Other element would receive the click....
This error occurs when the element is not in the viewport, or when Selenium couldn't click due to some other element overlaying it. In this case you should try one of the following solutions:
You can try using ActionChains to reach the element before clicking, as below:
from selenium.webdriver.common.action_chains import ActionChains

button = driver.find_element_by_css_selector(moreButton)
ActionChains(driver).move_to_element(button).click().perform()
You can try using execute_script() to scroll the element into view before clicking:
driver.execute_script("arguments[0].scrollIntoView(true)", button)
button.click()
You can try JavaScript's click() with execute_script(), but this defeats the purpose of the test: first, because it doesn't generate all the events of a real click (focus, blur, mousedown, mouseup...), and second, because it doesn't guarantee that a real user could interact with the element. But to get rid of these issues, you can consider it as an alternative solution.
driver.execute_script("arguments[0].click()", button)
Note: Before using these options, make sure you're trying to interact with the correct element using the correct locator; otherwise WebElement.click() would work well after waiting until the element is visible and clickable using WebDriverWait.
Try using ActionChains:
from selenium.webdriver.common.action_chains import ActionChains

# Your existing code here
# Minus the `button.click()` line
ActionChains(driver).move_to_element(button).click().perform()
I have used this technique when I need to click on a <div> or a <span> element, rather than an actual button or link.
