I have already viewed three StackOverflow solutions to almost this exact problem, but I can't figure out how to apply them to an already-fetched web element.
Answer 1
Answer 2
Answer 3
Here is a sample of my current code:
def filter_shaded(tr_element):
    td_list = tr_element.find_elements(By.CLASS_NAME, "row-major-td")
    for td in td_list:
        if 'row-major-td-shaded' not in td.get_attribute("class"):
            return td

clickable_element = filter_shaded(...)
driver.implicitly_wait(3)  # wait for element to be clickable
clickable_element.click()  # here is the problem: sometimes getting ElementNotInteractableException
Essentially, I have a bunch of td elements inside a table row. All but one of them are shaded, and I use this function to "pick out" the unshaded element.
Now, I want to click that unshaded td element, but I have been having issues with a plain fixed delay such as driver.implicitly_wait(3). Sometimes the element is clickable long before 3 seconds have passed, and sometimes 3 seconds is not long enough for it to become clickable.
The only issue with the existing answers is that the code for locating the element is integrated into the wait code.
The solutions I posted above all suggest something along the lines of
element = WebDriverWait(driver, 20).until(
    EC.element_to_be_clickable((By.ID, "my-id")))
element.click()
The above code locates an element by its ID and waits for it to become clickable. In my case, however, I have a custom function that filters out a single WebElement object and returns it (clickable_element in my original code). How can I tell Selenium to wait for an already-found WebElement to become clickable before clicking (and thereby avoiding any ElementNotInteractableExceptions)?
i.e.
def filter_shaded(tr_element):
    td_list = tr_element.find_elements(By.CLASS_NAME, "row-major-td")
    for td in td_list:
        if 'row-major-td-shaded' not in td.get_attribute("class"):
            return td

clickable_element = filter_shaded(...)
??? driver.wait_for_element_to_be_clickable(clickable_element) ???
clickable_element.click()
I'd rather not resort to hard coding or approximating a delay, since the elements can take anywhere from 0.5 to 11 seconds to become clickable and upping the wait time to 11s would be catastrophic for my runtime.
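For what it's worth, recent Selenium 4 releases let EC.element_to_be_clickable accept an already-found WebElement instead of a locator tuple, so something along these lines should work (a minimal sketch, assuming Selenium 4+; the 15-second timeout is arbitrary):

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

clickable_element = filter_shaded(tr_element)
# In Selenium 4+, element_to_be_clickable also accepts a WebElement,
# so the wait can target the element we already picked out.
WebDriverWait(driver, 15).until(EC.element_to_be_clickable(clickable_element))
clickable_element.click()

On older versions, WebDriverWait also accepts any custom callable, e.g. lambda d: clickable_element if clickable_element.is_displayed() and clickable_element.is_enabled() else False.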
Related
The list given below contains the references that get pasted into the XPath ids (please find it below in my code), where x is the index of the element.
I want to go through all the elements and click them one by one by referring to their indexes, like so:
m_list = ['message0', 'message1', 'message2', 'message3', 'message4']

for x in range(0, len(m_list)):
    WebDriverWait(driver, 10).until(EC.element_to_be_clickable(
        (By.XPATH, f'//*[@id="{m_list[x]}"]'))).click()
    time.sleep(2)
This exception is common when you use an explicit wait, i.e. WebDriverWait. It is expected behavior: you wait up to a fixed time for the element to become clickable, and if the element is not found within that time, the exception is thrown. You might want to increase that time. An explicit wait can only be applied to suitable elements, so if you are trying to click a paragraph, it won't work. If your elements appear after your initial click, the sleep command should also be inside the loop, or you can use an implicit wait.
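For instance, the timeout case can be handled explicitly rather than letting it end the run (a minimal sketch reusing the locator from the question):

from selenium.common.exceptions import TimeoutException

try:
    WebDriverWait(driver, 10).until(EC.element_to_be_clickable(
        (By.XPATH, f'//*[@id="{m_list[x]}"]'))).click()
except TimeoutException:
    # The element never became clickable within the 10-second budget.
    print(f'{m_list[x]} was not clickable in time')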
Also, if you want to iterate over your list, you can use:
for i in m_list:
    WebDriverWait(driver, 100).until(
        EC.element_to_be_clickable((By.XPATH, f'//*[@id="{i}"]'))).click()
There is a table whose XPath is .//table[@id='target'] in the target webpage, and I want to get all the data in the table (the text of every td in it).
Should I write the wait.until statement as
wait.until(EC.visibility_of_all_elements_located((By.XPATH, ".//table[@id='target']")))
or
wait.until(EC.visibility_of_all_elements_located((By.XPATH, ".//table[@id='target']//td")))
?
Both commands will NOT give you what you are looking for.
visibility_of_all_elements_located will NOT really wait for the visibility of ALL elements on the page matching the passed locator; it actually waits for at least one element matching the passed locator to be visible.
So, to make sure all the elements are visible, you will have to add a short sleep after that command.
Also, I think waiting for the visibility of the table's internal elements is better than waiting for the visibility of the table element itself.
So, I would use something like this:
wait.until(EC.visibility_of_all_elements_located((By.XPATH, ".//table[@id='target']//td")))
time.sleep(1)
The tds are basically not direct children of the table but descendants, so
.//table[@id='target']/descendant::td
should be the right XPath.
all_table_data = wait.until(EC.visibility_of_all_elements_located((By.XPATH, ".//table[@id='target']/descendant::td")))
all_table_data is a list that contains all the web elements. Print them as below and that should give you all the data available in the Selenium viewport:
for data in all_table_data:
    print(data.text)
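If you need the data grouped by row rather than as one flat list, a variant like the following should also work (a sketch reusing the same table id, not part of the answer above):

# Wait for the rows, then collect the cell text per row.
rows = wait.until(EC.visibility_of_all_elements_located(
    (By.XPATH, ".//table[@id='target']//tr")))
table_data = [[td.text for td in row.find_elements(By.XPATH, ".//td")]
              for row in rows]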
I am scraping the Xbox website with Selenium, but I encountered a problem when extracting someone's followers and friends: both elements have the same class, with no other property setting them apart, so I need to find all elements with that class, append them to a list, and take the first and second values. I just need to know how to find all elements with a given class while using a wait, as seen below:
followers = WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".item-value-data"))).text
# this currently only gets the first element
I am aware of how to do this without a wait (just using find_elements), but I couldn't find anything about doing the same inside a wait.
WebDriverWait waits until at least one element matching the passed condition is found.
There is no expected condition built into Selenium for Python that waits for a predefined number of matching elements.
What you can do is put a small sleep after the wait so the page finishes loading, and then get the list of desired elements.
Like this:
WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".item-value-data")))
time.sleep(1)

followers = []
followers_els = driver.find_elements_by_css_selector(".item-value-data")
for el in followers_els:
    followers.append(el.text)
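As an alternative to the fixed sleep, WebDriverWait accepts any callable that takes the driver, so you can roll your own condition that waits for a minimum number of matches. A minimal sketch (at_least_n_elements is a hypothetical helper, not a Selenium API):

def at_least_n_elements(locator, n):
    # Succeeds (and returns the elements) once the page has at least n matches.
    def _predicate(driver):
        elements = driver.find_elements(*locator)
        return elements if len(elements) >= n else False
    return _predicate

followers_els = WebDriverWait(driver, 20).until(
    at_least_n_elements((By.CSS_SELECTOR, ".item-value-data"), 2))
followers = [el.text for el in followers_els]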
I am using the code below to check an element's visibility and click it if it is available.
If the element is available, it clicks quickly and moves on to the next code.
Problem:
If the element is not visible/available, it takes much more time to skip it and move on to the next code.
I understand it may take some time.
Question is:
Is there any way to move on quickly if the element is not visible, or any other way to make my test case run quickly?
elements = self.driver.find_elements_by_partial_link_text("<element>")
if not elements:
    print("Element Not Found")
else:
    element = elements[0]
    element.click()
Based on your code, you are interested only in the first link (elements[0]), so you can restrict the driver to finding just that first link using the method below:
find_element_by_partial_link_text
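A sketch of that suggestion; note that the singular find_element raises NoSuchElementException instead of returning an empty list, so the check becomes a try/except:

from selenium.common.exceptions import NoSuchElementException

try:
    element = self.driver.find_element_by_partial_link_text("<element>")
    element.click()
except NoSuchElementException:
    print("Element Not Found")

Be aware that any implicit wait still applies while the element is absent, so the not-found branch only gets fast if you also lower that timeout (e.g. self.driver.implicitly_wait(0)) and restore it afterwards.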
I'm trying to create a program extracting all the people I follow on Instagram. I'm using Python, Selenium and Chromedriver.
To do so, I first get the number of followed people and click on the 'following' button:
nb_abonnements = int(webdriver.find_element_by_xpath('/html/body/span[1]/section[1]/main/div[1]/header/section[1]/ul/li[3]/a/span').text)
sleep(randrange(1,3))
abonnements = webdriver.find_element_by_xpath('/html/body/span[1]/section[1]/main/div[1]/header/section[1]/ul/li[3]/a')
abonnements.click()
I then use the following code to get the followers and scroll the popup page in case I can't find one:
followers_panel = webdriver.find_element_by_xpath('/html/body/div[3]/div/div/div[2]')
while i < nb_abonnements:
    try:
        print(i)
        followed = webdriver.find_element_by_xpath('/html/body/div[3]/div/div/div[2]/ul/div/li[{}]/div/div[2]/div/div/div/a'.format(i + 1)).text
        # the followed accounts are in a ul list
        i += 1
        followed_list.append(followed)
    except NoSuchElementException:
        webdriver.execute_script("arguments[0].scrollBy(0,400)", followers_panel)
        sleep(7)
The problem is that once i reaches 12, the program raises the exception and scrolls. From there, it still can't find the next follower and gets stuck in a loop where it does nothing but scroll. I've checked the source code of the IG page, and it turns out the path is still good, but apparently I can't access the elements the way I did before, probably because the ul list I am accessing them in has become too long (line 5 of the program).
I can't work out how to solve this. I hope you will be of some help.
UPDATE: the DOM looks like this:
html
  body
    span
      script
      ...
    div[3]
      div
        ...
        div
        div
          div[2]
            ul
              div
                li
                li
                li
                li
                ...
                li
The ul is the list of the followers.
The lis contain the info I'm trying to extract (username). Even when I go on the webpage myself, open the popup window, scroll a little and let everything load, I can't find the element I'm looking for by typing the XPath manually into the search bar of the DOM, even though the path is correct; I can check that by looking at the DOM.
I've tried various webdrivers for Selenium; currently I am using chromedriver 2.45.615291. I've also added an explicit wait for the element to show (WebDriverWait(webdriver, 10).until(EC.presence_of_element_located((By.XPATH, '/html/body/div[3]/div/div/div[2]/ul/div/li[{}]/div/div[2]/div/div/div/a'.format(i+1))))), but I just get a timeout exception: selenium.common.exceptions.TimeoutException: Message:.
It just seems like once the ul list gets too long (which happens from the moment I've scrolled down enough to load new people), I can't access any element of the list by its XPath, even the elements that were already loaded before I began scrolling.
Instead of using an XPath for each child element, find the ul element, then find all the child elements using something like ul_element.find_elements_by_tag_name('li'). Then iterate through each element in the collection and get the required text.
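A sketch of that suggestion, reusing the ul XPath from the question (the legacy find_element_by_* calls match the asker's Selenium version):

ul_element = webdriver.find_element_by_xpath('/html/body/div[3]/div/div/div[2]/ul')
li_elements = ul_element.find_elements_by_tag_name('li')
followed_list = [li.text for li in li_elements]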
I've found a solution: I just access the element through an XPath like this: find_element_by_xpath("(//*[@class='FPmhX notranslate _0imsa '])[{}]".format(i)). I don't know why it didn't work the other way, but like this it works just fine.
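Spelled out as a loop, the working approach looks roughly like this (a sketch; the class string is Instagram's generated class at the time and will likely change, and scrolling to load more entries is still needed as in the original code):

followed_list = []
for i in range(1, nb_abonnements + 1):  # XPath indexing is 1-based
    followed = webdriver.find_element_by_xpath(
        "(//*[@class='FPmhX notranslate _0imsa '])[{}]".format(i))
    followed_list.append(followed.text)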