How to get all elements with a class within wait - python

I am scraping the Xbox website with Selenium, but I ran into a problem when extracting someone's followers and friends: both elements have the same class, with no other property setting them apart, so I need to find all elements with that class, append them to a list, and take the first and second values. I just need to know how to find all elements with a class while using a wait, as seen below:
followers = WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".item-value-data"))).text
#this currently only gets the first element
I am aware of how to do this without a wait (just using find_elements), but I couldn't find anything about doing it inside a wait.

WebDriverWait waits until at least one element matching the passed condition is found.
Selenium for Python has no built-in expected condition that waits for a predefined number of matching elements or anything like that.
What you can do is add a short sleep after the wait so the page is fully loaded, and then get the list of desired elements.
Like this:
import time

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait until at least one matching element is visible
WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".item-value-data")))
time.sleep(1)  # give the rest of the list a moment to load

followers = []
followers_els = driver.find_elements(By.CSS_SELECTOR, ".item-value-data")
for el in followers_els:
    followers.append(el.text)

Related

Python Selenium wait for an already-located WebElement to become clickable?

I have already viewed three StackOverflow solutions to almost this problem, but can't figure out how to apply them to an already-fetched web element.
Answer 1
Answer 2
Answer 3
Here is a sample of my current code:
def filter_shaded(tr_element):
    td_list = tr_element.find_elements(By.CLASS_NAME, "row-major-td")
    for td in td_list:
        if 'row-major-td-shaded' not in td.get_attribute("class"):
            return td

clickable_element = filter_shaded(...)
driver.implicitly_wait(3)  # wait for element to be clickable
clickable_element.click()  # here is the problem, sometimes getting ElementNotInteractableException
Essentially, I have a bunch of td elements inside a table row. All but one of them are shaded, and I use this function to "pick out" the unshaded element.
Now, I want to be able to click the unshaded td element, but I have been having issues using a plain fixed delay such as driver.implicitly_wait(3). Sometimes the element is clickable long before 3 seconds, and sometimes 3 seconds is not long enough for it to become clickable.
The only issue with the existing answers is that the code for locating the element is integrated into the wait code.
The solutions I posted above all suggest something along the lines of
element = WebDriverWait(driver, 20).until(
    EC.element_to_be_clickable((By.ID, "my-id")))
element.click()
The above code locates an element by its ID and waits for it to become clickable. In my case, however, I have a custom function that filters out a single WebElement object and returns it (clickable_element in my original code). How can I tell Selenium to wait for an already-found WebElement to become clickable before clicking (and thereby avoiding any ElementNotInteractableExceptions)?
i.e.
def filter_shaded(tr_element):
    td_list = tr_element.find_elements(By.CLASS_NAME, "row-major-td")
    for td in td_list:
        if 'row-major-td-shaded' not in td.get_attribute("class"):
            return td

clickable_element = filter_shaded(...)
??? driver.wait_for_element_to_be_clickable(clickable_element) ???
clickable_element.click()
I'd rather not resort to hard coding or approximating a delay, since the elements can take anywhere from 0.5 to 11 seconds to become clickable and upping the wait time to 11s would be catastrophic for my runtime.

click through list elements with selenium python

The given list contains the references that get pasted into the xpath ids (please find it below in my code), where x is the index of the element.
I want to go through all the elements and click them one by one by referring to their indexes, like so:
m_list = ['message0', 'message1', 'message2', 'message3', 'message4']

for x in range(0, len(m_list)):
    WebDriverWait(driver, 10).until(EC.element_to_be_clickable(
        (By.XPATH, f'//*[@id="{str(m_list[int(x)])}"]'))).click()
    time.sleep(2)
This exception is common when you use an explicit wait (WebDriverWait). It is expected behavior: you wait a fixed maximum time for the element to become clickable, and if it is not found within that time, the exception is thrown. You might want to increase that time. The clickable condition only applies to elements that can actually receive a click, so waiting for, say, a paragraph to be clickable won't work. If your elements only appear after your initial click, that sleep command should also be inside the loop, or you can use an implicit wait.
Also, if you want to iterate over your list, you can use:
for i in m_list:
    WebDriverWait(driver, 100).until(EC.element_to_be_clickable((By.XPATH, f'//*[@id="{i}"]'))).click()

Locating an element with no identifier in HTML using Selenium (Not using Xpath)

def page_nav(individual):
    indiv_home = driver.find_element(By.XPATH, "/html/body/div/div[2]/div[2]/div[8]/a").click()
    time.sleep(5)
    person_select = driver.find_element(By.TAG_NAME, individual).click()
<strong>Dylan Call</strong> <-- I want to find the element based off "Dylan Call"
The code above isn't the most well written, and I know that (I'm fairly new, my apologies in advance).
I'm looking for a way to find the element ("individual name") in the snippet above, since it doesn't have a unique identifier like "name" or "id".
I am attempting to create a bot that looks through a folder, identifies the name of the individual associated with a file, and uploads the file to that respective person's profile using selenium/python. Right now, I have stored the individual name in a variable, but I want to pass that variable through the "find_elements" function. Sadly I can't just use a "By.Xpath" to locate the element since I'm trying to find it specifically on the individual's name.
Does anyone have any workarounds or better ways to do this?
You can retrieve the strong tag containing the individual's name using an xpath as follows:
individual = "Dylan Call"
driver.find_elements_by_xpath(f'//strong[contains(text(), "{individual}")]')
Here // indicates that we are using a relative xpath, meaning we are looking for a strong tag anywhere in the HTML markup.
And we specify that the strong tag we are looking for has an innerText containing the string given by the variable individual.
May I suggest an improvement: you should not use time.sleep, but WebDriverWait in conjunction with an expected condition. Instead of adding an arbitrary delay to wait until the page is loaded, this allows waiting until the element we are trying to find is displayed.
All together, your code becomes:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.expected_conditions import presence_of_element_located

individual = "Dylan Call"

# create a wait object that will wait at most 10 seconds for the element to appear
wait = WebDriverWait(driver, 10)

# wait until the strong tag containing the individual's name appears;
# times out if it has not appeared after 10 seconds
person_select = wait.until(
    presence_of_element_located((By.XPATH, f'//strong[contains(text(), "{individual}")]'))
)
person_select.click()

Python, Selenium: can't find element by xpath when ul list is too long

I'm trying to create a program extracting all persons I follow on Instagram. I'm using Python, Selenium and Chromedriver.
To do so, I first get the number of followed persons and click on the 'following' button:
nb_abonnements = int(webdriver.find_element_by_xpath('/html/body/span[1]/section[1]/main/div[1]/header/section[1]/ul/li[3]/a/span').text)
sleep(randrange(1,3))
abonnements = webdriver.find_element_by_xpath('/html/body/span[1]/section[1]/main/div[1]/header/section[1]/ul/li[3]/a')
abonnements.click()
I then use the following code to get the followers and scroll the popup page in case I can't find one:
followers_panel = webdriver.find_element_by_xpath('/html/body/div[3]/div/div/div[2]')
while i < nb_abonnements:
    try:
        print(i)
        followed = webdriver.find_element_by_xpath('/html/body/div[3]/div/div/div[2]/ul/div/li[{}]/div/div[2]/div/div/div/a'.format(i+1)).text
        # the followed users are in a ul list
        i += 1
        followed_list.append(followed)
    except NoSuchElementException:
        webdriver.execute_script(
            "arguments[0].scrollBy(0,400)", followers_panel
        )
        sleep(7)
The problem is that once i reaches 12, the program raises the exception and scrolls. From there, it still can't find the next follower and is stuck in a loop where it does nothing but scroll. I've checked the source code of the IG page, and it turns out the path is still good, but apparently I can't access the elements the way I did anymore, probably because the ul list in which I am accessing them has become too long (line 5 of the program).
I can't work out how to solve this. I hope you will be of some help.
UPDATE: the DOM looks like this:
html
  body
    span
      script
      ...
    div[3]
      div
        ...
        div
          div
            div[2]
              ul
                div
                  li
                  li
                  li
                  li
                  ...
                  li
The ul is the list of the followers.
The lis contain the info I'm trying to extract (username). Even when I go to the webpage myself, open the popup window, scroll a little and let everything load, I can't find the element I'm looking for by typing the xpath manually into the search bar of the DOM inspector, although the path is correct; I can verify it by looking at the DOM.
I've tried various webdrivers for Selenium; currently I am using chromedriver 2.45.615291. I've also added an explicit wait for the element to show (WebDriverWait(webdriver, 10).until(EC.presence_of_element_located((By.XPATH, '/html/body/div[3]/div/div/div[2]/ul/div/li[{}]/div/div[2]/div/div/div/a'.format(i+1))))), but I just get a timeout exception: selenium.common.exceptions.TimeoutException: Message:.
It just seems like once the ul list is too long (i.e. from the moment I've scrolled down enough to load new people), I can't access any element of the list by its XPath, not even the elements that were already loaded before I began scrolling.
Instead of using an xpath for each child element, find the ul element first, then find all its children with something like ul_element.find_elements_by_tag_name('li'). Then iterate through the collection and get the required text.
I've found a solution: I just access the element through an XPath like this: find_element_by_xpath("(//*[@class='FPmhX notranslate _0imsa '])[{}]".format(i)). I don't know why it didn't work the other way, but like this it works just fine.

How can I create a list of elements with the same xpath using selenium with python?

I need to click on several elements from the same table on the same webpage. I was thinking to do so with a for loop but in order to perform that action I first need to create a list of these elements.
//table[@border='1']//a
This is the xpath which selects all the elements from the table, how can I create a list of all these?
Use find_elements instead of find_element:
links = driver.find_elements_by_xpath("//table[@border='1']//a")
for values in links:
    values.click()
While @SergiyKonoplyaniy's answer was in the right direction, addressing your queries one by one:
How can I create a list of elements with the same xpath: To create a list of elements you need to use find_elements_by_xpath(xpath), which will create a list of all elements matching the xpath you specified.
Example:
my_links = driver.find_elements_by_xpath("//table[@border='1']//a")
Need to click on several elements: As you need to click() on several elements, you have to iterate through all the elements you captured in the list, as follows:
for link in my_links:
    link.click()
Now the most important aspect: as per your xpath //table[@border='1']//a, each and every element:
Has 3 distinct stages in terms of presence, visibility, and interactability (i.e. clickability)
To collect the elements in a list you should always invoke a wait with the expected condition visibility_of_all_elements_located(locator), as follows:
my_links = WebDriverWait(driver, 20).until(expected_conditions.visibility_of_all_elements_located((By.XPATH, "//table[@border='1']//a")))
The pseudocode as a solution for your question will be:
my_links = WebDriverWait(driver, 20).until(expected_conditions.visibility_of_all_elements_located((By.XPATH, "//table[@border='1']//a")))
for link in my_links:
    link.click()
For your future reference, if you intend to invoke click() on any particular element, always invoke a wait with the expected condition element_to_be_clickable(locator), as follows:
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "desired_element_xpath"))).click()
