Wait for class to load all values after clicking button in Selenium Python

After the website loads I click a button successfully, which then generates some numbers in this element:
<div class="styles__Value-sc-1bfbyy7-2 eVmhyz"></div>
The numbers are not inserted instantly but one by one. Selenium instantly grabs the first value that gets put into the element and doesn't wait for the other values to be added. Is there any way to wait for all the values to load before grabbing the text?
Here is the Python code I use to grab the value:
total = driver.find_element_by_xpath("//div[@class='styles__Value-sc-1bfbyy7-2 eVmhyz']").text

Selenium has a WebDriverWait helper:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions
browser = webdriver.Chrome()
delay = 5
total = WebDriverWait(browser, delay).until(expected_conditions.presence_of_element_located(<locator>))
I haven't tested it locally, but it may work. There is also a presence_of_all_elements_located method; you can find the details on this page.
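Since the values arrive one by one, waiting for mere presence may still grab the element too early. One option (a sketch, not tested against your site) is a custom wait condition that polls until the element's text is non-empty and stops changing between polls:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

locator = (By.XPATH, "//div[@class='styles__Value-sc-1bfbyy7-2 eVmhyz']")

def text_has_settled(locator):
    # Truthy once the element's text is non-empty and unchanged
    # between two consecutive polls.
    state = {"previous": None}

    def _condition(driver):
        current = driver.find_element(*locator).text
        previous = state["previous"]
        state["previous"] = current
        return current if current and current == previous else False

    return _condition

# Poll once per second; give up after 10 seconds.
total = WebDriverWait(browser, 10, poll_frequency=1).until(text_has_settled(locator))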
Hope this helps!

Related

How to click an element when it's available in Selenium?

I'm trying to create automation for a cookie clicker website.
I need to click on elements (like the cursor element, for example) when they go from "blocked" to "unlocked". I have been trying for 2 days now, and I have tried using WebDriverWait, but nothing is working; no matter what, my code does not detect when the element becomes available.
This is my code right now:
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait

Play = True
ser_obj = Service(r"\Progr\OneDrive\Documents\PythonFolder\chromedriver.exe")  # raw string so the backslashes stay literal
driver = webdriver.Chrome(service=ser_obj)
driver.get(url="https://orteil.dashnet.org/cookieclicker/")
WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a.cc_btn.cc_btn_accept_all"))).click()
WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "div.langSelectButton.title#langSelect-EN"))).click()
time.sleep(1)
Cookie = driver.find_element(By.CSS_SELECTOR, "#cookieAnchor #bigCookie")
while Play:
    Cookie.click()
    Cookie_number = driver.find_element(By.XPATH, '//*[@id="cookies"]').text
    print(Cookie_number)
    WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="product0"]'))).click()
And for whatever reason I cannot click on the cookie unless I have a time.sleep() call first, and I do not know why. I have tried using WebDriverWait to wait until the cookie becomes available to click, but no, it won't run without the time.sleep().
Any help would be great.
I have tried using if statements with the is_displayed() method.
I have tried using "try-except" blocks.
I have tried giving Play a value and then, when that value reaches 0, checking whether the cursor is available to click.
I have tried using CSS selectors and XPath.
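Two suggestions, reusing the driver and imports from the code above (a sketch, not a verified fix). First, instead of time.sleep(1), wait until the big cookie is actually clickable. Second, wait for the product to become purchasable rather than merely present; this assumes Cookie Clicker marks affordable products with an "enabled" CSS class, so inspect the page to confirm before relying on it:
# Replace time.sleep(1): wait until the big cookie is clickable.
Cookie = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, "#cookieAnchor #bigCookie"))
)

# Assumption: an affordable product looks like
# <div id="product0" class="product unlocked enabled">.
WebDriverWait(driver, 60).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, "#product0.enabled"))
).click()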

Web-scraping using Selenium: Get current url after selecting dropdown menu

I am trying to scrape pricing information for clothes from Amazon, but I have to select the clothing size first. After selecting the size I need, how do I keep track of the new URL? The following code works and selects a value from the dropdown menu, but I just don't know how to get the new URL.
original url: https://www.amazon.ae/Jack-Jones-Glenn-Original-Pants/dp/B07JQ8MDGD/ref=sr_1_5?crid=M8QQKGLLZ1O9&keywords=jeans&qid=1657289288&sprefix=jeans%2Caps%2C232&sr=8-5&th=1
url after selecting size (the url I want to get):
https://www.amazon.ae/Jack-Jones-Glenn-Original-Pants/dp/B07JQBYC8J/ref=sr_1_5?crid=M8QQKGLLZ1O9&keywords=jeans&qid=1657289288&sprefix=jeans%2Caps%2C232&sr=8-5&th=1&psc=1
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.support.ui import Select
url = 'https://www.amazon.ae/Jack-Jones-Glenn-Original-Pants/dp/B07JQB87KL/ref=sr_1_5?crid=M8QQKGLLZ1O9&keywords=jeans&qid=1657289288&sprefix=jeans%2Caps%2C232&sr=8-5&th=1'
driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get(url)
select=Select(driver.find_element_by_id("native_dropdown_selected_size_name"))
select.select_by_index(2)
#driver.current_url: is returning the original url
Selenium may be moving on from the select_by_index step to reading the URL before the site has had a chance to change it.
You might try an implicit wait (based on time):
driver.implicitly_wait(10)  # wait up to 10 seconds when locating elements
Or an explicit wait (based on an expected condition):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(driver, 10)
element = wait.until(EC.element_to_be_clickable((By.ID, 'someid')))
Your expected condition will depend on your use case.
I would try the implicit wait first, just to see if you can get the updated driver.current_url.
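Since the goal here is specifically the new URL, expected_conditions also offers url_changes, which waits until the current URL differs from a given one. A minimal sketch based on the question's code:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

old_url = driver.current_url
select.select_by_index(2)
# Block until the browser URL is no longer old_url (or time out after 10 seconds).
WebDriverWait(driver, 10).until(EC.url_changes(old_url))
print(driver.current_url)  # should now be the size-specific URL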

Trying to use selenium to automate signups. Running into a problem

Currently trying to automate signups on mail.com using Selenium. So far I've managed to get the program to go to the URL. The problem I'm having is that even when I copied the full XPath of "Sign Up" I'm getting an:
"selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"/html/body/table/tbody/tr[114]/td[2]"}"
error
Here is the code I'm working with so far:
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome(executable_path='pathtochromedriver')
driver.get('https://www.mail.com/')
driver.maximize_window()
# Delay added to allow elements to load on the webpage
time.sleep(30)
# Find the signup element
sign_up = driver.find_element_by_xpath('/html/body/table/tbody/tr[114]/td[2]')
Try using ActionChains to scroll the element into view before clicking it:
from selenium.webdriver.common.action_chains import ActionChains

some_page_item = driver.find_element_by_class_name('some_class')
ActionChains(driver).move_to_element(some_page_item).click(some_page_item).perform()
Also, another tip: instead of simply using time.sleep() to wait for an element to appear, use WebDriverWait:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait_for_item = WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.CLASS_NAME, "some_class_name")))
Here, 30 is the maximum number of seconds to wait for the item to appear; if it appears sooner, execution continues immediately. If 30 seconds pass and the item hasn't appeared, a TimeoutException is raised.
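Putting the two tips together for the sign-up case (a sketch; the locator below is hypothetical, since full /html/body/... XPaths are brittle, so inspect the page for a stable id, class, or link text first):
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Hypothetical locator -- replace with whatever stable attribute
# the real "Sign Up" element exposes in your inspector.
signup_locator = (By.LINK_TEXT, "Sign Up")

sign_up = WebDriverWait(driver, 30).until(EC.presence_of_element_located(signup_locator))
ActionChains(driver).move_to_element(sign_up).click(sign_up).perform()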

Skip waiting for a website timer selenium Python

I'm running Selenium in a Python IDE with geckodriver.
The site I'm trying to open has a 30-second timer; after those 30 seconds a button appears, and I send a click to it.
What I'm asking is the following:
Can I somehow ignore/skip/speed up the waiting time?
Right now what I'm doing is the following:
from time import sleep
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("SITE_URL")
sleep(30)
driver.find_element_by_id("proceed").click()
Which is very inefficient, because every time I run the code to do some tests I need to wait.
Thanks in advance, Avi.
UPDATE:
I haven't found a way around the obstacle yet, but until I do I'm focusing on the next achievable bit of progress:
<video class="jw-video jw-reset" disableremoteplayback="" webkit-playsinline="" playsinline="" preload="metadata" src="//SITE.SITE.SITE/SITE/480/213925.mp4?token=jbavPPLqNqkQT1SEUt4crg&time=1525458550" style="object-fit: fill;"></video>
(censored site's name)
On each page there is a video; all the videos are under the class "jw-video jw-reset".
I had trouble finding the element by class name (the class contains a space, and compound class names are not allowed), so I used:
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, "video[class='jw-video jw-reset']")))
It works, but I can't figure out how to read the element's src...
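To read the src, keep a reference to the element that the wait returns and call get_attribute on it, for example:
video = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "video[class='jw-video jw-reset']"))
)
src = video.get_attribute("src")  # e.g. "//SITE.SITE.SITE/SITE/480/213925.mp4?token=..."
print(src)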
As per your code trial, you can remove the sleep(30) and induce WebDriverWait for the element to be clickable, as follows:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
# lines of code
WebDriverWait(driver, 30).until(EC.element_to_be_clickable((By.ID, "proceed"))).click()
Note: configure the WebDriverWait instance with the maximum time limit that suits your use case. The expected condition element_to_be_clickable() returns the WebElement as soon as it is visible and enabled, so you can click it immediately.

Scraper doesn't stop clicking on the next page button

I've written a script in Python, in combination with Selenium, to get some names and corresponding addresses displayed upon a search; the search keyword is "Saskatoon". However, the data in this case spans multiple pages. My script does almost everything except for one thing:
It still runs even though there are no more pages to traverse. The last page also shows the "›" sign for the next-page option, and it is not grayed out.
Here is the link: Page_link
Search_keyword: Saskatoon (in the city/town field).
Here is what I've written:
from selenium import webdriver
import time
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)
driver.get("above_link")
time.sleep(3)
search_input = driver.find_element_by_id("cityField")
search_input.clear()
search_input.send_keys("Saskatoon")
search_input.send_keys(Keys.ENTER)
while True:
    try:
        wait.until(EC.visibility_of_element_located((By.LINK_TEXT, "›"))).click()
        time.sleep(2)
    except Exception:
        break
driver.quit()
BTW, I've just taken the name and address part out of this script, since I suppose it's not relevant here. Thanks.
You can use the class attribute of the "›" button: on the last page it is "ng-scope disabled", while on the other pages it is "ng-scope":
wait.until(EC.visibility_of_element_located((By.XPATH, "//li[@class='ng-scope']/a[.='›']"))).click()
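With that locator, the loop can exit cleanly on the TimeoutException the wait raises once only the disabled button remains; a sketch:
from selenium.common.exceptions import TimeoutException

while True:
    try:
        # Matches the next-page arrow only while its parent <li> is not disabled.
        wait.until(EC.visibility_of_element_located(
            (By.XPATH, "//li[@class='ng-scope']/a[.='›']"))).click()
        time.sleep(2)
    except TimeoutException:
        break  # no enabled "›" button left: last page reached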
