Unable to wrap `driver.execute_script()` within `explicit wait` condition - python

I've created a Python script together with Selenium to parse specific content from a webpage. I can get the result AARONS INC, located under QUOTE, in many different ways, but the way I wish to scrape it is by using a pseudo-selector, which unfortunately Selenium doesn't support. The commented-out line within the script below represents that unsupported pseudo-selector.
However, when I use the pseudo-selector within driver.execute_script(), I can parse it flawlessly. To make this work I had to use a hardcoded delay for the element to be available. Now I wish to do the same by wrapping this driver.execute_script() within an Explicit Wait condition.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
wait = WebDriverWait(driver, 20)
driver.get("https://www.nyse.com/quote/XNYS:AAN")
time.sleep(15)
# item = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "span:contains('AARONS')")))
item = driver.execute_script('''return $('span:contains("AARONS")')[0];''')
print(item.text)
How can I wrap driver.execute_script() within an Explicit Wait condition?

This is one of the ways you can achieve that: pass a lambda to wait.until(), which keeps calling it until it returns a truthy value. While the span is missing, the script returns undefined (None in Python), so the wait simply keeps polling until the element appears. Give it a shot.
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
with webdriver.Chrome() as driver:
    wait = WebDriverWait(driver, 10)
    driver.get('https://www.nyse.com/quote/XNYS:AAN')
    item = wait.until(
        lambda driver: driver.execute_script('''return $('span:contains("AARONS")')[0];''')
    )
    print(item.text)

You could do the whole thing in the browser script, which is probably safer:
item = driver.execute_async_script("""
    // Selenium appends the async callback as the last argument; since no
    // extra arguments are passed here, arguments[0] is that callback
    // (the arrow function keeps the outer scope's `arguments`)
    var span, interval = setInterval(() => {
        if (span = $('span:contains("AARONS")')[0]) {
            clearInterval(interval)
            arguments[0](span)
        }
    }, 1000)
""")

Here is the simple approach.
url = 'https://www.nyse.com/quote/XNYS:AAN'
driver.get(url)
# wait for the element to be present
ele = WebDriverWait(driver, 30).until(
    lambda driver: driver.execute_script('''return $('span:contains("AARONS")')[0];''')
)
# print the text of the element
print(ele.text)

Related

conditional python selenium to skip extracted clickable links

Looking within both a tags, I want to extract only the 'quick apply' job postings, which are defined with target='_self', as opposed to the external apply links, which are defined by target='_blank'. I want to add a conditional to exclude all the _blank postings. I'm confused, but I assume it would follow some logic like:
quick_apply = driver.find_element(By.XPATH, "//a[@data-automation='job-detail-apply']")
internal = driver.find_element(By.XPATH, "//a[@target='_self']")
if internal in quick_apply:
    quick_apply.click()
else:
    pass
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from time import sleep
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.service import Service
driver_service = Service(executable_path=r"C:\Program Files (x86)\chromedriver.exe")
driver = webdriver.Chrome(service=driver_service)
driver.maximize_window()  # load web driver
wait = WebDriverWait(driver, 5)
driver.get('https://www.seek.com.au/data-jobs-in-information-communication-technology/in-All-Perth-WA')
looking_job = [x.get_attribute('href') for x in driver.find_elements(By.XPATH, "//a[@data-automation='jobTitle']")]
for job in looking_job:
    driver.get(job)
    quick_apply = driver.find_element(By.XPATH, "//a[@data-automation='job-detail-apply']").click()
You can merge the two conditions into a single XPath.
1. Use WebDriverWait() and wait for the element to be clickable.
2. Use a try..except block to check whether the element is there, then click it.
3. There are pages where you find two similar elements and only the last one is clickable, which is why you need the last() option to identify the element.
Code:
driver.get('https://www.seek.com.au/data-jobs-in-information-communication-technology/in-All-Perth-WA')
looking_job = [x.get_attribute('href') for x in driver.find_elements(By.XPATH, "//a[@data-automation='jobTitle']")]
for job in looking_job:
    driver.get(job)
    try:
        quick_apply = WebDriverWait(driver, 10).until(
            EC.element_to_be_clickable((By.XPATH, "(//a[@data-automation='job-detail-apply' and @target='_self'])[last()]"))
        )
        quick_apply.click()
    except:
        print("No records found")
        pass
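If you prefer something narrower than a bare except, one option (a sketch, not part of the original answer) is to catch only the wait timeout:
from selenium.common.exceptions import TimeoutException

for job in looking_job:
    driver.get(job)
    try:
        quick_apply = WebDriverWait(driver, 10).until(
            EC.element_to_be_clickable((By.XPATH, "(//a[@data-automation='job-detail-apply' and @target='_self'])[last()]"))
        )
        quick_apply.click()
    except TimeoutException:
        # no quick-apply link became clickable within 10 seconds; skip this posting
        print("No records found")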

Can't get bot to select item off of Supreme

I'm a beginner to Python, by the way.
My goal is to build a very simple ACO bot for Supreme, but I'm running into a problem and have not had much luck with solutions.
I tried to use driver.find_element(By.XPATH, ""), but that is not really practical since the website constantly changes and I would always need to change the name of the product I want.
I tried to use "title", "ID" or "class", but there are multiple items with the same class, title and ID, so that does not work. I read that someone doing the same thing was using "PARTIAL_LINK_TEXT", so I tried driver.find_element(By.PARTIAL_LINK_TEXT, "Boxer Briefs") to see if it would select the "Supreme/Hanes Boxer Briefs" as a test. Unfortunately this also did not work, and it gives me an error saying
selenium.common.exceptions.NoSuchElementException: Message:
This is all of my code; like I said, I'm a beginner and this is a very basic bot.
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Safari()
url = "https://www.supremenewyork.com/shop/all"
driver.get(url)
driver.maximize_window()
click = driver.find_element(By.XPATH, '//*[@id="nav-categories"]/li[10]/a')
click.click()
element = driver.find_element(By.PARTIAL_LINK_TEXT, "Boxer Briefs")
Any help would be really appreciated.
JavaScript needs some time to replace the elements; when I use time.sleep(1) after click(), it works without problems.
from selenium import webdriver
from selenium.webdriver.common.by import By
import time
#driver = webdriver.Safari()
driver = webdriver.Firefox()
driver.maximize_window()
url = "https://www.supremenewyork.com/shop/all"
driver.get(url)
click = driver.find_element(By.XPATH, '//*[@id="nav-categories"]/li[10]/a')
click.click()
time.sleep(1)
element = driver.find_element(By.PARTIAL_LINK_TEXT, "Boxer Briefs")
print(element.text)
See also the Selenium doc on Waits to create more universal code.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
#driver = webdriver.Safari()
driver = webdriver.Firefox()
driver.maximize_window()
url = "https://www.supremenewyork.com/shop/all"
driver.get(url)
click = driver.find_element(By.XPATH, '//*[@id="nav-categories"]/li[10]/a')
click.click()
try:
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.PARTIAL_LINK_TEXT, "Boxer Briefs"))
    )
    print(element.text)
except Exception as ex:
    print('Exception:', ex)
Result:
Supreme®/Hanes® Boxer Briefs (2 Pack)
Tested with Firefox and Chrome on Linux Mint 20 (based on Ubuntu 21.04)
EDIT:
If I use presence_of_all_elements_located instead of presence_of_element_located
try:
    all_elements = WebDriverWait(driver, 10).until(
        EC.presence_of_all_elements_located((By.PARTIAL_LINK_TEXT, "Boxer Briefs"))
    )
    for element in all_elements:
        print(element.text)
except Exception as ex:
    print('Exception:', ex)
the result is
Supreme®/Hanes® Boxer Briefs (2 Pack)
Supreme®/Hanes® Boxer Briefs (4 Pack)
Supreme®/Hanes® Boxer Briefs (4 Pack)

How can I get text from inside a span element in selenium?

My code looks like this:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time as t
PATH = "D:\CDriver\chromedriver.exe"
driver = webdriver.Chrome(PATH)
website = "https://jobs.siemens.com/jobs?page=1"
driver.get(website)
try:
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, "_ngcontent-wfx-c163="""))
    )
    print(element.text)
except:
    driver.quit()
driver.quit()
I'm trying to get the 6 numbers inside <span _ngcontent-wfx-c163="">215022</span> but can't seem to get it working. Many others have had problems using span, but they had a class inside it; mine doesn't.
How can I print the contents of the span tag shown above?
If you are looking to extract the req ID, you can use the CSS_SELECTOR below:
element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "p.req-id.ng-star-inserted>span"))
)
Note that there are 10 spans for the req ID. You may use find_elements instead of find_element, or EC.presence_of_all_elements_located, which will give you a list object that you can manipulate as per your requirement.
Read more about their difference here.
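For example, a minimal sketch of the list-based variant, using the same selector as above (the loop is just illustrative):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait until all req-id spans are present, then print each one
elements = WebDriverWait(driver, 10).until(
    EC.presence_of_all_elements_located((By.CSS_SELECTOR, "p.req-id.ng-star-inserted>span"))
)
for element in elements:
    print(element.text)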

How to select a list in Selenium?

I'm trying to enter an address; the site then proposes some addresses, and I have no idea how to select the first option it gives me.
If you want to try it, it's at the second step on this link: https://www.sneakql.com/en-GB/launch/culturekings/womens-air-jordan-1-high-og-court-purple-au/register
adresse = chrome.find_element_by_id('address-autocomplete')
adresse.send_keys(row['Adresse'])  # address from a file
time.sleep(5)
country = chrome.find_element_by_xpath('//li[@id="suggestion_0"]').click()
Inspect element (screenshot not included):
Try clicking on the first option with this:
driver.find_element_by_xpath('//li[@id="suggestion_0"]')
UPD
The element you are trying to click is out of view. You have to do the following:
from selenium.webdriver.common.action_chains import ActionChains
suggestion_0 = driver.find_element_by_xpath('//li[@id="suggestion_0"]')
actions = ActionChains(driver)
actions.move_to_element(suggestion_0).perform()
suggestion_0.click()
You should click this field and wait for the first option to become clickable.
I've written some code to test whether my solution works, and it works in all cases for me:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
url = 'https://www.sneakql.com/en-GB/launch/culturekings/womens-air-jordan-1-high-og-court-purple-au/register'
driver = webdriver.Chrome(executable_path='/snap/bin/chromium.chromedriver')
driver.get(url)
wait = WebDriverWait(driver, 15)
wait.until(EC.element_to_be_clickable((By.XPATH, "//span[contains(text(),'AGREE')]"))).click() # ACCEPT COOKIES
# Making inputs of the first page
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "#firstName"))).send_keys("test")
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "#lastName"))).send_keys("Last name")
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "#preferredName"))).send_keys("Mr. President")
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "#email"))).send_keys("mr.president#gmail.com")
driver.find_element_by_css_selector("#password").send_keys("11111111")
driver.find_element_by_css_selector("#phone").send_keys("222334413")
driver.find_element_by_css_selector("#birthdate").send_keys("2000-06-11")
wait.until(EC.element_to_be_clickable((By.XPATH, "//span[contains(text(),'Next')]"))).click()
# Second page and answer to your main question
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "#address-autocomplete"))).send_keys("street")
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "#suggestion_0"))).click()
Please note that not all of the explicit waits are required, and I used CSS selectors because I am not sure that all of the element IDs are correct.

Selenium driver wait for SVG to be completely rendered

I'm using Selenium with the Chrome driver to scrape pages that contain SVGs.
I need a way to make Selenium wait until the SVG is completely loaded, otherwise I get incomplete charts when I scrape.
For the moment the script waits for 10 seconds before it starts scraping, but that's a lot when scraping 20,000 pages.
def page_loaded(driver):
    path = "//*[local-name() = 'svg']"
    time.sleep(10)
    return driver.find_element_by_xpath(path)

wait = WebDriverWait(self.driver, 10)
wait.until(page_loaded)
Is there any efficient way to check whether the SVG is loaded before starting to scrape?
An example from the Selenium documentation:
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(driver, 10)
element = wait.until(EC.element_to_be_clickable((By.ID, 'someid')))
So in your case it should be:
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(self.driver, 10)
element = wait.until(EC.presence_of_element_located((By.XPATH, path)))
Here, 10 in WebDriverWait(driver, 10) is the maximum number of seconds to wait, i.e. it waits until either 10 seconds have passed or the condition is met, whichever comes first.
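By default the condition is re-checked every 0.5 seconds; if you need a different polling interval you can pass it explicitly. A small illustrative sketch:
# check the condition every 0.2 seconds instead of the 0.5-second default
wait = WebDriverWait(driver, 10, poll_frequency=0.2)
element = wait.until(EC.presence_of_element_located((By.XPATH, path)))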
Some common conditions that are frequently of use when automating web browsers:
title_is
title_contains
presence_of_element_located
visibility_of_element_located
visibility_of
presence_of_all_elements_located
text_to_be_present_in_element
text_to_be_present_in_element_value
etc.
More available here.
Also here's the documentation for expected conditions support.
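If mere presence isn't enough because the chart is drawn incrementally, one possible sketch (this assumes the chart adds child nodes to the <svg> as it renders, which may not match your page) is to wait until the first SVG actually contains children:
wait = WebDriverWait(driver, 10)
# keep polling until an <svg> exists and has at least one child element,
# a stronger signal than mere presence that the chart has started rendering
svg = wait.until(lambda d: d.execute_script(
    "var s = document.querySelector('svg');"
    "return (s && s.childElementCount > 0) ? s : null;"
))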
Another way you can tackle this is to write your own method, like:
def find_svg(driver):
    # path is the SVG XPath from the question; if the element is missing,
    # find_element raises NoSuchElementException, which WebDriverWait
    # ignores by default and simply polls again
    element = driver.find_element_by_xpath(path)
    if element:
        return element
    else:
        return False
And then call WebDriverWait like:
element = WebDriverWait(driver, max_secs).until(find_svg)
