Can't get bot to select item off of Supreme - Python

I'm a beginner to Python, btw.
My goal is to build a very simple ACO bot for Supreme, but I'm running into a problem and haven't had much luck finding a solution.
I tried to use driver.find_element(By.XPATH, ""), but that isn't really workable since the website changes constantly and I would always need to change it for the product I want.
I tried using "title", "ID" or "class", but there are multiple items with the same class, title and ID, so that doesn't work. I read that someone doing the same thing as me was using "PARTIAL_LINK_TEXT", so I tried driver.find_element(By.PARTIAL_LINK_TEXT, "Boxer Briefs"), putting in "Boxer Briefs" to see if it would select the "Supreme/Hanes Boxer Briefs" as a test. Unfortunately this also did not work, and it gives me an error saying:
selenium.common.exceptions.NoSuchElementException: Message:
This is all of my code; like I said, I'm a beginner and this is a very basic bot.
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Safari()
url = "https://www.supremenewyork.com/shop/all"
driver.get(url)
driver.maximize_window()
click = driver.find_element(By.XPATH, '//*[@id="nav-categories"]/li[10]/a')
click.click()
element = driver.find_element(By.PARTIAL_LINK_TEXT, "Boxer Briefs")
Any help would be really appreciated.

JavaScript needs some time to replace the elements - when I use time.sleep(1) after click(), it works without problems.
from selenium import webdriver
from selenium.webdriver.common.by import By
import time
#driver = webdriver.Safari()
driver = webdriver.Firefox()
driver.maximize_window()
url = "https://www.supremenewyork.com/shop/all"
driver.get(url)
click = driver.find_element(By.XPATH, '//*[@id="nav-categories"]/li[10]/a')
click.click()
time.sleep(1)
element = driver.find_element(By.PARTIAL_LINK_TEXT, "Boxer Briefs")
print(element.text)
See also Selenium doc: Waits to create more universal code.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
#driver = webdriver.Safari()
driver = webdriver.Firefox()
driver.maximize_window()
url = "https://www.supremenewyork.com/shop/all"
driver.get(url)
click = driver.find_element(By.XPATH, '//*[@id="nav-categories"]/li[10]/a')
click.click()
try:
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.PARTIAL_LINK_TEXT, "Boxer Briefs"))
    )
    print(element.text)
except Exception as ex:
    print('Exception:', ex)
Result:
Supreme®/Hanes® Boxer Briefs (2 Pack)
Tested with Firefox and Chrome on Linux Mint 20 (based on Ubuntu 20.04).
EDIT:
If I use presence_of_all_elements_located instead of presence_of_element_located
try:
    all_elements = WebDriverWait(driver, 10).until(
        EC.presence_of_all_elements_located((By.PARTIAL_LINK_TEXT, "Boxer Briefs"))
    )
    for element in all_elements:
        print(element.text)
except Exception as ex:
    print('Exception:', ex)
the result is
Supreme®/Hanes® Boxer Briefs (2 Pack)
Supreme®/Hanes® Boxer Briefs (4 Pack)
Supreme®/Hanes® Boxer Briefs (4 Pack)
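Once the link is located, clicking it works the same way. A minimal sketch (assuming the same imports and driver as above), waiting until the link is clickable before clicking:

try:
    element = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.PARTIAL_LINK_TEXT, "Boxer Briefs"))
    )
    element.click()
except Exception as ex:
    print('Exception:', ex)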

Related

Selecting button in Selenium python using web driver

Can anyone please let me know how to correctly click on the button using Selenium webdriver?
I have the following html element I want to click:
<button type="button" class="btn button_primary" data-bind="click: $parent.handleSsoLogin.bind($parent)"> Sign In
</button>
I am trying to use WebDriver with Python but it doesn't find the element. Please advise how to address it.
from xml.dom.expatbuilder import InternalSubsetExtractor  # unused import left in by the IDE
from selenium.webdriver.common.by import By
import time
# imports parts of interest
from selenium import webdriver
# controlling the chrome browser
driver = webdriver.Chrome()
link = 'xxxxx'  # URL redacted by the asker
driver.get(link)
# login = driver.find_element(By.LINK_TEXT, "Login")
time.sleep(10)
# login.click()
driver.find_element(By.ID, 'CybotCookiebotDialogBodyLevelButtonLevelOptinAllowAll')
time.sleep(10)
login = driver.find_element(By.CSS_SELECTOR, '<button type="button" class="btn button_primary" data-bind="click: $parent.handleSsoLogin.bind($parent)"> Sign In')  # passing raw HTML as a CSS selector does not work
So far I have tried different locators but it doesn't find it.
Here is a complete example of how you can go to the login page and log in on terex parts (why you edited out the URL, I don't know).
Assuming you have a working Selenium setup, you will also need the following imports:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time as t
[...]
wait = WebDriverWait(driver, 5)
url = 'https://parts.terex.com/'
driver.get(url)
t.sleep(3)
try:
    wait.until(EC.element_to_be_clickable((By.ID, "CybotCookiebotDialogBodyLevelButtonLevelOptinAllowAll"))).click()
    print('accepted cookies')
except Exception as e:
    print('no cookie button!')
t.sleep(4)
login_button = wait.until(EC.element_to_be_clickable((By.XPATH, '//button[@data-bind="click: $parent.handleSsoLogin.bind($parent)"]')))
login_button.click()
print('clicked login button')
t.sleep(5)
wait.until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, '//iframe[@class="truste_popframe"]')))
try:
    wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a[class='call']"))).click()
    print('accepted cookies again')
except Exception as e:
    print('no cookie iframe and button!')
driver.switch_to.default_content()
user_email_field = wait.until(EC.element_to_be_clickable((By.XPATH, '//input[@id="idcs-signin-basic-signin-form-username"]')))
user_email_field.send_keys('parts_dealer_112')
password_field = wait.until(EC.element_to_be_clickable((By.XPATH, '//input[@placeholder="Password"]')))
password_field.send_keys('password112')
login_button = wait.until(EC.element_to_be_clickable((By.XPATH, '//oj-button[@id="idcs-signin-basic-signin-form-submit"]')))
login_button.click()
print('logged in unsuccessfully')
Selenium documentation can be found here: https://www.selenium.dev/documentation/
I have managed to locate the element using the Chrome extension SelectorsHub
(SelectorsHub for Google Chrome). It lets you quickly grab the XPath for an element, which can then be used with the XPath locator - makes life so much easier; highly recommend giving it a try if you are struggling.
login = driver.find_element(By.XPATH, "//button[@data-bind='click: $parent.handleSsoLogin.bind($parent)']").click()
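If the button is slow to render, the same SelectorsHub-generated XPath can be combined with an explicit wait - a sketch, assuming the WebDriverWait and expected_conditions imports shown in the answer above:

login = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.XPATH, "//button[@data-bind='click: $parent.handleSsoLogin.bind($parent)']"))
)
login.click()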

I keep getting the error message NoSuchElementException when trying to use selenium to log into my university's webpage

I'm pretty new to python and StackOverflow so please bear with me.
I'm trying to write a script in Python and use Selenium to log myself into my university's website, but I keep getting the same error: NoSuchElementException.
The full text of the error:
Exception has occurred: NoSuchElementException
Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="username"]"}
(Session info: chrome=86.0.4240.183)
File "C:\Users\User\Desktop\Python\Assignment6\nsuokSelenium.py", line 9, in <module>
browser.find_element_by_id('username').send_keys(bb_username)
I have my login information in a separate script called credential.py that I'm calling with
from credentials import bb_username, bb_password
My Code
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.keys import Keys
from credentials import bb_password, bb_username
browser = webdriver.Chrome()
browser.get('https://bb.nsuok.edu')
browser.find_element_by_id('username').send_keys(bb_username)
browser.find_element_by_id('password').send_keys(bb_password)
browser.find_element_by_name('submit').click()
try:
    WebDriverWait(browser, 1).until(EC.url_matches('https://bb.nsuok.edu/ultra'))
except TimeoutError:
    print('took too long')
WebDriverWait(browser, 10).until(EC.url_matches('https://bb.nsuok.edu/ultra'))
browser.find_element_by_name('Courses').click()
WebDriverWait(browser, 10).until(EC.url_matches('https://bb.nsuok.edu/ultra/course'))
browser.find_element_by_name('Organizations').click()
WebDriverWait(browser, 10).until(EC.url_matches('https://bb.nsuok.edu/ultra/logout'))
The error is showing up here
browser.find_element_by_id('username').send_keys(bb_username)
Could it be an issue with PATH?
What Justin Ezequiel said is correct: you need to add waits to your code so the page can load properly, because page load times vary with connection speed. (Obviously.)
With that in mind, I was able to identify the elements on the page for you. I added some comments in the code as well.
MAIN PROGRAM - For Reference
from selenium import webdriver
from selenium.webdriver.chrome.webdriver import WebDriver as ChromeDriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait as DriverWait
from selenium.webdriver.support import expected_conditions as DriverConditions
from selenium.common.exceptions import WebDriverException
def get_chrome_driver():
    """This sets up our Chrome Driver and returns it as an object"""
    path_to_chrome = r"F:\Selenium_Drivers\Windows_Chrome85_Driver\chromedriver.exe"
    chrome_options = webdriver.ChromeOptions()
    # Browser is displayed in a custom window size
    chrome_options.add_argument("window-size=1500,1000")
    return webdriver.Chrome(executable_path = path_to_chrome,
                            options = chrome_options)

def wait_displayed(driver : ChromeDriver, xpath : str, timeout : int = 5):
    try:
        DriverWait(driver, timeout).until(
            DriverConditions.presence_of_element_located(locator = (By.XPATH, xpath))
        )
    except:
        raise WebDriverException(f'Timeout: Failed to find {xpath}')

def enter_information(driver : ChromeDriver, xpath : str, text : str):
    driver.find_element(By.XPATH, xpath).send_keys(text)
    if(driver.find_element(By.XPATH, xpath).get_attribute('value').__len__() != text.__len__()):
        raise Exception(f'Failed to populate our Textbox.\nXPATH: {xpath}')

# Gets our chrome driver and opens our site
chrome_driver = get_chrome_driver()
chrome_driver.get("https://logon.nsuok.edu/cas/login")

# Waits until our elements are loaded onto the page
wait_displayed(chrome_driver, "//form//input[@id='username']")
wait_displayed(chrome_driver, "//form//input[@id='password']")
wait_displayed(chrome_driver, "//form//input[contains(@class, 'btn-submit')]")

# Inputs our Username and Password
enter_information(chrome_driver, "//form//input[@id='username']", "MyUserNameHere")
enter_information(chrome_driver, "//form//input[@id='password']", "MyPasswordHere")

# Clicks Login
chrome_driver.find_element(By.XPATH, "//form//input[contains(@class, 'btn-submit')]").click()

chrome_driver.quit()
chrome_driver.service.stop()
You may need to wait for the element. Try something like the following:
element = WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.ID, "username"))
)
element.clear()
element.send_keys(bb_username)
element = WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.ID, "password"))
)
element.clear()
element.send_keys(bb_password)
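After both fields are filled in, the submit button can be handled the same way - a sketch, assuming the name='submit' locator from your original code is correct:

element = WebDriverWait(browser, 10).until(
    EC.element_to_be_clickable((By.NAME, "submit"))
)
element.click()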

Not sure how to get elements from dynamically loading webpage using selenium

So I am scraping reviews and skin type from Sephora and have run into a problem identifying how to get elements off of the page.
Sephora.com loads reviews dynamically as you scroll down the page, so I have switched from Beautiful Soup to Selenium to get the reviews.
The reviews have no ID, no name, nor a CSS identifier that seems to be stable. The XPath doesn't seem to be recognized each time I try to use it, whether I copy it from Chrome or from Firefox.
Here is an example of the HTML from the inspected element that I loaded in chrome:
Inspect Element view from the desired page
My Attempts thus far:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Chrome("/Users/myName/Downloads/chromedriver")
url = 'https://www.sephora.com/product/the-porefessional-face-primer-P264900'
driver.get(url)
reviews = driver.find_elements_by_xpath(
    "//div[@id='ratings-reviews']//div[@data-comp='Ellipsis Box ']")
print("REVIEWS:", reviews)
Output:
| => /Users/myName/anaconda3/bin/python "/Users/myName/Documents/ScrapeyFile Group/attempt32.py"
REVIEWS: []
(base)
So basically an empty list.
ATTEMPT 2:
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
# Open up a Firefox browser and navigate to web page.
driver = webdriver.Firefox()
driver.get(
"https://www.sephora.com/product/squalane-antioxidant-cleansing-oil-P416560?skuId=2051902&om_mmc=ppc-GG_1165716902_56760225087_pla-420378096665_2051902_257731959107_9061275_c&country_switch=us&lang=en&ds_rl=1261471&gclid=EAIaIQobChMIisW0iLbK6AIVaR6tBh005wUTEAYYBCABEgJVdvD_BwE&gclsrc=aw.ds"
)
#Scroll to bottom of page b/c its dynamically loading
html = driver.find_element_by_tag_name('html')
html.send_keys(Keys.END)
#scrape stats and comments
comments = driver.find_elements_by_css_selector("div.css-7rv8g1")
print("!!!!!!Comments!!!!!")
print(comments)
OUTPUT:
| => /Users/MYNAME/anaconda3/bin/python /Users/MYNAME/Downloads/attempt33.py
!!!!!!Comments!!!!!
[]
(base)
Empty again. :(
I get the same results when I try to use different element selectors:
#scrape stats and comments
comments = driver.find_elements_by_class_name("css-7rv8g1")
I also get nothing when I tried this:
comments = driver.find_elements_by_xpath(
    "//div[@data-comp='GridCell Box']//div[@data-comp='Ellipsis Box ']")
and this (notice the space after Ellipsis Box is gone):
comments = driver.find_elements_by_xpath(
    "//div[@data-comp='GridCell Box']//div[@data-comp='Ellipsis Box']")
I have tried using the solutions outlined here and here, but to no avail - I think there is something about the page or about Selenium that I am missing, since this is my first time using it; I'm a super nube :(
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
import time

driver = webdriver.Chrome(executable_path=r"")
driver.maximize_window()
wait = WebDriverWait(driver, 20)
driver.get("https://www.sephora.fr/p/black-ink---classic-line-felt-liner---eyeliner-feutre-precis-waterproof-P3622017.html")
scrolls = 1
while True:
    scrolls -= 1
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight)")
    time.sleep(3)
    if scrolls < 0:
        break
reviewText = wait.until(EC.presence_of_all_elements_located((By.XPATH, "//ol[@class='bv-content-list bv-content-list-reviews']//li//div[@class='bv-content-summary-body']//div[1]")))
for textreview in reviewText:
    print(textreview.text)
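Note that the scrolls counter controls how many times the page is scrolled to the bottom before the reviews are collected; if not all reviews are present yet, increasing it (e.g. scrolls = 5) should load more of them before the wait for the review elements starts.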
I've been scraping reviews from Sephora and basically, even if there is plenty of room for improvement, it works like this :
Clicks on "reviews" to access reviews
Loads all reviews by scrolling until there aren't any reviews left to load
Finds review text and skin type by CSS SELECTOR
def load_all_reviews(driver):
    while True:
        try:
            driver.execute_script(
                "arguments[0].scrollIntoView(true);",
                WebDriverWait(driver, 10).until(
                    EC.visibility_of_element_located(
                        (By.CSS_SELECTOR, ".bv-content-btn-pages-load-more")
                    )
                ),
            )
            driver.execute_script(
                "arguments[0].click();",
                WebDriverWait(driver, 20).until(
                    EC.element_to_be_clickable(
                        (By.CSS_SELECTOR, ".bv-content-btn-pages-load-more")
                    )
                ),
            )
        except Exception as e:
            break

def get_review_text(review):
    try:
        return review.find_element(By.CLASS_NAME, "bv-content-summary-body-text").text
    except:
        return "NA"  # in case it doesn't find a review

def get_skin_type(review):
    try:
        return review.find_element(By.XPATH, '//*[@id="BVRRContainer"]/div/div/div/div/ol/li[2]/div[1]/div/div[2]/div[5]/ul/li[4]/span[2]').text
    except:
        return "NA"  # in case it doesn't find a skin type
To use those, you've got to create a webdriver and first call the load_all_reviews() function (a sketch of this setup is shown below).
Then you've got to find the reviews with:
reviews = driver.find_elements(By.CSS_SELECTOR, ".bv-content-review")
and finally, for each review, you can call the get_review_text() and get_skin_type() functions:
for review in reviews:
    print(get_review_text(review))
    print(get_skin_type(review))
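For completeness, the setup step mentioned above might look like this - a sketch, where the driver setup is an assumption and the product URL is taken from the question:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()  # assumes chromedriver is on your PATH
driver.get("https://www.sephora.com/product/the-porefessional-face-primer-P264900")  # URL from the question
load_all_reviews(driver)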

Unable to wrap `driver.execute_script()` within `explicit wait` condition

I've created a Python script together with Selenium to parse a specific piece of content from a webpage. I can get this result, AARONS INC, located under QUOTE, in many different ways, but the way I wish to scrape it is by using a pseudo selector, which unfortunately Selenium doesn't support. The commented-out line within the script below shows the pseudo selector that Selenium doesn't support.
However, when I use the pseudo selector within driver.execute_script(), I can parse it flawlessly. To make this work I had to use a hardcoded delay for the element to become available. Now I wish to do the same by wrapping driver.execute_script() within an Explicit Wait condition.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
wait = WebDriverWait(driver, 20)
driver.get("https://www.nyse.com/quote/XNYS:AAN")
time.sleep(15)
# item = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "span:contains('AARONS')")))
item = driver.execute_script('''return $('span:contains("AARONS")')[0];''')
print(item.text)
How can I wrap driver.execute_script() within Explicit Wait condition?
This is one of the ways you can achieve that. Give it a shot.
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
with webdriver.Chrome() as driver:
    wait = WebDriverWait(driver, 10)
    driver.get('https://www.nyse.com/quote/XNYS:AAN')
    item = wait.until(
        lambda driver: driver.execute_script('''return $('span:contains("AARONS")')[0];''')
    )
    print(item.text)
You could do the whole thing in the browser script, which is probably safer:
item = driver.execute_async_script("""
    var span, interval = setInterval(() => {
        if(span = $('span:contains("AARONS")')[0]){
            clearInterval(interval)
            arguments[0](span)
        }
    }, 1000)
""")
Here is the simple approach.
url = 'https://www.nyse.com/quote/XNYS:AAN'
driver.get(url)
# wait for the element to be present
ele = WebDriverWait(driver, 30).until(lambda driver: driver.execute_script('''return $('span:contains("AARONS")')[0];'''))
# print the text of the element
print(ele.text)

Unable to locate a visible element with python selenium

I would like to click on a calendar entry on this site using Selenium in Python. I can clearly see each calendar entry and I can get its XPath, id, etc., but when I try to locate the element I get an error.
(For example, I can see that the link for the 20th day of April has an id='20160420'.)
browser = webdriver.Firefox(firefox_profile=fp)
browser.get(url)
browser.implicitly_wait(5)
el=browser.find_element_by_id('20160420')
Any suggestions? (I tried switching between frames, active elements, etc. but to no avail so far...)
The problem is your code didn't switch to the iframe yet. See the example code below:
import time

browser = webdriver.Firefox(firefox_profile=fp)
browser.get(url)
browser.implicitly_wait(5)
time.sleep(5)
iframe = browser.find_element_by_css_selector('#crowdTorchTicketingMainContainer > iframe')
browser.switch_to.frame(iframe)
el = browser.find_element_by_id('20160420')
print(el.text)
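Once you are done inside the calendar, you can switch back to the main document with:

browser.switch_to.default_content()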
The issue seems to be that the element that you are looking for is in an iframe, and you have to switch to that before you can access the calendar element:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait as WDW
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException, TimeoutException, StaleElementReferenceException

url = 'https://hornblowernewyork.com/cruises/international-sightseeing-cruise'
browser = webdriver.Firefox()
browser.get(url)

# search for the appropriate iframe
frame = browser.find_element(By.CSS_SELECTOR, "#crowdTorchTicketingMainContainer > iframe")
if not frame:
    raise Exception('Frame not found')

# once the correct iframe is found switch to it,
# then look for your calendar element
browser.switch_to.frame(frame)
try:
    el = WDW(browser, 10).until(
        EC.presence_of_element_located((By.ID, "20160406"))
    )
except (TimeoutException, NoSuchElementException, StaleElementReferenceException) as e:
    print(e)
else:
    print(el)
