Selenium stale element reference in for loop - python

I'm using Windows 10, Chrome version 89.0.4389.9, VS Code, and Python.
The code below loops only once, then fails with the error shown underneath.
table = driver.find_element_by_xpath('//*[@id="frm"]/table')
tbody = table.find_element_by_tag_name("tbody")
rows = tbody.find_elements_by_tag_name("tr")
# btns = driver.find_element_by_xpath('//*[@id="frm"]/table/tbody/tr[*]/td[2]/a')
for index, value in enumerate(rows):
    body = value.find_elements_by_tag_name("td")[1]
    body.click()
    sleep(2)
    driver.back()
    sleep(2)
Traceback (most recent call last):
  File "d:/Study/Companylist/program/pandastest.py", line 80, in <module>
    body=value.find_elements_by_tag_name("td")[1]
  File "D:\Anaconda\lib\site-packages\selenium\webdriver\remote\webelement.py", line 320, in find_elements_by_tag_name
    return self.find_elements(by=By.TAG_NAME, value=name)
  File "D:\Anaconda\lib\site-packages\selenium\webdriver\remote\webelement.py", line 684, in find_elements
    return self._execute(Command.FIND_CHILD_ELEMENTS,
  File "D:\Anaconda\lib\site-packages\selenium\webdriver\remote\webelement.py", line 633, in _execute
    return self._parent.execute(command, params)
  File "D:\Anaconda\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
    self.error_handler.check_response(response)
  File "D:\Anaconda\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document

It seems I've found a solution. I can't promise it's the best one, and I'll dig into it more:
elements = driver.find_elements(By.CSS_SELECTOR, 'div.g')
for n, el in enumerate(elements):
    elements = driver.find_elements(By.CSS_SELECTOR, 'div.g')
    elements[n].click()
    time.sleep(1)
    driver.back()
    time.sleep(1)
driver.quit()
The idea: find the elements once, then inside the loop re-run the same search and pick the current item by its index from the enumerate() function.
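The same pattern can be sketched without a browser; `find_all` and `act` below are hypothetical stand-ins for the selenium lookup and the click-and-scrape step:

```python
def iterate_refinding(find_all, act):
    # Re-query the element list on every iteration and index by position,
    # so a reference captured before a navigation is never reused.
    total = len(find_all())
    results = []
    for i in range(total):
        fresh = find_all()          # fresh lookup, e.g. after driver.back()
        results.append(act(fresh[i]))
    return results
```

Only the integer index survives across iterations; every element reference is obtained from the current page state.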
from time import sleep
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.action_chains import ActionChains

def company_info(driver):
    com_name = driver.find_element_by_xpath('Enter your site xpath')
    print(com_name.text)
    com_addr = driver.find_element_by_xpath('Enter your site xpath')
    print(com_addr.text)
    com_tel = driver.find_element_by_xpath('Enter your site xpath')
    print(com_tel.text)
    com_fax = driver.find_element_by_xpath('Enter your site xpath')
    print(com_fax.text)

driver = webdriver.Chrome()
url_search = 'Enter your site URL'
# input values
web_open_wait = 5
web_close_wait = 3

driver.get(url_search)
sleep(web_open_wait)
# collect the row links once to establish the count (tr[*] matches every row)
check_names = driver.find_elements_by_xpath('//*[@id="frm"]/table/tbody/tr[*]/td/a')
for n, _ in enumerate(check_names, start=1):
    # re-find the link for the current row so the reference is never stale
    check_names = driver.find_elements_by_xpath('//*[@id="frm"]/table/tbody/tr[%d]/td/a' % n)
    check_names[0].click()
    company_info(driver)
    driver.back()
driver.quit()

I don't use selenium, but I believe the problem is that the element is no longer in the DOM. To get around this, you could use a "try" block.
from selenium.common.exceptions import StaleElementReferenceException

try:
    body = value.find_elements_by_tag_name("td")[1]
    # some code
except StaleElementReferenceException:
    pass  # element went stale; skip it or look it up again
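A slightly more robust variant retries the action instead of swallowing the failure. A driver-free sketch, where `StaleError` is a stand-in for selenium's `StaleElementReferenceException` and `action` is whatever lookup-and-click you want to protect:

```python
class StaleError(Exception):
    """Stand-in for selenium.common.exceptions.StaleElementReferenceException."""

def retry_on_stale(action, retries=3):
    # Re-run the action when the element detaches mid-operation;
    # re-raise if it keeps failing after the last attempt.
    for attempt in range(retries):
        try:
            return action()
        except StaleError:
            if attempt == retries - 1:
                raise
```

With selenium, `action` should re-find the element inside itself, so each retry gets a fresh reference.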

We will find the answer. Like we always do
I was thinking about the code I posted on top, and I came up with this method while working through @Vovo's hint that the rows can't be reused across iterations.
It doesn't look pretty, but it works well.
(Using two drivers is difficult because on the site I open I can't click the control.)
Anyway, I'm sharing it; I hope it helps if someone sees this.
What I was trying to make: enter a site containing company information and, from the publicly available data, extract the company name, address, contact information, and fax number.
The order of operations is:
site access > set the xpath of the desired data > click > extract > close the site,
repeating this cycle as many times as I want.
For reference, the site I open needs something like Ctrl+Click: every click forces the element to reload, and going back or forward keeps breaking things.
So it's easier to open the site, extract, close it, and repeat.
If there is any way to improve it, please tell me.
from time import sleep
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.action_chains import ActionChains

def company_info():
    com_name = driver.find_element_by_xpath('Enter your site xpath')
    print(com_name.text)
    com_addr = driver.find_element_by_xpath('Enter your site xpath')
    print(com_addr.text)
    com_tel = driver.find_element_by_xpath('Enter your site xpath')
    print(com_tel.text)
    com_fax = driver.find_element_by_xpath('Enter your site xpath')
    print(com_fax.text)

num_add = 0
while True:
    driver = webdriver.Chrome()
    url_search = 'Enter your site URL'
    # input values
    web_open_wait = 5
    web_close_wait = 3
    # Url Open
    driver.get(url_search)
    sleep(web_open_wait)
    # Collect info
    num_add += 1
    check_name = driver.find_element_by_xpath('//*[@id="frm"]/table/tbody/tr[{0}]/td[2]/a'.format(num_add))
    check_name.click()
    sleep(web_open_wait)
    company_info()
    # Url Close
    driver.quit()

Related

How do I get the href content of a video with selenium inside an iframe?

I have a problem: I want to get the content of an href that holds a video. I manage to enter the page and click on the video; the href appears on the page only once I click on the video. I can get as far as the click, but then I can't reach the href.
This is the code of the page:
(HTML screenshot omitted; the relevant href is inside an iframe)
The code is longer and the href is inside an iframe, but honestly I don't know how to copy the text from the HTML, so I posted an image. I want to get the content of the href as the image shows, but I couldn't get there. That content appears when I click on the video. Below is my code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
import time, re, requests

algo = "Player YouTube"
driver = webdriver.Chrome()  # driver initialization was missing from the snippet
driver.get("https://www.supertelevisionhd.net/dragon-ball-z-en-vivo/")
time.sleep(1)
driver.switch_to.frame(driver.find_element_by_xpath("//*[@id='post-100']/div/center/iframe"))
time.sleep(1)
click1 = driver.find_element_by_xpath('//*[@id="{}"]'.format(algo))
time.sleep(5)
click1.click()
WebDriverWait(driver, 10).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, '/html/body/iframe')))
time.sleep(5)
enlace = WebDriverWait(driver, 10).until(EC.element_to_be_selected((By.CLASS_NAME, 'html5-vpl_title_l vpl-antialiased')))
print(enlace)
This is the error I get:
Traceback (most recent call last):
  File "C:\Users\Nico\Desktop\pruebas driver.py", line 36, in <module>
    enlace = WebDriverWait(driver, 10).until(EC.element_to_be_selected((By.CLASS_NAME,'html5-vpl_title_l vpl-antialiased')))
  File "C:\Python39\lib\site-packages\selenium\webdriver\support\wait.py", line 71, in until
    value = method(self._driver)
  File "C:\Python39\lib\site-packages\selenium\webdriver\support\expected_conditions.py", line 329, in __call__
    return self.element.is_selected()
AttributeError: 'tuple' object has no attribute 'is_selected'
I would appreciate any help. To repeat: what I want to obtain is the content of the href, that is, the http://ok.ru address it points to.
Thanks
The error occurs because EC.element_to_be_selected expects a WebElement, not a locator tuple; the wait ends up calling is_selected() on the tuple itself, which is exactly what the traceback shows.
Use an expected condition that takes a locator instead. Note also that By.CLASS_NAME cannot match a compound class name ('html5-vpl_title_l vpl-antialiased' contains a space), so switch to a CSS selector:
enlace = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, '.html5-vpl_title_l.vpl-antialiased')))

Bookmakers scraping with selenium

I'm trying to understand how to scrape this betting website, https://www.betaland.it/
I want to scrape all the table rows that contain the 1X2 odds of the Italian "Serie A".
The code I have written is this:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.expected_conditions import presence_of_element_located
import time
import sys

url = 'https://www.betaland.it/sport/calcio/italia/serie-a-OIA-scommesse-sportive-online'
# absolute path
chrome_driver_path = '/Users/39340/PycharmProjects/pythonProject/chromedriver'
chrome_options = Options()
chrome_options.add_argument('--headless')
webdriver = webdriver.Chrome(
    executable_path=chrome_driver_path, options=chrome_options
)

with webdriver as driver:
    # timeout
    wait = WebDriverWait(driver, 10)
    # retrieve the data
    driver.get(url)
    # wait
    wait.until(presence_of_element_located((By.ID, 'prematch-container-events-1-33')))
    # results
    results = driver.find_elements_by_class_name('simple-row')
    print(results)
    for quote in results:
        quoteArr = quote.text
        print(quoteArr)
        print()
    driver.close()
And the error that I have is:
Traceback (most recent call last):
  File "C:\Users\39340\PycharmProjects\pythonProject\main.py", line 41, in <module>
    wait.until(presence_of_element_located((By.ID, 'prematch-container-events-1-33')))
  File "C:\Users\39340\PycharmProjects\pythonProject\venv\lib\site-packages\selenium\webdriver\support\wait.py", line 80, in until
    raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
P.S.: if you try to access the bookmaker you have to use an Italian IP address; Italian bookmakers are available only from Italy.
It's basically a timeout error, which means the time given to load the page or find the element (as in this case) was insufficient. So first, try increasing the wait time from 10 to 15, or even 30.
Second, you can use other element identifiers such as an XPath or CSS selector instead of the id, and adjust the wait time as described in the first point.
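For intuition about what the timeout means: WebDriverWait essentially polls the condition until the clock runs out, which is why a larger timeout often makes the error go away on slow pages. A driver-free sketch of that loop (`wait_until` is a hypothetical helper, not selenium API):

```python
import time

def wait_until(condition, timeout=30.0, poll=0.5):
    # Poll the condition until it returns something truthy,
    # or raise once the deadline has passed.
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1f s" % timeout)
        time.sleep(poll)
```

If the element reliably appears at, say, 12 seconds, a 10-second timeout fails every time while a 30-second one succeeds with no extra cost on fast loads.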

Unable to upload a pdf file using send_keys or requests

I've written a script in Python using selenium to log in to a website and then navigate to the target page in order to upload a pdf file. The script can log in successfully but throws an element not interactable error when it comes to uploading the pdf file. This is the landing_page on which the script first clicks the button next to Your Profile and uses SIM.iqbal_123 and SShift_123 respectively to log in to the site, then uses this target_link to upload the file. To upload the file it is necessary to click on the select button first and then the cv button. However, the script throws the following error when it is supposed to click the cv button to upload the pdf file.
I've tried with:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
landing_page = 'https://jobs.allianz.com/sap/bc/bsp/sap/zhcmx_erc_ui_ex/desktop.html#/SEARCH/RESULTS/'
target_link = 'https://jobs.allianz.com/sap/bc/bsp/sap/zhcmx_erc_ui_ex/desktop.html#/APPLICATION/57274787/2/'
driver = webdriver.Chrome()
wait = WebDriverWait(driver,30)
driver.get(landing_page)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR,".profileContainer > button.trigger"))).click()
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"input[name='alias']"))).send_keys("SIM.iqbal_123")
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"input[name='password']"))).send_keys("SShift_123")
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"button.loginBtn"))).click()
driver.get(target_link)
button = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"button[class*='uploadBtn']")))
driver.execute_script("arguments[0].click();",button)
elem = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"form[class='fileForm'] > label[data-type='12']")))
elem.send_keys("C://Users/WCS/Desktop/CV.pdf")
Error that the script encounters pointing at the last line:
Traceback (most recent call last):
  File "C:\Users\WCS\AppData\Local\Programs\Python\Python37-32\keep_it.py", line 22, in <module>
    elem.send_keys("C://Users/WCS/Desktop/CV.pdf")
  File "C:\Users\WCS\AppData\Local\Programs\Python\Python37-32\lib\site-packages\selenium\webdriver\remote\webelement.py", line 479, in send_keys
    'value': keys_to_typing(value)})
  File "C:\Users\WCS\AppData\Local\Programs\Python\Python37-32\lib\site-packages\selenium\webdriver\remote\webelement.py", line 633, in _execute
    return self._parent.execute(command, params)
  File "C:\Users\WCS\AppData\Local\Programs\Python\Python37-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
    self.error_handler.check_response(response)
  File "C:\Users\WCS\AppData\Local\Programs\Python\Python37-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable
  (Session info: chrome=80.0.3987.149)
This is how I tried using requests, which could not upload the file either:
import requests
from bs4 import BeautifulSoup

application_link = 'https://jobs.allianz.com/sap/opu/odata/hcmx/erc_ui_auth_srv/AttachmentSet?sap-client=100&sap-language=en'

with requests.Session() as s:
    s.auth = ("SIM.iqbal_123", "SShift_123")
    s.post("https://jobs.allianz.com/sap/hcmx/validate_ea?sap-client=100&sap-language={2}")
    r = s.get("https://jobs.allianz.com/sap/opu/odata/hcmx/erc_ui_auth_srv/UserSet('me')?sap-client=100&sap-language=en", headers={'x-csrf-token': 'Fetch'})
    token = r.headers.get("x-csrf-token")
    s.headers["x-csrf-token"] = token
    file = open("CV.pdf", "rb")
    r = s.post(application_link, files={"Slug": f"Filename={file}&Title=CV%5FTEST&AttachmentTypeID=12"})
    print(r.status_code)
Btw, this is the pdf file in case you want to test.
How can I upload a pdf file using send_keys or requests?
EDIT:
I've made some changes to my existing script, which now works for this link (visible there as Cover Letter) but fails miserably when it goes for this link (visible as Documents). The two are almost identical.
Please refer to the solution below to avoid your exception:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
import os

landing_page = 'https://jobs.allianz.com/sap/bc/bsp/sap/zhcmx_erc_ui_ex/desktop.html#/SEARCH/RESULTS/'
target_link = 'https://jobs.allianz.com/sap/bc/bsp/sap/zhcmx_erc_ui_ex/desktop.html#/APPLICATION/57262231/2/'
driver = webdriver.Chrome(executable_path=r"C:\New folder\chromedriver.exe")
wait = WebDriverWait(driver, 30)
driver.get(landing_page)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".profileContainer > button.trigger"))).click()
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "input[name='alias']"))).send_keys("SIM.iqbal_123")
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "input[name='password']"))).send_keys("SShift_123")
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "button.loginBtn"))).click()
driver.get(target_link)
driver.maximize_window()
button = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "button[class*='uploadBtn']")))
driver.execute_script("arguments[0].click();", button)
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
element = wait.until(EC.element_to_be_clickable((By.XPATH, "//label[@class='button uploadType-12-btn']")))
print(element.text)
ActionChains(driver).move_to_element(element).click(element).perform()
ActionChains(driver).move_to_element(element).click(element).perform()
absolute_file_path = os.path.abspath("Path of your pdf file")
print(absolute_file_path)
file_input = driver.find_element_by_id("DOCUMENTS--fileElem")
file_input.send_keys(absolute_file_path)
Try this script; it uploads the document on both pages:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
landing_page = 'https://jobs.allianz.com/sap/bc/bsp/sap/zhcmx_erc_ui_ex/desktop.html#/SEARCH/RESULTS/'
first_target_link = 'https://jobs.allianz.com/sap/bc/bsp/sap/zhcmx_erc_ui_ex/desktop.html#/APPLICATION/57274787/1/'
second_target_link = 'https://jobs.allianz.com/sap/bc/bsp/sap/zhcmx_erc_ui_ex/desktop.html#/APPLICATION/57274787/2/'
driver = webdriver.Chrome()
wait = WebDriverWait(driver,30)
driver.get(landing_page)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR,".profileContainer > button.trigger"))).click()
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"input[name='alias']"))).send_keys("SIM.iqbal_123")
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"input[name='password']"))).send_keys("SShift_123")
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"button.loginBtn"))).click()
#----------------------------first upload starts from here-----------------------------------
driver.get(first_target_link)
button = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"button[class*='uploadBtn']")))
driver.execute_script("arguments[0].click();",button)
element = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,"form[class='fileForm'] > label[class$='uploadTypeCoverLetterBtn']")))
driver.execute_script("arguments[0].click();",element)
file_input = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"input[id='COVERLETTER--fileElem']")))
file_input.send_keys("C://Users/WCS/Desktop/script selenium/CV.pdf")
wait.until(EC.invisibility_of_element_located((By.CSS_SELECTOR,".loadingSpinner")))
save_draft = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,".applicationStepsUIWrapper > button.saveDraftBtn")))
driver.execute_script("arguments[0].click();",save_draft)
close = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,".promptWrapper button.closeBtn")))
driver.execute_script("arguments[0].click();",close)
#-------------------------second upload starts from here-------------------------------------
driver.get(second_target_link)
button = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"button[class*='uploadBtn']")))
driver.execute_script("arguments[0].click();",button)
element = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,"form[class='fileForm'] > label[data-type='12']")))
driver.execute_script("arguments[0].click();",element)
file_input = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"input[id='DOCUMENTS--fileElem']")))
file_input.send_keys("C://Users/WCS/Desktop/script selenium/CV.pdf")
wait.until(EC.invisibility_of_element_located((By.CSS_SELECTOR,".loadingSpinner")))
save_draft = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,".applicationStepsUIWrapper > button.saveDraftBtn")))
driver.execute_script("arguments[0].click();",save_draft)
close = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,".promptWrapper button.closeBtn")))
driver.execute_script("arguments[0].click();",close)

Can't collect information at the same time from two different depth using selenium

I've written a script in Python using selenium to get the name and reputation via the get_names() function from the landing page, and then click on the links of different posts to reach the inner pages in order to parse the title via the get_additional_info() function there.
All of the information I'm trying to parse is available on the landing page as well as the inner pages. Moreover, it isn't dynamic, so selenium is definitely overkill. However, my intention is to use selenium to scrape information from two different depths at the same time.
In the script below, if I comment out the name and rep lines, I can see that the script performs the clicks on the landing-page links and parses the titles from the inner pages flawlessly.
However, when I run the script as it is, I get a selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document error which points at the name = item.find_element_by_css_selector() line.
How can I get rid of this error and make it run flawlessly while keeping the logic I've already implemented?
What I've tried so far:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

lead_url = 'https://stackoverflow.com/questions/tagged/web-scraping'

def get_names():
    driver.get(lead_url)
    for count, item in enumerate(wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, ".summary")))):
        usableList = wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, ".summary .question-hyperlink")))
        name = item.find_element_by_css_selector(".user-details > a").text
        rep = item.find_element_by_css_selector("span.reputation-score").text
        driver.execute_script("arguments[0].click();", usableList[count])
        wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "h1 > a.question-hyperlink")))
        title = get_additional_info()
        print(name, rep, title)
        driver.back()
        wait.until(EC.staleness_of(usableList[count]))

def get_additional_info():
    title = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "h1 > a.question-hyperlink"))).text
    return title

if __name__ == '__main__':
    driver = webdriver.Chrome()
    wait = WebDriverWait(driver, 5)
    get_names()
Keeping broadly with your design pattern: don't work off item. Use count to index into a list of elements pulled from the current page source, e.g.
driver.find_elements_by_css_selector(".user-details > a")[count].text
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

lead_url = 'https://stackoverflow.com/questions/tagged/web-scraping'

def get_names():
    driver.get(lead_url)
    for count, item in enumerate(wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, ".summary")))):
        usableList = wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, ".summary .question-hyperlink")))
        name = driver.find_elements_by_css_selector(".user-details > a")[count].text
        rep = driver.find_elements_by_css_selector("span.reputation-score")[count].text
        driver.execute_script("arguments[0].click();", usableList[count])
        wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "h1 > a.question-hyperlink")))
        title = get_additional_info()
        print(name, rep, title)
        driver.back()
        wait.until(EC.staleness_of(usableList[count]))

def get_additional_info():
    title = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "h1 > a.question-hyperlink"))).text
    return title

if __name__ == '__main__':
    driver = webdriver.Chrome()
    wait = WebDriverWait(driver, 5)
    get_names()

Python Selenium: ElementNotInteractableException error on click()

I run the script and it gives me one of two errors:
selenium.common.exceptions.ElementNotInteractableException: Message: Element <a class="grid_size in-stock" href="javascript:void(0);"> could not be scrolled into view
or
Element not found error
Right now this is the error it's giving; sometimes it works, sometimes it doesn't. I have been trying to change the timing around to get it working right, but to no avail.
Code:
import requests
from selenium.webdriver.support import ui
from selenium import webdriver
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By

def get_page(model, sku):
    url = "https://www.footlocker.com/product/model:" + str(model) + "/sku:" + str(sku) + "/"
    return url

browser = webdriver.Firefox()
page = browser.get(get_page(277097, "8448001"))
browser.find_element_by_xpath("//*[@id='pdp_size_select_mask']").click()
link = ui.WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "#size_selection_list")))
browser.find_element_by_css_selector('#size_selection_list').click()
browser.find_element_by_css_selector('#size_selection_list > a:nth-child(8)').click()
browser.find_element_by_xpath("//*[@id='pdp_addtocart_button']").click()
checkout = browser.get('https://www.footlocker.com/shoppingcart/default.cfm?sku=')
checkoutbutton = browser.find_element_by_css_selector('#cart_checkout_button').click()
The website automatically opens the size_selection_list div, so you don't need to click on it. But you do need to wait for the particular list element that you want to select. This code worked for me on this site a couple of times consistently.
from selenium.webdriver.support import ui
from selenium import webdriver
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By

def get_page(model, sku):
    url = "https://www.footlocker.com/product/model:" + str(model) + "/sku:" + str(sku) + "/"
    return url

browser = webdriver.Firefox()
page = browser.get(get_page(277097, "8448001"))
browser.find_element_by_xpath("//*[@id='pdp_size_select_mask']").click()
shoesize = ui.WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'a.grid_size:nth-child(8)')))
shoesize.click()
browser.find_element_by_xpath("//*[@id='pdp_addtocart_button']").click()
checkout = browser.get('https://www.footlocker.com/shoppingcart/default.cfm?sku=')
checkoutbutton = browser.find_element_by_css_selector('#cart_checkout_button').click()
I don't have enough reputation to comment, so: can you provide more of the stack trace for the "element not found" error? I stepped through the live Foot Locker site and didn't find the cart_checkout_button, though I may have made a mistake.
I'm also not sure (in your specific example) that you need to be clicking the size_selection_list the first time, before clicking the child, but I'm not as concerned about that. The syntax overall looks OK to me.
edit with the provided stack trace:
Traceback (most recent call last):
  File "./footlocker_price.py", line 29, in <module>
    browser.find_element_by_css_selector('#size_selection_list > a:nth-child(8)').click()
  File "/Library/Python/2.7/site-packages/selenium/webdriver/remote/webelement.py", line 80, in click
    self._execute(Command.CLICK_ELEMENT)
  File "/Library/Python/2.7/site-packages/selenium/webdriver/remote/webelement.py", line 501, in _execute
    return self._parent.execute(command, params)
  File "/Library/Python/2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 308, in execute
    self.error_handler.check_response(response)
  File "/Library/Python/2.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 194, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementNotInteractableException: Message: Element could not be scrolled into view
What this means is that the click on #size_selection_list > a:nth-child(8) is failing because the element can't be interacted with directly: some other element is in the way (reference).
Due to the way the particular page you're interacting with works (which is here for others reading this), I believe the size selection list is simply hidden when the page loads and is displayed after you click on the Size button.
It seems like the ui.WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "#size_selection_list"))) line should do what you want, though, so I'm at a bit of a loss. If you add a time.sleep(5) before you click on the size element, does it work? It's not ideal, but maybe it will get you moving on to other parts of this.
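When a plain .click() fails intermittently like this, one defensive pattern is to fall back to a JavaScript click, which ignores visibility and overlap. A sketch of that fallback (the stub-free version would take a real driver and element; this is only the shape of the logic):

```python
def safe_click(driver, element):
    # Try the normal click first; if the element can't be clicked
    # (e.g. it can't be scrolled into view), dispatch the click via
    # JavaScript instead, which bypasses the interactability checks.
    try:
        element.click()
        return "native"
    except Exception:
        driver.execute_script("arguments[0].click();", element)
        return "js"
```

Note the trade-off: a JS click fires even on elements a real user couldn't reach, so it can mask genuine layout problems; prefer an explicit wait for clickability first.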
