Selenium: select an element which isn't directly in the page source in Python

For instance, I have this website: https://skinport.com/item/stattrak-usp-s-black-lotus-minimal-wear/6128018 and want to get the current price of the item. Selenium doesn't find the element by class name, XPath or CSS selector. I think that's because the page source doesn't contain the price; the site runs a few scripts which render the current price.
So I have something like this in Python:
driver.get("https://skinport.com/item/stattrak-usp-s-black-lotus-field-tested/6196388")
print(price = driver.find_element(By.XPATH, '//*[#id="content"]/div[1]/div[2]/div/div/div[2]/div[1]/div'))
And I get this error: selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element
With
print(driver.find_elements(By.CSS_SELECTOR("#content > div.ItemPage > div.ItemPage-column.ItemPage-column--right > div:nth-child(1) > div > div.ItemPage-price > div.ItemPage-value > div")))
I get this error: TypeError: 'str' object is not callable

You are missing a wait.
You should let the page load before accessing that element.
The preferred way to do that is to use explicit waits with expected conditions.
Also, you are missing the .text property to retrieve the text from the web element.
Also, your locator is wrong.
Something like this should work:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver.get("https://skinport.com/item/stattrak-usp-s-black-lotus-field-tested/6196388")
wait = WebDriverWait(driver, 20)
price = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div.ItemPage-price div.Tooltip-link"))).text
print(price)

My guess is you didn't add an implicit/explicit wait to your driver session. Your XPath seems to work.
Post your code. Maybe we could figure it out together.
Here is the link to the documentation:
https://selenium-python.readthedocs.io/waits.html
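For reference, an implicit wait is configured once on the driver session and applies to every subsequent find_element call. A minimal sketch, reusing the CSS selector from the question (it may need adjusting to the live page):
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.implicitly_wait(10)  # poll the DOM for up to 10 seconds before raising NoSuchElementException
driver.get("https://skinport.com/item/stattrak-usp-s-black-lotus-field-tested/6196388")
price = driver.find_element(By.CSS_SELECTOR, "div.ItemPage-price div.ItemPage-value").text
print(price)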

Related

Selenium: Element Click Intercepted while submitting a form

So I'm trying to submit a form, but something is either preventing me from accessing the box, or I'm using the wrong element, though I think I'm using the correct one.
Here is my code:
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome(executable_path = 'mypath/chromedriver.exe')
driver.maximize_window()
#driver.implicitly_wait(50)
driver.get("https://ai.fmcsa.dot.gov/SMS")
wait = WebDriverWait(driver, 20)
wait.until(EC.element_to_be_clickable((By.XPATH, "//a[@title='Close']"))).click()
wait = WebDriverWait(driver, 20)
driver.find_element_by_xpath('//*[@id="home-body"]/div[1]/div/div[1]/form/label').click()
driver.find_element_by_xpath('//*[@id="home-body"]/div[1]/div/div[1]/form/label').send_keys('1818437')
driver.find_element_by_xpath('/html/body/div[3]/div[2]/article/section[2]/div[1]/div/div[1]/form/input[2]').click()
What I'm getting as output is:
ElementClickInterceptedException: Message: element click intercepted:
Element ... is not clickable at point (553, 728). Other element would receive the click:
(Session info: chrome=93.0.4577.63)
What might be the issue?
Things to note in this scenario:
When you define an explicit wait, wait = WebDriverWait(driver, 20), you can reuse that wait reference within the scope; you do not need to create it again and again in the same class.
Try to avoid absolute XPaths like /html/body/div[3]/div[2]/article/section[2]/div[1]/div/div[1]/form/input[2]; try relative XPaths / XPath axes instead.
When we send keys to an element, it should in general be an input tag, not a label.
You may have to scroll (maybe not in this case), but when you have to scroll manually to interact with an element in the UI, the same has to be automated with Selenium as well.
I also observed that on this web app the search and input tags are duplicated, so I have used XPath indexing [2] to handle that.
Sample code :-
driver = webdriver.Chrome(executable_path = 'mypath/chromedriver.exe')
driver.maximize_window()
#driver.implicitly_wait(50)
driver.get("https://ai.fmcsa.dot.gov/SMS")
wait = WebDriverWait(driver, 20)
wait.until(EC.element_to_be_clickable((By.XPATH, "//a[@title='Close']"))).click()
wait.until(EC.element_to_be_clickable((By.XPATH, "(//input[@name='MCSearch'])[2]"))).send_keys('1818437')
wait.until(EC.element_to_be_clickable((By.XPATH, "(//input[@name='search'])[2]"))).click()
You can use the below XPaths too.
driver.find_element_by_xpath("//div[@class='sms-search-box']//input[1]").send_keys('1818437')
driver.find_element_by_xpath("//div[@class='sms-search-box']//input[2]").click()
The XPath you are using is not right. Your XPath for the input field should be like this:
driver.find_element_by_xpath("//input[@name='MCSearch' and @placeholder='Type Name or U.S. DOT#']").send_keys("1818437")
driver.find_element_by_xpath("//input[@placeholder='Type Name or U.S. DOT#']//following::input[@value='Search']").click()

Having trouble referencing a certain element on a page with Selenium

I am having a terribly hard time referencing a certain "next page" button on a website that I am trying to scrape links from [https://www.sreality.cz/adresar?strana=2]. If you scroll down you can see a red right-arrow button that you can click to go to the next page, so the website loads new dynamic content. Every approach seems to report the same exact error, and I don't know how I am supposed to point to the element without running into it.
This is the code that I currently have :
from selenium import webdriver
chromedriver_path = "/home/user/Dokumenty/iCloud/RealityScraper/chromedriver"
driver = webdriver.Chrome(chromedriver_path)
print("WebDriver Successfully Initialized")
driver.get("https://www.sreality.cz/adresar?strana=2")
links = driver.find_elements_by_css_selector("h2.title a")
nextPage = driver.find_element_by_css_selector("li.paging-item a.btn-paging-pn.icof.icon-arr-right.paging-next")
for link in links:
    print(link.get_attribute("href"))
nextPage.click()
The "nextPage" variable is holding a supposed value to be clicked on once the "links" variable search finishes scraping all the links from the company titles. However when I run this code I get an error :
selenium.common.exceptions.StaleElementReferenceException: Message:
stale element reference: element is not attached to the page document
I have been searching for various fixes online, but none of them seemed to resolve the issue. I think that the issue at this point is not caused by the element not loading quickly enough, but rather by Selenium having trouble finding the element because of a wrong reference.
Because of this I have tried using XPath to point accurately to the actual element, so I changed the "nextPage" variable to:
nextPage = driver.find_element_by_xpath("""/html/body/div[2]/div[1]/div[2]/div[2]/div[4]/div/div/div/div[2]/div/div[2]/ul[1]/li[12]/a""")
Which returns exactly the same error as stated above. I have been trying to find a solution to this for hours now and I can't understand where the issue lies. I would be grateful if anyone could explain to me what I am doing wrong. Thanks to anyone.
Here is how you can get all the ng-href tags from every page, if that is what you want. Alternatively, you could look into their API.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from time import sleep
driver.get("https://www.sreality.cz/adresar?strana=2")
wait = WebDriverWait(driver, 10)
while True:
    try:
        links = wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "h2.title > a")))
        #print(len(links))
        for link in links:
            print(link.get_attribute("ng-href"))
        nextPage = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a.btn-paging-pn.icof.icon-arr-right.paging-next")))
        nextPage.click()
        sleep(10)
    except Exception as e:
        print(e)
        break
First of all, never use an absolute XPath; it will break easily. Use a relative XPath.
Secondly, I think the error you are getting is because after clicking the "Next" button for the first time it loads a new page, which has a different DOM structure, and that's why you are not able to find that element.
You can try searching for the element after every new page load (after clicking the "Next" button every time), as sketched after the sample code below.
# imports
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver import ActionChains
from selenium.webdriver.common.by import By
# initialize
driver = webdriver.Chrome()
wait = WebDriverWait(driver, 20)
action = ActionChains(driver)
# Try to use the below code and see if it works.
Next_btn = wait.until(EC.presence_of_element_located((By.XPATH, '(//li[@class="paging-item"])[2]')))
action.move_to_element(Next_btn).click().perform()
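To illustrate the point about re-locating elements after every page load, here is a rough sketch of the full paging loop, reusing the wait, action and locators from above (the selectors come from the question and this answer and may need adjusting):
while True:
    # re-locate the links on the freshly loaded page
    links = wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "h2.title a")))
    for link in links:
        print(link.get_attribute("href"))
    try:
        # re-locate the Next button each iteration so the reference never goes stale
        next_btn = wait.until(EC.presence_of_element_located((By.XPATH, '(//li[@class="paging-item"])[2]')))
        action.move_to_element(next_btn).click().perform()
    except Exception:
        break  # no further pages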

Change date in Datepicker with Selenium/Python

I am trying to access and change the date on the following website with Python/Selenium:
http://www.b3.com.br/en_us/market-data-and-indices/data-services/market-data/historical-data/derivatives/trading-session-settlements/
When trying to click on the calendar I get the following error:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: /html/body/div[1]/div[1]/div/form/div/div[1]
I guess I need to activate some JS code, but I am having trouble locating the specific JS code. Does anyone have a suggestion for how I can activate the content on the web page?
I have tried using the following code:
driver.get('http://www.b3.com.br/en_us/market-data-and-indices/data-services/market-data/historical-data/derivatives/trading-session-settlements/')
time.sleep(5)
driver.find_element_by_xpath('/html/body/div[1]/div[1]/div/form/div').click()
driver.find_element_by_xpath('/html/body/div[1]/div[1]/div/form/div/div[1]').click()
driver.find_element_by_xpath('//*[@id="dData1"]').click()
driver.find_element_by_xpath('//*[@id="dData1"]').clear()
driver.find_element_by_xpath('//*[@id="dData1"]').send_keys('04/08/2020')
I get that the code already fails at line 2, but I don't understand why, as I copied the XPath the way I always do when using Selenium on a web page.
Thanks in advance for the help!
An iframe is present on your web page; switch control to it before performing send_keys. Refer to the solution below:
driver.maximize_window()
wait = WebDriverWait(driver, 10)
driver.get("http://www.b3.com.br/en_us/market-data-and-indices/data-services/market-data/historical-data/derivatives/trading-session-settlements/")
# driver.find_element_by_tag_name('body').send_keys("Keys.ESCAPE")
iframe=driver.find_element_by_id("bvmf_iframe")
driver.switch_to.frame(iframe)
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "input#dData1.datepicker.hasdatepicker"))).clear()
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "input#dData1.datepicker.hasdatepicker"))).send_keys('02/01/2021')
Note: add the below imports to your solution:
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
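If you need to interact with elements outside the datepicker frame afterwards, remember to switch back to the main document first:
driver.switch_to.default_content()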

unable to click a particular cell in a web table in python selenium

The link is "https://www.psacard.com/smrpriceguide/baseball-card-values/1909-11-t206-white-border/1055/".
My question is: how will I get the index of the particular record whose Description contains "Carolina"? Also, how can I click the hyperlink (Shop) against that row? Please provide a solution with Python and Selenium only.
With Selenium, I usually use XPath searches. I suggest googling XPath, but here's a quick example for your case:
Find a div element whose text is exactly Carolina Brights:
//div[text()="Carolina Brights"]
in selenium:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
browser = webdriver.PhantomJS()
browser.implicitly_wait(10)
my_xpath = '//div[text()="Carolina Brights"]'
my_div = WebDriverWait(browser, 10).until(
    EC.visibility_of_element_located((By.XPATH, my_xpath))
)
my_div.click()
Obviously that particular div has no onClick, so there's no point clicking it, but you get the idea...
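If you also need to click the Shop link in the same row, you can step up from the matched cell with an XPath axis and back down to the link. A sketch, assuming the rows are tr elements and the link text contains "Shop" (adjust to the actual table markup):
shop_xpath = '//div[contains(text(), "Carolina")]/ancestor::tr//a[contains(text(), "Shop")]'
shop_link = WebDriverWait(browser, 10).until(
    EC.element_to_be_clickable((By.XPATH, shop_xpath))
)
shop_link.click()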

Selenium webdriverwait to find element within element

I am trying to click on a specific element that dynamically changes location, so its XPath and CSS selector change as well.
I tried this XPath:
//*[#id="hld"]/div/div[X]/div[1]/h2/select
Note: The X will range from 2 to 10 depending on various factors.
There are no class names or IDs to use either. All I have to work with are the tag names.
My current code is as follows.
h2 = driver.find_element_by_tag_name("h2")
select = h2.find_element_by_tag_name("select")
select.click()
Unfortunately the select tag will load some time after the h2 tag, and I am trying to do a webdriverwait to wait until the element is clickable/visible before running the above code.
Sadly the proper syntax to single out the select element isn't clear to me. Below is the code to find the h2 tag, but I am trying to expand it out to focus in on the select tag.
WebDriverWait(driver, 30).until(
    EC.visibility_of_element_located((By.TAG_NAME, "h2")))
Any help is greatly appreciated.
Try removing the dynamic div locator - the driver will iterate through the elements on the page and only click on it if it exists.
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as ec
from selenium.webdriver.support.ui import WebDriverWait
xpath = './/*[@id="hld"]/div/div/div/h2/select'
timeout = 30
WebDriverWait(driver, timeout).until(ec.presence_of_element_located((By.XPATH, xpath)))
driver.find_element_by_xpath(xpath).click()
Otherwise, if the driver is finding multiple elements matching your XPath, you could try something like:
from selenium.common.exceptions import ElementNotVisibleException
elements = driver.find_elements_by_xpath(xpath)
for element in elements:
    try:
        element.click()
    except ElementNotVisibleException:
        pass
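Alternatively, you can wait directly on the nested element by combining the two tag names in one locator. A minimal sketch, assuming the select is always a child of an h2 as described in the question:
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
select_el = WebDriverWait(driver, 30).until(
    EC.element_to_be_clickable((By.XPATH, "//h2/select")))
select_el.click()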
