Python Selenium find element by XPath doesn't work

I'm trying to find elements that refresh (the match-time minutes) on the webpage. Earlier my code only worked for simple static text. Now I open DevTools with Ctrl+Shift+I, point at my element and choose "Copy XPath".
I also have the Chrome extension "XPath Helper" and tried it as well; it produces a much longer XPath than the one in my code below, and that doesn't work either.
Error: NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id....
I also tried finding the element by class, by tag and by CSS selector. Only the tag approach worked, and not reliably, and only on a different page.
Printing it is even less predictable: sometimes find_element(By.XPATH,'//*[...).text works, sometimes it doesn't.
I don't understand why it works on one page and not on another. I want to use find elements by XPath on Flash content later.
UPDATE: I reran the code and now it works! But it still doesn't work on the next webpage. Why is it so changeable? Does the XPath change when the page reloads, or what? What is the simplest way to get the (refreshing) text info from Flash content opened in the Chrome browser?
import selenium
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
options = webdriver.ChromeOptions()
options.add_argument('headless')
driver = webdriver.Chrome(r"C:\Users\vishniakov\Desktop\python bj\driver\chromedriver.exe",chrome_options=options)
driver.get("https://www.betfair.com/sport/football/event?eventId=28935432")
print(driver.title)
elem = driver.find_element(By.XPATH, '//*[@id="yui_3_5_0_1_1538670363144_2571"]').text
print(elem)

This will work, assuming you want the data of the whole page rather than of any specific element:
import selenium
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
options = webdriver.ChromeOptions()
options.add_argument('headless')
driver = webdriver.Chrome(chrome_options=options)
driver.get("https://www.betfair.com/sport/football/event?eventId=28935730")
print(driver.title)
elem = driver.find_element(By.CSS_SELECTOR, '.scroller.context-event').text
print(elem)

Assuming you do want specific data, you can use the contains() XPath function. You can read about it here.
For your case:
import selenium
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
options = webdriver.ChromeOptions()
options.add_argument('headless')
driver = webdriver.Chrome(r"C:\Users\vishniakov\Desktop\python bj\driver\chromedriver.exe",chrome_options=options)
driver.get("https://www.betfair.com/sport/football/event?eventId=28935432")
print(driver.title)
elements = driver.find_elements(By.XPATH, '//*[contains(@id, "yui_3_5_0_1_")]')
print([i.text for i in elements])
You can play around with contains() if my example didn't work: find out which part of the id changes and exclude that part from the locator.
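Since the page renders its scores dynamically, it can also help to wrap the lookup in an explicit wait so the elements have time to appear before you read them. A minimal sketch, reusing the driver and the contains() locator from the snippet above:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# wait up to 10 seconds for at least one matching element to be present
wait = WebDriverWait(driver, 10)
elements = wait.until(EC.presence_of_all_elements_located((By.XPATH, '//*[contains(@id, "yui_3_5_0_1_")]')))
print([i.text for i in elements])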
Hope this helps you!

Related

Handling "Accept all cookie" popup with selenium when selector is unknown

I have a Python script; it looks like this.
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.select import Select
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from os import path
import time
# Tried this code
chrome_options = webdriver.ChromeOptions()
prefs = {"profile.default_content_setting_values.notifications" : 2}
chrome_options.add_experimental_option("prefs",prefs)
browser = webdriver.Chrome(ChromeDriverManager().install(), chrome_options=chrome_options)
links = ["https://www.henleyglobal.com/", "https://markets.ft.com/data"]
for link in links:
    browser.get(link)
    # WebDriverWait(browser, 20).until(EC.url_changes(link))
    # How do I disable/ignore/remove/escape this "Accept all cookies" popup and then access the website to scrape data?
browser.quit()
Each website in the links list displays an "Accept all cookies" popup after navigating to the site.
I have tried many things and nothing works; see the attempt after the imports.
How do I exit/pass/escape this popup and then access the website to scrape data?
If you open the page in a fresh browser you'll notice that the page fully loads and then, a moment later, the popup appears. Selenium's default wait strategy only waits for the page to be loaded.
One way to handle this is to inspect the page and find the XPath of the popup's accept button. The code below should work for that.
browser.implicitly_wait(30)
if link == 'https://www.henleyglobal.com/':
    browser.find_element(By.XPATH, "/html/body/div[7]/div/div/div/div[2]/div/div[2]/button[2]").click()
else:
    browser.find_element(By.XPATH, "/html/body/div[4]/div/div/div[2]/div[2]/a").click()
The implicit wait gives the popup time to appear before the element is looked up and clicked.
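If you prefer a wait that actually blocks until the button is clickable, the same idea can be written with WebDriverWait (a sketch, reusing the browser instance and the XPaths from the snippet above):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(browser, 30)
if link == 'https://www.henleyglobal.com/':
    # wait for the first site's cookie button and click it
    wait.until(EC.element_to_be_clickable((By.XPATH, "/html/body/div[7]/div/div/div/div[2]/div/div[2]/button[2]"))).click()
else:
    # wait for the second site's cookie link and click it
    wait.until(EC.element_to_be_clickable((By.XPATH, "/html/body/div[4]/div/div/div[2]/div[2]/a"))).click()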
For unknown sites you could try:
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--disable-notifications")
webdriver.Chrome(os.path.join(path, 'chromedriver'), chrome_options=chrome_options)
Generally, you cannot use a universal locator that will match the "Accept cookies" button on each and every website in the world.
Even here you have two different sites, and the elements you need to click are totally different on each.
For the https://www.henleyglobal.com/ site the correct locator may be something like the CSS selector .confirmation button.primary-btn, while for the https://markets.ft.com/data site I'd advise using the CSS selector .o-cookie-message__actions a.o-cookie-message__button.
These two elements are totally different: the first one is a button while the second is an a, and they have completely different class names and other attributes.
You might think about the Accept text. It seems to be common, so you could use the XPath //*[contains(text(),'Accept')], but even this will not work reliably, since on the first page it matches two elements and the accept-cookies element is the second of them.
So there is no general locator; you will have to define separate locators for each page.
Again, for https://www.henleyglobal.com/ I would prefer
driver.find_element(By.CSS_SELECTOR, ".confirmation button.primary-btn").click()
While for the second page https://markets.ft.com/data I would prefer this
driver.find_element(By.CSS_SELECTOR, ".o-cookie-message__actions a.o-cookie-message__button").click()
Also, it is generally better to use WebDriverWait with expected_conditions explicit waits, so the code would be as follows:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(driver, 10)
# for the first page
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, ".confirmation button.primary-btn"))).click()
# for the second page
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, ".o-cookie-message__actions a.o-cookie-message__button"))).click()

Selenium can't find element on JavaScript page

I am still quite new to Python and Selenium, but I have managed to get quite far with what I am doing. Now I appear to be stuck. The page in question is an internal business page. I have tried using ID, name and XPath with very little success.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver import ActionChains
import time
PATH = r"C:\Users\p819364\Downloads\chromedriver.exe"
options = webdriver.ChromeOptions()
options.add_argument('--ignore-ssl-errors=yes')
options.add_argument('--ignore-certificate-errors')
driver = webdriver.Chrome(PATH, options=options)
driver.get("https://10.47.31.102/3/Login")
driver.implicitly_wait(15)
print (driver.title)
username = driver.find_element(By.NAME, value='j_username')
username.send_keys("username")
password = driver.find_element(By.NAME, value='j_password')
password.send_keys("password")
password.send_keys(Keys.RETURN)
driver.implicitly_wait(20)
Settings = driver.find_element(By.XPATH, value="//*[@id='evo_widget_TBFisheyeItem_4']")
Settings.click()
driver.implicitly_wait(20)
Filter = driver.find_element(By.XPATH, value="//*[@id='tableForm:authenticationPolicyTable:tableActions']")
driver.implicitly_wait(20)
Filter.click()
The problem I am having is with the filter. I think it may be because this is a page within a page. I am sorry I cannot share the page as it's internal, but I need to be able to click the filter and then click its options.
I keep getting the following error:
Message: no such element; Unable to locate element: {"method":"css selector","selector":["//*[@id='tableForm:authenticationPolicyTable:tableActions']"
The XPATH is as follows:
//*[#id="tableForm:authenticationPolicyTable:tableActions"]
This is the full XPATH (which I have tried):
/html/body/form[3]/table/thead/tr[1]/th/table/tbody/tr/td[2]/select
I appreciate any help given. I am also not sure whether the cause is that the field I want to select is not in view until I scroll; however, I cannot seem to scroll, as the element is nested within another element.
Thanks
Edit:
The iframe was blocking it; I had to switch to it first:
iframe = driver.find_element_by_xpath("//*[@id='consoleCanvas']")
driver.switch_to.frame(iframe)
I think if the page is within a page, then you need to check whether that element is inside an iframe. If there is an iframe, you should first switch to the frame and then click on the filter. If you can post the HTML code, it will be easier to understand the issue.
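As a rough pattern, the switch-then-click flow looks like this (a sketch, assuming the iframe locator from the edit above and the driver and By imports from the question):
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(driver, 20)
# wait for the iframe and switch into it
wait.until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, "//*[@id='consoleCanvas']")))
# the filter element is now reachable, so wait for it and click it
wait.until(EC.element_to_be_clickable((By.XPATH, "//*[@id='tableForm:authenticationPolicyTable:tableActions']"))).click()
# switch back to the top-level document when done
driver.switch_to.default_content()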

Why is this XPath not working with Selenium?

I'm trying to input an email address to test logging in, but I'm continuing to receive this error: no such element: Unable to locate element:
I have tried using both the relative and the absolute XPath and get the same error message.
Forgive me, as I'm sure I'm missing something simple; I'm very new to this!
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
import time
driver = webdriver.Chrome()
url = 'https://soundcloud.com/signin'
driver.get(url)
time.sleep(2)
driver.find_element_by_xpath('//*[@id="sign_in_up_email"]').send_keys('test@test.com')
The reason it's throwing the error is that the element sign_in_up_email is inside an iframe.
Check the link here for details about how to switch to an iframe.
You will first need to switch to the iframe and then enter the value in the input.
Note: when you first open the page you might see the accept-cookies popup from SoundCloud; you will have to accept that first.
Your solution would look like this:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
import time
driver = webdriver.Chrome()
url = 'https://soundcloud.com/signin'
driver.get(url)
driver.maximize_window()
time.sleep(2)
# Accept cookie
driver.find_element_by_id('onetrust-accept-btn-handler').click()
# Switch to frame
driver.switch_to.frame(driver.find_element_by_class_name("webAuthContainer__iframe"))
driver.find_element_by_xpath('//*[@id="sign_in_up_email"]').send_keys('test@test.com')

Webdriver selenium - can't find any element from TradingView

I'm new to using Selenium, but I've watched enough videos and followed enough articles to know something is missing. I'm trying to get values from TradingView, but the problem I'm running into is that I simply can't find any of the elements, not by XPath or CSS. I went ahead and tried a simple element-visibility test, as shown in the code below, and to my surprise it times out.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
# Stops the UI interface (chrome browser) from popping up
# chrome_options.add_argument("--headless")
driver = webdriver.Chrome(executable_path='c:\se\chromedriver.exe', options=chrome_options)
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
import time
driver.get("https://www.tradingview.com/chart/")
timeout = 20
try:
    WebDriverWait(driver, timeout).until(EC.visibility_of_element_located((By.XPATH, "/html/body/div[1]")))
    print("Page loaded")
except TimeoutException:
    print("Timed out waiting for page to load")
    driver.quit()
I also tried to click one of the chart buttons using the following, and that doesn't work either. I noticed that, unlike on many other websites, TradingView's elements don't have names and DevTools only generates a full XPath (not a relative one).
driver.find_element_by_xpath('/html/body/div[2]/div[5]/div/div[2]/div/div/div/div/div[4]').click()
Any help is greatly appreciated!
I think there must be an issue with the XPath.
When I try to click the AAPL button it works for me.
The XPath I used is:
(//div[contains(text(),'AAPL')])[1]
If you specify exactly which element should be clicked, I will try it.
Also, get familiar with the concept of frames, because these types of websites have a lot of frames in them.
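For example, clicking that element with an explicit wait would look something like this (a sketch, reusing the driver, timeout and imports from the question, and assuming the AAPL ticker is visible on the chart page):
wait = WebDriverWait(driver, timeout)
# click the first element whose text contains "AAPL"
wait.until(EC.element_to_be_clickable((By.XPATH, "(//div[contains(text(),'AAPL')])[1]"))).click()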

Element Not Clickable - even though it is there

Hoping you can help. I'm relatively new to Python and Selenium. I'm trying to pull together a simple script that automates news searching on various websites. The primary focus is football: go and get the latest Manchester United news from a couple of places and save the list of link titles and URLs for me. I can then look through the links myself and choose anything I want to review.
In trying The Independent newspaper (https://www.independent.co.uk/) I seem to have come up against an "element not interactable" problem when using the following approaches:
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome('chromedriver')
driver.get('https://www.independent.co.uk')
time.sleep(3)
#accept the cookies/privacy bit
OK = driver.find_element_by_id('qcCmpButtons')
OK.click()
#wait a few seconds, just in case
time.sleep(5)
search_toggle = driver.find_element_by_class_name('icon-search.dropdown-toggle')
search_toggle.click()
This throws the selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable error
I've also tried with XPATH
search_toggle = driver.find_element_by_xpath('//*[@id="quick-search-toggle"]')
and I also tried ID.
I did a lot of reading on here and then also tried using WebDriverWait and execute_script methods:
element = WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.XPATH, '//*[@id="quick-search-toggle"]')))
driver.execute_script("arguments[0].click();", element)
This didn't seem to error but the search box never appeared, i.e. the appropriate click didn't happen.
Any help you could give would be fantastic.
Thanks,
Pete
Your locator is //*[@id="quick-search-toggle"], and there are 2 matches on the page. The first is invisible and the second is visible. By default Selenium refers to the first element, but sadly the element you mean is the second one, so you need a more specific locator. Try this:
search_toggle = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//div[@class="row secondary"]//a[@id="quick-search-toggle"]')))
search_toggle.click()
First you need to open the search box, then send the search keys:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.options import Options
import os
chrome_options = Options()
chrome_options.add_argument("--start-maximized")
browser = webdriver.Chrome(executable_path=os.path.abspath(os.getcwd()) + "/chromedriver", options=chrome_options)
link = 'https://www.independent.co.uk'
browser.get(link)
# accept privacy
button = browser.find_element_by_xpath('//*[@id="qcCmpButtons"]/button').click()
# open search box
li = browser.find_element_by_xpath('//*[@id="masthead"]/div[3]/nav[2]/ul/li[1]')
search_tab = li.find_element_by_tag_name('a').click()
# send keys to search box
search = browser.find_element_by_xpath('//*[@id="gsc-i-id1"]')
search.send_keys("python")
search.send_keys(Keys.RETURN)
Can you try the below steps?
search_toggle = driver.find_element_by_xpath('//*[@class="row secondary"]/nav[2]/ul/li[1]/a')
search_toggle.click()
