Selenium not finding an element - python

I am trying to retrieve an element that I would like to click on. Here's how I open the website with Selenium in Python:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
chrome_options.add_argument('--dns-prefetch-disable')
driver = webdriver.Chrome("./chromedriver", options=chrome_options)
website = "https://www.agronet.gov.co/estadistica/Paginas/home.aspx?cod=4"
driver.get(website) # loads the page
Then, I look for the element I'm interested in:
driver.find_element_by_xpath('//*[@id="cmbDepartamentos"]')
which raises a NoSuchElementException. When I look at the HTML source (driver.page_source), "cmbDepartamentos" indeed does not exist, and neither does the text of the dropdown menu I am trying to locate, "Departamentos:". How can I deal with this?

This should work:
iframe = driver.find_element_by_xpath('//div[@class="iframe"]//iframe')
driver.switch_to.frame(iframe)
driver.find_element_by_xpath('//*[@id="cmbDepartamentos"]').click()
Notes:
The reason for the NoSuchElementException is that the element is inside an iframe. Unless you switch the driver to that iframe, the identification will not work.
Pressing CTRL + F in the Dev Tools panel and searching for the XPath you defined in your script is always a good way to rule out issues with your XPath definition as the cause of a NoSuchElementException (and in your case, the XPath is correct).
You might also want to add a WebDriverWait for the search area/iframe to load completely before attempting to find the "Departamentos" field, as in the sketch below.
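For example, a minimal sketch of that approach, assuming the same iframe locator as above and an arbitrary 20-second timeout:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome("./chromedriver")
driver.get("https://www.agronet.gov.co/estadistica/Paginas/home.aspx?cod=4")
wait = WebDriverWait(driver, 20)  # timeout value is an assumption
# wait for the iframe to be present and switch into it in one step
wait.until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, '//div[@class="iframe"]//iframe')))
# the dropdown is now reachable; wait until it is clickable before clicking it
wait.until(EC.element_to_be_clickable((By.ID, "cmbDepartamentos"))).click()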

Related

Unable to locate the Sign In element within #shadow-root (open) using Selenium and Python

I'm trying to use Selenium to automate some actions, but I am unable to find the first element on the page https://developer.servicenow.com/dev.do, and so I cannot log in.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver_path = "../bin/chromedriver.exe"
driver = webdriver.Chrome(driver_path)
driver.get("https://developer.servicenow.com/dev.do")
driver.find_element_by_xpath("/html/body/dps-app//div/header/dps-navigation-header//header/div/div[2]/ul/li[3]/dps-login//div/dps-button//button/span")
I get the error
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"/html/body/dps-app//div/header/dps-navigation-header//header/div/div[2]/ul/li[3]/dps-login//div/dps-button//button/span"}
The Sign In button is nested deep within multiple #shadow-root (open) elements.
Solution
To click() on the desired element you can use shadowRoot.querySelector() with the following locator strategy:
from selenium import webdriver
options = webdriver.ChromeOptions()
options.add_argument("start-maximized")
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
driver = webdriver.Chrome(options=options, executable_path=r'C:\WebDrivers\chromedriver.exe')
driver.get('https://developer.servicenow.com/dev.do')
SignInButton = driver.execute_script("return document.querySelector('dps-app').shadowRoot.querySelector('dps-navigation-header').shadowRoot.querySelector('header.dps-navigation-header dps-login').shadowRoot.querySelector('dps-button')")
SignInButton.click()
References
You can find a couple of relevant detailed discussions in:
How to interact with the elements within #shadow-root (open) while Clearing Browsing Data of Chrome Browser using cssSelector
How to automate shadow DOM elements using selenium?
You could look at the Automatepro app on the ServiceNow store. It uses Selenium, but they have pre-written the Selenium behind the scenes to interact with all the ServiceNow UI components, including those using shadow DOM. You just pick the UI element and supply the data for your test. We found it saves a lot of time writing and maintaining the Selenium code ourselves.
This solution was what I was looking for to log into my instance using PowerShell.
driver.ExecuteScript("return document.querySelector('dps-app').shadowRoot.querySelector('dps-navigation-header').shadowRoot.querySelector('header.dps-navigation-header dps-login').shadowRoot.querySelector('dps-button')").Click()
I think it's because you are trying to access the "Sign in" button before the website has had time to load all of its elements.
I suggest you use
driver.set_page_load_timeout(120)
so that Selenium looks for the button only after everything on the website has loaded.
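If timing is indeed the issue, another option is to poll for the shadow-DOM element rather than rely on a fixed timeout. A rough sketch, reusing the same shadowRoot.querySelector() chain as in the answer above (untested against the live page, so treat the selectors as assumptions):
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import JavascriptException
driver = webdriver.Chrome()
driver.get('https://developer.servicenow.com/dev.do')
# the same shadow-root chain as in the accepted answer
js = ("return document.querySelector('dps-app').shadowRoot"
      ".querySelector('dps-navigation-header').shadowRoot"
      ".querySelector('header.dps-navigation-header dps-login').shadowRoot"
      ".querySelector('dps-button')")
def find_sign_in(d):
    # returns the button once the whole chain resolves, False while it does not
    try:
        return d.execute_script(js)
    except JavascriptException:
        return False
sign_in = WebDriverWait(driver, 30).until(find_sign_in)
sign_in.click()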

HTML page seems to add iframe element when opening chrome by selenium in python

I am new to selenium and web automation tasks, and I am trying to code an application for automating papers search on PubMed, using chromedriver.
My aim is to click the top right "Sign in" button in the PubMed main page, https://www.ncbi.nlm.nih.gov/pubmed. So the problem is:
when I open the PubMed main page manually, there are no iframe tags in the HTML source, and therefore the "Sign in" element should be simply accessible by its XPath "//*[@id="sign_in"]".
when the same page is opened by Selenium instead, I cannot find that "Sign in" element by its XPath, and if I try to inspect it, the HTML seems to have it embedded in an <iframe> tag, so that it cannot be found anymore unless a driver.switch_to.frame call is executed. But if I open the HTML source code with Ctrl+U and search for an <iframe> element, there is still none of them. Here is the "Sign in" element inspection capture:
["Sign in" inspection screenshot]
I already got round this problem by:
from time import sleep
bot = PubMedBot()  # my own wrapper class around webdriver.Chrome
bot.driver.get('https://www.ncbi.nlm.nih.gov/pubmed')
sleep(2)
frames = bot.driver.find_elements_by_tag_name('iframe')
bot.driver.switch_to.frame(frames[0])
btn = bot.driver.find_element_by_xpath('/html/body/a')
btn.click()
But what I would like to understand is why the inspected DOM is different from the HTML source code, whether this <iframe> element is really appearing from nowhere, and if so, why.
Thank you in advance.
You are facing this issue due to synchronization. Kindly find the solution below to resolve your issue:
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium import webdriver
driver = webdriver.Chrome(executable_path=r"path of chromedriver.exe")
driver.maximize_window()
wait = WebDriverWait(driver, 10)
driver.get("https://www.ncbi.nlm.nih.gov/pubmed")
iframe = wait.until(EC.presence_of_element_located((By.TAG_NAME, "iframe")))
driver.switch_to.frame(iframe)
wait.until(EC.element_to_be_clickable((By.XPATH, "//a[contains(text(), 'Sign in to NCBI')]"))).click()

JavaScript works on chrome console but doesn't work in firefox or selenium code

I am trying to automate the login to the following page using selenium:
https://services.cal-online.co.il/Card-Holders/SCREENS/AccountManagement/Login.aspx?ReturnUrl=%2fcard-holders%2fScreens%2fAccountManagement%2fHomePage.aspx
Trying to find the username and password elements using id, CSS selector and XPath didn't work:
self._web_driver.find_element_by_xpath('//*[@id="txt-login-username"]')
self._web_driver.find_element_by_id("txt-login-password")
self._web_driver.find_element_by_css_selector('#txt-login-username')
For all three I get a NoSuchElementException.
I tried the following JS script: document.getElementById('txt-login-username')
When I run this script in Selenium or in Firefox it returns null,
but when I run it in the Chrome console I get a result I can use.
Is there any way to make it work from the Python code, or to run this on the Chrome console itself and not via Python's execute_script?
To automate the login to the page using Selenium, as the desired elements are within an <iframe>, you have to:
Induce WebDriverWait for the desired frame to be available and switch to it.
Induce WebDriverWait for the desired element to be clickable.
You can use the following solution:
Code Block:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
options = webdriver.ChromeOptions()
options.add_argument("start-maximized")
driver = webdriver.Chrome(options=options, executable_path=r'C:\Utility\BrowserDrivers\chromedriver.exe')
driver.get("https://services.cal-online.co.il/Card-Holders/SCREENS/AccountManagement/Login.aspx?ReturnUrl=%2fcard-holders%2fScreens%2fAccountManagement%2fHomePage.aspx")
WebDriverWait(driver, 10).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, "//iframe[@id='calconnectIframe']")))
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//input[@id='txt-login-username']"))).send_keys("ariel6653")
driver.find_element_by_xpath("//input[@id='txt-login-password']").send_keys("ariel6653")
Here you can find a relevant discussion on Ways to deal with #document under iframe
I found a solution to the problem. The problem really was that the element is inside an iframe.
I tried to use the solution suggested in Get element from within an iFrame
but got a security error. The solution is to switch frames the following way:
driver.switch_to.frame("iframe")
and now you can use the normal find_element calls.
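For completeness, a short sketch of that approach, reusing the calconnectIframe id from the answer above (the credentials and the implicit wait are placeholders/assumptions):
from selenium import webdriver
driver = webdriver.Chrome()
driver.get("https://services.cal-online.co.il/Card-Holders/SCREENS/AccountManagement/Login.aspx?ReturnUrl=%2fcard-holders%2fScreens%2fAccountManagement%2fHomePage.aspx")
driver.implicitly_wait(10)  # give the iframe time to appear
# locate the frame by its id (the implicit wait applies here), then switch into it
login_frame = driver.find_element_by_id("calconnectIframe")
driver.switch_to.frame(login_frame)
# the login fields can now be found normally
driver.find_element_by_id("txt-login-username").send_keys("USERNAME")
driver.find_element_by_id("txt-login-password").send_keys("PASSWORD")
# when done, return to the top-level document
driver.switch_to.default_content()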

How can I "click" download button on selenium in python

I want to download user data from Google Analytics by using a crawler, so I wrote some code using Selenium. However, I cannot click the "Export" button. It always shows the error "no such element". I tried to use find_element_by_xpath, by_name and by_id.
I have uploaded the inspection of the GA page below.
I TRIED:
driver.find_element_by_xpath("//*[#class='download-link']").click()
driver.find_element_by_xpath('//*[#id="ID-activity-userActivityTable"]/div/div[2]/span[6]/button')
driver.find_element_by_xpath('//*[#class='_GAD.W_DECORATE_ELEMENT.C_USER_ACTIVITY_TABLE_CONTROL_ITEM_DOWNLOAD']')
Python Code:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
driver = webdriver.Chrome('/Users/parkjunhong/Downloads/chromedriver')
driver.implicitly_wait(3)
usrid = '1021'
url = 'https://analytics.google.com/analytics/web/#/report/app-visitors-user-activity/a113876882w169675624p197020837/_u.date00=20190703&_u.date01=20190906&_r.userId='+usrid+'&_r.userListReportStates=%3F_u.date00=20190703%2526_u.date01=20190906%2526explorer-table.plotKeys=%5B%5D%2526explorer-table.rowStart=0%2526explorer-table.rowCount=1000&_r.userListReportId=app-visitors-user-id'
driver.get(url)
driver.find_element_by_name('identifier').send_keys('ID')
idlogin = driver.find_element_by_xpath('//*[@id="identifierNext"]/span/span')
idlogin.click()
driver.find_element_by_name('password').send_keys('PASSWD')
element = driver.find_element_by_id('passwordNext')
driver.execute_script("arguments[0].click();", element)
#login
driver.find_element_by_xpath("//*[#class='download-link']").click()
#click the download button
ERROR:
Message: no such element: Unable to locate element
[Screenshot: inspection of the GA page]
Your click element is in an iframe (iframe id="galaxyIframe" ...). Therefore, you need to tell the driver to switch from the "main" page to that iframe. If you add this line of code after your #login it should work:
driver.switch_to.frame("galaxyIframe")
(If the frame did not have a name or id, you would use iframe = driver.find_element_by_xpath("xpath-to-frame") and then driver.switch_to.frame(iframe).)
To get back to your default frame, use:
driver.switch_to.default_content()
Crawling GA is generally a pain, and not just because you have these iframes everywhere.
Apart from that, I would recommend looking into Puppeteer, the new kid on the crawler block. Even though the prospect of switching from Python to JavaScript may be daunting, it is worth it! Once you get into it, Selenium will feel super clunky.
You can also try locating the button by its text. If you want to click on 'Export':
//button[contains(text(),'Export')]
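Putting the frame switch and the text-based locator together, a rough sketch (untested against the current GA UI, and assuming driver is the logged-in session from the code above):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(driver, 20)
# GA renders the report inside the galaxyIframe frame
wait.until(EC.frame_to_be_available_and_switch_to_it((By.ID, "galaxyIframe")))
# click the Export button by its visible text
wait.until(EC.element_to_be_clickable((By.XPATH, "//button[contains(text(),'Export')]"))).click()
# return to the main document afterwards
driver.switch_to.default_content()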

Python Selenium find element XPath doesn't work

I am trying to find refreshing elements (the time in minutes) on the webpage. My code worked only for simple text earlier. Now I use Ctrl+Shift+I, point at my element, and choose "Copy XPath".
I also have the Chrome extension "XPath Helper" and tried it with that one. It gives a longer XPath than the one in my code below, and it doesn't work either.
Error: NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id....
I also tried to find the element by class, by tag and by CSS selector. It only worked by tag, and not perfectly, and on a different page.
Not to mention printing it: sometimes find_element(By.XPATH,'//*[...').text works, sometimes not.
I don't understand why it works on one page and not on another. I want to find elements by XPath in Flash content later.
UPDATE: Now I have retried the code and it works! But it still doesn't work on the next webpage. Why is it so changeable? Does the XPath change when the page reloads, or what? What is the simplest way to get refreshing text info from Flash content opened in the Chrome browser?
import selenium
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
options = webdriver.ChromeOptions()
options.add_argument('headless')
driver = webdriver.Chrome(r"C:\Users\vishniakov\Desktop\python bj\driver\chromedriver.exe",chrome_options=options)
driver.get("https://www.betfair.com/sport/football/event?eventId=28935432")
print(driver.title)
elem = driver.find_element(By.XPATH, '//*[@id="yui_3_5_0_1_1538670363144_2571"]').text
print(elem)
This will work, with the assumption that you want the data of that page and not of any specific element:
import selenium
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
options = webdriver.ChromeOptions()
options.add_argument('headless')
driver = webdriver.Chrome(chrome_options=options)
driver.get("https://www.betfair.com/sport/football/event?eventId=28935730")
print(driver.title)
elem =driver.find_element(By.CSS_SELECTOR,'.scroller.context-event').text
print(elem)
Assuming you do want specific data, you can use the contains() XPath method... You can read about this here.
For your case:
import selenium
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
options = webdriver.ChromeOptions()
options.add_argument('headless')
driver = webdriver.Chrome(r"C:\Users\vishniakov\Desktop\python bj\driver\chromedriver.exe",chrome_options=options)
driver.get("https://www.betfair.com/sport/football/event?eventId=28935432")
print(driver.title)
elements = driver.find_elements(By.XPATH, '//*[contains(@id, "yui_3_5_0_1_")]')
print([i.text for i in elements])
You can play around with contains() if my example didn't work... You have to find which part of the id changes and exclude that part of the ID.
Hope this helps you!
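A related variation on the same idea: if the element is still rendering when find_elements runs, an explicit wait on the partial-id locator avoids an empty result. A sketch, assuming driver and By from the example above and the same "yui_3_5_0_1_" prefix:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# wait for at least one matching element instead of relying on an implicit wait
wait = WebDriverWait(driver, 20)
elem = wait.until(EC.presence_of_element_located((By.XPATH, '//*[contains(@id, "yui_3_5_0_1_")]')))
print(elem.text)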
