There is a form on this site and I am trying to get every option from it and click a search button with this code:
from selenium import webdriver
driver = webdriver.PhantomJS('c:\\phantomjs-2.1.1-windows\\bin\\phantomjs.exe')
driver.get('http://www.cobar.org/Find-A-Lawyer')
driver.implicitly_wait(20)
options = driver.find_element_by_xpath('//select[@id="FAL_FOP_field"]')
for option in options.find_elements_by_tag_name('option'):
    if option.text != 'ALL':
        option.click()
        # click search button
        driver.find_element_by_xpath('//button[@class="btn btn-primary btn-main"]').click()
        lawyer = driver.find_element_by_xpath('//table[@id="myTable"]/tbody/tr/td[0]')
        print(lawyer)
However I am getting:
selenium.common.exceptions.NoSuchElementException: Message: {"errorMessage":
"Unable to find element with xpath '//select[#id=\"FAL_FOP_field\"]'",
"request":{"headers":{"Accept":"application/json","Accept-Encoding":
"identity","Connection":"close","Content-Length":"115","Content-Type":
"application/json;charset=UTF-8","Host":"127.0.0.1:52809","User-Agent":"Python-urllib/2.7"}
,"httpVersion":"1.1","method":"POST","post":
"{\"using\": \"xpath\", \"sessionId\": \"e03be070-e353-11e6-83b5-5f7f74696cce\","
" \"value\": \"//select[#id=\\\"FAL_FOP_field\\\"]\"}","url":"/element",
"urlParsed":{"anchor":"","query":"","file":"element","directory":"/",
"path":"/element","relative":"/element","port":"","host":"","password":"","user":"",
"userInfo":"","authority":"","protocol":"","source":"/element","queryKey":{},
"chunks":["element"]},"urlOriginal":"/session/e03be070-e353-11e6-83b5-5f7f74696cce/element"}}
What should I do?
The error occurs because the fields you are locating are inside an iframe.
So, first you need to switch to the iframe and then locate your elements.
If it still doesn't work, add a time delay.
To locate the iframe:
WebElement iframeLocator = driver.findElement(By.xpath("//iframe[@id='dnn_ctr2047_IFrame_htmIFrame']"));
Then switch to the iframe:
driver.switchTo().frame(iframeLocator);
Add the above two steps to your code before locating the elements.
Note: the code above is written in Java.
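Since the question's script is in Python, here is a rough Python equivalent of that switch, as a sketch; the iframe id is copied from the Java snippet above and may not match the live page:
# Continues the question's script: switch into the iframe before locating the <select>.
iframe = driver.find_element_by_xpath('//iframe[@id="dnn_ctr2047_IFrame_htmIFrame"]')
driver.switch_to.frame(iframe)
# Now the <select> from the question should be reachable.
options = driver.find_element_by_xpath('//select[@id="FAL_FOP_field"]')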
I'm new to webdrivers and am experimenting with them.
I'm trying to click a button when opening a webpage, and it keeps giving an "unable to locate element" error.
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.get("html page")
button = driver.find_element(By.ID, "onetrust-accept-btn-handler")
button.click()
I have tried the ID and the XPath, but I don't know what else to use.
The path for the button is:
/html/body/div[3]/div[3]/div/div/div[2]/div/div/button
<button id="onetrust-accept-btn-handler" tabindex="0">Run</button>
You can use an implicit wait to wait until the page is fully loaded:
driver.implicitly_wait(10)
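If the implicit wait alone does not help (for example, when the button is injected by a cookie-consent script after page load), an explicit wait is a common alternative. A minimal sketch, reusing the button id from the question:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("html page")  # placeholder URL from the question

# Wait up to 10 seconds for the cookie button to become clickable, then click it.
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "onetrust-accept-btn-handler"))
)
button.click()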
I'm learning how to scrape data from websites. I started with this page: https://www.transfermarkt.co.uk/premier-league/torschuetzenliste/wettbewerb/GB1/saison_id/2020
I would like to extract the players name and goals they scored from this page and do it for the first few pages. Here is what I have
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup
as preamble and then
driver = webdriver.Chrome(executable_path=r"C:\bin\chromedriver.exe")
driver.get("https://www.transfermarkt.co.uk/premier-league/torschuetzenliste/wettbewerb/GB1/saison_id/2020")
pageSoup = BeautifulSoup(driver.page_source, 'html.parser')
Players = pageSoup.find_all("a", {"class": "spielprofil_tooltip"})
This correctly extracts the information I want from the first page. Now, to click and go to the second page, I do
driver.find_element_by_css_selector('li.naechste-seite').click()
(I must say I'm not sure this is the right way to do so... but from the information I have gathered on here and other sites, it seems that it should do the trick.) I receive an error,
ElementClickInterceptedException: Message: element click intercepted:
Element ... is not clickable at point (623, 695).
Other element would receive the click:
This error comes from the cookie consent pop-up (at least here in Europe) that requires you to accept the cookies, or adjust them if you don't want them, before continuing to browse the website. In order to accept all and just continue on the website, I did:
driver = webdriver.Chrome(executable_path=r"C:\bin\chromedriver.exe")
driver.get("https://www.transfermarkt.co.uk/premier-league/torschuetzenliste/wettbewerb/GB1/saison_id/2020")
WebDriverWait(driver,10).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH,'//iframe[@id="sp_message_iframe_382444"]')))
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH,"//button[contains(@title,'ACCEPT ALL')]"))).click()
driver.implicitly_wait(10)
This seems to work as intended: my browser correctly clicks the 'ACCEPT ALL' cookie button and I end up on the right page. Something weird happens, however: I can no longer access the data table. Indeed, if I do as before:
pageSoup = BeautifulSoup(driver.page_source, 'html.parser')
Players = pageSoup.find_all("a", {"class": "spielprofil_tooltip"})
Players is empty. And if I do
driver.find_element_by_css_selector('li.naechste-seite').click()
to go to next page, it gives me the error
NoSuchElementException: Message: no such element: Unable to locate
element: {"method":"css selector","selector":"li.naechste-seite"}
I'm not sure what I should do.
Here is the HTML part of interest for the next-page click "button" (I don't know if it is of interest to any of you).
Use WebDriverWait() to wait for element_to_be_clickable() with the following CSS selector.
Before that, you need to jump out of the iframe:
driver.switch_to.default_content()
Then use:
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR,"li.naechste-seite>a"))).click()
Your entire code would be:
driver = webdriver.Chrome(executable_path=r"C:\bin\chromedriver.exe")
driver.get("https://www.transfermarkt.co.uk/premier-league/torschuetzenliste/wettbewerb/GB1/saison_id/2020")
WebDriverWait(driver,10).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH,'//iframe[@id="sp_message_iframe_382444"]')))
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH,"//button[contains(@title,'ACCEPT ALL')]"))).click()
# Jump out of the iframe
driver.switch_to.default_content()
# Click on the next-page button
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR,"li.naechste-seite>a"))).click()
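If you then want to repeat this for the first few pages, a possible continuation of the code above (a sketch; the spielprofil_tooltip class and the naechste-seite selector are taken from the question and may have changed on the site):
from bs4 import BeautifulSoup

all_players = []
for _ in range(3):  # first few pages
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    all_players.extend(a.text for a in soup.find_all("a", {"class": "spielprofil_tooltip"}))
    # Click the next-page arrow; a short wait may also be needed here for the
    # table to refresh before reading page_source again.
    WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR,"li.naechste-seite>a"))).click()

print(all_players)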
I am trying to build a Web Scraping script using Python (rev. 3.8) and Selenium, Firefox and geckodriver (all latest versions).
I need to select an item from a dropdown list. My problem: when I try to select an item, I get the following message:
selenium.common.exceptions.ElementNotInteractableException: Message: Element <option> could not be scrolled into view
My code:
from selenium import webdriver
from selenium.webdriver.support.ui import Select
import time
url = 'https://www.hessen-forst.de/marktplatz/#brennholz'
driver = webdriver.Firefox(executable_path=r'D:\\Program Files\\geckodriver-v0.29.0-win64\\geckodriver.exe')
driver.get(url)
time.sleep(5)
print(len(driver.find_elements_by_id("nf-field-166")))
element = driver.find_element_by_id("nf-field-166")
select = Select(driver.find_element_by_name("nf-field-166"))
print(len(select.options))
driver.execute_script("arguments[0].scrollIntoView();", element)
select.select_by_visible_text("Nidda")
The print(len(select.options)) line gives me the correct number of entries I expect. Printing the option names with print(select.options[i].text) also gives correct results. Therefore I assume the right drop-down list was selected.
driver.execute_script("arguments[0].scrollIntoView();", element) brings the drop-down list to the center of the browser window for a fraction of a second; when select.select_by_visible_text("Nidda") runs, the drop-down list is slightly below the visible area.
I have already tried WebDriverWait: WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//select[@id='nf-field-166']//options[contains(.,'Nidda')]"))). Here the result is a timeout.
Using select_by_value gives the same error.
Can you help me with another idea?
Thanks!
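One workaround that is sometimes suggested for this kind of "could not be scrolled into view" error is to bypass the Select helper and set the option via JavaScript, then dispatch a change event so the page reacts. A rough, untested sketch; the field id "nf-field-166" and the option text "Nidda" are taken from the question:
# JavaScript-based selection: find the option by its visible text and select it.
select_el = driver.find_element_by_id("nf-field-166")
driver.execute_script(
    """
    var select = arguments[0];
    for (var i = 0; i < select.options.length; i++) {
        if (select.options[i].text.trim() === arguments[1]) {
            select.selectedIndex = i;
            select.dispatchEvent(new Event('change', { bubbles: true }));
            break;
        }
    }
    """,
    select_el,
    "Nidda",
)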
I'm trying to click the "Download Results" button on this website.
I'm using the below Python code to click this button:
from selenium import webdriver
chromedriver_path = 'E:/software/python/chromedriver'
url = 'https://www.cms.gov/apps/physician-fee-schedule/search/search-results.aspx?Y=0&T=4&HT=2&CT=3&H1=74750&H2=74800&M=5'
driver= webdriver.Chrome(executable_path=chromedriver_path )
driver.get(url)
driver.find_element_by_xpath('//*[@title="Download Results"]').click()
I'm getting the below error:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//frame[@name="aspnetForm"]"}
(Session info: headless chrome=83.0.4103.116)
I'm thinking the button might be inside an iframe, but how do I find out which iframe?
This may help you out: Unable to locate element using selenium webdriver in python
Try and switch to the frame first, and then search for the element.
I realized that there's an agreement page where I need to click the 'Agree' button first.
I didn't see this in the browser because I had already clicked 'Agree' weeks ago, but in WebDriver I need to click it each time.
Just after the URL opens, a page comes up asking you to agree to the terms. You need to click on that Agree button first. It may not come up in your normal browser because you already accepted it there, so to be safe in code you can first check for the presence of the Agree button.
You can use this kind of function to check presence:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def elementPresent(locatorType, locator):
    # Returns True if the element is present, False otherwise.
    wait = WebDriverWait(driver, 20)
    try:
        wait.until(EC.presence_of_element_located((locatorType, locator)))
    except Exception:
        return False
    return True
And then, using an if condition, you may proceed further like:
if elementPresent("xpath", "//a[@title='Accept']/span"):
    driver.find_element_by_xpath("//a[@title='Accept']/span").click()
Then you may click on the element you want; there is no frame that needs to be switched to:
driver.find_element_by_xpath("//a[@title='Download Results']/span").click()
Just locate the element and click on it, and your required file will be downloaded.
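Putting it together with the question's setup, a minimal end-to-end sketch (it assumes the elementPresent helper above, with its imports; the locators are the ones suggested here and may need adjusting):
from selenium import webdriver

driver = webdriver.Chrome(executable_path='E:/software/python/chromedriver')
driver.get('https://www.cms.gov/apps/physician-fee-schedule/search/search-results.aspx?Y=0&T=4&HT=2&CT=3&H1=74750&H2=74800&M=5')

# A fresh WebDriver profile always sees the agreement page, so accept it if it is shown.
if elementPresent("xpath", "//a[@title='Accept']/span"):
    driver.find_element_by_xpath("//a[@title='Accept']/span").click()

# Then click the Download Results link to start the download.
driver.find_element_by_xpath("//a[@title='Download Results']/span").click()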
I am trying to click a button (named "command page") on a web page, but I am unable to do so. I am using Selenium with Python.
Code:
wait = WebDriverWait(driver, 20)
command_page = wait.until(EC.element_to_be_clickable((By.ID,"Button_ID")))
command_page.click()
I have tried by class name also, but I am unable to click the element.
Please help me with this.
As an alternative, you can use a JavaScript executor to perform the click on an element when Selenium's click() method doesn't trigger the action even though no exception is raised.
element = driver.find_element_by_id("etoolbar_toolbarSection_newcommandpagebtn_id")
driver.execute_script("arguments[0].click();", element)
Please try the below solution:
wait = WebDriverWait(driver, 20)
both_button = wait.until(EC.element_to_be_clickable((By.XPATH, "//*[contains(text(), 'Command Page')]")))
both_button.click()
I tried this; it seems to be working:
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("file://c:/cygwin64/home/das2/abcd.html")
element = driver.find_element_by_id("etoolbar_toolbarSection_newcommandpagebtn_id")
element.click()