python selenium get asp-classic element

I'm trying to run some Selenium automation tests against this URL. Usually I can get the web element, but some pages are strange: I cannot find the web element even using an XPath. In the Chrome browser I can see the element, but I cannot perform any action on it; the error message is "can not find the element".
It looks like the page is generated by Classic ASP and populated in window.onload().
import os
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver_path = os.path.join(os.getcwd(), "IEDriverServer.exe")
driver = webdriver.Ie(driver_path)
driver.get(mytestpage)
element = WebDriverWait(driver, 300).until(
    EC.element_to_be_clickable((By.LINK_TEXT, "Hardware")))
element.click()
element = WebDriverWait(driver, 300).until(
    EC.element_to_be_clickable((By.XPATH, '//*[@id="testpage"]/table/tbody/tr/td[2]/table/tbody/tr[1]/td[2]/font/a[1]')))
element.click()

Have you tried a Selenium explicit wait? This might be because the page has not fully loaded while you are performing the Selenium action on it. Also, check the response time for loading the page.
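A minimal sketch of that idea, reusing the "Hardware" link from the question: since the page is populated in window.onload(), you could first wait for document.readyState to report "complete" and then apply an explicit wait to the link itself.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait until the browser reports the document fully loaded.
WebDriverWait(driver, 60).until(
    lambda d: d.execute_script("return document.readyState") == "complete")

# Then wait explicitly for the link rendered by the onload script before clicking it.
link = WebDriverWait(driver, 60).until(
    EC.element_to_be_clickable((By.LINK_TEXT, "Hardware")))
link.click()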

Related

Selenium can't locate download link element by ID in Python

I'm trying to get Selenium to automate uploading and downloading files from https://8mb.video/. I can upload the file just fine, but after it is processed on the site, Selenium can't locate the element for the download link even though the ID I use matches the ID in the HTML. Here's my code:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Edge()
driver.get('https://8mb.video/')
driver.maximize_window()
driver.get("https://8mb.video/")
s = driver.find_element(By.XPATH, "//input[@type='file']")
s.send_keys("C:\\Users\\ijwto\\Desktop\\VUT\\bladee.mp4")
s = driver.find_element(By.ID, "rockandroll")
s.click()
try:
    element = WebDriverWait(driver, 30).until(
        EC.presence_of_element_located((By.ID, "dllink"))
    )
finally:
    print("nope")
I've also tried using element_to_be_clickable which didn't work, and checked for iframes in the HTML and didn't find any.
Any help would be greatly appreciated.
In order to download the file, you need to click on the element inside the try block.
Also, if the intention of printing "Nope" in the finally block is to indicate that the element was not found, it should go under an except clause instead of finally.
Note: the wait time for WebDriverWait may need to be increased if the video you are uploading is large and the site needs more time to process it.
Your solution would look like:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Edge()
driver.get('https://8mb.video/')
driver.maximize_window()
driver.get("https://8mb.video/")
s = driver.find_element(By.XPATH, "//input[@type='file']")
s.send_keys("C:\\Users\\ijwto\\Desktop\\VUT\\bladee.mp4")
s = driver.find_element(By.ID, "rockandroll")
s.click()
try:
    element = WebDriverWait(driver, 30).until(
        EC.presence_of_element_located((By.ID, "dllink"))
    )
    element.click()
except:
    print("Nope")
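Per the note above about large uploads, a possible variant (assuming the same dllink ID) waits longer, waits for the link to be clickable rather than merely present, and catches only the timeout:
from selenium.common.exceptions import TimeoutException

try:
    # Allow more time for the site to finish processing a large upload.
    element = WebDriverWait(driver, 300).until(
        EC.element_to_be_clickable((By.ID, "dllink"))
    )
    element.click()
except TimeoutException:
    print("Download link never appeared")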

Cannot find "next page" button element of page and click it nor any info with Selenium in Python after accepting cookies

I'm learning how to scrape data from websites. I started with this page: https://www.transfermarkt.co.uk/premier-league/torschuetzenliste/wettbewerb/GB1/saison_id/2020
I would like to extract the players' names and the goals they scored from this page, and do it for the first few pages. Here is what I have
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup
as preamble and then
driver = webdriver.Chrome(executable_path=r"C:\bin\chromedriver.exe")
driver.get("https://www.transfermarkt.co.uk/premier-league/torschuetzenliste/wettbewerb/GB1/saison_id/2020")
pageSoup = BeautifulSoup(driver.page_source, 'html.parser')
Players = pageSoup.find_all("a", {"class": "spielprofil_tooltip"})
This correctly extracts the information I want from the first page. Now, to click and go to the second page, I do
driver.find_element_by_css_selector('li.naechste-seite').click()
(I must say I'm not sure this is the right way to do so... but from the information I have gathered on here and other sites, it seems that it should do the trick.) I receive an error,
ElementClickInterceptedException: Message: element click intercepted: Element ... is not clickable at point (623, 695). Other element would receive the click:
This error comes from the cookie pop-up (at least here in Europe) that requires you to accept or adjust cookies before you can continue browsing the website. In order to accept all and just continue on the website, I did
driver = webdriver.Chrome(executable_path=r"C:\bin\chromedriver.exe")
driver.get("https://www.transfermarkt.co.uk/premier-league/torschuetzenliste/wettbewerb/GB1/saison_id/2020")
WebDriverWait(driver,10).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH,'//iframe[@id="sp_message_iframe_382444"]')))
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH,"//button[contains(@title,'ACCEPT ALL')]"))).click()
driver.implicitly_wait(10)
This seems to work as intended, as my browser correctly clicks the 'accept all' cookie button and I end up on the right page. Something weird happens, however: I can no longer access the data table. Indeed, if I do as before:
pageSoup = BeautifulSoup(driver.page_source, 'html.parser')
Players = pageSoup.find_all("a", {"class": "spielprofil_tooltip"})
Players is empty. And if I do
driver.find_element_by_css_selector('li.naechste-seite').click()
to go to the next page, it gives me the error
NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"li.naechste-seite"}
I'm not sure what I should do.
Here is the HTML part of interest for the next-page click "button" (I don't know if it is of interest to any of you):
Use WebDriverWait(), wait for element_to_be_clickable(), and use the following CSS selector.
Before that, you need to jump out of the iframe:
driver.switch_to.default_content()
Then use
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR,"li.naechste-seite>a"))).click()
Your entire code would be:
driver = webdriver.Chrome(executable_path=r"C:\bin\chromedriver.exe")
driver.get("https://www.transfermarkt.co.uk/premier-league/torschuetzenliste/wettbewerb/GB1/saison_id/2020")
WebDriverWait(driver,10).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH,'//iframe[@id="sp_message_iframe_382444"]')))
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH,"//button[contains(@title,'ACCEPT ALL')]"))).click()
# Jump out from iframe
driver.switch_to.default_content()
# Click on next button
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR,"li.naechste-seite>a"))).click()
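As a usage sketch beyond the answer itself, the pieces can be combined into a loop over the first few pages; the spielprofil_tooltip class is taken from the question and may no longer match the site's current markup, so treat the parsing selector as an assumption.
from bs4 import BeautifulSoup

all_players = []
for _ in range(3):  # first few pages
    # Wait for the pagination control to be present before parsing the page.
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, "li.naechste-seite>a")))
    soup = BeautifulSoup(driver.page_source, "html.parser")
    # Selector assumed from the question; inspect the live page if it returns nothing.
    all_players.extend(a.get_text() for a in soup.find_all("a", {"class": "spielprofil_tooltip"}))
    # Click through to the next page.
    WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.CSS_SELECTOR, "li.naechste-seite>a"))).click()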

Button Click element issue in selenium using python

I am trying to click a button (named "Command Page") on a web page, but I am unable to do so. I am using Selenium with Python.
Code:
wait = WebDriverWait(driver, 20)
command_page = wait.until(EC.element_to_be_clickable((By.ID, "Button_ID")))
command_page.click()
I have tried by class name also, but I am unable to click the element.
Please help me with this.
As an alternative, you can use a JavaScript executor to perform the click on the element if Selenium's click() method doesn't trigger the action even though it raises no exception.
element = driver.find_element_by_id("etoolbar_toolbarSection_newcommandpagebtn_id")
driver.execute_script("arguments[0].click();", element)
Please try the solution below:
wait = WebDriverWait(driver, 20)
both_button = wait.until(EC.element_to_be_clickable((By.XPATH, "//*[contains(text(), 'Command Page')]")))
both_button.click()
I tried this; it seems to be working:
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("file://c:/cygwin64/home/das2/abcd.html")
element = driver.find_element_by_id("etoolbar_toolbarSection_newcommandpagebtn_id")
element.click()

Python Selenium click on element by xpath

I can locate the element I want by its XPath; however, it will not allow me to click on it. Specifically, it throws a WebDriverException.
from selenium import webdriver
browser=webdriver.Chrome()
url='https://fred.stlouisfed.org/categories/32261'
browser.get(url)
click=browser.find_element_by_xpath("//a[#title='next page']")
print(click.get_attribute('title'))
click.click()
Returns the following error:
You cannot click on the required element because it's not currently visible. You should scroll down to the "Next" button before clicking it:
from selenium import webdriver
browser = webdriver.Chrome()
url = 'https://fred.stlouisfed.org/categories/32261'
browser.get(url)
next_button = browser.find_element_by_xpath("//a[#title='next page']")
browser.execute_script("arguments[0].scrollIntoView();", next_button)
next_button.click()
So the XPath was there; however, I don't believe the element was actually in view when I tried the initial click.click(). There might be a better solution, but this seems to be working for now:
from selenium import webdriver
browser=webdriver.Chrome()
url='https://fred.stlouisfed.org/categories/32261'
browser.get(url)
click=browser.find_element_by_xpath("//a[#title='next page']")
print(click.get_attribute('title'))
click.send_keys('next page')
click.click()

Python Selenium on AngularJs site

I am trying to automate reading my phone bill from the carrier website, www.fido.ca.
However, the site is built with AngularJS and I can't find the element using Python and Selenium WebDriver. Please see below for the code I've tried.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
url = 'https://www.fido.ca/pages/#/login?m=login'
driver.get(url)
wait = WebDriverWait(driver, 10)
wait.until(EC.visibility_of_element_located((By.XPATH, "//a[@id='BC']")))
It returns selenium.common.exceptions.TimeoutException: Message:
Note: I can see the element from the front end, but I have no idea why WebDriver can't see it.
When you navigate to the page, you see the "Welcome to Fido!" overlay screen, which makes your desired element invisible; hence the timeout error.
Handle the overlay by selecting a region and clicking "Continue", or by clicking the "X" (close) button.
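A minimal sketch of that idea, assuming the overlay's close button can be reached with a button.close CSS selector (the actual selector on the live page may differ), before retrying the original wait:
# Dismiss the welcome overlay first (selector is an assumption; inspect the live page).
wait = WebDriverWait(driver, 10)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button.close"))).click()

# With the overlay gone, the original wait should succeed.
wait.until(EC.visibility_of_element_located((By.XPATH, "//a[@id='BC']")))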
