Cannot switch to a new page using Python Selenium webdriver? - python

I need to scrape the CSV result from the website (https://requestmap.webperf.tools/), but I can't.
This is the process I need to do:
1- load the website (https://requestmap.webperf.tools/)
2- enter a new website as an input (for example, https://stackoverflow.com/)
3- click submit
4- on the page that opens after submitting, download the CSV file at the end of the page
But I think the driver keeps the main page and doesn't switch to the new one, which is why I can't download the CSV file.
Would you please tell me how to do it?
Here is my code:
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument('--headless')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome('chromedriver', options=options)
driver.get("https://requestmap.webperf.tools/")
driver.find_element_by_id("url").send_keys("https://stackoverflow.com/")
driver.find_element_by_id("submitter").click()
I have tested different ways to solve the issue.
First of all, I got this code from here, but it doesn't work:
window_after = driver.window_handles[1]
driver.switch_to.window(window_after)
I also tried this or this, but they did not work either.
from bs4 import BeautifulSoup
from selenium.webdriver.support.ui import WebDriverWait

# wait to make sure there are two windows open
# it is not working
WebDriverWait(driver, 30).until(lambda d: len(d.window_handles) == 2)
# switch windows
# it is not working
driver.switch_to.window(driver.window_handles[1])
content = driver.page_source
soup = BeautifulSoup(content, "html.parser")
driver.find_elements_by_name("Download CSV")
So how can I solve this issue?
Is there any other way in Python to switch to the new window?
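One way to handle both possibilities (the submit opening a new tab versus navigating in the same window) is to record the original window handle, switch only if a second handle appears, and then wait explicitly for the download link on the result page. A minimal sketch, using Selenium 4 locators and assuming the result page exposes a link whose text contains "CSV" (taken from the question, not verified against the site):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.ChromeOptions()
options.add_argument('--headless')
driver = webdriver.Chrome(options=options)

driver.get("https://requestmap.webperf.tools/")
start_handle = driver.current_window_handle

driver.find_element(By.ID, "url").send_keys("https://stackoverflow.com/")
driver.find_element(By.ID, "submitter").click()

# If the submit opened a new tab/window, move to it; otherwise the form
# simply navigated inside the same window and no switch is needed.
if len(driver.window_handles) > 1:
    for handle in driver.window_handles:
        if handle != start_handle:
            driver.switch_to.window(handle)
            break

# The map can take a while to build, so wait for the download link
# instead of grabbing page_source immediately.
download_link = WebDriverWait(driver, 120).until(
    EC.element_to_be_clickable((By.PARTIAL_LINK_TEXT, "CSV"))
)
download_link.click()  # headless Chrome may also need download preferences set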

Related

Open all links in a new tab using xpath and run a command Selenium - Python

New to Selenium. I am trying to have Selenium open a new tab for every link provided in a dynamic table, and I would like to run the same command on each tab (input information). Below is my current code. The code allows me to search and access the data table using its XPath; in the XPath I put a * in "tr[ ]", which lets me select all the needed links in the table cells. However, the code only takes me to the first link in the data table and runs the command there.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time

driver = webdriver.Chrome()
driver.get('https://www.#.com')
driver.maximize_window()
time.sleep(2)
link = driver.find_element('link text', 'Search')
link.click()
time.sleep(4)
table = driver.find_element('xpath', '//*[@id="TableBody"]/tr[*]/td[1]/div[2]/a')
table.click()
time.sleep(4)
# Enter text on the link's page
high_field = driver.find_element('id', 'formBid')
high_field.send_keys('#')
low_field = driver.find_element('id', 'formBidMin')
low_field.send_keys('#')
low_field.send_keys(Keys.RETURN)
print('Text entered!')
time.sleep(10)
How can I solve the following:
Have each link open in a new tab
Run a command in each tab (same command)
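A minimal sketch for both points, assuming Selenium 4 (for switch_to.new_window), keeping the masked URL and '#' placeholders from the question, and assuming each anchor in the table carries a normal href:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get('https://www.#.com')            # masked URL from the question
driver.maximize_window()
driver.find_element(By.LINK_TEXT, 'Search').click()

wait = WebDriverWait(driver, 10)
# Collect every link in the table instead of clicking only the first match.
links = wait.until(EC.presence_of_all_elements_located(
    (By.XPATH, '//*[@id="TableBody"]/tr[*]/td[1]/div[2]/a')))
urls = [a.get_attribute('href') for a in links]

main_handle = driver.current_window_handle
for url in urls:
    driver.switch_to.new_window('tab')     # open and focus a fresh tab
    driver.get(url)
    # Run the same command on each tab: fill both form fields.
    driver.find_element(By.ID, 'formBid').send_keys('#')
    driver.find_element(By.ID, 'formBidMin').send_keys('#', Keys.RETURN)

driver.switch_to.window(main_handle)       # back to the original tab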

How to automatically find link of download button and download corresponding file with Python?

I have permission to download some weather data from the following website:
https://www.meteobridel.com/messnetz/index3.php#
I was wondering if there is a possibility to automatically find the download URL behind the 'CSV' button and then download that CSV file with Python.
I tried this, but it didn't work:
from selenium import webdriver
browser = webdriver.Safari()
url = 'https://meteobridel.lu/?page_id=5'
browser.get(url)
browser.find_element_by_xpath('//*[@id="CSV"]').click()
browser.close()
Thanks already!
Try
from selenium import webdriver
browser = webdriver.Safari()
url = 'https://meteobridel.lu/?page_id=5'
browser.get(url)
browser.find_element_by_xpath("//body/div[@id='main']/div[1]/div[1]/div[1]/a[4]").click()
browser.close()
Checking the page you provided, I can't find a "CSV" ID.
Maybe try getting the button by class:
browser.find_element_by_xpath(r"//a[contains(#class, 'buttons-csv')]").click()
The element is inside an iframe, so you have to switch to it first. As the id of the frame is unique, you can switch like this:
browser.switch_to.frame("iframe")
browser.find_element_by_xpath('//span[contains(text(),"CSV")]/..').click()
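If later lookups fail after this, it is usually because the driver is still scoped to the frame. A small extension of the answer above (same browser object; the frame locator and span XPath are assumptions about the page):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait for the frame, switch into it, click the CSV control, then return
# to the top-level document so later lookups are not scoped to the frame.
WebDriverWait(browser, 10).until(
    EC.frame_to_be_available_and_switch_to_it((By.TAG_NAME, "iframe")))
browser.find_element(By.XPATH, '//span[contains(text(),"CSV")]/..').click()
browser.switch_to.default_content()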

Selenium Python code

This is my very first time trying to scrape data from a website using Selenium. Fortunately I have got Selenium and Chrome to coordinate, and the desired website opens. Once it opens up, I want to tell Python to click 'SEARCH', leaving the empty box blank (next to "contains"), and then tell Python to export the results and save the xlsx file as result_file. I do not know why the snippet is blowing up. Please provide your kind assistance.
from selenium import webdriver
driver = webdriver.Chrome("C:\Python27\Scripts\chromedriver.exe")
driver.get("https://etrakit.friscotexas.gov/Search/permit.aspx")
number_option = driver.find_element_by_id("cplMain_btnSearch")
number_option.click()
search_button = driver.find_element_by_id("cplMain_btnExportToExcel")
search_button.click()
result_file = open("result_file.xlsx", "w")
driver.close()
result_file.close()
Looking at the source of that page, the ID of the search button is "cplMain_btnSearch" not "SEARCH". And the Export button is "cplMain_btnExportToExcel".
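A minimal sketch that wires those IDs into the original snippet, with an explicit wait between the two clicks because the export control is only useful once the search results have loaded (that timing detail is an assumption, not something stated in the question):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome(r"C:\Python27\Scripts\chromedriver.exe")
driver.get("https://etrakit.friscotexas.gov/Search/permit.aspx")

# Click the real search button, then wait before asking for the Excel export.
driver.find_element(By.ID, "cplMain_btnSearch").click()
WebDriverWait(driver, 30).until(
    EC.element_to_be_clickable((By.ID, "cplMain_btnExportToExcel"))).click()

driver.quit()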
To expand on Daniel Roseman's answer, you also need to specify the download location:
options.add_argument("download.default_directory=C:/Python27")
driver = webdriver.Chrome(chrome_options=options)
The file will then be stored in your python27 directory with the name RadGridExport.csv.
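If the argument form does not take effect (Chrome generally expects the download directory as a preference rather than a command-line switch), a sketch of the preference-based setup, reusing the same directory as above:

from selenium import webdriver

options = webdriver.ChromeOptions()
# Chrome reads the download directory from its preferences,
# not from a command-line switch.
options.add_experimental_option("prefs", {
    "download.default_directory": r"C:\Python27",
    "download.prompt_for_download": False,
})
driver = webdriver.Chrome(options=options)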

Python webdriver: connect to an already open webpage (Selenium)

I need to open multiple links in separate tabs or sessions...
I already know how to do it, so what I would like to know is whether it's possible to connect to an already open webpage instead of opening every link each time I run the script.
What I use now in Python is:
from selenium import webdriver
driver.get(link)
The purpose would be: once I run the first script (to load multiple links), the second should connect to the webpages, refresh them and continue with the code.
Is it possible? Anyone know how to do it?
Thanks a lot for the help!!!!
Connecting to the previously opened window is easy:
driver = webdriver.Firefox()
url = driver.command_executor._url        # local address of the running driver
session_id = driver.session_id

# Attach a second driver object to the same running browser by reusing the
# executor URL and the session id. (The Remote() call itself still spins up
# one extra blank session, which you may want to close afterwards.)
driver2 = webdriver.Remote(command_executor=url, desired_capabilities={})
driver2.session_id = session_id

# You're all set to do whatever with the previously opened browser
driver2.get("http://www.stackoverflow.com")

Python selenium get redirected url with Phantomjs

Here is my problem: I'm trying to use Selenium to access a webpage, and the special thing about this page is that it auto-redirects (you open the page and, after a few seconds, it automatically redirects to another page). When I use driver = webdriver.Firefox(), my IDM caught that link perfectly after a few seconds.
And because I don't want the browser to come up, I use PhantomJS instead, but it is not working. My application can only get the loading-page URL (bitdl-1336...), not the redirected link.
This is my code:
link = 'http://torrent.ajee.sh/hash.php?hash=' + self.global_hash_code
driver = webdriver.PhantomJS('phantomjs.exe')
driver.get(str(link))
element = driver.find_element_by_link_text('Download Zip')
element.click()
time.sleep(10)
msg = QMessageBox.information(self, QString('Success'), QString(driver.current_url))
And this is the result: the loading-page URL (bitdl-1336...) again, not the redirected link.
Please help!
Sorry about my English.
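A sketch of one way to capture the redirected address without relying on a fixed sleep: wait until the URL actually changes. It keeps the question's self.global_hash_code (so it assumes the same class context as the original code) and needs a Selenium version that ships expected_conditions.url_changes:

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

link = 'http://torrent.ajee.sh/hash.php?hash=' + self.global_hash_code
driver = webdriver.PhantomJS('phantomjs.exe')
driver.get(link)
start_url = driver.current_url

driver.find_element_by_link_text('Download Zip').click()

# Block until the address really changes instead of sleeping 10 seconds;
# on a slow connection the redirect can easily take longer than that.
WebDriverWait(driver, 60).until(EC.url_changes(start_url))
redirected_url = driver.current_url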
Not exactly an answer to your PhantomJS-specific question, but a workaround to the problem.
And because I don't want the browser to come up, I use PhantomJS instead
You can continue using Firefox, but start it in a Virtual Display, see more information at:
How do I run Selenium in Xvfb?
You may also need to let the browser automatically save the archive in a specified directory, see:
How do I automatically download files from a pop up dialog using selenium-python
Access to file download dialog in Firefox
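For reference, the virtual-display route usually looks like this with the pyvirtualdisplay package (a sketch, assuming Xvfb and pyvirtualdisplay are installed on a Linux machine):

from pyvirtualdisplay import Display
from selenium import webdriver

# Start an invisible X display so a real Firefox can run without a window.
display = Display(visible=0, size=(1366, 768))
display.start()

driver = webdriver.Firefox()
driver.get('http://torrent.ajee.sh/hash.php?hash=...')  # page from the question
# ... click 'Download Zip' and wait for the redirect as before ...

driver.quit()
display.stop()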
