I'm browsing through a website using dryscrape in Python and I need to upload a file to this site. The only way to do it is to click a button, browse into my files, and select the one I want. How can I do that with Python? I would appreciate an answer using dryscrape, but I'm accepting all answers.
Here's the example image:
You can use Selenium. I tested this code and it works.
from time import sleep

from selenium import webdriver

url = "https://example.com/"
driver = webdriver.Chrome("./chromedriver")
driver.get(url)

# file inputs accept a path sent straight to them with send_keys()
input_element = driver.find_element_by_css_selector('input[type="file"]')

# absolute path to the file to upload
abs_file_path = "/Users/foo/Downloads/bar.png"
input_element.send_keys(abs_file_path)

sleep(5)  # give the upload a moment to finish
driver.quit()
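Note: on Selenium 4+ the find_element_by_* helpers have been removed; the equivalent locator call is:
from selenium.webdriver.common.by import By

input_element = driver.find_element(By.CSS_SELECTOR, 'input[type="file"]')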
Resources
Selenium Python
ChromeDriver download
For those searching for the answer in dryscrape, I translated the Selenium code to dryscrape:
import dryscrape

session = dryscrape.Session()  # session is a dryscrape session
element = session.at_xpath("xpath...")  # XPath of the file input, like the "Browse..." button in the image
element.set("fullpath")  # set() types the file path into the input
It's as simple as that.
So I am trying to log in programmatically (Python) to https://www.datacamp.com/users/sign_in using my email and password.
I have tried two methods of logging in: one using the requests library and another using Selenium (code below). Both times I face a [403] issue.
Could someone please help me log in programmatically to it?
Thank you!
Using the requests library:
import requests

r = requests.get("https://www.datacamp.com/users/sign_in")
r  # gives <Response [403]>
Using the Selenium webdriver:
driver = webdriver.Chrome(executable_path=driver_path, options=option)
driver.get("https://www.datacamp.com/users/sign_in")
driver.find_element_by_id("user_email")  # there should be a form element with id=user_email for entering the email
An implicit wait at least should have worked, like this:
from selenium import webdriver
driver = webdriver.Chrome(executable_path='/snap/bin/chromium.chromedriver')
driver.implicitly_wait(10)
url = "https://www.datacamp.com/users/sign_in"
driver.get(url)
driver.find_element_by_id("user_email").send_keys("test#dsfdfs.com")
driver.find_element_by_css_selector("#new_user>button[type=button]").click()
BUT
The real issue is that the site uses anti-scraping software.
If you open the Console and go to the request itself, you'll see it being rejected. It means that the site blocks your connection even before you try to log in.
Here is a similar question with different solutions: Can a website detect when you are using Selenium with chromedriver?
Not all the answers will work for you; try the different approaches suggested.
With Firefox you'll have the same issue (I've already checked).
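For instance, one tweak commonly suggested in that thread is hiding Chrome's automation hints (a sketch only; no guarantee against a dedicated anti-bot service):
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--disable-blink-features=AutomationControlled")  # hide navigator.webdriver-style hints
options.add_experimental_option("excludeSwitches", ["enable-automation"])  # drop the automation infobar
driver = webdriver.Chrome(executable_path="/snap/bin/chromium.chromedriver", options=options)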
You have to add a wait after driver.get("https://www.datacamp.com/users/sign_in") and before driver.find_element_by_id("user_email") to let the page load.
Try something like WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, 'user_email')))
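For completeness, that explicit wait needs these imports:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait up to 10 seconds for the email field to be present in the DOM
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "user_email")))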
I have permission to download some weather data from the following website:
https://www.meteobridel.com/messnetz/index3.php#
I was wondering if there is a way to automatically find the download URL behind the 'CSV' button and then download that CSV file with Python.
I tried this, but it didn't work:
from selenium import webdriver
browser = webdriver.Safari()
url = 'https://meteobridel.lu/?page_id=5'
browser.get(url)
browser.find_element_by_xpath('//*[@id="CSV"]').click()
browser.close()
Thanks in advance!
Try
from selenium import webdriver
browser = webdriver.Safari()
url = 'https://meteobridel.lu/?page_id=5'
browser.get(url)
browser.find_element_by_xpath("//body/div[@id='main']/div[1]/div[1]/div[1]/a[4]").click()
browser.close()
Checking the page you provided, I can't find a "CSV" ID.
Maybe try getting the button by class:
browser.find_element_by_xpath(r"//a[contains(#class, 'buttons-csv')]").click()
The element is inside an iframe, so you have to switch to it first. As the id of the frame is unique, you can switch like this:
browser.switch_to.frame("iframe")
browser.find_element_by_xpath('//span[contains(text(),"CSV")]/..').click()
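For completeness, a sketch of the whole flow (the frame id "iframe" is taken from the snippet above; adjust it to the actual page):
browser.switch_to.frame("iframe")  # enter the frame that holds the export buttons
browser.find_element_by_xpath('//span[contains(text(),"CSV")]/..').click()
browser.switch_to.default_content()  # return to the top-level document afterwards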
This is my very first time trying to scrape data from a website using Selenium. Fortunately I have got Selenium and Chrome to coordinate and the desired website opens. Once it opens up, I want to tell Python to click 'SEARCH', leaving the empty box blank (next to 'contains'), and then tell Python to export the results and save the .xlsx file as result_file. I do not know why the snippet is blowing up. Please provide your kind assistance.
from selenium import webdriver
driver = webdriver.Chrome("C:\Python27\Scripts\chromedriver.exe")
driver.get("https://etrakit.friscotexas.gov/Search/permit.aspx")
number_option = driver.find_element_by_id("cplMain_btnSearch")
number_option.click()
search_button = driver.find_element_by_id("cplMain_btnExportToExcel")
search_button.click()
result_file = open("result_file.xlsx", "w")  # note: this only creates an empty file; it doesn't capture the export
driver.close()
result_file.close()
Looking at the source of that page, the ID of the search button is "cplMain_btnSearch" not "SEARCH". And the Export button is "cplMain_btnExportToExcel".
To expand on Daniel Roseman's answer, you also need to specify the download location:
options = webdriver.ChromeOptions()
# download.default_directory is a Chrome preference, set via prefs rather than a command-line argument
options.add_experimental_option("prefs", {"download.default_directory": "C:/Python27"})
driver = webdriver.Chrome(chrome_options=options)
The file will then be stored in your Python27 directory with the name RadGridExport.csv.
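Putting the two answers together, a minimal end-to-end sketch (the chromedriver path and download directory are the asker's; adjust as needed):
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option("prefs", {"download.default_directory": "C:/Python27"})
driver = webdriver.Chrome(r"C:\Python27\Scripts\chromedriver.exe", chrome_options=options)
driver.get("https://etrakit.friscotexas.gov/Search/permit.aspx")
driver.find_element_by_id("cplMain_btnSearch").click()        # run the empty search
driver.find_element_by_id("cplMain_btnExportToExcel").click() # trigger the export
# RadGridExport.csv lands in the download directory; no open() call is needed
driver.quit()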
Here is my problem: I'm trying to use Selenium to access a webpage, and the special thing about this page is that it redirects automatically (you open the page and after a few seconds it redirects to another page). When I use driver = webdriver.Firefox(), my IDM catches that link perfectly after a few seconds.
But because I don't want the browser window to come up, I use PhantomJS instead, and it isn't working. My application only gets the loading-page URL (bitdl-1336...) but not the redirected link. Please help!
This is my code:
link = 'http://torrent.ajee.sh/hash.php?hash=' + self.global_hash_code
driver = webdriver.PhantomJS('phantomjs.exe')
driver.get(str(link))
element = driver.find_element_by_link_text('Download Zip')
element.click()
time.sleep(10)
msg = QMessageBox.information(self, QString('Success'), QString(driver.current_url))
And the result is just the loading-page URL again, not the redirected link.
Please help!
Sorry about my English.
Not exactly an answer to your PhantomJS-specific question, but a workaround to the problem.
But because I don't want the browser window to come up, I use PhantomJS instead
You can continue using Firefox, but start it in a Virtual Display, see more information at:
How do I run Selenium in Xvfb?
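For example, with the pyvirtualdisplay package (an assumption on my part; it is the usual wrapper around Xvfb suggested in that question):
from pyvirtualdisplay import Display
from selenium import webdriver

display = Display(visible=0, size=(1024, 768))  # start a headless X server
display.start()

driver = webdriver.Firefox()  # Firefox now renders inside the virtual display
driver.get("http://torrent.ajee.sh/hash.php?hash=...")  # "..." stands for the hash code, as in your snippet
# ... wait for the redirect and read driver.current_url ...
driver.quit()
display.stop()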
You may also need to let the browser automatically save the archive in a specified directory, see:
How do I automatically download files from a pop up dialog using selenium-python
Access to file download dialog in Firefox
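A sketch of the Firefox side of that, using standard download preferences so the zip archive is saved without a dialog (the directory is an example):
from selenium import webdriver

profile = webdriver.FirefoxProfile()
profile.set_preference("browser.download.folderList", 2)  # 2 = use the custom directory below
profile.set_preference("browser.download.dir", "/tmp/downloads")
profile.set_preference("browser.helperApps.neverAsk.saveToDisk", "application/zip")
driver = webdriver.Firefox(firefox_profile=profile)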
I am trying to get some comments off the car blog, Jalopnik. The comments don't come with the web page initially; instead, they are retrieved with some JavaScript, and you only get the featured comments. I need all the comments, so I would click "All" (between "Featured" and "Start a New Discussion") to get them.
To automate this, I tried learning Selenium. I modified their script from PyPI, guessing the code for clicking a link was link.click() with link = browser.find_element_by_xpath(...). It doesn't look like the "All" button (displaying all comments) was pressed.
Ultimately I'd like to download the HTML of that version to parse.
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
import time
browser = webdriver.Firefox() # Get local session of firefox
browser.get("http://jalopnik.com/5912009/prius-driver-beat-up-after-taking-out-two-bikers/") # Load page
time.sleep(0.2)
link = browser.find_element_by_xpath("//a[#class='tc cn_showall']")
link.click()
browser.save_screenshot('screenie.png')
browser.close()
Using Firefox with the Firebug plugin, I browsed to http://jalopnik.com/5912009/prius-driver-beat-up-after-taking-out-two-bikers.
I then opened the Firebug console and clicked on ALL; it obligingly showed a single AJAX call to http://jalopnik.com/index.php?op=threadlist&post_id=5912009&mode=all&page=0&repliesmode=hide&nouser=true&selected_thread=null
Opening that url in a new window gets me the comment feed you are seeking.
More generally, if you substitute the appropriate article ID into that URL, you should be able to automate the process without Selenium, as in the sketch below.
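A minimal sketch of that Selenium-free approach, assuming the URL pattern observed above still holds (the article ID is the one from this question):
import requests

post_id = "5912009"  # Jalopnik article ID from the question's URL
url = ("http://jalopnik.com/index.php?op=threadlist&post_id=" + post_id +
       "&mode=all&page=0&repliesmode=hide&nouser=true&selected_thread=null")
response = requests.get(url)
html = response.text  # the full comment feed, ready for parsing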