Getting page source with Python + Selenium does not work, connection refused - python

I am trying to get the page source using Selenium.
My code looks like this:
#!/usr/bin/env python
from selenium import webdriver
browser = webdriver.Firefox()
browser.get('https://python.org')
html_source = browser.page_source
print(html_source)
When I run the script, it opens the browser but nothing happens. If I wait without doing anything, it throws "Connection refused" after about 15 seconds.
If I type the address in manually and go to the website, nothing happens either.
Why doesn't it work? The script looks fine to me and should work.
I'm doing this because I need the page source after the JS scripts have executed, and I suspect Selenium can do that.
Or maybe you know another way to get the page source after the JavaScript has run?

As per your question, you invoked the get() method with the argument https://python.org. Instead, you should pass the full URL https://www.python.org/ as follows:
from selenium import webdriver
browser = webdriver.Firefox()
browser.get('https://www.python.org/')
html_source = browser.page_source
print(html_source)
Note: Ensure that you are using the latest Selenium-Python client (v3.8.0), the GeckoDriver v0.19.1 binary, and the latest Firefox Quantum (v57.x) browser.
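If you are unsure which versions you actually have installed, a quick check is possible from Python itself. This is a minimal sketch; note that the exact capability key names (browserVersion vs. version) vary between driver versions:
import selenium
print(selenium.__version__)  # e.g. 3.8.0

from selenium import webdriver
browser = webdriver.Firefox()
# geckodriver reports the negotiated browser/driver versions in the session capabilities
print(browser.capabilities.get('browserVersion'))
print(browser.capabilities.get('moz:geckodriverVersion'))
browser.quit()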

Related

Unable to programmatically log in to a website

I am trying to log in programmatically (in Python) to https://www.datacamp.com/users/sign_in using my email and password.
I have tried two login methods: one using the requests library and another using Selenium (code below). Both fail with a [403] response.
Could someone please help me log in to it programmatically?
Thank you!
Using the requests library:
import requests
r = requests.get("https://www.datacamp.com/users/sign_in")
r  # gives <Response [403]>
Using the Selenium webdriver:
driver = webdriver.Chrome(executable_path=driver_path, options=option)
driver.get("https://www.datacamp.com/users/sign_in")
driver.find_element_by_id("user_email")  # there should be a form element with id=user_email for entering the email
An implicit wait, at the very least, should have worked, like this:
from selenium import webdriver
driver = webdriver.Chrome(executable_path='/snap/bin/chromium.chromedriver')
driver.implicitly_wait(10)
url = "https://www.datacamp.com/users/sign_in"
driver.get(url)
driver.find_element_by_id("user_email").send_keys("test#dsfdfs.com")
driver.find_element_by_css_selector("#new_user>button[type=button]").click()
BUT
The real issue is that the site uses anti-scraping software.
If you open the Console and inspect the request itself, you'll see that the site blocks your connection even before you try to log in.
Here is a similar question with different solutions: Can a website detect when you are using Selenium with chromedriver?
Not all of the answers will work for you, so try the different approaches suggested; one of them is sketched below.
With Firefox you'll have the same issue (I've already checked).
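For example, one commonly suggested approach is to hide the most obvious automation fingerprints via Chrome options. This is a minimal sketch with no guarantee that it defeats this particular site's anti-bot product:
from selenium import webdriver

options = webdriver.ChromeOptions()
# Hides the navigator.webdriver flag that some anti-bot scripts check
options.add_argument("--disable-blink-features=AutomationControlled")
# Removes the "Chrome is being controlled by automated software" banner
options.add_experimental_option("excludeSwitches", ["enable-automation"])
driver = webdriver.Chrome(options=options)
driver.get("https://www.datacamp.com/users/sign_in")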
You have to add a wait after driver.get("https://www.datacamp.com/users/sign_in") and before driver.find_element_by_id("user_email") to let the page load.
Try something like WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, 'user_email')))
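For completeness, here is the explicit-wait version with its imports. The locator is taken from the question; whether this gets past the 403 block is a separate matter:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.datacamp.com/users/sign_in")
# Block for up to 10 seconds until the email field is present in the DOM
email_field = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "user_email")))
email_field.send_keys("test@dsfdfs.com")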

Python selenium get redirected url with Phantomjs

Here is my problem: I'm trying to use Selenium to access a webpage, and the special thing about this page is that it auto-redirects (you open the page and after a few seconds it automatically redirects to another page). When I use driver = webdriver.Firefox(), my IDM catches that link perfectly after a few seconds.
Because I don't want the browser window to come up, I used PhantomJS instead, but it is not working. My application can only get the loading-page URL (bitdl-1336...), not the redirected link. Please help!
This is my code:
link = 'http://torrent.ajee.sh/hash.php?hash=' + self.global_hash_code
driver = webdriver.PhantomJS('phantomjs.exe')
driver.get(str(link))
element = driver.find_element_by_link_text('Download Zip')
element.click()
time.sleep(10)
msg = QMessageBox.information(self, QString('Thành công'), QString(driver.current_url))  # 'Thành công' = 'Success'
The resulting message box shows only the loading-page URL, not the redirected one.
Please help, and sorry about my English!
Not exactly an answer to your PhantomJS-specific question, but a workaround to the problem.
Because I don't want the browser window to come up, I used PhantomJS instead
You can continue using Firefox, but start it in a virtual display; see more information at:
How do I run Selenium in Xvfb?
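A minimal sketch of that approach, assuming Xvfb and the pyvirtualdisplay package are installed (pip install pyvirtualdisplay):
from pyvirtualdisplay import Display
from selenium import webdriver

display = Display(visible=0, size=(1366, 768))
display.start()
driver = webdriver.Firefox()  # runs inside the virtual display; no window appears
driver.get('http://torrent.ajee.sh/hash.php?hash=' + hash_code)  # hash_code as in your script
# ... wait for the redirect, then read driver.current_url ...
driver.quit()
display.stop()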
You may also need to let the browser automatically save the archive to a specified directory; see the links below and the sketch after them:
How do I automatically download files from a pop up dialog using selenium-python
Access to file download dialog in Firefox
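A minimal sketch of the usual Firefox preference setup, assuming the file is served as application/zip (adjust the MIME type to whatever the server actually sends):
from selenium import webdriver

profile = webdriver.FirefoxProfile()
profile.set_preference('browser.download.folderList', 2)  # 2 = use a custom directory
profile.set_preference('browser.download.dir', '/tmp/downloads')
# Never show the save dialog for this MIME type
profile.set_preference('browser.helperApps.neverAsk.saveToDisk', 'application/zip')
driver = webdriver.Firefox(firefox_profile=profile)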

Failing to open HTTPS pages in PhantomJS using Splinter

I'm trying to use PhantomJS to write a scraper, but even the example in the morph.io documentation is not working. I suspect the problem is "https": I tested with plain http and it works. Can you please give me a solution?
I tested it using Firefox and it works.
from splinter import Browser

with Browser("phantomjs") as browser:
    # Optional, but make sure it is large enough that responsive pages don't
    # hide elements on you...
    browser.driver.set_window_size(1280, 1024)
    # Open the page you want...
    browser.visit("https://morph.io")
    # Submit the search form...
    browser.fill("q", "parliament")
    button = browser.find_by_css("button[type='submit']")
    button.click()
    # Scrape the data you like...
    links = browser.find_by_css(".search-results .list-group-item")
    for link in links:
        print(link['href'])
Is PhantomJS not working on HTTPS URLs?
Splinter uses the Selenium WebDriver bindings for Python under the hood (example), so you can simply pass the necessary options like this:
with Browser("phantomjs", service_args=['--ignore-ssl-errors=true', '--ssl-protocol=any']) as browser:
...
See PhantomJS failing to open HTTPS site for why those options might be necessary. Take a look at the PhantomJS commandline interface for more options.
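The same flags also work when driving PhantomJS through Selenium directly, without Splinter; a minimal sketch, assuming phantomjs is on your PATH:
from selenium import webdriver

driver = webdriver.PhantomJS(
    service_args=['--ignore-ssl-errors=true', '--ssl-protocol=any'])
driver.get('https://morph.io')
print(driver.title)
driver.quit()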

Can't capture HAR using Python Selenium Script with BrowserMob-Proxy

Goal:
I want to run a Selenium Python script through BrowserMob-Proxy, which will capture the traffic and output a HAR file.
Problem:
I have a functional (very basic) Python script (shown below). When it is altered to utilize BrowserMob-Proxy to capture a HAR, however, it fails. Below I provide two different scripts that both fail, but for different reasons (details provided after the code snippets).
BrowserMob-Proxy Explanation:
As mentioned before, I am using both 0.6.0 and 2.0-beta-8. The reasoning is that (A) Lightbody (lead developer of BMP) recently indicated that the most current release (2.0-beta-9) is not functional and advises users to use 2.0-beta-8 instead, and (B) from what I can tell from various sites and Stack Overflow posts, 0.6.0 (acquired through PIP) is used to make calls to Client.py/Server.py, whereas 2.0-beta-8 is used to start the server itself. To be honest, this confuses me. Starting BMP's server requires a batch (.bat) file, which is not provided in 0.6.0 but is with 2.0-beta-8. If anyone can shed some light on this area of confusion (I suspect it is the root of the problems described below), I'd be most appreciative.
Software Specs:
Operating System: Windows 7 (64x) -- running in VirtualBox
Browser: Firefox (32.0.2)
Script Language: Python (2.7.8)
Automated Web Browser: Selenium (2.43.0) -- installed via PIP
BrowserMob-Proxy: 0.6.0 AND 2.0-beta-8 -- see explanation above
Selenium Script (this script works):
"""This script utilizes Selenium to obtain the Google homepage"""
from selenium import webdriver
driver = webdriver.Firefox() # Opens FireFox browser.
driver.get('https://google.com/') # Gets google.com and loads page in browser.
driver.quit() # Closes Firefox browser
This script runs without producing any errors. It is provided for illustrative purposes, to show that everything works before adding the BMP logic.
Script ALPHA with BMP (does not work):
"""Using the same functional Selenium script, produce ALPHA_HAR.har output"""
from browsermobproxy import Server
server = Server(r'C:\Users\Matt\Desktop\browsermob-proxy-2.0-beta-8\bin\browsermob-proxy')
server.start()
proxy = server.create_proxy()
from selenium import webdriver
driver = webdriver.Firefox() # Opens FireFox browser.
proxy.new_har("ALPHA_HAR") # Creates a new HAR
driver.get("https://www.google.com/") # Gets google.com and loads page in browser.
proxy.har # Returns a HAR JSON blob (note: the return value is discarded here)
server.stop()
This code runs without producing any errors. However, after searching the entirety of my hard drive, I never find ALPHA_HAR.har.
Script BETA with BMP (does not work):
"""Using the same functional Selenium script, produce BETA_HAR.har output"""
from browsermobproxy import Server
server = Server("C:\Users\Matt\Desktop\\browsermob-proxy-2.0-beta-8\\bin\\browsermob-proxy")
server.start()
proxy = server.create_proxy()
from selenium import webdriver
profile = webdriver.FirefoxProfile()
profile.set_proxy(proxy.selenium_proxy())
driver = webdriver.Firefox(firefox_profile=profile)
proxy.new_har("BETA_HAR") # Creates a new HAR
driver.get("https://www.google.com/") # Gets google.com and loads page in browser.
proxy.har # Returns a HAR JSON blob (note: the return value is discarded here)
server.stop()
This code was taken from http://browsermob-proxy-py.readthedocs.org/en/latest/. When running it, Firefox attempts to load google.com but never succeeds; eventually it times out without producing any errors, and BETA_HAR.har can't be found anywhere on my hard drive. I have also noticed that trying to use this browser to visit any other site similarly fails to load (I suspect the proxy is not configured properly).
Try this:
from browsermobproxy import Server
from selenium import webdriver
import json
server = Server("path/to/browsermob-proxy")
server.start()
proxy = server.create_proxy()
profile = webdriver.FirefoxProfile()
profile.set_proxy(proxy.selenium_proxy())  # note: not self.proxy; this snippet is not inside a class
driver = webdriver.Firefox(firefox_profile=profile)
proxy.new_har("http://stackoverflow.com", options={'captureHeaders': True})
driver.get("http://stackoverflow.com")
result = json.dumps(proxy.har, ensure_ascii=False)
print(result)
proxy.stop()
driver.quit()
I use PhantomJS; here is an example of how to use it with Python:
import browsermobproxy as mob
import json
from selenium import webdriver
BROWSERMOB_PROXY_PATH = '/usr/share/browsermob/bin/browsermob-proxy'
url = 'http://google.com'
s = mob.Server(BROWSERMOB_PROXY_PATH)
s.start()
proxy = s.create_proxy()
proxy_address = "--proxy=127.0.0.1:%s" % proxy.port
service_args = [proxy_address, '--ignore-ssl-errors=yes']  # so that I can do https connections
driver = webdriver.PhantomJS(service_args=service_args)
driver.set_window_size(1400, 1050)
proxy.new_har(url)
driver.get(url)
har_data = json.dumps(proxy.har, indent=4)
screenshot = driver.get_screenshot_as_png()
imgname = "google.png"
harname = "google.har"
save_img = open(imgname, 'wb')  # binary write mode so the PNG is not corrupted
save_img.write(screenshot)
save_img.close()
save_har = open(harname, 'w')
save_har.write(har_data)
save_har.close()
driver.quit()
s.stop()
What worked for me was to downgrade the Java version to Java 11. I used jenv to install and manage multiple Java versions.
When you do:
proxy.har
you need to handle the returned object: proxy.har is a JSON-serializable structure, not a file, so if you want it on disk you have to write it out yourself:
import json
myFile = open('BETA_HAR.har', 'w')
myFile.write(json.dumps(proxy.har))
myFile.close()
Then you will find your .har file.
Finding your HAR file
Inherently, the HAR object generated by the proxy is just that: an object in memory. The reason you can't find it on your hard drive is that it is never saved there unless you write it there yourself. This is a pretty simple operation, as the HAR is just JSON.
import json

with open("harfile", "w") as harfile:
    harfile.write(json.dumps(proxy.har))
Why does ALPHA not work?
When you start dumping your HAR file, you'll find that it is empty with the ALPHA script. This is because you are not adding the proxy to Firefox's settings, so Firefox connects directly, bypassing your proxy.
What about BETA?
As far as connecting to the proxy goes, this code is written correctly, although personally I prefer adding the proxy to the desired capabilities and passing those through. The code for that is:
cap = webdriver.DesiredCapabilities.FIREFOX.copy()
proxy.add_to_capabilities(cap)
driver = webdriver.Firefox(capabilities=cap)
I would guess that your issue lies with the proxy itself. Check the bmp.log and/or server.log files in the location of the Python script and see what they say if something is going wrong.
Another possibility is that Selenium reports the page as loaded before all of the elements have actually finished downloading, so your proxy is shut down too early. Try making the script wait a bit longer before shutting down the proxy (see the sketch below), or run it interactively through the interpreter.
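A minimal sketch of that idea, continuing the BETA script above. Polling document.readyState is one common way to wait; note it still won't catch late XHR traffic:
import json
from selenium.webdriver.support.ui import WebDriverWait

driver.get("https://www.google.com/")
# Wait up to 30 seconds until the browser reports the document fully loaded
WebDriverWait(driver, 30).until(
    lambda d: d.execute_script("return document.readyState") == "complete")
with open("BETA_HAR.har", "w") as harfile:
    harfile.write(json.dumps(proxy.har))
server.stop()
driver.quit()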

Error while parsing URL using Python

I am working with a URL in Python.
If I click the URL, I can get the Excel file,
but if I run the following code, it gives me weird output.
>>> import urllib2
>>> urllib2.urlopen('http://intranet.stats.gov.my/trade/download.php?id=4&var=2012/2012%20MALAYSIA%27S%20EXPORTS%20BY%20ECONOMIC%20GROUPING.xls').read()
Output:
"<script language=javascript>window.location='2012/2012 MALAYSIA\\'S EXPORTS BY ECONOMIC GROUPING.xls'</script>"
Why isn't urllib2 able to read the file content?
Take a look using an HTTP listener (or even Google Chrome Developer Tools); there's a JavaScript redirect when you get to the page.
You will need to fetch the initial URL, parse the result, and then fetch the actual URL, as sketched below.
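A minimal sketch of that manual approach, assuming the response body always looks like the window.location snippet shown above (the regex and quoting are illustrative, not battle-tested):
import re
import urllib
import urllib2
import urlparse

initial_url = ('http://intranet.stats.gov.my/trade/download.php?id=4'
               '&var=2012/2012%20MALAYSIA%27S%20EXPORTS%20BY%20ECONOMIC%20GROUPING.xls')
page = urllib2.urlopen(initial_url).read()
# Pull the target out of: window.location='...xls'
match = re.search(r"window\.location='(.*)'", page)
if match:
    path = match.group(1).replace("\\'", "'")  # undo the JS escaping
    # Resolve relative to the original URL and percent-encode spaces etc.
    target = urlparse.urljoin(initial_url, urllib.quote(path, safe='/'))
    data = urllib2.urlopen(target).read()
    with open('exports.xls', 'wb') as f:
        f.write(data)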
@Kai in this question seems to have found an answer for JavaScript redirects using the Selenium module:
from selenium import webdriver
import time

driver = webdriver.Firefox()
link = "http://yourlink.com"
driver.get(link)
# This waits for the new page to load
while link == driver.current_url:
    time.sleep(1)
redirected_url = driver.current_url
