Controlling a browser using Python, on a Mac - python

I'm looking for a way to programmatically control a browser on a Mac (i.e. Firefox, Safari, Chrome/Chromium, or Opera, but not IE) using Python.
The actions I need include following links, checking if elements exist in a page, and submitting forms.
Which solution would you recommend?

I like Selenium; it's scriptable through Python. The Selenium IDE only runs in Firefox, but Selenium RC supports multiple browsers.
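For the actions you list (following links, checking whether elements exist, submitting forms), a minimal Selenium sketch in Python might look like this. The URL and element names are placeholders; it assumes the selenium package plus a browser driver (safaridriver, chromedriver, or geckodriver) and uses the older find_element_by_* API that the other examples in this thread use:
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException

driver = webdriver.Safari()   # or webdriver.Chrome() / webdriver.Firefox()
driver.get("http://www.example.com/")

# follow a link by its visible text
driver.find_element_by_link_text("More information...").click()

# check whether an element exists
try:
    driver.find_element_by_id("some-id")
    print("element found")
except NoSuchElementException:
    print("element not found")

# fill in and submit a form (the field name is hypothetical)
field = driver.find_element_by_name("q")
field.send_keys("query text")
field.submit()

driver.quit()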

Check out python-browsercontrol.
Also, you could read this forum page (I know, it's old, but it seems extremely relevant to your question):
http://bytes.com/topic/python/answers/45528-python-client-side-browser-script-language
Also: http://docs.python.org/library/webbrowser.html
Example:
from browser import *

my_browser = Firefox(99, '/usr/lib/firefox/firefox-bin')
my_browser.open_url('cnn.com')
open_url returns when the cnn.com home page document is loaded in the browser frame.

Might be a bit restrictive, but py-appscript may be the easiest way of controlling an AppleScript-able browser from Python.
For more complex things, you can use PyObjC to achieve pretty much anything - for example, webkit2png is a Python script which uses WebKit to load a page and save an image of it. You need a decent understanding of Objective-C and Cocoa to use it (as it just exposes ObjC objects to Python).
Screen-scraping may achieve what you want with much less complexity.

Check out spynner Python module.
Spynner is a stateful programmatic web browser module for Python. It is based upon PyQt and WebKit. It supports JavaScript, AJAX, and every other technology that WebKit is able to handle (Flash, SVG, ...). Spynner takes advantage of jQuery, a powerful JavaScript library that makes interaction with pages and event simulation really easy.
Using Spynner you would be able to simulate a web browser with no GUI (though a browsing window can be opened for debugging purposes), so it may be used to implement crawlers or acceptance testing tools.
See some examples at GitHub page.

Try mechanize, if you don't actually need a browser.
Example:
import re
import mechanize
br = mechanize.Browser()
br.open("http://www.example.com/")
# follow second link with element text matching regular expression
response1 = br.follow_link(text_regex=r"cheese\s*shop", nr=1)
assert br.viewing_html()
print br.title()
print response1.geturl()
print response1.info() # headers
print response1.read() # body
br.select_form(name="order")
# Browser passes through unknown attributes (including methods)
# to the selected HTMLForm.
br["cheeses"] = ["mozzarella", "caerphilly"] # (the method here is __setitem__)
# Submit current form. Browser calls .close() on the current response on
# navigation, so this closes response1
response2 = br.submit()

Several Mac applications can be controlled via OSAScript (a.k.a. AppleScript), which can be sent via the osascript command. O'Reilly has an article on invoking osascript from Python. I can't vouch for it doing exactly what you want, but it's a starting point.
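For instance, a minimal sketch of driving Safari through osascript from Python (the AppleScript snippet and URL are just illustrative):
import subprocess

# osascript ships with macOS; -e passes the AppleScript inline
script = 'tell application "Safari" to open location "http://www.example.com/"'
subprocess.call(["osascript", "-e", script])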

Maybe overpowered, but check out Marionette to control Firefox. There is a tutorial at readthedocs:
You first start a Marionette-enabled firefox instance:
firefox -marionette
Then you create a client (the Marionette class comes from the marionette_driver package, assuming that is installed):
from marionette_driver.marionette import Marionette

client = Marionette('localhost', port=2828)
client.start_session()
Navigation, for example, is done via:
url = 'http://mozilla.org'
client.navigate(url)
client.go_back()
client.go_forward()
assert client.get_url() == url

Checkout Mozmill https://github.com/mikeal/mozmill
Mozmill is a UI Automation framework for Mozilla apps like Firefox and Thunderbird. It's both an addon and a Python command-line tool. The addon provides an IDE for writing and running the JavaScript tests and the Python package provides a mechanism for running the tests from the command line as well as providing a way to test restarting the application.

Take a look at PyShell (an extension to PyXPCOM).
Example:
promptSvc = components.classes["@mozilla.org/embedcomp/prompt-service;1"].\
    getService(components.interfaces.nsIPromptService)
promptSvc.alert(None, 'Greeting...', "Hello from Python")

You can use selenium library for Python, here is a simple example (in form of unittest):
#!/usr/bin/env python3
import unittest
from selenium import webdriver


class FooTest(unittest.TestCase):

    def setUp(self):
        self.driver = webdriver.Firefox()
        self.base_url = "http://example.com"

    def is_text_present(self, text):
        return str(text) in self.driver.page_source

    def test_example(self):
        self.driver.get(self.base_url + "/")
        self.assertTrue(self.is_text_present("Example"))


if __name__ == '__main__':
    suite = unittest.TestLoader().loadTestsFromTestCase(FooTest)
    result = unittest.TextTestRunner(verbosity=2).run(suite)

Can a website detect when you are using Selenium with chromedriver?

I've been testing out Selenium with Chromedriver and I noticed that some pages can detect that you're using Selenium even though there's no automation at all. Even when I'm just browsing manually just using Chrome through Selenium and Xephyr I often get a page saying that suspicious activity was detected. I've checked my user agent, and my browser fingerprint, and they are all exactly identical to the normal Chrome browser.
When I browse to these sites in normal Chrome everything works fine, but the moment I use Selenium I'm detected.
In theory, chromedriver and Chrome should look literally exactly the same to any web server, but somehow they can detect it.
If you want some test code try out this:
from pyvirtualdisplay import Display
from selenium import webdriver
display = Display(visible=1, size=(1600, 902))
display.start()
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--disable-extensions')
chrome_options.add_argument('--profile-directory=Default')
chrome_options.add_argument("--incognito")
chrome_options.add_argument("--disable-plugins-discovery");
chrome_options.add_argument("--start-maximized")
driver = webdriver.Chrome(chrome_options=chrome_options)
driver.delete_all_cookies()
driver.set_window_size(800,800)
driver.set_window_position(0,0)
print('arguments done')
driver.get('http://stubhub.com')
If you browse around stubhub you'll get redirected and 'blocked' within one or two requests. I've been investigating this and I can't figure out how they can tell that a user is using Selenium.
How do they do it?
I installed the Selenium IDE plugin in Firefox and I got banned when I went to stubhub.com in the normal Firefox browser with only the additional plugin.
When I use Fiddler to view the HTTP requests being sent back and forth I've noticed that the 'fake browser's' requests often have 'no-cache' in the response header.
Results like "Is there a way to detect that I'm in a Selenium Webdriver page from JavaScript?" suggest that there should be no way to detect when you are using a webdriver. But this evidence suggests otherwise.
The site uploads a fingerprint to their servers, but I checked and the fingerprint of Selenium is identical to the fingerprint when using Chrome.
This is one of the fingerprint payloads that they send to their servers:
{"appName":"Netscape","platform":"Linuxx86_64","cookies":1,"syslang":"en-US","userlang":"en-
US","cpu":"","productSub":"20030107","setTimeout":1,"setInterval":1,"plugins":
{"0":"ChromePDFViewer","1":"ShockwaveFlash","2":"WidevineContentDecryptionMo
dule","3":"NativeClient","4":"ChromePDFViewer"},"mimeTypes":
{"0":"application/pdf","1":"ShockwaveFlashapplication/x-shockwave-
flash","2":"FutureSplashPlayerapplication/futuresplash","3":"WidevineContent
DecryptionModuleapplication/x-ppapi-widevine-
cdm","4":"NativeClientExecutableapplication/x-
nacl","5":"PortableNativeClientExecutableapplication/x-
pnacl","6":"PortableDocumentFormatapplication/x-google-chrome-
pdf"},"screen":{"width":1600,"height":900,"colorDepth":24},"fonts":
{"0":"monospace","1":"DejaVuSerif","2":"Georgia","3":"DejaVuSans","4":"Trebu
chetMS","5":"Verdana","6":"AndaleMono","7":"DejaVuSansMono","8":"LiberationM
ono","9":"NimbusMonoL","10":"CourierNew","11":"Courier"}}
It's identical in Selenium and in Chrome.
VPNs work for a single use, but they get detected after I load the first page. Clearly some JavaScript code is being run to detect Selenium.
Basically, the way the Selenium detection works is that they test for predefined JavaScript variables which appear when running with Selenium. The bot detection scripts usually look for anything containing the word "selenium" / "webdriver" in any of the variables (on the window object), and also for document variables called $cdc_ and $wdc_. Of course, all of this depends on which browser you are on; the different browsers expose different things.
For me, I used Chrome, so all I had to do was ensure that $cdc_ no longer existed as a document variable, and voilà (download the chromedriver source code, rename $cdc_, and re-compile chromedriver).
This is the function I modified in chromedriver:
File call_function.js:
function getPageCache(opt_doc) {
    var doc = opt_doc || document;
    // var key = '$cdc_asdjflasutopfhvcZLmcfl_';
    var key = 'randomblabla_';
    if (!(key in doc))
        doc[key] = new Cache();
    return doc[key];
}
(Note the commented-out line: all I did was turn $cdc_ into randomblabla_.)
Here is pseudocode which demonstrates some of the techniques that bot networks might use:
runBotDetection = function () {
    var documentDetectionKeys = [
        "__webdriver_evaluate",
        "__selenium_evaluate",
        "__webdriver_script_function",
        "__webdriver_script_func",
        "__webdriver_script_fn",
        "__fxdriver_evaluate",
        "__driver_unwrapped",
        "__webdriver_unwrapped",
        "__driver_evaluate",
        "__selenium_unwrapped",
        "__fxdriver_unwrapped",
    ];

    var windowDetectionKeys = [
        "_phantom",
        "__nightmare",
        "_selenium",
        "callPhantom",
        "callSelenium",
        "_Selenium_IDE_Recorder",
    ];

    for (const windowDetectionKey in windowDetectionKeys) {
        const windowDetectionKeyValue = windowDetectionKeys[windowDetectionKey];
        if (window[windowDetectionKeyValue]) {
            return true;
        }
    };

    for (const documentDetectionKey in documentDetectionKeys) {
        const documentDetectionKeyValue = documentDetectionKeys[documentDetectionKey];
        if (window['document'][documentDetectionKeyValue]) {
            return true;
        }
    };

    for (const documentKey in window['document']) {
        if (documentKey.match(/\$[a-z]dc_/) && window['document'][documentKey]['cache_']) {
            return true;
        }
    }

    if (window['external'] && window['external'].toString() && (window['external'].toString()['indexOf']('Sequentum') != -1)) return true;

    if (window['document']['documentElement']['getAttribute']('selenium')) return true;
    if (window['document']['documentElement']['getAttribute']('webdriver')) return true;
    if (window['document']['documentElement']['getAttribute']('driver')) return true;

    return false;
};
According to user szx, it is also possible to simply open chromedriver.exe in a hex editor, and just do the replacement manually, without actually doing any compiling.
Replacing cdc_ string
You can use Vim or Perl to replace the cdc_ string in chromedriver. See the answer by Erti-Chris Eelmaa to learn more about that string and how it's a detection point.
Using Vim or Perl prevents you from having to recompile source code or use a hex editor.
Make sure to make a copy of the original chromedriver before attempting to edit it.
Our goal is to alter the cdc_ string, which looks something like $cdc_lasutopfhvcZLmcfl.
The methods below were tested on chromedriver version 2.41.578706.
Using Vim
vim /path/to/chromedriver
After running the line above, you'll probably see a bunch of gibberish. Do the following:
Replace all instances of cdc_ with dog_ by typing :%s/cdc_/dog_/g.
dog_ is just an example. You can choose anything as long as it has the same number of characters as the search string (e.g., cdc_), otherwise chromedriver will fail.
To save the changes and quit, type :wq! and press return.
If you need to quit without saving changes, type :q! and press return.
Using Perl
The line below replaces all cdc_ occurrences with dog_. Credit to Vic Seedoubleyew:
perl -pi -e 's/cdc_/dog_/g' /path/to/chromedriver
Make sure that the replacement string (e.g., dog_) has the same number of characters as the search string (e.g., cdc_), otherwise the chromedriver will fail.
Wrapping Up
To verify that all occurrences of cdc_ were replaced:
grep "cdc_" /path/to/chromedriver
If no output was returned, the replacement was successful.
Go to the altered chromedriver and double click on it. A terminal window should open up. If you don't see killed in the output, you've successfully altered the driver.
Make sure that the name of the altered chromedriver binary is chromedriver, and that the original binary is either moved from its original location or renamed.
My Experience With This Method
I was previously being detected on a website while trying to log in, but after replacing cdc_ with an equal sized string, I was able to log in. Like others have said though, if you've already been detected, you might get blocked for a plethora of other reasons even after using this method. So you may have to try accessing the site that was detecting you using a VPN, different network, etc.
As we've already figured out in the question and the posted answers, there is an anti web-scraping and bot detection service called "Distil Networks" in play here. And, according to the company CEO's interview:
Even though they can create new bots, we figured out a way to identify Selenium as the tool they're using, so we're blocking Selenium no matter how many times they iterate on that bot. We're doing that now with Python and a lot of different technologies. Once we see a pattern emerge from one type of bot, then we work to reverse engineer the technology they use and identify it as malicious.
It'll take time and additional challenges to understand how exactly they are detecting Selenium, but what can we say for sure at the moment:
it's not related to the actions you take with Selenium. Once you navigate to the site, you get immediately detected and banned. I've tried adding artificial random delays between actions and pausing after the page is loaded - nothing helped
it's not about the browser fingerprint either. I tried multiple browsers, with clean profiles and not, and in incognito mode, but nothing helped
since, according to the hint in the interview, this was "reverse engineered", I suspect it is done with some JavaScript code executed in the browser that reveals this is a browser automated via Selenium WebDriver
I decided to post it as an answer, since clearly:
Can a website detect when you are using selenium with chromedriver?
Yes.
Also, I haven't experimented with older Selenium and older browser versions. In theory, there could be something implemented/added to Selenium at a certain point that the Distil Networks bot detector currently relies on. If that is the case, we might detect (yes, let's detect the detector) at what point/version the relevant change was made, look into the changelog and changesets, and maybe that could give us more information on where to look and what they use to detect a webdriver-powered browser. It's just a theory that needs to be tested.
A lot has been analyzed and discussed about websites detecting a browser driven by a Selenium-controlled ChromeDriver. Here are my two cents:
According to the article Browser detection using the user agent, serving different webpages or services to different browsers is usually not among the best of ideas. The web is meant to be accessible to everyone, regardless of which browser or device a user is using. There are best practices outlined for developing a website that progressively enhances itself based on feature availability rather than by targeting specific browsers.
However, browsers and standards are not perfect, and there are still some edge cases where websites detect the browser and whether the browser is driven by a Selenium-controlled WebDriver. Browsers can be detected in different ways, and some commonly used mechanisms are as follows:
Implementing captcha / recaptcha to detect the automatic bots.
You can find a relevant detailed discussion in How does recaptcha 3 know I'm using selenium/chromedriver?
Detecting the term HeadlessChrome within headless Chrome UserAgent
You can find a relevant detailed discussion in Access Denied page with headless Chrome on Linux while headed Chrome works on windows using Selenium through Python
Using Bot Management service from Distil Networks
You can find a relevant detailed discussion in Unable to use Selenium to automate Chase site login
Using Bot Manager service from Akamai
You can find a relevant detailed discussion in Dynamic dropdown doesn't populate with auto suggestions on https://www.nseindia.com/ when values are passed using Selenium and Python
Using Bot Protection service from Datadome
You can find a relevant detailed discussion in Website using DataDome gets captcha blocked while scraping using Selenium and Python
However, using the user agent to detect the browser looks simple, but doing it well is in fact a bit tougher.
Note: At this point it's worth mentioning that it's very rarely a good idea to use user agent sniffing. There are almost always better and more broadly compatible ways to address a given issue.
Considerations for browser detection
The idea behind detecting the browser can be either of the following:
Trying to work around a specific bug in some specific variant or specific version of a webbrowser.
Trying to check for the existence of a specific feature that some browsers don't yet support.
Trying to provide different HTML depending on which browser is being used.
Alternative of browser detection through UserAgents
Some of the alternatives of browser detection are as follows:
Implementing a test to detect how the browser implements the API of a feature and determining how to use it from that. An example was when Chrome unflagged experimental lookbehind support in regular expressions.
Adapting the design technique of Progressive enhancement which would involve developing a website in layers, using a bottom-up approach, starting with a simpler layer and improving the capabilities of the site in successive layers, each using more features.
Adapting the top-down approach of Graceful degradation in which we build the best possible site using all the features we want and then tweak it to make it work on older browsers.
Solution
To prevent a Selenium-driven WebDriver from getting detected, a niche approach would include either or all of the approaches mentioned below:
Rotating the UserAgent in every execution of your Test Suite using fake_useragent module as follows:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from fake_useragent import UserAgent
options = Options()
ua = UserAgent()
userAgent = ua.random
print(userAgent)
options.add_argument(f'user-agent={userAgent}')
driver = webdriver.Chrome(chrome_options=options, executable_path=r'C:\WebDrivers\ChromeDriver\chromedriver_win32\chromedriver.exe')
driver.get("https://www.google.co.in")
driver.quit()
You can find a relevant detailed discussion in Way to change Google Chrome user agent in Selenium?
Rotating the UserAgent in each of your Tests using Network.setUserAgentOverride through execute_cdp_cmd() as follows:
from selenium import webdriver
driver = webdriver.Chrome(executable_path=r'C:\WebDrivers\chromedriver.exe')
print(driver.execute_script("return navigator.userAgent;"))
# Setting user agent as Chrome/83.0.4103.97
driver.execute_cdp_cmd('Network.setUserAgentOverride', {"userAgent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36'})
print(driver.execute_script("return navigator.userAgent;"))
You can find a relevant detailed discussion in How to change the User Agent using Selenium and Python
Changing the property value of navigator for webdriver to undefined as follows:
driver.execute_cdp_cmd("Page.addScriptToEvaluateOnNewDocument", {
"source": """
Object.defineProperty(navigator, 'webdriver', {
get: () => undefined
})
"""
})
You can find a relevant detailed discussion in Selenium webdriver: Modifying navigator.webdriver flag to prevent selenium detection
Changing the values of navigator.plugins, navigator.languages, WebGL, hairline feature, missing image, etc.
You can find a relevant detailed discussion in Is there a version of selenium webdriver that is not detectable?
Changing the conventional Viewport
You can find a relevant detailed discussion in How to bypass Google captcha with Selenium and python?
Dealing with reCAPTCHA
While dealing with 2captcha and reCAPTCHA v3, rather than clicking on the checkbox associated with the text I'm not a robot, it may be easier to get authenticated by extracting and using the data-sitekey.
You can find a relevant detailed discussion in How to identify the 32 bit data-sitekey of ReCaptcha V2 to obtain a valid response programmatically using Selenium and Python Requests?
tl;dr
You can find a cutting edge solution to evade webdriver detection in:
selenium-stealth - a proven way to evade webdriver detection
With the availability of Selenium Stealth, evading detection of a Selenium-driven, ChromeDriver-initiated google-chrome browsing context has become much easier.
selenium-stealth
selenium-stealth is a Python package to prevent detection. It tries to make Python Selenium more stealthy. However, as of now selenium-stealth only supports Selenium Chrome.
Features that selenium-stealth currently offers:
selenium-stealth passes all public bot tests.
With selenium-stealth, Selenium can do a Google account login.
selenium-stealth helps maintain a normal reCAPTCHA v3 score.
Installation
Selenium-stealth is available on PyPI so you can install with pip as follows:
pip install selenium-stealth
selenium4 compatible code
Code Block:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium_stealth import stealth

options = Options()
options.add_argument("start-maximized")

# Chrome is controlled by automated test software
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)

s = Service('C:\\BrowserDrivers\\chromedriver.exe')
driver = webdriver.Chrome(service=s, options=options)

# Selenium Stealth settings
stealth(driver,
        languages=["en-US", "en"],
        vendor="Google Inc.",
        platform="Win32",
        webgl_vendor="Intel Inc.",
        renderer="Intel Iris OpenGL Engine",
        fix_hairline=True,
        )

driver.get("https://bot.sannysoft.com/")
tl;dr
You can find a couple of relevant detailed discussions in:
Can a website detect when you are using Selenium with chromedriver?
How to automate login to a site which is detecting my attempts to login using selenium-stealth
Undetected Chromedriver not loading correctly
Example of how it's implemented on wellsfargo.com:
try {
if (window.document.documentElement.getAttribute("webdriver")) return !+[]
} catch (IDLMrxxel) {}
try {
if ("_Selenium_IDE_Recorder" in window) return !+""
} catch (KknKsUayS) {}
try {
if ("__webdriver_script_fn" in document) return !+""
Obfuscating JavaScript result
I have checked the chromedriver source code. It injects some JavaScript files into the browser.
Every JavaScript file in this link is injected into the web pages:
https://chromium.googlesource.com/chromium/src/+/master/chrome/test/chromedriver/js/
So I reverse engineered and obfuscated the JavaScript files by hex editing. Now I was sure that no JavaScript variable, function name, or fixed string could be used to uncover Selenium activity. But still some sites and reCAPTCHA detect Selenium!
Maybe they check the modifications that are caused by chromedriver JavaScript execution :)
Chrome 'navigator' parameters modification
I discovered there are some parameters in 'navigator' that quickly reveal the use of chromedriver.
These are the parameters:
"navigator.webdriver" In non-automated mode it is 'undefined'. In automated mode it's 'true'.
"navigator.plugins" In headless Chrome, it has 0 length. So I added some fake elements to fool the plugin length checking process.
"navigator.languages" was set to default chrome value '["en-US", "en", "es"]'.
So what I needed was a chrome extension to run JavaScript on the web pages. I made an extension with the JavaScript code provided in the article and used another article to add the zipped extension to my project. I have successfully changed the values; but still nothing changed!
I didn't find other variables like these, but it doesn't mean that they don't exist. Still reCAPTCHA detects chromedriver, So there should be more variables to change. The next step should be reverse engineering of the detector services that I don't want to do.
Now I'm not sure if is it worth it to spend more time on this automation process or search for alternative methods!
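For reference, here is a hedged sketch of setting those navigator parameters from Python without packaging an extension, by injecting a script into every new document over the Chrome DevTools Protocol (assumes a reasonably recent Selenium with a Chromium-based driver; the override values are illustrative and not guaranteed to fool any particular detector):
from selenium import webdriver

driver = webdriver.Chrome()
driver.execute_cdp_cmd("Page.addScriptToEvaluateOnNewDocument", {
    "source": """
        // hide the automation flag and fake plausible plugin/language values
        Object.defineProperty(navigator, 'webdriver', {get: () => undefined});
        Object.defineProperty(navigator, 'languages', {get: () => ['en-US', 'en', 'es']});
        Object.defineProperty(navigator, 'plugins', {get: () => [1, 2, 3, 4, 5]});
    """
})
driver.get("https://bot.sannysoft.com/")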
Try to use Selenium with a specific Chrome user profile. That way you can use it as a specific user and define anything you want. When doing so, it will run as a 'real' user. Look at the Chrome process with a process explorer and you'll see the difference in the flags.
For example:
username = os.getenv("USERNAME")
userProfile = "C:\\Users\\" + username +
"\\AppData\\Local\\Google\\Chrome\\User Data\\Default"
options = webdriver.ChromeOptions()
options.add_argument("user-data-dir={}".format(userProfile))
# Add any tag here you want.
options.add_experimental_option(
"excludeSwitches",
"""
ignore-certificate-errors
safebrowsing-disable-download-protection
safebrowsing-disable-auto-update
disable-client-side-phishing-detection
""".split()
)
chromedriver = "C:\Python27\chromedriver\chromedriver.exe"
os.environ["webdriver.chrome.driver"] = chromedriver
browser = webdriver.Chrome(executable_path=chromedriver, chrome_options=options)
Google Chrome tag list here
partial interface Navigator { readonly attribute boolean webdriver; };
The webdriver IDL attribute of the Navigator interface must return the value of the webdriver-active flag, which is initially false.
This property allows websites to determine that the user agent is under control by WebDriver, and can be used to help mitigate denial-of-service attacks.
Taken directly from the 2017 W3C Editor's Draft of WebDriver. This heavily implies that, at the very least, future iterations of Selenium's drivers will be identifiable to prevent misuse. Ultimately, it's hard to tell without the source code what exactly causes chromedriver in particular to be detectable.
All I had to do was:
my_options = webdriver.ChromeOptions()
my_options.add_argument( '--disable-blink-features=AutomationControlled' )
Some more information on this: it relates to the website skyscanner.com. In the past I have been able to scrape it. Yes, it did detect the browser automation and gave me a captcha asking me to press and hold a button. I used to be able to complete the captcha manually, then search flights and scrape. But this time around, after completing the captcha I got the same captcha again and again and just couldn't escape from it. I tried some of the most popular suggestions to avoid automation being detected, but they didn't work. Then I found an article which did work, and by process of elimination I found out it only took the option above to get around their browser automation detection. Now I don't even get the captcha and everything else seems to be working normally.
Versions I am running currently:
OS: Windows 7 64 bit
Python 3.8.0 (tags/v3.8.0:fa919fd, 2019-10-14) (MSC v.1916 64 bit (AMD64)) on win32
Browser: Chrome Version 100.0.4896.60 (Official
Build) (64-bit)
Selenium 4.1.3
ChromeDriver 100.0.4896.60 chromedriver_win32.zip 930ff33ae8babeaa74e0dd1ce1dae7ff
For some websites it works to remove the webdriver property from navigator:
from selenium import webdriver
driver = webdriver.Chrome()
driver.execute_cdp_cmd("Page.addScriptToEvaluateOnNewDocument", {
"source":
"const newProto = navigator.__proto__;"
"delete newProto.webdriver;"
"navigator.__proto__ = newProto;"
})
Firefox is said to set window.navigator.webdriver === true if working with a webdriver. That was according to one of the older specs (e.g. the archive.org copy), but I couldn't find it in the new one except for some very vague wording in the appendices.
A test for it is in the Selenium code in the file fingerprint_test.js, where the comment at the end says "Currently only implemented in firefox", but I wasn't able to identify any code in that direction with some simple grepping, neither in the current (41.0.2) Firefox release tree nor in the Chromium tree.
I also found a comment for an older commit regarding fingerprinting in the Firefox driver, b82512999938 from January 2015. That code is still in the Selenium git master downloaded yesterday, at javascript/firefox-driver/extension/content/server.js, with a comment linking to the slightly differently worded appendix in the current W3C WebDriver spec.
In addition to the great answer by Erti-Chris Eelmaa: there's the annoying window.navigator.webdriver, and it is read-only. Even if you change its value to false, it will still be true. That's why a browser driven by automated software can still be detected.
MDN
The variable is managed by the flag --enable-automation in Chrome. chromedriver launches Chrome with that flag and Chrome sets window.navigator.webdriver to true. You can find it here. You need to add the flag to the "excludeSwitches" list. For instance (Go):
package main

import (
	"fmt"

	"github.com/tebeka/selenium"
	"github.com/tebeka/selenium/chrome"
)

func main() {
	caps := selenium.Capabilities{
		"browserName": "chrome",
	}
	chromeCaps := chrome.Capabilities{
		Path:            "/path/to/chrome-binary",
		ExcludeSwitches: []string{"enable-automation"},
	}
	caps.AddChrome(chromeCaps)

	// Connect to a locally running WebDriver hub
	wd, err := selenium.NewRemote(caps, fmt.Sprintf("http://localhost:%d/wd/hub", 4444))
	if err != nil {
		panic(err)
	}
	defer wd.Quit()
}
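For comparison, the same switch exclusion in Python would look roughly like this (a sketch; chromedriver setup and paths are omitted):
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option("useAutomationExtension", False)
driver = webdriver.Chrome(options=options)
# check what the page now sees
print(driver.execute_script("return navigator.webdriver"))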
One more thing I found is that some websites use a platform that checks the user agent. If the value contains "HeadlessChrome", the behavior can be weird when using headless mode.
The workaround for that is to override the user agent value, for example in Java:
chromeOptions.addArguments("--user-agent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36");
The bot detection I've seen seems more sophisticated, or at least different, than what I've read in the other answers here.
Experiment 1
I open a browser and web page with Selenium from a Python console.
The mouse is already at a specific location where I know a link will appear once the page loads. I never move the mouse.
I press the left mouse button once (this is necessary to take focus from the console where Python is running to the browser).
I press the left mouse button again (remember, cursor is above a given link).
The link opens normally, as it should.
Experiment 2
As before, I open a browser and the web page with Selenium from a Python console.
This time around, instead of clicking with the mouse, I use Selenium (in the Python console) to click the same element with a random offset.
The link doesn't open, but I am taken to a sign up page.
Implications
opening a web browser via Selenium doesn't preclude me from appearing human
moving the mouse like a human is not necessary to be classified as human
clicking something via Selenium with an offset still raises the alarm
It seems mysterious, but I guess they can just determine whether an action originates from Selenium or not, while they don't care whether the browser itself was opened via Selenium or not. Or can they determine if the window has focus? It would be interesting to hear if anyone has any insights.
It sounds like they are behind a web application firewall. Take a look at modsecurity and OWASP to see how those work.
In reality, what you are asking is how to do bot detection evasion. That is not what Selenium WebDriver is for; it is for testing your web application, not hitting other people's web applications. It is possible, but basically you'd have to look at what a WAF looks for in its rule set and specifically avoid it with Selenium, if you can. Even then, it might still not work, because you don't know which WAF they are using.
You did the right first step, that is, faking the user agent. If that didn't work though, then a WAF is in place and you probably need to get more tricky.
Point taken from another answer: make sure your user agent is actually being set correctly first. Maybe have it hit a local web server or sniff the traffic going out.
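One quick way to do that check is to point the Selenium-driven browser at a header echo service and read back what was actually sent (httpbin.org is used here purely as an example):
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://httpbin.org/headers")  # echoes the request headers back as JSON
print(driver.page_source)                  # inspect the User-Agent entry
driver.quit()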
Even if you are sending all the right data (e.g. Selenium doesn't show up as an extension, you have a reasonable resolution/bit-depth, etc.), there are a number of services and tools which profile visitor behaviour to determine whether the actor is a user or an automated system.
For example, visiting a site then immediately going to perform some action by moving the mouse directly to the relevant button, in less than a second, is something no user would actually do.
It might also be useful as a debugging tool to use a site such as https://panopticlick.eff.org/ to check how unique your browser is; it'll also help you verify whether there are any specific parameters that indicate you're running in Selenium.
Answer: YES
Some sites will detect Selenium by the browser's fingerprints and other data; other sites will detect Selenium based on behavior, not only based on what you do, but what you don't do as well.
Usually the data that Selenium provides is enough to detect it.
You can check the browser fingerprints on sites like these:
https://bot.sannysoft.com
https://fingerprintjs.github.io/fingerprintjs/
https://antoinevastel.com/bots/
Try with your normal browser, then try with Selenium; you'll see the differences.
You can change some fingerprints with options(), like the user agent and others; see the results for yourself.
You can try to avoid this detection in many ways. I recommend using this library: undetected_chromedriver:
https://github.com/ultrafunkamsterdam/undetected-chromedriver
import undetected_chromedriver.v2 as uc
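A minimal usage sketch, assuming the package is installed (pip install undetected-chromedriver):
import undetected_chromedriver.v2 as uc

driver = uc.Chrome()
driver.get("https://bot.sannysoft.com/")  # compare the results with a stock chromedriver run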
Otherwise, you can try an alternative to Selenium. I've heard of PhantomJS, but I haven't tried it.
Some sites are detecting this:
function d() {
    try {
        if (window.document.$cdc_asdjflasutopfhvcZLmcfl_.cache_)
            return !0
    } catch (e) {}
    try {
        //if (window.document.documentElement.getAttribute(decodeURIComponent("%77%65%62%64%72%69%76%65%72")))
        if (window.document.documentElement.getAttribute("webdriver"))
            return !0
    } catch (e) {}
    try {
        //if (decodeURIComponent("%5F%53%65%6C%65%6E%69%75%6D%5F%49%44%45%5F%52%65%63%6F%72%64%65%72") in window)
        if ("_Selenium_IDE_Recorder" in window)
            return !0
    } catch (e) {}
    try {
        //if (decodeURIComponent("%5F%5F%77%65%62%64%72%69%76%65%72%5F%73%63%72%69%70%74%5F%66%6E") in document)
        if ("__webdriver_script_fn" in document)
            return !0
    } catch (e) {}
It seems to me the simplest way to do it with Selenium is to intercept the XHR that sends back the browser fingerprint.
But since this is a Selenium-only problem, it’s better just to use something else. Selenium is supposed to make things like this easier, not way harder.
Write an HTML page with the following code. You will see that in the DOM selenium applies a webdriver attribute in the outerHTML:
<html>
  <head>
    <script type="text/javascript">
    <!--
      function showWindow(){
        javascript:(alert(document.documentElement.outerHTML));
      }
    //-->
    </script>
  </head>
  <body>
    <form>
      <input type="button" value="Show outerHTML" onclick="showWindow()">
    </form>
  </body>
</html>
You can try using the parameter "enable-automation":
var options = new ChromeOptions();
// hide selenium
options.AddExcludedArguments(new List<string>() { "enable-automation" });
var driver = new ChromeDriver(ChromeDriverService.CreateDefaultService(), options);
But I want to warn you that this ability was fixed in ChromeDriver 79.0.3945.16.
So you would probably need to use older versions of Chrome.
Also, as another option, you can try using InternetExplorerDriver instead of Chrome. As for me, IE does not block at all without any hacks.
And for more info try to take a look here:
Selenium webdriver: Modifying navigator.webdriver flag to prevent selenium detection
Unable to hide "Chrome is being controlled by automated software" infobar within Chrome v76
I've found changing the JavaScript "key" variable like this:
//Fools the website into believing a human is navigating it
((JavascriptExecutor)driver).executeScript("window.key = \"blahblah\";");
works for some websites when using Selenium WebDriver along with Google Chrome, since many sites check for this variable in order to avoid being scraped by Selenium.
I had the same problem and solved it with the following configuration (in C#):
options.AddArguments("start-maximized");
options.AddArguments("--user-agent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36");
options.AddExcludedArgument("enable-automation"); // For hiding chrome being controlled by automation..
options.AddAdditionalCapability("useAutomationExtension", false);
// Import cookies
options.AddArguments("user-data-dir=" + userDataDir);
options.AddArguments("profile-directory=" + profileDir);
The Chromium developers added a second headless mode in 2021, which no longer adds HeadlessChrome to the user agent string. See https://bugs.chromium.org/p/chromium/issues/detail?id=706008#c36
And they later renamed the option in 2023 for Chrome 109 -> https://github.com/chromium/chromium/commit/e9c516118e2e1923757ecb13e6d9fff36775d1f4
The newer --headless=new flag now gives you the full functionality of Chrome in the new headless mode, and you can even run extensions in it, for Chrome 109 and above. (If using Chrome 96 through 108, use the older --headless=chrome option.)
Usage: (Chrome 109 and above):
options.add_argument("--headless=new")
Usage: (Chrome 96 through Chrome 108):
options.add_argument("--headless=chrome")
This new headless mode makes Chromium browsers work just like regular mode, which means they won't be as easily detected as Chrome in the older headless mode.
Combine that with other tools such as undetected-chromedriver for maximum evasion against Selenium-detection.
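Putting that together in Python, a sketch might look like this (assumes Chrome 109+ with a matching chromedriver):
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")   # use "--headless=chrome" on Chrome 96-108
driver = webdriver.Chrome(options=options)
driver.get("https://www.example.com/")
print(driver.title)
driver.quit()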

Open URL in Python in existing tab

I'm using web commands to control a Sonoff. To change a setting I run the following line in Python:
webbrowser.open('http://Sonoff_IP/cm?cmnd=POWER%20TOGGLE')
I am looking for a way to run the URL in the same tab, so as not to create a new tab every time the command runs.
My understanding is that if you are using webbrowser.open(<url>), it's not possible to avoid getting a new tab each time; with webbrowser it is possible to make sure it opens in the same browser window, but not in the same tab. To target the same window you need to set new=0, like:
webbrowser.open('http://Sonoff_IP/cm?cmnd=POWER%20TOGGLE', new=0)
However, if you are able to open the link using the selenium library instead it is possible.
Read the docs for selenium and webdriver here: https://selenium-python.readthedocs.io/api.html
The main issue with doing it using Selenium is that you lose the ability to target the user's default web browser: you have to pick a specific browser and have its driver installed (the Selenium examples commonly default to Firefox).
An example of opening a link in Selenium would be like:
from selenium import webdriver
link1="https://www.google.com"
link2="https://www.youtube.com/"
driver=webdriver.Firefox()
driver.get(link1)
driver.get(link2)
Selenium does support a lot of different browsers, so if you are able to get the user's default web browser from the webbrowser module or by some other method, you would be able to use that information to open URLs in the same tab with the user's default browser.
Hope this helps and good luck! :)
Use JavaScript:
OpenSameTab = '<script language="JavaScript" type="text/JavaScript">window.location = \'%s\';</script>'
and then
print(OpenSameTab % 'file.py')

how to convert IP address into http for urllib

I'm looking to embark on my own personal project of creating an application which can save docs/texts/images from the site my browser is at. I have done a lot of research to conclude that either of two ways is possible for now: using cookies or packet sniffers to identify the IP address (the packet sniffer method being more relevant at the moment).
I would like to automate the application so I would not have to copy and paste the url on my browser and paste it into the script using urllib.
Are there any suggestions that experienced network programmers can provide with regards to the process or modules or libraries I need?
thanks so much
jonathan
If you want to download all images, docs, and text while you're actively browsing (which is probably a bad idea considering the sheer amount of bandwidth), then you'll want something more than urllib2. I assume you don't want to have to keep copying and pasting all the URLs into a script to download everything; if that is not the case, a simple urllib2 and BeautifulSoup filter would do wonders.
However, if what I assume is correct, then you are probably going to want to investigate Selenium. From there you can launch a Selenium window (defaults to Firefox) and then do your browsing normally. The best option from there is to continually poll the current URL and, if it has changed, identify all of the elements you want to download and then use urllib2 to download them. Since I don't know what you want to download I can't really help you on that part. However, here is what something like that would look like in Selenium:
from selenium import webdriver
from time import sleep

# Start up the web browser
browser = webdriver.Firefox()
current_url = browser.current_url

while True:
    try:
        # If the URL has changed, identify and download your items
        if browser.current_url != current_url:
            # Download the stuff here
            current_url = browser.current_url
    # Triggered once you close the web browser
    except:
        break
    # Sleep for half a second to avoid demolishing your machine with constant polling
    sleep(0.5)
Once again I advise against doing this, as constantly downloading images, text, and documents would take up a huge amount of space.

What is called a library that allows emulation of a browser for automating purposes?

I have an automation task in which I need to fill several forms on a site with data from Word documents. For that I would need a library that emulates a browser and allows me to programmatically enter a site and access HTML elements. What is this called? Are there examples of libraries that do this for Python or Clojure?
You have a couple of choices:
Mechanize
Selenium
There are others too, but I can't remember them off the top of my head right now (will post as and when I remember more)
You may want to take a look at PhantomJS too:
PhantomJS is a headless WebKit with JavaScript API. It has fast and
native support for various web standards: DOM handling, CSS selector,
JSON, Canvas, and SVG.
If you just want to submit a form, it would probably be easier to forge a request and send it using urllib2.
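For example, a hedged sketch of forging such a form submission with the standard library (shown with Python 3's urllib; the URL and field names are made up):
from urllib.parse import urlencode
from urllib.request import urlopen

# encode the form fields exactly as the browser would POST them
data = urlencode({"name": "value", "comment": "hello"}).encode("utf-8")
with urlopen("http://www.example.com/submit", data) as response:
    print(response.status, response.read()[:200])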
These days in Clojure, http-kit is my favorite. It just makes HTTP interaction very easy.
; taken from github
(defn on-response [resp]
  ;; {:status 200 :body "....." :headers {:key val :key val}}
  (println resp))

;;; initialize, timeout is 40s, and default user-agent
(http/init :timeout 40000 :user-agent "http-kit/1.1")

;;; other params :headers :proxy binary? keyify?
(http/get {:url "http://shenfeng.me" :cb on-response})

;;; other params :headers :proxy binary? keyify?
(http/post {:url "http://example/"
            :cb on-response
            :body {"name" "http-kit" "author" "shenfeng"}
            :binary? true})
I have also used CasperJS and it makes headless browsing possible. Also, you can interact with the client-side JavaScript while automating the browsing.
The only drawback I found was that it was slightly harder to integrate all this with existing code, but as a standalone tool it was perfect. It also supports both CoffeeScript and JavaScript scripting.
Look at the Quickstart to get an idea of how it works.

Python 3.X Playing with the internet

I'm doing a small project to help my work go by faster.
I currently have a program written in Python 3.2 that does almost all of the manual labour for me, with one exception.
I need to log on to the company website (username and password) then choose a month and year and click download.
I would like to write a little program to do that for me, so that the whole process is completely done by the program.
I have looked into it and I can only find tools for 2.X.
I have looked into urllib and I know that some of the 2.X modules are now in urllib.request.
I have even found some code to start it off, however I'm confused as to how to put it into practice.
Here is what I have found:
import urllib2
theurl = 'http://www.someserver.com/toplevelurl/somepage.htm'
username = 'johnny'
password = 'XXXXXX'
# a great password
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
# this creates a password manager
passman.add_password(None, theurl, username, password)
# because we have put None at the start it will always
# use this username/password combination for urls
# for which `theurl` is a super-url
authhandler = urllib2.HTTPBasicAuthHandler(passman)
# create the AuthHandler
opener = urllib2.build_opener(authhandler)
urllib2.install_opener(opener)
# All calls to urllib2.urlopen will now use our handler
# Make sure not to include the protocol in with the URL, or
# HTTPPasswordMgrWithDefaultRealm will be very confused.
# You must (of course) use it when fetching the page though.
pagehandle = urllib2.urlopen(theurl)
# authentication is now handled automatically for us
All Credit to Michael Foord and his page: Basic Authentication
So I changed the code around a bit and replaced all the 'urllib2' with 'urllib.request'
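For reference, that translation to urllib.request looks like this (an untested sketch; the names map one-to-one):
import urllib.request

theurl = 'http://www.someserver.com/toplevelurl/somepage.htm'
username = 'johnny'
password = 'XXXXXX'

passman = urllib.request.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, theurl, username, password)
authhandler = urllib.request.HTTPBasicAuthHandler(passman)
opener = urllib.request.build_opener(authhandler)
urllib.request.install_opener(opener)
# every call to urllib.request.urlopen now uses the handler
pagehandle = urllib.request.urlopen(theurl)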
Then I learned how to open a webpage, figuring the program should open the webpage, use the login and password data to open the page, then I'll learn how to download the files from it.
ie = webbrowser.get('c:\\program files\\internet explorer\\iexplore.exe')
ie.open(theurl)
(I know Explorer is garbage, I'm just using it to test; then I'll be using Chrome ;) )
But that doesn't open the page with the login data entered; it simply opens the page as though you had typed in the URL.
How do I get it to open the page with the password handle?
I sort of understand how Michael made them, but I'm not sure which to use to actually open the website.
Also an after thought, might I need to look into cookies?
Thanks for your time
You're getting things confused here.
webbrowser is a wrapper around your actual web browser, and urllib is a library for HTTP- and URL-related stuff.
They don't know each other, and serve very different purposes.
In former IE versions, you could encode an HTTP Basic Auth username and password in the URL like so:
http(s)://Username:Password@Server/Resource.ext - I believe Firefox and Chrome still support that; IE killed it: http://support.microsoft.com/kb/834489/EN-US
if you want to emulate a browser, rather than just open a real one, take a look at mechanize: http://wwwsearch.sourceforge.net/mechanize/
Your browser doesn't know anything about the authentication you've done in Python (and that has nothing to do with whether your browser is garbage or not). The webbrowser module simply offers convenience methods for launching a browser and pointing it at a URL. You can't 'transfer' your credentials to the browser.
As for migrating from Python 2 to Python 3: the 2to3 tool can convert simple scripts like yours automatically.
They are not running in the same environment.
You need to figure out what really happens when you click the download button. Use your browser's developer tools to get the POST format the website is using. Then build a request in Python to fetch the file.
Requests is a nice lib to do that kind of thing much more easily.
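For instance, a hedged sketch with requests (the login URL and form field names are hypothetical; take the real ones from the developer tools):
import requests

session = requests.Session()
# replicate the login POST observed in the developer tools
session.post("http://www.company-site.example/login",
             data={"username": "johnny", "password": "XXXXXX"})
# then request the download the same way the download button does
resp = session.get("http://www.company-site.example/report",
                   params={"month": "01", "year": "2013"})
with open("report.xls", "wb") as f:
    f.write(resp.content)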
I would use selenium, this is some code from a little script I have hacked about a bit to give you an idea:
from selenium import webdriver

def get_name():
    user = 'johnny'
    passwd = 'XXXXXX'
    try:
        driver = webdriver.Remote(desired_capabilities=webdriver.DesiredCapabilities.HTMLUNIT)
        driver.get('http://www.someserver.com/toplevelurl/somepage.htm')
        assert 'Page Title' in driver.title
        username = driver.find_element_by_name('name_of_userid_box')
        username.send_keys(user)
        password = driver.find_element_by_name('name_of_password_box')
        password.send_keys(passwd)
        submit = driver.find_element_by_name('name_of_login_button')
        submit.click()
        driver.get('http://www.someserver.com/toplevelurl/page_with_download_button.htm')
        assert 'page_with_download_button title' in driver.title
        download = driver.find_element_by_name('download_button')
        download.click()
    except:
        print('process failed')
I'm new to python so that may not be the best code every written but it should give you the general idea.
Hope it helps
