Selenium proxy unable to connect with Chrome - python

I've been attempting to connect to an HTTPS proxy for hours and can't seem to figure out why it's not working. I'm using Python 3.9.0 with Selenium 3.141.0 and Chrome 92.0.4515.159. I'm grabbing free HTTPS- and Google-accessible proxies from here, and using the following preferences:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from fake_useragent import UserAgent

options = webdriver.ChromeOptions()
useragent = UserAgent().random
options.add_argument(f"--user-agent={useragent}")
options.add_argument("--window-size=1920,1080")
options.add_argument('--disable-blink-features=AutomationControlled')
options.add_experimental_option('useAutomationExtension', False)
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_argument(f"--proxy-server={proxystring}")
capabilities = DesiredCapabilities.CHROME
capabilities["AcceptSslCerts"] = True
capabilities["marionette"] = True
driver = webdriver.Chrome(options=options, desired_capabilities=capabilities)
where proxystring is a <host>:<port> string. Whenever I attempt to use the proxy, I get the following Chrome error:
Whenever I don't use the proxy, the page loads fine – I'm just confused as to why my attempt to use proxies isn't working.

Sadly, free proxies from https://free-proxy-list.net/ and other similar websites have a very low success rate and only rarely work.
I'd suggest using premium proxies, or other methods to avoid detection such as User-Agent rotation, adding more waits, etc.
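For the User-Agent rotation and explicit waits, a minimal sketch (assuming the fake_useragent package and a placeholder URL and element) could look like this:
from fake_useragent import UserAgent
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.ChromeOptions()
# Rotate the User-Agent on every run instead of relying on the proxy alone.
options.add_argument(f"--user-agent={UserAgent().random}")

driver = webdriver.Chrome(options=options)
driver.get("https://example.com")  # placeholder URL

# Explicit wait instead of fixed sleeps: waits up to 10 s for the page body.
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.TAG_NAME, "body"))
)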

Related

Selenium Chrome WebDriver doesn't use proxy

I'm using Selenium webdriver to open a webpage and I set up a proxy for the driver to use. The code is listed below:
PATH = "C:\Program Files (x86)\chromedriver.exe"
PROXY = "212.237.16.60:3128" # IP:PORT or HOST:PORT
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument(f'--proxy-server={PROXY}')
proxy = Proxy()
proxy.auto_detect = False
proxy.http_proxy = PROXY
proxy.sslProxy = PROXY
proxy.socks_proxy = PROXY
capabilities = webdriver.DesiredCapabilities.CHROME
proxy.add_to_capabilities(capabilities)
driver = webdriver.Chrome(PATH, chrome_options=chrome_options,desired_capabilities=capabilities)
driver.get("https://whatismyipaddress.com")
The problem is that the web driver is not using the given proxy and accesses the page with my normal IP. I have already tried every variation of this code I could find on the internet and none of it worked. I also tried setting a proxy directly in my PC settings, and when I open a normal Chrome window it works fine (so it's not a problem with the proxy server), but if I open a page with the driver it still uses my normal IP and somehow bypasses the proxy. I also tried changing the proxy settings of the IDE (PyCharm) and it is still not working. I'm out of ideas; could someone help me?
This should work.
Code snippet:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
PROXY = "212.237.16.60:3128"
# add the proxy to chrome_options
chrome_options.add_argument(f'--proxy-server={PROXY}')
driver = webdriver.Chrome(PATH, options=chrome_options)
# to check the new IP
driver.get("https://api.ipify.org/?format=json")
Note: chrome_options is deprecated now; use options instead.
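For reference, a Selenium 4-style version of the same snippet would look roughly like this (a sketch; the Service object is the Selenium 4 way of passing the driver path, and the chromedriver path is taken from the question):
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options

PROXY = "212.237.16.60:3128"
options = Options()
options.add_argument(f'--proxy-server={PROXY}')

# Selenium 4: the driver path moves into a Service object and
# 'options' replaces the deprecated 'chrome_options' keyword.
service = Service(r"C:\Program Files (x86)\chromedriver.exe")
driver = webdriver.Chrome(service=service, options=options)
driver.get("https://api.ipify.org/?format=json")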

Not able to open the webpage through selenium python

I am new to Selenium with Python and I am trying to scrape data from a website. Below is the code, where I have taken all the precautions I know of to avoid getting blocked.
from random import randrange
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Function to generate random useragent.
def generate_user_agent():
    user_agents_file = open("user_agents.txt", "r")
    user_agents = user_agents_file.read().split("\n")
    i = randrange(len(user_agents))
    userAgent = user_agents[i]
    user_agents_file.close()
    return userAgent

# Function to generate random IP address.
def generate_ip_address():
    proxies_file = open("proxyscrape_premium_http_proxies.txt", "r")
    proxies = proxies_file.read().split("\n")
    i = randrange(len(proxies))
    proxy = proxies[i]
    proxies_file.close()
    return proxy

# Function to create and set chrome options.
def set_chrome_options():
    proxy = generate_ip_address()
    options = webdriver.ChromeOptions()
    options.add_argument("start-maximized")
    options.add_argument("--incognito")
    options.add_argument(f'--proxy-server={proxy}')
    options.add_experimental_option("excludeSwitches", ["enable-automation"])
    options.add_experimental_option('useAutomationExtension', False)
    return options, proxy

# Function to create a webdriver object and set its properties.
def create_webdriver():
    options, proxy = set_chrome_options()
    userAgent = generate_user_agent()
    webdriver.DesiredCapabilities.CHROME['proxy'] = {
        "httpProxy": proxy,
        "ftpProxy": proxy,
        "sslProxy": proxy,
        "proxyType": "MANUAL",
    }
    webdriver.DesiredCapabilities.CHROME['acceptSslCerts'] = True
    driver = webdriver.Chrome(options=options, executable_path=r'chromedriver.exe')
    driver.execute_script("Object.defineProperty(navigator, 'webdriver', {get: () => undefined})")
    driver.execute_cdp_cmd('Network.setUserAgentOverride', {"userAgent": userAgent})
    return driver

url = 'http://www.doctolib.de/impfung-covid-19-corona/berlin'
driver = create_webdriver()
driver.get(url)
The webpage is not opened via the Selenium web driver (but it can be opened normally). Below is a screenshot of how the browser opens when I run the code.
Please let me know if I am missing something. Any help would be highly appreciated.
PS: I am using the premium proxies for IP rotation.
Browser_output
I've had a similar experience in the past where the website detects that Selenium is being used, even after applying several methods like IP rotation, User-Agent rotation or proxies.
I would suggest using the undetected_chromedriver library.
pip install undetected-chromedriver
It's able to load the website without any problem.
The code snippet is given below:
import undetected_chromedriver.v2 as uc

driver = uc.Chrome()
with driver:
    driver.get('http://www.doctolib.de/impfung-covid-19-corona/berlin')
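If you still need the proxy and the other Chrome switches, undetected_chromedriver also accepts an options object; a sketch, assuming a placeholder proxy address:
import undetected_chromedriver.v2 as uc

options = uc.ChromeOptions()
options.add_argument('--proxy-server=1.2.3.4:8080')  # placeholder proxy

driver = uc.Chrome(options=options)
with driver:
    driver.get('http://www.doctolib.de/impfung-covid-19-corona/berlin')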
I was having a similar issue with Firefox on Linux. I just deleted the log file created by geckodriver, which had grown quite big for a text file (4.8 MB), and everything started to work fine again.

How to minimize or hide the Geckodriver in Selenium?

This is what I have:
from selenium import webdriver
driver = webdriver.Firefox()
How can I let the geckodriver open minimized or hidden?
You have to set the Firefox driver options to headless so that no browser window is shown. Here is the code to do that.
fireFoxOptions = webdriver.FirefoxOptions()
fireFoxOptions.set_headless()
driver = webdriver.Firefox(firefox_options=fireFoxOptions)
If this doesn't work for you, there are other methods you can use. Check out this other SO question for those: How to make firefox headless programmatically in Selenium with python?
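Note that on newer Selenium releases set_headless() and the firefox_options keyword are deprecated; a sketch of the current equivalent would be:
from selenium import webdriver

options = webdriver.FirefoxOptions()
options.add_argument("-headless")  # run Firefox without opening a window
driver = webdriver.Firefox(options=options)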
geckodriver
geckodriver in its core form is a proxy for using W3C WebDriver-compatible clients to interact with Gecko-based browsers.
This program provides the HTTP API described by the WebDriver protocol to communicate with Gecko browsers, such as Firefox. It translates calls into the Firefox remote protocol by acting as a proxy between the local- and remote ends.
So, more or less, it acts like a service, and the question of minimizing GeckoDriver itself shouldn't really arise.
Minimizing the Mozilla Firefox browser
The Selenium-driven, GeckoDriver-initiated Firefox browsing context opens by default in a semi-maximized mode. Running tests with the browsing context minimized would go against best practice, as Selenium may lose focus over the browsing context and an exception may be raised during test execution.
However, Selenium's Python client does have a minimize_window() method, which effectively minimizes the browsing context.
Solution
You can use the following solution:
Firefox:
from selenium import webdriver
options = webdriver.FirefoxOptions()
options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe'
driver = webdriver.Firefox(firefox_options=options, executable_path=r'C:\WebDrivers\geckodriver.exe')
driver.get('https://www.google.co.in')
driver.minimize_window()
Chrome:
from selenium import webdriver
options = webdriver.ChromeOptions()
options.add_argument("--start-maximized")
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
driver = webdriver.Chrome(chrome_options=options, executable_path=r'C:\Utility\BrowserDrivers\chromedriver.exe')
driver.get('https://www.google.co.in')
driver.minimize_window()
Reference
You can find a detailed relevant discussion in:
How to Minimize browser window in selenium webdriver 3
tl;dr
How to make firefox headless programmatically in Selenium with python?

Headless Chrome Browser - Using Selenium and ChromeDriver

Running the latest versions of everything
(Chrome 66, Selenium 3.12, ChromeDriver 2.39, Python 3.6.5)
I have tried all of the solutions that I have found online but nothing seems to be working. I have automated something using Selenium for Chrome and it does exactly what I need it to do.
The last thing I need to do is hide the browser, because I do not need to see the UI. I have attempted to make the browser headless using the following code, but when I do, the program crashes.
I have also tried altering the window size to 0x0 and even tried the command options.set_headless(headless=True) instead, but nothing seems to work.
NOTE: I am running on Windows and have also tried with the command:
options.add_argument('--disable-gpu')
Try moving the browser off the monitor:
driver = webdriver.Chrome()
driver.set_window_position(-2000, 0)  # if -2000 doesn't help, try -10000
What happens when you run it? Does it produce an error, or does it just run as if you hadn't added the headless argument at all?
Here's what I do for mine to run headlessly:
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument('--headless')
options.add_argument('--disable-gpu')
selenium_chrome_driver_path = "blah blah my path is here"
my_driver = webdriver.Chrome(chrome_options=options, executable_path=selenium_chrome_driver_path)
I'm running this code with Python 3.6 and up-to-date Selenium drivers on Windows 8 (unfortunately).
Got the solution after combining a few sources. It seems that these libraries are updated often and code eventually becomes deprecated. At the moment, this one worked for me on Windows, using Python 3.4:
from selenium import webdriver
options = webdriver.ChromeOptions()
options.headless = True
driver = webdriver.Chrome(executable_path="C:/Users/Admin/Documents/chromedriver_win32/chromedriver", options=options)
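On recent Chrome and Selenium versions options.headless is deprecated as well; a sketch of the newer form (the "--headless=new" switch on current Chrome builds) would be:
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")  # new headless mode on recent Chrome versions
driver = webdriver.Chrome(options=options)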
Try using a user agent in chrome_options:
from fake_useragent import UserAgent  # assuming the fake_useragent package

ua = UserAgent()
userAgent = ua.random
print(userAgent)
chrome_options.add_argument(f'user-agent={userAgent}')
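For completeness, attaching the rotated user agent to the headless options and launching the driver might look like this (a sketch that assumes chromedriver is on PATH):
from fake_useragent import UserAgent
from selenium import webdriver

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument(f'user-agent={UserAgent().random}')  # rotated user agent
driver = webdriver.Chrome(options=chrome_options)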

Disable proxy via Python for Selenium with headless Chrome

I'm using Chrome with Selenium in Python 2.7.
I've tried to run Chrome in headless mode but it slows down my tests significantly.
One workaround would be to disable the proxy settings, but I don't know how to do that in Python.
This is my code so far:
from selenium import webdriver
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--disable-extensions")
chrome_options.add_argument('headless')
chrome_options.add_argument('--hide-scrollbars')
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument('???') # which option?
self.driver = webdriver.Chrome(r"C:\Python27\Scripts\chromedriver.exe", chrome_options=chrome_options)
Does anyone know how to solve this?
Try this:
chrome_options.add_argument('--no-proxy-server')
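For context, a sketch of where that switch fits into the question's setup (the chromedriver path is the one from the question); if it alone is not enough, forcing direct connections with a bypass list is another commonly used pair of Chrome switches:
from selenium import webdriver

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-proxy-server')  # ignore any configured proxy
# Alternative, if the switch above is not enough:
# chrome_options.add_argument("--proxy-server='direct://'")
# chrome_options.add_argument("--proxy-bypass-list=*")
driver = webdriver.Chrome(r"C:\Python27\Scripts\chromedriver.exe", chrome_options=chrome_options)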
