Python Selenium Webdriver - Proxy: query parameters - python

The challenge: through Selenium, I am trying to click on a website element (a div with some JS attached) that navigates you to another page.
How can I configure the browser to automatically route its requests through a proxy?
My proxy is set up as follows:
http://api.myproxy.com?key=AAA111BBB6&url=http://awebsitetobrowse.com
I am trying to put the webdriver (Chrome) behind the proxy:
from selenium import webdriver
options = webdriver.ChromeOptions()
driver = webdriver.Chrome(chrome_options=options)
where options, so far, is some basic configuration of the browser window size.
I have seen quite a few examples (ex1, ex2, ex3), but I somehow fail to find one that suits my needs.
import os
dir_path = os.path.dirname(os.path.realpath(__file__)) + "\\chromedriver.exe"
PROXY = "http://api.scraperapi.com?api_key=1234&render=true"
from selenium import webdriver
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--proxy-server=%s' % PROXY)
driver = webdriver.Chrome(executable_path = dir_path, chrome_options=chrome_options)
driver.get("https://stackoverflow.com/questions/11450158/how-do-i-set-proxy-for-chrome-in-python-webdriver")

It seems the proxy address you are using is not an actual proxy: it is an API that returns the HTML content of the page itself after handling proxies, captchas, and IP blocking. Still, different scenarios call for different solutions; some of those follow.
Scenario 1
In my view, you are using this API the wrong way if the API provides the facility to return the response of the visited page through a proxy.
In that case it should be used directly in driver.get(), with
address = "http://api.scraperapi.com/?api_key=YOURAPIKEY&url=" + url_to_be_visited_via_api
Example code for this would look like:
import os
from selenium import webdriver

dir_path = os.path.dirname(os.path.realpath(__file__)) + "\\chromedriver.exe"
APIKEY = "1234"  # replace with your API key (keep it a string so the concatenation works)
apiURL = "http://api.scraperapi.com/?api_key=" + APIKEY + "&render=true&url="
visit_url = "https://stackoverflow.com/questions/11450158/how-do-i-set-proxy-for-chrome-in-python-webdriver"
driver = webdriver.Chrome(executable_path=dir_path)
driver.get(apiURL + visit_url)
Scenario 2
But if you have an API that returns a proxy address and login credentials in its response, that proxy can be passed to Chrome through its options.
This applies when the response of the API is something like
"PROTOCOL://user:password@proxyserver:proxyport" (with authentication)
"PROTOCOL://proxyserver:proxyport" (without authentication)
In both cases PROTOCOL can be http, https, socks4, socks5, etc. (Note that Chrome itself ignores credentials embedded in --proxy-server, so the authenticated form needs extra handling.)
And that code should look like:
import os
import requests
from selenium import webdriver

dir_path = os.path.dirname(os.path.realpath(__file__)) + "\\chromedriver.exe"
proxyapi = "http://api.scraperapi.com?api_key=1234&render=true"
proxy = requests.get(proxyapi).text
visit_url = "https://stackoverflow.com/questions/11450158/how-do-i-set-proxy-for-chrome-in-python-webdriver"
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--proxy-server=' + proxy)
driver = webdriver.Chrome(executable_path=dir_path, chrome_options=chrome_options)
driver.get(visit_url)
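If the API does return such a proxy string, the standard library's urllib.parse can split it into its parts before it is handed to Chrome; a small sketch (the proxy values below are made-up placeholders):

```python
from urllib.parse import urlparse

# Hypothetical sample responses in the two formats described above.
authed = "http://user:password@proxyserver.example.com:8080"
anonymous = "socks5://proxyserver.example.com:1080"

for raw in (authed, anonymous):
    parts = urlparse(raw)
    # scheme is the PROTOCOL; hostname/port are what Chrome ultimately connects to
    print(parts.scheme, parts.hostname, parts.port, parts.username)
```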
Scenario 3
But if the API itself is a proxy with no authentication, then its address can be passed to Chrome in the options the same way.
And that code should look like:
import os
from selenium import webdriver

dir_path = os.path.dirname(os.path.realpath(__file__)) + "\\chromedriver.exe"
proxyapi = "http://api.scraperapi.com?api_key=1234&render=true"
visit_url = "https://stackoverflow.com/questions/11450158/how-do-i-set-proxy-for-chrome-in-python-webdriver"
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--proxy-server=' + proxyapi)
driver = webdriver.Chrome(executable_path=dir_path, chrome_options=chrome_options)
driver.get(visit_url)
So the solution to use depends on which scenario applies.

Well, after countless experiments, I have figured out that the thing works with:
apiURL = "http://api.scraperapi.com/?api_key="+APIKEY+"&render=true&url="
while it fails miserably with
apiURL = "http://api.scraperapi.com?api_key="+APIKEY+"&render=true&url="
I have to admit my ignorance here: I thought the two would be equivalent.
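For what it's worth, the first form has an explicit root path (/) before the query string while the second has an empty path, and the API apparently treats them differently. Building the URL with urllib.parse.urlencode avoids this kind of detail, and also percent-encodes the target URL so its own ? and & characters can't clash with the API's query string. A sketch with a placeholder key:

```python
from urllib.parse import urlencode

APIKEY = "1234"  # placeholder API key
visit_url = "https://stackoverflow.com/questions/11450158/how-do-i-set-proxy-for-chrome-in-python-webdriver"

# urlencode percent-encodes the target URL, so its ':' '/' '?' '&'
# cannot be confused with the API's own query-string delimiters.
params = urlencode({"api_key": APIKEY, "render": "true", "url": visit_url})
apiURL = "http://api.scraperapi.com/?" + params
print(apiURL)
```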

Related

I cannot get my proxies to work on Selenium

I have searched on stackoverflow as well as just googled how to use Proxies with Selenium. I found two different ways but none are working for me. Can you guys please help me figure out what I am doing wrong?
from selenium.webdriver.chrome.options import Options
from selenium import webdriver
from selenium.webdriver.common.proxy import Proxy, ProxyType
proxy = "YYY.YYY.YYY.YY:XXXX"
prox = Proxy()
prox.proxy_type = ProxyType.MANUAL
prox.http_proxy = proxy
prox.https_proxy = proxy
capabilities = webdriver.DesiredCapabilities.CHROME
prox.add_to_capabilities(capabilities)
options = webdriver.ChromeOptions()
options.add_experimental_option('detach', True)
driver = webdriver.Chrome(desired_capabilities=capabilities, options=options)
Code Above did not work. The page would open but if I go to "Whatsmyip.com" I could see my home IP.
I then tried another method I found on this link:
https://www.browserstack.com/guide/set-proxy-in-selenium
proxy = "YYY.YYY.YYY.YY:XXXX"
options = webdriver.ChromeOptions()
options.add_experimental_option('detach', True)
options.add_argument("--proxy--server=%s" % proxy)
driver = webdriver.Chrome(options = options)
Same result as with the previous method. Browser will open but home IP.
Worth mentioning that I tried with USER:PASS proxies, as well as IP Authorized proxies. None worked!
In addition to helping me figure out how to use proxies, I would also like to understand why these methods are different. On the one hand, the Selenium documentation talks about a proxy class which you access via the "common.proxy" class, yet the second method is directly using Chrome's options and not Selenium's proxy class. I am confused as to why have two methods, and of course which one works more reliably.
Thanks
Authenticated proxies aren't supported by Chrome by default. If you still need them, refer to Selenium-Profiles.
Your second code snippet should work as following:
proxy = "https://host_or_ip:port" # or "socks5://" or "http://"
options = webdriver.ChromeOptions()
options.add_argument("--proxy-server=%s" % proxy)  # note: a single hyphen between "proxy" and "server"
driver = webdriver.Chrome(options = options)
driver.get("http://lumtest.com/myip.json") # test proxy
input("Press ENTER to exit")
If it still doesn't work, check your proxy with curl or python requests.
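To do that check with the standard library alone (no Selenium involved), urllib can route a request through the same proxy; if this fails too, the proxy itself is the problem. A sketch with a placeholder proxy address:

```python
import urllib.request

def build_proxy_opener(proxy_url):
    """Return an opener that routes http/https traffic through proxy_url."""
    handler = urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url})
    return urllib.request.build_opener(handler)

opener = build_proxy_opener("http://203.0.113.1:3128")  # placeholder proxy
# Uncomment to actually test (requires a live proxy):
# print(opener.open("http://lumtest.com/myip.json", timeout=10).read())
```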

Selenium Chrome WebDriver doesn't use proxy

I'm using Selenium webdriver to open a webpage and I set up a proxy for the driver to use. The code is listed below:
from selenium import webdriver
from selenium.webdriver.common.proxy import Proxy

PATH = r"C:\Program Files (x86)\chromedriver.exe"
PROXY = "212.237.16.60:3128" # IP:PORT or HOST:PORT
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument(f'--proxy-server={PROXY}')
proxy = Proxy()
proxy.auto_detect = False
proxy.http_proxy = PROXY
proxy.ssl_proxy = PROXY
proxy.socks_proxy = PROXY
capabilities = webdriver.DesiredCapabilities.CHROME
proxy.add_to_capabilities(capabilities)
driver = webdriver.Chrome(PATH, chrome_options=chrome_options, desired_capabilities=capabilities)
driver.get("https://whatismyipaddress.com")
The problem is that the web driver is not using the given proxy and it accesses the page with my normal IP. I already tried every type of code I could find on the internet and it didn't work. I also tried to set a proxy directly in my pc settings and when I open a normal chrome page it works fine (it's not a proxy server problem then), but if I open a page with the driver it still uses my normal IP and somehow bypasses the proxy. I also tried changing the proxy settings of the IDE (pycharm) and still it's not working. I'm out of ideas, could someone help me?
This should work.
Code snippet-
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
PROXY = "212.237.16.60:3128"
# add the proxy to chrome_options
chrome_options.add_argument(f'--proxy-server={PROXY}')
driver = webdriver.Chrome(PATH, options=chrome_options)
# to check the new IP
driver.get("https://api.ipify.org/?format=json")
Note: chrome_options is deprecated now; use options instead.

Not able to open the webpage through selenium python

I am new to selenium python and I am trying to scrape the data from a website. Below is the code, where I have taken all the necessary precautions to not get blocked.
from random import randrange
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

#Function to generate random useragent.
def generate_user_agent():
    user_agents_file = open("user_agents.txt", "r")
    user_agents = user_agents_file.read().split("\n")
    i = randrange(len(user_agents))
    userAgent = user_agents[i]
    user_agents_file.close()
    return userAgent

#Function to generate random IP address.
def generate_ip_address():
    proxies_file = open("proxyscrape_premium_http_proxies.txt", "r")
    proxies = proxies_file.read().split("\n")
    i = randrange(len(proxies))
    proxy = proxies[i]
    proxies_file.close()
    return proxy

#Function to create and set chrome options.
def set_chrome_options():
    proxy = generate_ip_address()
    options = webdriver.ChromeOptions()
    options.add_argument("start-maximized")
    options.add_argument("--incognito")
    options.add_argument(f'--proxy-server={proxy}')
    options.add_experimental_option("excludeSwitches", ["enable-automation"])
    options.add_experimental_option('useAutomationExtension', False)
    return options, proxy

#Function to create a webdriver object and set its properties.
def create_webdriver():
    options, proxy = set_chrome_options()
    userAgent = generate_user_agent()
    webdriver.DesiredCapabilities.CHROME['proxy'] = {
        "httpProxy": proxy,
        "ftpProxy": proxy,
        "sslProxy": proxy,
        "proxyType": "MANUAL",
    }
    webdriver.DesiredCapabilities.CHROME['acceptSslCerts'] = True
    driver = webdriver.Chrome(options=options, executable_path=r'chromedriver.exe')
    driver.execute_script("Object.defineProperty(navigator, 'webdriver', {get: () => undefined})")
    driver.execute_cdp_cmd('Network.setUserAgentOverride', {"userAgent": userAgent})
    return driver

url = 'http://www.doctolib.de/impfung-covid-19-corona/berlin'
driver = create_webdriver()
driver.get(url)
The webpage is not opened via the Selenium webdriver (but it can be opened normally). Below is a screenshot of how the browser opens when I run the code.
Please let me know if I am missing something. Any help would be highly appreciated.
PS: I am using the premium proxies for IP rotation.
Browser_output
I've had similar experiences in the past where the website detects that Selenium is being used, even after using methods like IP rotation, User-Agent rotation, or proxies.
I would suggest using the undetected_chromedriver library.
pip install undetected-chromedriver
It's able to load the website without any problem.
The code snippet is given below:-
import undetected_chromedriver.v2 as uc

driver = uc.Chrome()
with driver:
    driver.get('http://www.doctolib.de/impfung-covid-19-corona/berlin')
I was having a similar issue with Firefox on Linux. I deleted the log file created by geckodriver, which had grown quite big for a text file (4.8 MB), and everything started to work fine again.

How to handle SSL Certificate in IE using selenium with python?

I'm getting the error as per the image.
Error_img
I tried the following code to solve it.
Method 1 :
from selenium import webdriver
from selenium.webdriver.ie.options import Options

options = Options()
options.set_capability("acceptInsecureCerts", True)
options.set_capability("ignoreProtectedModeSettings", True)
options.set_capability("ignoreZoomSetting", True)
driver = webdriver.Ie(options=options, executable_path='D:/Project/Testing/IEDriverServer_Win32_3.150.1/IEDriverServer.exe')
driver.get(url)
options.set_capability("ie.ensureCleanSession", True)
driver.close()
Method 2:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
desired_capabilities = DesiredCapabilities.INTERNETEXPLORER.copy()
desired_capabilities['acceptInsecureCerts'] = True
driver = webdriver.Ie(capabilities=desired_capabilities,executable_path='E:/DriverServer_Win32_3.150.1/IEDriverServer.exe')
driver.get(url)
print(driver.title)
driver.close()
(I can't share the URL, so I have just written the word "url" in its place.)
I tried both snippets, but neither works. Is there another solution?
The acceptInsecureCerts capability doesn't work because IE doesn't allow accepting insecure certificates this way. You can refer to this link for more detailed information.
In IE 11, you can click the link Go on to the webpage (not recommended) as a workaround to bypass the SSL certificate error. This link has an id "overridelink". You can find the id using F12 dev tools.
I use this site, https://expired.badssl.com/, as an example; the sample code is below:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
import time
url = "https://expired.badssl.com/"
ieoptions = webdriver.IeOptions()
ieoptions.ignore_protected_mode_settings = True
driver = webdriver.Ie(executable_path='IEDriverServer.exe', options=ieoptions)
driver.get(url)
time.sleep(3)
driver.find_element_by_id('moreInfoContainer').click()
time.sleep(3)
driver.find_element_by_id('overridelink').click()
It works well in IE 11, you can also try the same method.

How do I save a whatsapp web session in selenium?

I am trying to access WhatsApp Web with Python without having to scan the QR code every time I restart the program (in my normal browser I also don't have to do that). But how can I do that? Where is the data stored that tells WhatsApp Web to connect to my phone? And how do I save this data and send it to the browser when I rerun the code?
I already tried this because someone told me I should save the cookies:
from selenium import webdriver
import time
browser = None
cookies = None

def init():
    browser = webdriver.Firefox(executable_path=r"C:/Users/Pascal/Desktop/geckodriver.exe")
    browser.get("https://web.whatsapp.com/")
    time.sleep(5) # in this time I scanned the QR to see if there are cookies
    cookies = browser.get_cookies()
    print(len(cookies))
    print(cookies)

init()
Unfortunately, there were no cookies: the output was 0 and [].
How do I fix this problem?
As mentioned in the answer to this question, pass your Chrome profile to the Chromedriver in order to avoid this problem. You can do it like this:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = webdriver.ChromeOptions()
options.add_argument("user-data-dir=C:\\Path") #Path to your chrome profile
driver = webdriver.Chrome(executable_path="C:\\Users\\chromedriver.exe", options=options)
This one works for me: I just created a folder in the home directory of the script and made a few small modifications, and it works perfectly.
This is the config file that I import later (note the raw string, since the path contains backslashes):
E_PROFILE_PATH = r"user-data-dir=C:\Users\Denoh\Documents\Project\WhatBOts\SessionSaver"
The main script starts here:
from selenium import webdriver
from config import E_PROFILE_PATH

options = webdriver.ChromeOptions()
options.add_argument(E_PROFILE_PATH)
driver = webdriver.Chrome(executable_path=r'chromedriver_win32_86.0.4240.22\chromedriver.exe', options=options)
driver.get('https://web.whatsapp.com/')
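A side note on the Windows paths above: in an ordinary Python string literal, backslash sequences such as \n (or \U, which can even raise a SyntaxError) are interpreted as escapes, which is why the raw-string r"..." form is the safe way to write these paths:

```python
# In a normal literal, "\n" becomes a newline character;
# the r-prefix keeps the backslash literal.
plain = "C:\new_folder"   # contains an actual newline after "C:"
raw = r"C:\new_folder"    # contains a backslash followed by 'n'

print(repr(plain))
print(repr(raw))
```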
