How to open browser with all credentials and settings already loaded - python

Hope everyone is doing okay 😊
Currently I’m trying to get info from a page where I have to sign in and then complete a Captcha (selecting different pictures) in order to proceed. The thing is that when I enter the page from my default browser (like manually clicking chrome) I don’t need to sign in and complete the captcha, because I am already signed in (I think because of the cookies?).
Every time I run selenium it opens a “blank” Chrome, is it possible to open a browser with all my data already loaded in order to skip the captcha?
I have tried different solutions explained here but with no success How to save and load cookies using Python + Selenium WebDriver
Thank you very much!

Yes, there is a way: you can load your Chrome browser with the Chrome profile saved on your computer. To do this you have to use webdriver.ChromeOptions().
from selenium import webdriver

options = webdriver.ChromeOptions()
# note: no trailing backslash (it would break the raw string)
options.add_argument(r'--user-data-dir=C:\Users\YourUser\AppData\Local\Google\Chrome\User Data')
PATH = "/Users/YourUser/Desktop/chromedriver"
driver = webdriver.Chrome(PATH, options=options)
In your Chrome user profile, every cookie and so on is saved, so you no longer start with a clean browser. Remember: there are still websites that can detect that you're using Selenium, but there are only a few...
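If your cookies live in a specific profile folder inside "User Data", you can also point Selenium at that profile explicitly. A minimal sketch, assuming chromedriver is on your PATH; the paths and the profile name "Default" are placeholders, and all running Chrome windows should be closed first so the profile isn't locked:
from selenium import webdriver

options = webdriver.ChromeOptions()
# Parent "User Data" folder (placeholder path - adjust to your machine)
options.add_argument(r'--user-data-dir=C:\Users\YourUser\AppData\Local\Google\Chrome\User Data')
# Profile folder inside "User Data", e.g. "Default" or "Profile 1"
options.add_argument('--profile-directory=Default')

driver = webdriver.Chrome(options=options)
driver.get("https://example.com")  # should open with your cookies and sessions already loaded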

Related

I am using Google Chrome with Selenium, and when I go on Amazon, it detects it and blocks everything. Can I bypass this?

This is tied to my other question here: Selenium cant seem to find Amazon "Pre-order now" button. Python
It worked for a bit, but now when I run the code it detects the robot and shows me the DOG image even after I enter the captcha text. Please help.
I am not sure if this will help, but I would suggest opening the browser in incognito mode every time.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--incognito")
driver = webdriver.Chrome(executable_path=driver_path, options=options)
I think I found the solution.
I deleted chromedriver.exe from where I had downloaded it and that fixed everything.

Selenium gets stuck in the login phase, even though the real browser is already logged in (needs session data)

Unlike usual, I am having trouble fetching elements from the website below.
The element hierarchy is as follows:
My goal is to fetch the rows (data-row-key='n') inside the class "rc-table-tbody".
Here below is my Python script:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chromeOptions = Options()
chromeOptions.add_argument("--headless")
driver = webdriver.Chrome(chrome_options=chromeOptions, executable_path=DRIVER_PATH)
driver.get('https://www.binance.com/en/my/orders/exchange/usertrade')
None of the attempts below works (each gives either "unable to locate element" or a timeout):
elements = driver.find_element_by_class_name("rc-table-tbody")
elements = WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.CLASS_NAME, "rc-table-tbody")))
elements = WebDriverWait(driver, 5).until(EC.visibility_of_all_elements_located((By.CLASS_NAME, "rc-table-tbody")))
elements = WebDriverWait(driver, 5).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "selector_copied_from_inspect")))
elements = WebDriverWait(driver, 5).until(EC.visibility_of_element_located((By.XPATH, "xpath_copied_from_inspect")))
I appreciate any help, thanks!
Edit: I guess the problem is related to cookies. The URL I was trying to fetch is a page behind the login on Binance.com, and I am already logged in on my Chrome browser. Therefore, I assumed the driver would use the current cookies of the real Chrome browser and there would be no need to log in. However, when I removed the "headless" parameter, Selenium popped up the login page instead of the page I was trying to scrape.
How can I get the current cookies of the browser in order to make Selenium access the exact page I am trying to scrape?
You would have to do some research, but I believe you can set up a special Chrome profile for use with "remote debugging" and then start Chrome with remote debugging enabled. See Remote debugging with Chrome Developer Tools. Then there is a way to start your Selenium chromedriver with the right options (usually something like chromeOptions.add_argument('--remote-debugging-port=9222')) so as to hook into this special Chrome where you have already logged on.
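Here is a rough sketch of that approach, assuming you started Chrome yourself with remote debugging enabled and logged in manually in that window; the port 9222, the profile directory, and DRIVER_PATH are placeholders:
# Start Chrome by hand first, e.g.:
#   chrome.exe --remote-debugging-port=9222 --user-data-dir="C:\chrome-debug-profile"
# then log in to the site in that window before running this script.

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chromeOptions = Options()
# Attach to the already-running, already-logged-in Chrome instance
chromeOptions.add_experimental_option("debuggerAddress", "127.0.0.1:9222")
driver = webdriver.Chrome(executable_path=DRIVER_PATH, options=chromeOptions)

driver.get('https://www.binance.com/en/my/orders/exchange/usertrade')
print(driver.title)  # should be the orders page, not the login form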
But it might be far simpler to insert an input("Pausing for login completion...") statement following the driver.get call, then log in manually, and when you have finished logging in successfully, hit the Enter key in reply to the input statement to resume execution of your program (see the sketch below). For this option you could not use "headless" mode. Of course, this latter option is not a "hands-off", long-term solution. I also looked at the possibility of adding logic to your existing code to send the login credentials, but there seems to be a nasty captcha on the next page that would be very difficult to get past.
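A minimal sketch of that pause-and-log-in-manually pattern, assuming a non-headless chromedriver at DRIVER_PATH; the ".rc-table-tbody tr" selector is only an assumption based on the class name in the question:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome(executable_path=DRIVER_PATH)  # no "headless" argument here
driver.get('https://www.binance.com/en/my/orders/exchange/usertrade')

# Log in by hand in the opened window, then press Enter in the console to continue
input("Pausing for login completion...")

rows = WebDriverWait(driver, 10).until(
    EC.presence_of_all_elements_located((By.CSS_SELECTOR, ".rc-table-tbody tr"))
)
print(len(rows))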

How to access Google Chrome Extensions (Session Buddy) via Python?

I want to access Session Buddy using Python.
In my case "access" means to get all currently opened URLs from Chrome.
Session Buddy allows you to save all opened URLs into a .csv file.
To do so you need to "set up" a few things (simplified: press a few buttons) and then all URLs are downloaded to Chrome's Downloads directory.
I would like to fully automate this process, though. This means Python needs to access Session Buddy, initiate the download and then save the file to the directory you want.
I can't use requests or anything like that, though, since an extension won't work via a URL. This is what the extension calls: chrome-extension://edacconmaakjimmfgnblocblbcdcpbko/main.html
In general, I don't necessarily want to use Session Buddy to get all the URLs, it just seems to be the easiest way..
So, in summary, I just want to ask: How can I automatically use Python to fetch all currently opened URLs in my Chrome Browser (using Session Buddy)?
I'm thankful for any kind of help.
You can use Selenium WebDriver and load the .crx (extension) file to automate this:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_extension('path_to_extension')
driver = webdriver.Chrome(executable_path=executable_path, chrome_options=chrome_options)
driver.get("http://stackoverflow.com")
driver.quit()
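Once the extension is loaded, you could open its page directly via the chrome-extension:// URL quoted in the question and drive its export controls like any other page. A rough sketch, with the caveat that a locally packed .crx may get a different extension ID and the export-button locator below is purely hypothetical:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_extension('path_to_session_buddy.crx')  # placeholder path to the .crx
driver = webdriver.Chrome(executable_path=executable_path, chrome_options=chrome_options)

# Open the extension's own page (ID taken from the question; verify it at chrome://extensions)
driver.get("chrome-extension://edacconmaakjimmfgnblocblbcdcpbko/main.html")

# From here, locate and click the export controls, e.g. (hypothetical locator):
# driver.find_element_by_xpath("//button[contains(., 'Export')]").click()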

Downloading file to specified location with Selenium and python

OK, so far I have my program going to the website I want to download the link from and selecting it; then the Firefox download dialogue box shows up and I don't know what to do. I want to save this file to a folder on my desktop. I am using this for a nightly build, so I need this to work. Please help.
Here is my code that grabs the download link from the website:
from selenium import webdriver

driver = webdriver.Firefox()
driver.implicitly_wait(5)
driver.get("Name of web site I'm grabbing from")
driver.find_element_by_xpath("//a[contains(text(), 'DEV.tgz')]").click()
You need to make Firefox save this particular file type automatically.
This can be achieved by setting browser.helperApps.neverAsk.saveToDisk preference:
from selenium import webdriver
profile = webdriver.FirefoxProfile()
profile.set_preference("browser.download.folderList", 2)
profile.set_preference("browser.download.manager.showWhenStarting", False)
profile.set_preference("browser.download.dir", 'PATH TO DESKTOP')
profile.set_preference("browser.helperApps.neverAsk.saveToDisk", "application/x-gzip")
driver = webdriver.Firefox(firefox_profile=profile)
driver.get("Name of web site I'm grabbing from")
driver.find_element_by_xpath("//a[contains(text(), 'DEV.tgz')]").click()
More explanation:
browser.download.folderList tells it not to use default Downloads directory
browser.download.manager.showWhenStarting turns off showing download progress
browser.download.dir sets the directory for downloads
browser.helperApps.neverAsk.saveToDisk tells Firefox to automatically download the files of the selected mime-types
You can view all these preferences at about:config in the browser. There is also a very detailed documentation page available here: About:config entries.
Besides, instead of using the XPath approach, I would use find_element_by_partial_link_text():
driver.find_element_by_partial_link_text("DEV.tgz").click()
Also see:
Access to file download dialog in Firefox
Firefox + Selenium WebDriver and download a csv file automatically
If the MIME type is generated dynamically, using the Chrome browser may be a better approach, since Chrome will not open the file-download pop-up. But the multiple-downloads option should be enabled if you need multiple downloads.
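A minimal sketch of the equivalent Chrome setup, assuming chromedriver is available on PATH; the directory and site strings are placeholders mirroring the Firefox example above:
from selenium import webdriver

chrome_options = webdriver.ChromeOptions()
chrome_options.add_experimental_option("prefs", {
    "download.default_directory": "PATH TO DESKTOP",  # where files should land
    "download.prompt_for_download": False,            # never show the save dialog
    "download.directory_upgrade": True,
})
driver = webdriver.Chrome(options=chrome_options)
driver.get("Name of web site I'm grabbing from")
driver.find_element_by_partial_link_text("DEV.tgz").click()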

Click Chrome prompt box to save password

I've just started using Selenium and implemented the ChromeDriver, but when going to the page I want, Chrome shows its own prompt box, similar to "save password for this site always". It pretty much has the site asking to store data on my PC, and I have to verify that... but it interferes with my script.
Is there any way for Selenium to click "OK"? Or can I import some sort of session ID so it is already allowed permission to save files rather than prompting me every time?
First, I don't think this can interfere with the tests in any way, because it's a browser-level thing. As for "it pretty much has the site asking to store data on my pc, and I have to verify that": that's wrong; it has nothing to do with "saving files". You don't need to worry about this prompt at all.
Second, I somehow can't reproduce your issue, so I can only provide the logic as below.
Chrome has a switch called "--enable-save-password-bubble", which enables the save-password prompt bubble. You can try setting it to false when starting your Chrome.
# untested code, only the logic
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
chrome_options.add_argument("--enable-save-password-bubble=false")
driver = webdriver.Chrome(executable_path="path/to/chromedriver", chrome_options=chrome_options)
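If that switch does not help, a commonly used alternative is to disable Chrome's password manager through experimental preferences instead; again an untested sketch of the logic only:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
# Turn off the built-in password manager so the "save password" bubble never appears
chrome_options.add_experimental_option("prefs", {
    "credentials_enable_service": False,
    "profile.password_manager_enabled": False,
})
driver = webdriver.Chrome(executable_path="path/to/chromedriver", options=chrome_options)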
