Keep browser window open while changing Chrome options (Selenium Python)

I am attempting to change the proxy settings on a browser without restarting it for the settings to take effect. Is this possible, and if so, how?
EDIT: I figured out a solution that works for me, but it isn't what the question asks and it still takes much more time to complete requests (stopping the browser whenever it encounters Cloudflare, swapping proxies, moving on, and so on). That is exactly what I'd like to avoid, since this task is extremely time-sensitive and relies on consistency.

from selenium import webdriver

option = webdriver.ChromeOptions()
option.add_argument('--proxy-server=socks5://ip:port')  # substitute your proxy host and port
driver = webdriver.Chrome(options=option)  # was Chrome(options=option); the webdriver prefix is required
Login_page(driver).login()  # Login_page is the answerer's own page-object class
I don't think a restart is needed; just set the proxy when you create the driver.
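If you really do need to switch proxies mid-session without restarting the browser, plain Selenium offers no API for that. One possible workaround, purely a hedged suggestion on my part and not part of the original answer, is the third-party selenium-wire package, which routes traffic through its own local proxy and documents reassigning driver.proxy at runtime:

# Sketch only: assumes the third-party selenium-wire package is installed
# (pip install selenium-wire); its API may differ between versions.
from seleniumwire import webdriver  # drop-in replacement for selenium's webdriver

driver = webdriver.Chrome(seleniumwire_options={
    'proxy': {'https': 'socks5://ip:port'}  # placeholder proxy, as in the question
})
driver.get('https://example.com')

# selenium-wire documents switching the upstream proxy on the fly,
# without restarting Chrome:
driver.proxy = {'https': 'socks5://other-ip:port'}
driver.get('https://example.com')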

Related

Give head to an already headless Selenium Firefox/Chrome session

I am writing a test bot for an app. Generally I need it to run in headless mode, but on the occasional buggy run it is a comfort to see the browser window and work out what is going wrong. So I need a way to attach a browser window to an active headless session, in other words, to literally convert it into a headed one.
P.S. Everything is in Python. Both Firefox and Chrome solutions (or any sort of guidance) are welcome.

Can we get Selenium to focus on an open browser window?

I have been using Selenium for a couple of years now. I learned a ton, thanks to all the amazing people here at SO. Now, I just encountered a new challenge/opportunity. It seems like Selenium always wants to open a new window, which is a new instance, as far as I can tell, and this new instance doesn't know that I am already logged into the system that I want to be logged into. Normally I would run some generic code like this.
import time
from selenium.webdriver import ActionChains
from selenium import webdriver

driver = webdriver.Chrome('C:\\Users\\chromedriver.exe')
driver.get('https://corp-login_place')
driver.find_element_by_id('USERID').send_keys('myid')
driver.find_element_by_id('PASSWORD').send_keys('mypw')

locations = [145, 270, 207]  # renamed from "list" to avoid shadowing the builtin
for i in locations:
    driver.find_element_by_id('InputKeys_LOCATION').send_keys(i)
    driver.find_element_by_class_name('PSPUSHBUTTON').click()  # find_element_by_class doesn't exist
    button = driver.find_element_by_class_name("Button")
    button.click()
    time.sleep(5)
So I think that automation script should work, but when I run the code, it always opens a new browser window and asks me to log in, and at that point my login credentials don't work. Very weird. Anyway, I'm already logged in, and everything works fine if I just use the mouse and keyboard. The already-open browser window, which is out of focus, works fine, but Selenium doesn't focus on that open window; it opens a new window, which doesn't let me log in.
For some reason, the code doesn't work when I hit F9, but if I run the process manually, using the mouse and keyboard, everything works totally fine.
Any insight or illumination as to what is happening here would be greatly appreciated. Thanks to all!
What I got is that you have a manually opened browser window (one that was not opened by Selenium) and you want to control that window with Selenium?
Apparently this is not supported, but there is some code here that may help you.
Selenium always starts a new instance of the browser, no matter how many other browser windows are open on your system. If your problem is related to logging into the website you want to test, then please share the website URL.
If instead you are talking about attaching to a browser session already open on your system, then this article (http://tarunlalwani.com/post/reusing-existing-browser-session-selenium/) might help, albeit the whole purpose of automating is compromised.
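For Chrome specifically, a commonly used workaround (my own addition here, not from the article above) is to start Chrome yourself with a remote-debugging port and then point ChromeOptions at it via debuggerAddress, so Selenium attaches to the existing window instead of opening a new one. A minimal sketch, assuming Chrome was launched manually with an arbitrary debugging port and a separate profile directory:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Chrome must already be running, started manually with something like:
#   chrome.exe --remote-debugging-port=9222 --user-data-dir=C:\temp\chrome-profile
# (the port and profile directory are example values)
options = Options()
options.add_experimental_option("debuggerAddress", "127.0.0.1:9222")

driver = webdriver.Chrome(options=options)  # attaches to the running instance
print(driver.current_url)  # shows whatever page the existing window is on

Because the session (and its login cookies) belong to the Chrome instance you started, the "already logged in" state is preserved.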

Preventing Python webbrowser.open() from dominating screen and preventing other processes. Is it possible to open as minimised?

I am using webbrowser.open() in a loop to download multiple files at given intervals.
The issue I am having is that whenever the browser window opens, it becomes the primary window, which interrupts and disrupts my ability to use the computer. Downloading multiple files means this can last some time, and the browser continuously flashing open is jarring.
Is there any way to instruct webbrowser to open the browser minimised by default, or otherwise avoid this issue in some other ingenious way?
Much appreciated!
If you are open to using other modules, I would urge you to look into Selenium. It allows you to do many things, one of which is launching the browser in headless mode (so it does not disturb you as it loads pages). The documentation is at:
https://selenium-python.readthedocs.io/
You would be interested in the headless option.
Make sure your script works before you enable headless mode, though.
Sample code:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

my_options = Options()
my_options.headless = True  # set to False for debugging!!
browser = webdriver.Chrome(options=my_options)
browser.get('http://www.google.com')
print('Done.')
You will need to download the proper drivers (just follow the instructions on the link I posted) for whatever browser you'd like. I picked Chrome, but they have Edge, Firefox, and Safari browsers as well!
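If you do want a visible browser, just not in front of you, another option (my own suggestion, not part of the answer above) is to open it normally and immediately minimise it with Selenium's minimize_window() call:

# Sketch: open the browser but push the window out of the way right away.
from selenium import webdriver

browser = webdriver.Chrome()
browser.minimize_window()  # the window opens but should not stay in your face
browser.get('http://www.google.com')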

Python with selenium: rerun on pre-existing browser

I'm using Python with Selenium 2.44. When a test fails, I can't just comment out all the code before the failure while debugging, because then the driver won't be defined for the browser. Therefore, whenever I try fixing something, I always have to open a new browser in the test case. This is rather... slow, since I have to log in, which adds an extra 30 seconds (not devastating, but annoying). I want to know if there's a way for me to just continue a session, or do something that allows me to start the test midway through (so if I already have the webpage open, I can immediately start clicking things rather than opening a new browser). Is this possible?
For example, if I had something along the lines of:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Firefox()
driver.get("http://google.com")
driver.find_element_by_xpath("//input[@id='gbqfq']").send_keys("cats" + Keys.RETURN)
This should open Firefox, go to google, and search for cats. Pretend like there's a ton of stuff you have to do before you can actually make it to the google page, though. Now if it were to fail on the search for cats, the only way I would be able to test to see if I fixed the code would be to rerun the test (webdriver.Firefox() would open a new browser). Rather than that, assuming I'd still have google open, I'd like the selenium test to just start off on the previous browser and google page (therefore saying the first step in the code would be the send_keys("cats")). Is this possible?
I think that this was a similar question, but it didn't get checked off as answered: How to resume browser session or use existing browser window with Selenium-Python?
This one also seems similar, only pertaining to Java: How do I rerun Selenium 2.0 (webdriver) tests on the same browser?
Thanks.
Look into pdb: https://docs.python.org/2/library/pdb.html
Placing this in your code will stop the progression of the test as is until you tell it to continue in your shell.
Using your code snippet:

from pdb import set_trace

driver = webdriver.Firefox()
driver.get("http://google.com")
set_trace()
driver.find_element_by_xpath("//input[@id='gbqfq']").send_keys("cats" + Keys.RETURN)

This will stop execution after the page is loaded, let you tinker in the shell, and then continue from where the test left off.
Alternatively, while debugging, you can just remove the driver.quit() statement, wherever it happens to be, which will keep the browser open wherever your assertion failed. But if you're using a framework like Django with the live test server (LiveServerTestCase), you won't have access to browse the site further once the test ends; pdb will allow you to keep the test server active.

Why Selenium + Node + PhantomJS still running after Python script ends?

I'm using PhantomJS to collect data about an HTML page. My code is something like this:
from selenium import webdriver

class PageElements():
    def __init__(self, url):
        self.driver = webdriver.PhantomJS()
        self.driver.get(url)
        self.elements, self.attribute_types = self._load_elements(self.driver)

    def _load_elements(self, driver):
        """This is not relevant"""
So I execute the code a few times in an IPython Notebook to test things out, and after a while my Activity Monitor shows phantomjs and node processes that are still running.
The processes keep running even after I add a destructor like:
def __del__(self):
    self.driver.close()
What is happening? I would really appreciate a "why this is happening" answer rather than a "do this" one. Why isn't my destructor working?
I opened @forivall's links and looked at the Selenium code. The PhantomJS webdriver has its own destructor (which makes mine redundant). Why isn't it working in this case?
__del__() tends to be unreliable in python. Not only do you not know when it will be called, you don't even have any guarantees that it will ever be called. try/finally constructs, or (even better) with-blocks (a.k.a. context managers), are much more reliable.
That said, I had a similar issue even using context managers. phantomjs processes were left running all over the place. I was invoking phantomjs through selenium as follows:
from selenium import webdriver
from contextlib import closing
with closing(webdriver.PhantomJS()) as driver:
    do_stuff(driver)
contextlib's closing() function ensures that the close() method of its argument gets called whatever happens, but as it turns out, driver.close(), while available, is the wrong method for cleaning up a webdriver session. driver.quit() is the proper way to clean up. So instead of the above, do one of the following:
from selenium import webdriver
from contextlib import contextmanager
@contextmanager
def quitting(quitter):
    try:
        yield quitter
    finally:
        quitter.quit()

with quitting(webdriver.PhantomJS()) as driver:
    do_stuff(driver)
or
from selenium import webdriver
driver = webdriver.PhantomJS()
try:
    do_stuff(driver)
finally:
    driver.quit()
(The above two snippets are equivalent)
Credit goes to @Richard's comment on the original question for pointing me toward .quit().
As of July 2016, following the discussion on this GitHub issue, the best solution is to run:
import signal
driver.service.process.send_signal(signal.SIGTERM)
driver.quit()
Instead of driver.close(). Just running driver.quit() will kill the node process but not the phantomjs child process that it spawned.
self.driver = webdriver.PhantomJS()
This creates a web browser that is then used by Selenium to run the tests. Each time Selenium runs, it opens a new instance of the web browser, rather than looking to see if there is a previous one it could re-use. If you do not use .close at the end of the test, then the browser will continue to run in the background.
As you have seen, running the test multiple times leaves multiple browsers orphaned.
What is the difference between this case and objects that Python usually destroys automatically with its garbage collector?
The difference is that it's creating something outside of Python's domain: a new OS-level process. Perhaps webdriver.PhantomJS should have its own __del__ that shuts it down, or perhaps the behaviour should be more robust, but that's not the design decision the Selenium developers went with, probably because most of the other drivers are not headless (so it's obvious that the windows are open).
Unfortunately, neither the official selenium documentation nor the unofficial one has much clarification or best practice on this (see comments below on __del__ behaviour).
links to source:
phantomjs/webdriver.py
remote/webdriver.py (superclass)
I was also struggling with the same problem and solved it using this source link.
Replace self.process.kill() in selenium/webdriver/phantomjs/service.py with self.process.send_signal(signal.SIGTERM).
With that change, driver.quit() will kill all phantomjs processes when the program completes, or when you cancel it with Ctrl+C.
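If you would rather not edit Selenium's own service.py, a less invasive variant (my own sketch, building on the SIGTERM advice above) is to register a cleanup handler that sends the signal and quits the driver even if the script exits early:

# Sketch: an atexit safety net so PhantomJS is terminated even if the script
# exits before reaching an explicit driver.quit().
import atexit
import signal
from selenium import webdriver

driver = webdriver.PhantomJS()

def _cleanup():
    try:
        driver.service.process.send_signal(signal.SIGTERM)  # as suggested above
        driver.quit()
    except Exception:
        pass  # the driver may already be gone

atexit.register(_cleanup)

driver.get('http://www.google.com')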
