Webdriver with Python bindings v2.39.0
Firefox 27.0 (issue also reproduced with Firefox 'latest' and Firefox 26.0)
In the code below, once execution hits obj.click() for links on a certain page, the browser hangs. If the script is killed with Ctrl+C in the terminal (Windows), the browser stops hanging. If left to its own devices, the browser appears to hang indefinitely.
I'm not sure if I am allowed to post the HTML for the problem page but I may be able to negotiate it with my team.
This very same code used to work perfectly with the very same element that is now causing problems. I had suspected it was something to do with the automatic Firefox upgrade, but downgrading has not solved the issue (please see the 'Things I have tried' section below for more details).
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# driver and global_timeout are defined elsewhere in the script.
def wait_and_click(obj_id, timeout=global_timeout, locator_attribute='ID'):
    print('waiting for ' + obj_id)
    # Resolve the locator strategy (e.g. By.ID) by name; getattr is safer than eval.
    obj = WebDriverWait(driver, timeout).until(
        EC.element_to_be_clickable((getattr(By, locator_attribute), obj_id)))
    print("about to click object")
    obj.click()
    print("about to return from wait_and_click")
    return obj
Things I have tried:
manually replicating the issue (no browser hang)
tried downgrading to different versions of Firefox and uninstalling all extensions
tried skipping the obj.click() when obj_id equals the problem element - the browser doesn't hang (but the script doesn't get anywhere, because something needs to be clicked)
tried a more conventional WebDriverWait for an element on the page and then find_element_by_xpath - same browser hang
tried locating the element by a different attribute (initially tried using LINK_TEXT, also tried via XPATH - no difference) and then clicking
tried finding different links on the same page - same browser hang
tried finding links on different pages of the same web app - no browser hang
tried saving the source of the page containing the link and of the page the link points to. I got webdriver to open the local copy of the page and click on the problem link - the destination page opened with no browser hang
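One further diagnostic that can be worth trying in cases like this (my suggestion, not something from the original post): dispatch the click through JavaScript instead of the native WebDriver click, which sidesteps the native event synchronization:

# Hypothetical extra experiment: click via JavaScript rather than the native event.
# Assumes driver, By, WebDriverWait and EC are set up as in wait_and_click above;
# 'problem_link_id' is a placeholder for the real element id.
element = WebDriverWait(driver, 30).until(
    EC.element_to_be_clickable((By.ID, 'problem_link_id')))
driver.execute_script("arguments[0].click();", element)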
Argh, this is such a frustrating solution - it turned out to be the Skype toolbar that was automatically installed with the latest Firefox upgrade.
It's worth noting that it will not go away easily. If you go to Firefox -> Extensions, it only allows you to disable it. You have to go to 'Programs and Features' and uninstall it from there. After that, it worked like a charm!
FFS MICROSOFT!!
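One preventative sketch worth considering (my addition, not part of the original fix): start the automated session with a brand-new, empty Firefox profile, so auto-installed third-party add-ons like the Skype toolbar never load into it:

from selenium import webdriver

# A fresh profile ships with no extensions, so bundled toolbars
# cannot interfere with the automated session.
profile = webdriver.FirefoxProfile()
driver = webdriver.Firefox(firefox_profile=profile)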
Related
Unusually, I am having trouble fetching elements from the website below.
The element hierarchy is as follows: [element hierarchy screenshot]
My goal is to fetch the rows (data-row-key='n') inside the class "rc-table-tbody".
Below is my Python script:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chromeOptions = Options()
chromeOptions.add_argument("--headless")
driver = webdriver.Chrome(chrome_options=chromeOptions, executable_path=DRIVER_PATH)
driver.get('https://www.binance.com/en/my/orders/exchange/usertrade')
None of the attempts below works (each gives either 'unable to locate element' or a timeout):
elements = driver.find_element_by_class_name("rc-table-tbody")
elements = WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.CLASS_NAME, "rc-table-tbody")))
elements = WebDriverWait(driver, 5).until(EC.visibility_of_all_elements_located((By.CLASS_NAME, "rc-table-tbody")))
elements = WebDriverWait(driver, 5).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "selector_copied_from_inspect")))
elements = WebDriverWait(driver, 5).until(EC.visibility_of_element_located((By.XPATH, "xpath_copied_from_inspect")))
I appreciate any help, thanks!
Edit: I guess the problem is related to cookies. The URL I was trying to fetch is a page behind a login on Binance.com, and I am already logged in on my Chrome browser. Therefore, I assumed that the driver would use the current cookies of the real Chrome browser and there would be no need to log in. However, when I removed the "headless" argument, Selenium popped up the login page instead of the page I was trying to scrape.
How can I get the current cookies of the browser, in order to make Selenium access the exact page I am trying to scrape?
You would have to do some research, but I believe you can set up a special Chrome profile for use with "remote debugging" and then start up Chrome with remote debugging enabled (see Remote debugging with Chrome Developer Tools). Chrome itself is launched with a flag such as --remote-debugging-port=9222, and then there is a way to start your Selenium chromedriver with the matching option (typically chromeOptions.add_experimental_option("debuggerAddress", "localhost:9222")) so as to hook into this special Chrome where you have already logged on.
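A minimal sketch of that attach flow, assuming Chrome was started beforehand with remote debugging enabled; the port, profile directory and DRIVER_PATH are placeholders:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Assumes an already-running, already-logged-in Chrome started with:
#   chrome --remote-debugging-port=9222 --user-data-dir=C:\ChromeDebugProfile
chromeOptions = Options()
chromeOptions.add_experimental_option("debuggerAddress", "localhost:9222")
driver = webdriver.Chrome(chrome_options=chromeOptions, executable_path=DRIVER_PATH)
driver.get('https://www.binance.com/en/my/orders/exchange/usertrade')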
But it might be far simpler to insert an input("Pausing for login completion...") statement following the driver.get call, then log in manually, and when you have finished logging in successfully, hit the Enter key in reply to the input statement to resume execution of your program. For this option you could not use "headless" mode. Of course, this latter option is not a hands-off, long-term solution. I did look at the possibility of just adding logic to your existing code to send login credentials, but there seems to be a nasty captcha on the next page that would be very difficult to get past.
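A sketch of that simpler approach, reusing the imports and locator from the question (non-headless, so the login page is visible):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome(executable_path=DRIVER_PATH)  # note: no "headless"
driver.get('https://www.binance.com/en/my/orders/exchange/usertrade')
input("Pausing for login completion...")  # log in by hand, then press Enter
elements = WebDriverWait(driver, 15).until(
    EC.presence_of_element_located((By.CLASS_NAME, "rc-table-tbody")))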
After running my Selenium/Python program, the browser opened with the below message:
This is the initial start page for the WebDriver server
I have done the below steps to resolve this:
In IE Options -> Security tab, the 'Enable Protected Mode' check box is ticked OFF in all zones: Internet, Local Intranet, Trusted sites and Restricted sites. Also, in the Advanced tab -> Security, I ticked OFF the check box 'Enable Enhanced Protected Mode'. (I also tried with Protected Mode enabled in all zones and in the Advanced tab.)
My IEDriver (version 3.1.4) and Selenium web driver (version 3.1.4) are compatible (both are on the same version).
I tried the above two; still I am getting the same message.
I have added the below content to ignore Protected Mode:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

caps = DesiredCapabilities.INTERNETEXPLORER
caps['ignoreProtectedModeSettings'] = True
driver = webdriver.Ie(executable_path='C:/Selenium/Drivers/IEDriverServer.exe', capabilities=caps)
Still, I am getting the same message after adding the above code.
Any ideas? Please help.
This is as per design. When IEDriverServer.exe opens a new Browsing Context, i.e. an Internet Explorer browsing session, it navigates to this page first.
[Browser snapshot]
Once you initialize the browser through the line:
driver = webdriver.Ie(executable_path='C:/Selenium/Drivers/IEDriverServer.exe',capabilities=caps)
next you can invoke the get() command to access any URL. As an example:
driver.get('https://www.google.com/')
Additional Consideration
Additionally, you need to:
Upgrade Selenium to the current level, version 3.141.59.
Upgrade IEDriverServer to the latest v3.150.1 level.
Note: As per best practices, since Selenium Client and InternetExplorerDriver are released in sync, you should use both binaries from the same major release.
Clean your Project Workspace through your IDE and Rebuild your project with required dependencies only.
Execute your @Test.
Always invoke driver.quit() within the tearDown() method to close and destroy the WebDriver and Web Client instances gracefully.
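Putting those pieces together, a minimal sketch (the executable path is the one from the question; adjust it to your setup):

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

caps = DesiredCapabilities.INTERNETEXPLORER.copy()
caps['ignoreProtectedModeSettings'] = True

driver = webdriver.Ie(executable_path='C:/Selenium/Drivers/IEDriverServer.exe',
                      capabilities=caps)
try:
    # The initial "WebDriver server" start page is expected; get() replaces it.
    driver.get('https://www.google.com/')
    # ... your test steps ...
finally:
    driver.quit()  # close and destroy the WebDriver and client instances gracefully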
I didn't make any changes to my Python Selenium program and it worked fine 3 days ago. Now when I try to use it I get:
Browsing context has been discarded
Failed to decode response from marionette
Any idea what could have caused this outside the code (since no changes were made)?
I'm using Firefox and geckodriver. After I got these errors I updated Firefox, geckodriver, and Selenium, but it didn't help.
This error message...

Browsing context has been discarded

Failed to decode response from marionette

...implies that the communication between GeckoDriver and Marionette was broken.
Some more information regarding the binary versions, in terms of:

Selenium Server/Client
GeckoDriver
Firefox

...as well as your code block and the error stack trace, would have given us some clues about what is going wrong. However, this issue can happen due to multiple factors, as follows:
As per Hang when navigation request removes the current browsing context, if you have used driver.navigate().back() while Selenium's focus was within an <iframe>, this error is observed (a minimal avoidance sketch follows this list).
As per Crash during command execution results in "Internal Server Error: Failed to decode response from marionette", this issue can also occur due to ctypes checks for NULL pointer derefs. You can find the Selenium testcase here. Perhaps instead of panicking, it would have been better to handle this more gracefully by clearing any state and returning geckodriver to accepting new connections again.
As per Failed to decode response from marionette - Error to open Webdriver using python, this issue can also occur if you are not using compatible versions of the binaries.
[GeckoDriver, Selenium and Firefox browser compatibility chart]
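For the first cause above (back navigation from inside an <iframe>), a minimal avoidance sketch; this is my suggestion rather than something from the linked bug report:

# Switch out of the <iframe> to the top-level document before navigating back,
# so the browsing context that gets discarded is not the one in focus.
driver.switch_to.default_content()
driver.back()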
Reference
You can find a relevant detailed discussion in:
“Failed to decode response from marionette” message in Python/Firefox headless scraping script
I experienced the same error on a particular site, after performing a successful login, when I was redirected to the next page.
While inspecting the source of the new page in my Firefox browser, I noticed some bad formatting/HTML quality issues that went away after a manual refresh (I suspect this relates to the poor quality of that site in particular).
What I did in order to remediate this was to start every next step on a new page with a refresh on my driver:
import time

def my_next_step(driver):
    driver.refresh()   # force a reload to work around the site's flaky rendering
    time.sleep(10)
    driver.switch_to.frame('iframe')   # switch_to_frame() is deprecated
    # ...
This helped me overcome the site quality issues.
On Ubuntu 22.10, using (apt, not snap) Firefox and Selenium in Python, I also got this error after:
driver.switch_to.alert.accept()
The solution for me was to switch back to the original window context with:
def upload_file(driver, filepath):
    """Uploads a uBlock Origin backup (template) .txt file into the uBlock
    Origin extension."""
    driver.find_element("id", "restoreFilePicker").send_keys(filepath)
    time.sleep(1)
    driver.switch_to.alert.accept()
    time.sleep(1)
    # Switch back to the first tab (and reload/restore it).
    new_window = driver.window_handles[0]
    driver.switch_to.window(new_window)
This answer was given in this question.
I removed the window size setting and it is working without this error.
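Presumably that refers to dropping an explicit window-size setting, something like the following (the exact arguments are my guess):

from selenium import webdriver

options = webdriver.FirefoxOptions()
# options.add_argument("--width=1920")   # removing the explicit window size...
# options.add_argument("--height=1080")  # ...avoided the error in that report
driver = webdriver.Firefox(options=options)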
I am trying to force Selenium to ignore SSL errors, but haven't been able to figure it out. I have seen the acceptSslCerts capability, but it does not seem to have any effect when using Firefox.
For your scenario, you can try something similar to this. It worked well for me on Firefox browser.
Create a new Firefox profile by following the steps below and accept the SSL certificates there.
Close all your Firefox windows
In the Run dialog box, type in: 'firefox.exe -p' and then click OK.
Click "Create Profile"
Create a name for your new profile (say, Selenium)
Click "Choose Folder"
Pick something easy to find, like "C:\NewFirefoxProfile"
Click Finish
Now, after selecting the newly created profile, start Firefox. Open the specific URL that was giving you the 'Secure Connection Issue' and accept the SSL certificates for this profile.
Now use the newly created Firefox profile to run your Selenium test. Modify the below code as per your requirements.
from selenium import webdriver

# Load the profile folder created above and relax certificate checks on it.
profile = webdriver.FirefoxProfile('C:/NewFirefoxProfile')
profile.accept_untrusted_certs = True
profile.assume_untrusted_cert_issuer = True
driver = webdriver.Firefox(firefox_profile=profile)
driver.get('https://cacert.org/')
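On newer, W3C-compliant Selenium/geckodriver stacks, the equivalent knob is the acceptInsecureCerts capability; a sketch, not part of the original answer:

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

caps = DesiredCapabilities.FIREFOX.copy()
caps['acceptInsecureCerts'] = True  # W3C successor to the old acceptSslCerts
driver = webdriver.Firefox(capabilities=caps)
driver.get('https://cacert.org/')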
I am trying to write a script to identify if potential new homes have Verizon FiOS service available.
Unfortunately, the site's extensive use of JavaScript has prevented everything from working. I'm using Selenium (wrapped in the splinter module) to let the JavaScript execute, but I can't get past the second page.
Here is a simplified version of the script:
from splinter import Browser
browser = Browser()
browser.visit('https://www.verizon.com/FORYOURHOME/ORDERING/OrderNew/OrderAddressInfo.aspx')
nameAddress1 = "ctl00$body_content$txtAddress"
nameZip = "ctl00$body_content$txtZip"
formFill = {nameAddress1: '46 Demarest Ave',
            nameZip: '10956'}
browser.fill_form(formFill)
browser.find_by_id('btnContinue').first.click()
if browser.is_element_present_by_id('rdoAddressOption0', wait_time=10):
    browser.find_by_id('rdoAddressOption0').first.click()
    browser.find_by_id('body_content_btnContinue').first.click()
In this example, it chooses the first option when it asks for confirmation of address.
It errors out with an ElementNotVisibleException. If I remove the is_element_present check, it errors out because it cannot find the element. The element is visible and clickable in the live browser that Selenium is controlling, but it seems like Selenium is not seeing an updated version of the page HTML.
As an alternative, I thought I might be able to do the POST request and process the response with requests or mechanize, but there is some kind of funny redirect that I can't wrap my head around.
How do I either get selenium to behave, or bypass the javascript/ajax and do it with GET and POST requests?
The problem is that the input you are clicking on is actually hidden via the display: none style.
To work around it, execute JavaScript code to click on the input and set its checked attribute:
browser.execute_script("""var element = document.getElementById('rdoAddressOption0');
element.click();
element.checked=true;""")
browser.find_by_id('body_content_btnContinue').first.click()