Import defined selenium function browser issue - python

Set-up
I use Selenium for a variety of tasks and found myself defining the same functions over and over again.
I decided to define the functions in a separate file and import them into my working files.
Simple Example
If I define the functions and execute everything in one file, things work fine. See the simple full_script.py below:
# import webdriver
from selenium import webdriver

# create browser
browser = webdriver.Firefox(executable_path='/mypath/geckodriver')

# define short xpath function
def el_xp(x):
    return browser.find_element_by_xpath(x)

# navigate to url
browser.get('https://nos.nl')

# obtain title of the first article
el_xp('/html/body/main/section[1]/div/ul/li[1]/a/div[2]/h3').text
This successfully returns the title of the first article on this news website.
Problem
Now, when I split the script into an xpath_function.py and a run_test.py and save them in a test folder on my desktop, things stop working.
xpath_function.py
# import webdriver
from selenium import webdriver

# create browser
browser = webdriver.Firefox(executable_path='/mypath/geckodriver')

# define short xpath function
def el_xp(x):
    return browser.find_element_by_xpath(x)
run_test.py
import os
os.chdir('/my/Desktop/test')
import xpath_function as xf

# import webdriver
from selenium import webdriver

# create browser
browser = webdriver.Firefox(
    executable_path='/Users/lucaspanjaard/Documents/RentIndicator/geckodriver')
browser.get('https://nos.nl')
xf.el_xp('/html/body/main/section[1]/div/ul/li[1]/a/div[2]/h3').text
Executing run_test.py opens two browsers, of which one navigates to the news website, and raises the following error:
NoSuchElementException: Unable to locate element:
/html/body/main/section[1]/div/ul/li[1]/a/div[2]/h3
I suppose the issue is that I'm creating a browser in both xpath_function.py and run_test.py.
However, if I don't create a browser in xpath_function.py, that file raises an error because no browser is defined.
How do I solve this?

You can easily fix this by changing the definition of el_xp to take the browser as an extra parameter:
def el_xp(browser, x):
    return browser.find_element_by_xpath(x)
Now in run_test.py you call it like this:
xf.el_xp(browser, '/html/body/main/section[1]/div/ul/li[1]/a/div[2]/h3').text
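Put together, a minimal sketch of the split: the helper module defines only the function and creates no browser, while the calling script owns the single browser instance and passes it in. The DummyBrowser stub below is purely illustrative, standing in for a real webdriver so the call pattern can be shown without opening a browser:

```python
# xpath_function.py -- helpers only; no browser is created at import time
def el_xp(browser, x):
    """Shorthand for find_element_by_xpath on whichever browser is passed in."""
    return browser.find_element_by_xpath(x)

# run_test.py would then create the one and only browser and pass it along:
#   from selenium import webdriver
#   import xpath_function as xf
#   browser = webdriver.Firefox(executable_path='/mypath/geckodriver')
#   browser.get('https://nos.nl')
#   print(xf.el_xp(browser, '/html/body/...').text)

# illustrative stub so the pattern can be exercised without Selenium
class DummyBrowser:
    def find_element_by_xpath(self, x):
        return 'element at ' + x

print(el_xp(DummyBrowser(), '/html/body'))  # → element at /html/body
```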

Related

Avoid opening a new browser for every instance in Selenium Python

I've got a Python Selenium project that does what I want (yay!), but it opens a new browser window for every instance. Is there any way to prevent that?
I've gone through the Selenium documentation, but it just refers to driver.get(url). The problem is most likely that the driver is created in the for loop, but I can't seem to get the URL to change with the queries and params if it's outside of the for loop.
So, for example, I want to open these URLs:
https://www.google.com/search?q=site%3AParameter1+%22Query1%22
https://www.google.com/search?q=site%3AParameter2+%22Query1%22
https://www.google.com/search?q=site%3AParameter3+%22Query1%22
etc..
from selenium import webdriver
import time
from itertools import product

params = ['Parameter1', 'Parameter2', 'Parameter3', 'Parameter4']
queries = ['Query1', 'Query2', 'Query3', 'Query4']
for (param, query) in product(params, queries):
    url = f'https://www.google.com/search?q=site%3A{param}+%22{query}%22'  # google as an example
    driver = webdriver.Chrome('G:/Python Projects/venv/Lib/site-packages/chromedriver.exe')
    driver.get(url)
    # does stuff
You are creating the Chrome driver inside the loop. Create it once and reuse it:
from itertools import product
from selenium import webdriver

params = ['Parameter1', 'Parameter2', 'Parameter3', 'Parameter4']
queries = ['Query1', 'Query2', 'Query3', 'Query4']
driver = webdriver.Chrome(executable_path='/snap/bin/chromium.chromedriver')
for (param, query) in product(params, queries):
    url = f'https://www.google.com/search?q=site%3A{param}+%22{query}%22'
    driver.get(url)
# driver.close()
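The URL construction itself is independent of Selenium, so it can be factored out and checked on its own; a small sketch (the function name is illustrative, the URL template is the one from the question):

```python
from itertools import product

def build_search_urls(params, queries):
    # one Google search URL per (param, query) pair, matching the loop above
    return [f'https://www.google.com/search?q=site%3A{p}+%22{q}%22'
            for p, q in product(params, queries)]

urls = build_search_urls(['Parameter1', 'Parameter2'], ['Query1'])
print(urls[0])  # → https://www.google.com/search?q=site%3AParameter1+%22Query1%22
```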

how to load multiple urls in driver.get()?

How do I load multiple urls in driver.get()?
I am trying to load 3 urls in the code below, but how do I load the other 2?
The next challenge after that is to pass authentication for all the urls as well, which is the same for each.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome(executable_path=r"C:/Users/RYadav/AppData/Roaming/Microsoft/Windows/Start Menu/Programs/Python 3.8/chromedriver.exe")
driver.get("https://fleet.my.salesforce.com/reportbuilder/reportType.apexp")  # put here the address of your page
elem = driver.find_elements_by_xpath('//*[@id="ext-gen63"]')  # put here the content you have put in Notepad, ie the XPath
button = driver.find_element_by_id('id="ext-gen63"')
print(elem.get_attribute("class"))
driver.close
submit_button.click()
Try the code below:
def getUrls(targeturl):
    driver = webdriver.Chrome(executable_path=r" path for chromedriver.exe")
    driver.get("http://www." + targeturl + ".com")
    # perform your tasks here
    driver.quit()

webPages = ['google', 'facebook', 'gmail']
for page in webPages:
    print(page)
    getUrls(page)
You can't load more than one url at a time in a single webdriver. If you want to do that, you may need the multiprocessing module. If an iterative solution is fine, just create a list with every url you need and loop through it. That way you won't have the credential problem either.
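The iterative variant can reuse one driver for the whole list, so a session established once (e.g. after authenticating) carries over to the next url. A sketch; the FakeDriver stub is illustrative so the pattern runs without a browser, and with Selenium you would pass a real webdriver.Chrome(...) instead:

```python
def visit_all(driver, urls):
    """Load each url in turn on the same driver, so any session/auth persists."""
    for url in urls:
        driver.get(url)
        # perform your tasks here

# illustrative stand-in for a real webdriver, recording what was visited
class FakeDriver:
    def __init__(self):
        self.visited = []
    def get(self, url):
        self.visited.append(url)

d = FakeDriver()
visit_all(d, ['http://www.google.com', 'http://www.facebook.com'])
print(d.visited)
```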

Python Selenium opening random empty webdrivers when I run my code, how do I stop this?

from selenium import webdriver
import random
url = "https://www.youtube.com/"
list_of_drivers = [webdriver.Firefox(), webdriver.Chrome(), webdriver.Edge()]
Driver = random.choice(list_of_drivers)
Driver.get(url)
I'm trying to cycle through a list of random webdrivers using Selenium.
It does a good job of picking a random webdriver and opening the URL; however, it also opens the other webdrivers with a blank page.
How do I stop this from happening?
I am running Python 2.7 in a virtualenv.
list_of_drivers = [webdriver.Firefox(), webdriver.Chrome(), webdriver.Edge()]
You already created three instances on this line; that's why all three browsers show up with a blank page at the very beginning.
Driver = random.choice(list_of_drivers)
Driver.get(url)
And then you randomly choose one to open a webpage, leaving the rest doing nothing.
Instead of creating three instances, just create one:
list_of_drivers = ['Firefox', 'Chrome', 'Edge']
Driver = getattr(webdriver, random.choice(list_of_drivers))()
Driver.get(url)
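The getattr dispatch can be seen in isolation with a stand-in for the webdriver module, confirming that only the chosen class is ever instantiated (the names below are illustrative and no real browser is opened):

```python
import random
from types import SimpleNamespace

# dummy "driver" class standing in for webdriver.Firefox / Chrome / Edge
class DummyDriver:
    def __init__(self):
        self.visited = []
    def get(self, url):
        self.visited.append(url)

fake_webdriver = SimpleNamespace(Firefox=DummyDriver, Chrome=DummyDriver, Edge=DummyDriver)

# pick a class *name* first, then instantiate only the chosen one
name = random.choice(['Firefox', 'Chrome', 'Edge'])
driver = getattr(fake_webdriver, name)()
driver.get('https://www.youtube.com/')
```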

Use Python to go through Google Search Results for given Search Phrase and URL

Windows 10 Home 64 Bit
Python 2.7 (also tried in 3.3)
Pycharm Community 2006.3.1
Very new to Python so bear with me.
I want to write a script that will go to Google, enter a Search Phrase, click the Search button, look through the search results for a URL (or any string), if there is no result on that page, click the Next button and repeat on subsequent pages until it finds the URL, stops and Prints what page the result was found on.
I honestly don't care if it just runs in the background and gives me the result. At first I was trying to have it literally open the browser, find the browser objects (search field and search button) via XPath, and execute it that way.
You can see the modules I've installed and tried. I have tried almost every code example I've found on Stack Overflow over 2 days, so listing everything I've tried would be quite wordy.
If anyone can just tell me the modules that would work best, any other direction would be very much appreciated!
The specific modules I've tried for this were Selenium, clipboard, MechanicalSoup, BeautifulSoup, webbrowser, urllib, unittest and Popen.
Thank you in advance!
Chantz
import clipboard
import json as m_json
import mechanicalsoup
import random
import sys
import os
import mechanize
import re
import selenium
from selenium import webdriver
from selenium.webdriver.common.by import By
import time
import unittest
import webbrowser
from mechanize import Browser
from bs4 import BeautifulSoup
from subprocess import Popen

######################################################
######################################################
# Xpath Google Search Box
# //*[@id="lst-ib"]
# Xpath Google Search Button
# //*[@id="tsf"]/div[2]/div[3]/center/input[1]
######################################################
######################################################

webbrowser.open('http://www.google.com')
time.sleep(3)
clipboard.copy("abc")  # now the clipboard content will be the string "abc"
driver = webdriver.Firefox()
driver.get('http://www.google.com/')
driver.find_element_by_id('//*[@id="lst-ib"]')
text = clipboard.paste("abc")  # text will have the content of the clipboard
print('text')
# browser = mechanize.Browser()
# url = raw_input("http://www.google.com")
# username = driver.find_element_by_xpath("//form[input/@name='username']")
# username = driver.find_element_by_xpath("//form[@id='loginForm']/input[1]")
# username = driver.find_element_by_xpath('//*[@id="lst-ib"]')
# elements = driver.find_elements_by_xpath('//*[@id="lst-ib"]')
# username = driver.find_element_by_xpath("//input[@name='username']")
# CLICK BUTTON ON PAGE
# http://stackoverflow.com/questions/27869225/python-clicking-a-button-on-a-webpage
Selenium would actually be a straightforward/good module to use for this script; you don't need anything else in this case. The easiest way to reach your goal is probably something like this:
from selenium import webdriver
import time

driver = webdriver.Firefox()
url = 'https://www.google.nl/'
linkList = []
driver.get(url)
string = 'search phrase'
text = driver.find_element_by_xpath('//*[@id="lst-ib"]')
text.send_keys(string)
time.sleep(2)
linkBox = driver.find_element_by_xpath('//*[@id="nav"]/tbody/tr')
links = linkBox.find_elements_by_css_selector('a')
for link in links:
    linkList.append(link.get_attribute('href'))
print(linkList)
This code opens your browser, enters your search phrase, and then collects the links for the different page numbers. From here you only need to write a loop that opens each link in your browser and checks whether the search phrase is there.
I hope this helps; if you have further questions let me know.
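The page-checking loop described above can be factored into a small helper that scans each page's text for the phrase; with Selenium, each text would come from driver.page_source after a driver.get(link). The helper itself needs no browser, and its name is illustrative:

```python
def find_phrase_page(page_texts, phrase):
    """Return the 1-based number of the first page containing phrase, or None."""
    for page_number, text in enumerate(page_texts, start=1):
        if phrase in text:
            return page_number
    return None

# with Selenium, page_texts would be driver.page_source for each link in linkList
print(find_phrase_page(['no hit here', 'still nothing', 'the target URL is here'],
                       'target URL'))  # → 3
```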

Intercept when url changes before the page is completely loaded

Is it possible to catch the event when the url is changed inside my browser using selenium?
Here is my scenario:
I load my website test.com
After all the static files are loaded, when executing one of the js file, I am redirected (not sure how) to another page redirect-one.test.com/blah
My browser gets the url redirect-one.test.com/blah and gets a 307 response to go to redirect-two.test.com/blahblah
Here my browser receives a final 302 to go to final.test.com/
The page of final.test.com/ is loaded and at the end of this, selenium enables me to search for elements and so on...
I'd like to be able to intercept (and time the moment it happens) each time I am redirected.
After that, I still need to do some other steps for which Selenium is more suitable:
Enter my username and password
Test some functionality
Log out
Here is a sample of how I tried to intercept the first redirect:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.support.ui import WebDriverWait

def url_contains(url):
    def check_contains_url(driver):
        return url in driver.current_url
    return check_contains_url

driver = webdriver.Remote(
    command_executor='http://127.0.0.1:4444/wd/hub',
    desired_capabilities=DesiredCapabilities.FIREFOX)

driver.get("http://test.com/")
try:
    url = "redirect-one.test.com"
    first_redirect = WebDriverWait(driver, 20).until(url_contains(url))
    print("found first redirect")
finally:
    print("move on to the next redirect....")
Is this even possible using selenium?
I cannot change the behavior of the website and the reason it is built like this is because of an SSO mechanism I cannot bypass.
I realize I specified python but I am open to tools in other languages.
Selenium is not the tool for this. All the redirects that the browser encounters are handled by the browser in a way that Selenium does not allow you to check.
You can perform the checks using urllib2, or if you prefer a sane interface, using requests.
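With requests, the intermediate redirect responses are recorded in response.history, so the hops can be listed after the fact. A sketch: the listing function is pure, so the SimpleNamespace objects below stand in for real responses (mirroring the 307/302 chain from the question); with the real library you would pass it requests.get('http://test.com/'):

```python
from types import SimpleNamespace

def redirect_chain(resp):
    """List each (status_code, url) hop, ending with the final response."""
    hops = [(r.status_code, r.url) for r in resp.history]
    hops.append((resp.status_code, resp.url))
    return hops

# illustrative stand-ins for a requests.Response and its redirect history
hop1 = SimpleNamespace(status_code=307, url='http://redirect-one.test.com/blah', history=[])
hop2 = SimpleNamespace(status_code=302, url='http://redirect-two.test.com/blahblah', history=[])
final = SimpleNamespace(status_code=200, url='http://final.test.com/', history=[hop1, hop2])

print(redirect_chain(final))
```

Note that requests only sees redirects expressed as HTTP responses; a redirect triggered by JavaScript, as in the first step of the scenario, would not appear in the history.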
