Python Selenium - Close Driver When Text is no longer found - python

Hi, I am using Selenium to automate tests on web pages. I am using Selenium with Python and would like answers in this framework only. I run a looping script to check whether some text is still present on the page; if it is not, the browser should close. I have tried the script below, but it only works when the text is a link.
while True:
    try:
        element = WebDriverWait(driver, 5).until(
            EC.presence_of_element_located((By.PARTIAL_LINK_TEXT, "My Text"))
        )
    except:
        break
driver.close()
driver.quit()
That script works when the text is a link. The problem is that my text is plain text without any link. I cannot use a CSS selector because the text changes after a few minutes, so I need to locate the element by its text, not by an XPath or other fixed selector. I hope someone can help, thank you.

You are using By.PARTIAL_LINK_TEXT, which is designed to find text only inside links (<a> elements).
If you want to look up text in all elements, you need to use an XPath:
EC.presence_of_element_located((By.XPATH, "//*[contains(text(), 'My Text')]"))
See for reference:
https://stackoverflow.com/a/18701085/14241710
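Putting that together with the loop from the question, a minimal sketch of the disappearance check might look like this. The helper below only builds the XPath string (it assumes the text contains no single quotes, since XPath 1.0 quoting is awkward); the commented usage assumes the live `driver` from the question:

```python
def contains_text_xpath(text):
    # Build an XPath matching any element whose text contains `text`.
    # Assumes `text` contains no single quotes.
    return "//*[contains(text(), '{}')]".format(text)

# Usage against a live driver (names assumed from the question):
# from selenium.webdriver.support.ui import WebDriverWait
# from selenium.webdriver.support import expected_conditions as EC
# from selenium.webdriver.common.by import By
#
# WebDriverWait(driver, 30).until(
#     EC.invisibility_of_element_located((By.XPATH, contains_text_xpath("My Text")))
# )
# driver.quit()
```

Waiting for invisibility flips the condition around: instead of looping until presence fails, you wait once for the text to be gone.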

Try this out; it might help:
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def check_no_longer_present(text):
    try:
        xpath = "//*[contains(text(), '{}')]".format(text)
        return not driver.find_element(By.XPATH, xpath).is_displayed()
    except NoSuchElementException:
        return True

if check_no_longer_present('your_text'):
    driver.quit()

Related

Printing web search results won't work in Selenium script, but works when I type it into the shell

I'm very new and learning web scraping in python by trying to get the search results from the website below after a user types in some information, and then print the results. Everything works great up until the very last 2 lines of this script. When I include them in the script, nothing happens. However, when I remove them and then just try typing them into the shell after the script is done running, they work exactly as I'd intended. Can you think of a reason this is happening? As I'm a beginner I'm also super open if you see a much easier solution. All feedback is welcome. Thank you!
#Setup
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
import time
#Open Chrome
driver = webdriver.Chrome()
driver.get("https://myutilities.seattle.gov/eportal/#/accountlookup/calendar")
#Wait for site to load
time.sleep(10)
#Click on street address search box
elem = driver.find_element(By.ID, 'sa')
elem.click()
#Get input from the user
addr = input('Please enter part of your address and press enter to search.\n')
#Enter user input into search box
elem.send_keys(addr)
#Get search results
elem = driver.find_element(By.XPATH, ('/html/body/app-root/main/div/div/account-lookup/div/section/div[2]/div[2]/div/div/form/div/ul/li/div/div[1]'))
print(elem.text)
I haven't used Selenium in a while, so I can only point you in the right direction. It seems to me you need to iterate over the individual entries, and print those, as opposed to printing the entire div as one element.
You should remove the parentheses from the xpath expression
You can shorten the xpath expression as follows:
Code:
elems = driver.find_elements(By.XPATH, '//*[@class="addressResults"]/div')
for elem in elems:
    print(elem.text)
You are using an absolute XPath; what you should be looking into are relative XPaths.
Something like this should do it:
elems = driver.find_elements(By.XPATH, "//*[@id='addressResults']/div")
for elem in elems:
    ...
I ended up figuring out my problem - I just needed to add in a bit that waits until the search results actually load before proceeding on with the script. tossing in a time.sleep(5) did the trick. Eventually I'll add a bit that checks that an element has loaded before proceeding with the script, but this lets me continue for now. Thanks everyone for your answers!
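Selenium's WebDriverWait does exactly that "check before proceeding" for you, but the underlying idea is just polling. A minimal generic sketch (the commented usage assumes the `driver` and a locator like the ones in the answers above):

```python
import time

def wait_for(condition, timeout=10, poll=0.25):
    # Re-evaluate condition() until it returns something truthy or the
    # timeout expires; returns the last value either way.
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result or time.monotonic() >= deadline:
            return result
        time.sleep(poll)

# Usage with Selenium (assumed names):
# results = wait_for(lambda: driver.find_elements(By.XPATH, "//*[@class='addressResults']/div"))
# for elem in results:
#     print(elem.text)
```

Because `find_elements` returns an empty (falsy) list when nothing matches, it slots straight into the condition without exception handling.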

Message: no such element: Unable to locate element python selenium

I have this error in Selenium with Python:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"/html/body/div[24]/div[1]/div/div/div[3]/div[1]/div/button[1]"}
I want to click the element with this XPath:
driver.find_element_by_xpath("/html/body/div[24]/div[1]/div/div/div[3]/div[1]/div/button[1]").click()
If it doesn't exist, then click another element with another XPath, like:
driver.find_element_by_xpath("/html/body/div[24]/div[1]/div/div/div[3]/div[1]/div/button[2]").click()
To be clear: if element 1 doesn't exist to click, click element 2.
How can I do this in Selenium with Python?
Thanks
NoSuchElementException generally occurs for one of the reasons below:
The element locator is incorrect
The element is not rendered
The page is not rendered or loaded
The element is inside a frame
Have a look, debug, and write your code based on that. :)
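A quick way to tell those cases apart while debugging is to probe with `find_elements`, which returns an empty list instead of raising. A sketch (the `driver` and locator are assumptions; `"xpath"` is the literal string value of `By.XPATH`):

```python
def diagnose(driver, xpath):
    # find_elements returns [] instead of raising NoSuchElementException,
    # so it is safe for probing. "xpath" is the string behind By.XPATH.
    matches = driver.find_elements("xpath", xpath)
    if not matches:
        return "not found: wrong locator, page not loaded, or inside a frame"
    if not matches[0].is_displayed():
        return "present but hidden"
    return "present and visible"
```

If this reports "not found" even after the page has clearly loaded, the iframe case is the next thing to rule out.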
This is not the exact solution, but this is the idea
if elem1.is_displayed():
    elem1.click()
else:
    elem2.click()
I'd suggest using time.sleep(10) and a nested try block to catch your errors while accomplishing what you want. For example:
time.sleep(10)
try:
    driver.find_element_by_xpath("/html/body/div[24]/div[1]/div/div/div[3]/div[1]/div/button[1]").click()
except:
    print("button1 could not be found... Now trying to retrieve button2")
    try:
        driver.find_element_by_xpath("/html/body/div[24]/div[1]/div/div/div[3]/div[1]/div/button[2]").click()
    except:
        print("Neither button1 nor button2 was found...")
Note that time.sleep(10) delays your code from executing anything after that line for 10 seconds, which allows the page to load properly so elements can be located more easily.
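The same fallback can also be written without nesting, by trying each locator in order with `find_elements` (which returns an empty list instead of raising). A sketch; the XPaths in the commented usage come from the question, the `driver` is assumed live, and `"xpath"` is the literal string value of `By.XPATH`:

```python
def click_first_available(driver, xpaths):
    # Try each XPath in order; click the first element found and
    # return its XPath, or None if nothing matched.
    for xp in xpaths:
        matches = driver.find_elements("xpath", xp)  # By.XPATH == "xpath"
        if matches:
            matches[0].click()
            return xp
    return None

# Usage (locators from the question):
# click_first_available(driver, [
#     "/html/body/div[24]/div[1]/div/div/div[3]/div[1]/div/button[1]",
#     "/html/body/div[24]/div[1]/div/div/div[3]/div[1]/div/button[2]",
# ])
```

This scales to any number of fallback buttons without adding a level of indentation per fallback.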

Python selenium presence_of_element_located with only href

I am trying to click on a link in a forum using Selenium, but I need to wait until the page loads, so I thought the best way was to use WebDriverWait. This is the code I used to test it:
driver.get("https://testocolo.forumcommunity.net")
#First click, working
driver.find_element_by_xpath('//a[@href="/?f=9087616"]').click()
try:
    element = WebDriverWait(driver, 2).until(
        EC.presence_of_element_located(By.XPATH, '//a[@href="/?t=61904616"]')
    )
    element.click()
except:
    print("NO")
This is the element's text: Brotha
The try/except block ends up printing "NO" every time.
Before that I tried locating by LINK_TEXT with 'Brotha' instead, but neither way works. What am I doing wrong?
XPaths you can try:
//a[contains(@title, 'discussione inviata il')]
or
//*[text()='Brotha']
Also note that EC.presence_of_element_located takes its locator as a single tuple, so your call is missing a pair of parentheses; it should be EC.presence_of_element_located((By.XPATH, ...)).
The next thing to check is whether the element is inside an iframe.
WebDriverWait(driver, 30).until(
    EC.element_to_be_clickable((By.XPATH, "//*[text()='Brotha']")))
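If the element does sit inside an iframe, you have to switch into the frame before locating it and switch back afterwards. A rough sketch using plain polling (the frame and target locators are hypothetical; `"xpath"` is the literal string value of `By.XPATH`):

```python
import time

def click_in_iframe(driver, frame_xpath, target_xpath, timeout=30):
    # Switch into the frame, poll for the target, click it, then switch
    # back so later lookups run against the main document again.
    frames = driver.find_elements("xpath", frame_xpath)  # By.XPATH == "xpath"
    if not frames:
        return False
    driver.switch_to.frame(frames[0])
    try:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            targets = driver.find_elements("xpath", target_xpath)
            if targets:
                targets[0].click()
                return True
            time.sleep(0.5)
        return False
    finally:
        driver.switch_to.default_content()
```

The finally block guarantees the driver ends up back in the main document even if the click fails.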

How to click on the first google search result with python selenium?

I am trying to write a program that will click on the first Google search result that appears. My code is:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
driver = webdriver.Firefox()
driver.get("https://www.google.com/")
search = driver.find_element_by_name("q")
search.clear()
search.send_keys("bee movie script")
search.send_keys(Keys.RETURN)
time.sleep(3)
assert "No results found." not in driver.page_source
result = driver.find_element_by_xpath('/html/body/div[6]/div[3]/div[8]/div[1]/div[2]/div/div[2]/div[2]/div/div/div[1]/div/div[1]/a/h3')
result.click()
I've tried a variety of things for result, but the automation is unable to find the element. I copied the XPath from inspect element, but I get this error:
NoSuchElementException: Message: Unable to locate element: /html/body/div[6]/div[3]/div[8]/div[1]/div[2]/div/div[2]/div[2]/div/div/div[1]/div/div[1]/a/h3
Am I handling the HTML incorrectly, and how can I fix it? Thank you.
I found a solution with:
results = driver.find_elements_by_xpath('//div[@class="r"]/a/h3')  # finds web results
results[0].click()  # clicks the first one
You can use the XPath and CSS below to select the nth link.
XPath, using the index:
driver.find_element_by_xpath('(//div[@class="r"]/a)[1]').click()
If you want to access the first matching element, you can simply use find_element and the script will pick the first element even when multiple elements match the given locator strategy, be it XPath, CSS or anything else.
driver.find_element_by_css_selector("div.r a h3").click()
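Google's result markup changes often, so the div.r class in these answers is tied to an older layout. If you do rely on it, building the indexed XPath in one place keeps it easy to update (a sketch; the class name is an assumption carried over from the answers above):

```python
def nth_result_xpath(n=1):
    # Indexed XPath for the nth result link; div.r comes from an older
    # Google layout and may need updating for the current markup.
    return "(//div[@class='r']/a)[{}]".format(n)

# Usage (assumed live driver; "xpath" is the literal value of By.XPATH):
# driver.find_element("xpath", nth_result_xpath(1)).click()
```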

Finding a specific text in a page using selenium python

I'm trying to find a specific text on the page https://play.google.com/store/apps/details?id=org.codein.filemanager&hl=en using Selenium with Python. I'm looking for the element named "Current Version" on the above URL. I used the code below:
browser = webdriver.Firefox() # Get local session of firefox
browser.get(sampleURL) # Load page
elem = browser.find_elements_by_class_name("Current Version") # Find the element
print elem;
time.sleep(2) # Let the page load, will be added to the API
browser.close()
I don't seem to get the output printed. Am I doing anything wrong here?
There is no class named "Current Version". If you want to capture the version number that is below the "Current Version" text, then you can use this XPath expression:
browser.find_element_by_xpath("//div[@itemprop='softwareVersion']")
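As a sketch, that attribute-based lookup generalizes to any itemprop value; the helper only builds the XPath string, and the commented usage assumes the live `browser` from the question (`"xpath"` is the literal string value of `By.XPATH`):

```python
def itemprop_xpath(name):
    # XPath for a div carrying the given itemprop attribute,
    # e.g. 'softwareVersion' from the answer above.
    return "//div[@itemprop='{}']".format(name)

# Usage (assumed live browser):
# version = browser.find_element("xpath", itemprop_xpath("softwareVersion")).text
# print(version)
```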
