driver find element if else statement - python

My if/else statement is not working. Whenever I run the script, the "if" branch works when its condition matches, but when the "else" branch should run, it does not. Here is the code:
if driver.find_element(By.XPATH, "//bdi[normalize-space()='Close']"):
    driver.find_element(By.XPATH, "//bdi[normalize-space()='Close']").click()
else:
    WebDriverWait(driver, 300).until(
        EC.element_to_be_clickable((By.XPATH, "//bdi[normalize-space()='OK']")))
    driver.find_element(By.XPATH, "//bdi[normalize-space()='OK']").click()
    print((sh.cell(row=r, column=2).value), r)
    r = r + 1

The problem is that when the element is not found, find_element throws NoSuchElementException instead of returning a falsy value, so the else branch is never reached. For it to work properly:
try:
    driver.find_element(By.XPATH, "//bdi[normalize-space()='Close']").click()
except NoSuchElementException:
    WebDriverWait(driver, 300).until(
        EC.element_to_be_clickable((By.XPATH, "//bdi[normalize-space()='OK']")))
    driver.find_element(By.XPATH, "//bdi[normalize-space()='OK']").click()
    print((sh.cell(row=r, column=2).value), r)
    r = r + 1
In this code, the try-except block handles the exception that find_element raises when the element is absent. There is also another way, using a plain if/else statement:
if driver.find_element(By.XPATH, "//bdi[normalize-space()='Close']").is_displayed():
    driver.find_element(By.XPATH, "//bdi[normalize-space()='Close']").click()
else:
    WebDriverWait(driver, 300).until(
        EC.element_to_be_clickable((By.XPATH, "//bdi[normalize-space()='OK']")))
    driver.find_element(By.XPATH, "//bdi[normalize-space()='OK']").click()
    print((sh.cell(row=r, column=2).value), r)
    r = r + 1
Note: I added the is_displayed() method to check whether the element with the text "Close" is visible on the page before attempting to click it. (Be aware that find_element itself still raises NoSuchElementException if the element is absent from the DOM entirely, so this variant only helps when the element is present but hidden.)
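A third option, offered here as a sketch (not from the original answer): find_elements (plural) returns an empty list instead of raising NoSuchElementException, so presence can be checked with a plain if. The helper below assumes Selenium 4, where By.XPATH is simply the string "xpath", so no import is needed:

```python
def click_first_present(driver, *xpaths):
    """Click the first element matched by any of the XPaths, in order.
    Returns the XPath that matched, or None if none did.
    find_elements (plural) returns [] instead of raising
    NoSuchElementException, so no try/except is needed."""
    for xpath in xpaths:
        matches = driver.find_elements("xpath", xpath)  # By.XPATH == "xpath"
        if matches:
            matches[0].click()
            return xpath
    return None
```

Usage would be click_first_present(driver, "//bdi[normalize-space()='Close']", "//bdi[normalize-space()='OK']"), falling back to the OK button only when Close is absent.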

Related

get locator from a hover element

I want to get the locator of this element (5,126,601), but I can't seem to get it normally.
I think I have to hover the mouse over the element and then grab the XPath, but I can't hover over it because it is an SVG element. Does anyone know a way to get the locator properly?
here is the link to the website: https://fundingsocieties.com/progress
Well, this element is updated only by hovering over the chart.
This is the unique XPath locator for this element:
"//*[name()='text']//*[name()='tspan' and (contains(@style,'bold'))]"
The entire Selenium command can be:
total_text = driver.find_element(By.XPATH, "//*[name()='text']//*[name()='tspan' and (contains(@style,'bold'))]").text
This can also be done with this CSS Selector: text tspan[style*='bold'], so the Selenium command could be
total_text = driver.find_element(By.CSS_SELECTOR, "text tspan[style*='bold']").text
Well, CSS Selector looks much shorter :)
Clicking on each node in turn will lead to the accompanying text being placed in the highcharts-label element. This text can then be retrieved and the Quarter (1st tspan) be linked to the Total value (4th tspan) that you desire.
url = "https://fundingsocieties.com/progress"
driver.get(url)
chart = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.XPATH, "//div[@data-highcharts-chart='0']"))
)
markers = chart.find_elements(By.XPATH, "//*[local-name()='g'][contains(@class,'highcharts-markers')]/*[local-name()='path']")
for m in markers:
    m.click()
    try:
        element = WebDriverWait(driver, 2).until(
            EC.presence_of_element_located((By.XPATH, "//*[local-name()='g'][contains(@class,'highcharts-label')]/*[local-name()='text']"))
        )
        tspans = element.find_elements(By.XPATH, "./*[local-name()='tspan']")
        if len(tspans) > 3:
            print("%s = %s" % (tspans[0].text, tspans[3].text))
    except TimeoutException:
        pass
The output is as follows:
Q2-2015 = 2
Q3-2015 = 12
....
Q1-2022 = 5,076,978
Q2-2022 = 5,109,680
Q3-2022 = 5,122,480
Q4-2022 = 5,126,601

How to right-click or move mouse to a element used selenium with python?

For example: get the icon of Google.
try:
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.TAG_NAME, 'body'))
    )
    gmail = driver.find_element(By.LINK_TEXT, 'Gmail')
    google_icon = driver.find_element(By.CLASS_NAME, "lnXdpd")
    action = ActionChains(driver)
    action.context_click(google_icon)
finally:
    pass
The context_click doesn't work, so I tried:
# test second
action.move_to_element(google_icon)
# action.context_click(on_element=None)
action.context_click(google_icon)
That doesn't work either, but
gmail.click()
does work. What should I do?

selenium stale element reference: element is not attached to the page document error

I have an e-commerce page and there are multiple products on a page. I need to click the link of a product then return on the main page and click the link of the next product, but when I return, the elements can't be found anymore.
Path = r"C:\Program Files (x86)\chromedriver.exe"
driver = webdriver.Chrome(Path)
driver.get("https://www.emag.ro/")
search_bar = driver.find_element_by_id("searchboxTrigger")
search_bar.send_keys("laptopuri")
search_bar.send_keys(Keys.RETURN)
main = None
try:
    main = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "main-container"))
    )
    print("Page loaded, main retrieved successfully")
    print(" ")
except:
    driver.quit()
products = main.find_elements_by_css_selector("div.card-item.js-product-data")
for product in products:
    raw_name = product.text
    raw_price = product.find_element_by_css_selector("p.product-new-price").text
    link = product.find_element_by_tag_name("a")
    # clicking the link
    link.click()
    spec_page = None
    try:
        spec_page = WebDriverWait(driver, 10).until(
            EC.presence_of_element_located((By.CLASS_NAME, "col-md-12"))
        )
    except:
        driver.quit()
    print(spec_page)
    driver.back()
After the first iteration, I get the following error:
selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
on the line raw_name = product.text, basically at the beginning of the loop. I assume the page is not loading properly or something like that; I tried using time.sleep before going through the loop, but nothing changed.
When you call driver.back(), the browser goes back to the previous page, and by the time it returns to the original page all the previously located elements have become stale. You need to re-find them on each iteration, like below. This should handle the exception:
products = len(main.find_elements_by_css_selector("div.card-item.js-product-data"))
j = 0
for product in range(products):
    # Re-find from the driver each time: after driver.back(), the old
    # references (including main) are stale.
    elements = driver.find_elements_by_css_selector("div.card-item.js-product-data")
    raw_name = elements[j].text
    raw_price = elements[j].find_element_by_css_selector("p.product-new-price").text
    link = elements[j].find_element_by_tag_name("a")
    # clicking the link
    link.click()
    spec_page = None
    try:
        spec_page = WebDriverWait(driver, 10).until(
            EC.presence_of_element_located((By.CLASS_NAME, "col-md-12"))
        )
    except:
        driver.quit()
    print(spec_page)
    j = j + 1
    driver.back()
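Another way to sidestep staleness entirely (a sketch, assuming each card's first <a> carries the product URL in its href attribute): collect the hrefs as plain strings before leaving the list page, then visit each one with driver.get(). Strings never go stale; only WebElement references do. Using the same Selenium 3 style method names as the question:

```python
def collect_product_links(container):
    # Hrefs are plain strings, so they survive page navigations;
    # only WebElement references go stale after driver.back().
    cards = container.find_elements_by_css_selector("div.card-item.js-product-data")
    return [card.find_element_by_tag_name("a").get_attribute("href")
            for card in cards]
```

Then iterate with for url in collect_product_links(main): driver.get(url) and scrape each details page, with no driver.back() needed at all.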

In selenium how to find out the exact number of XPATH links with different ids?

With Python3 and selenium I want to automate the search on a public information site. In this site it is necessary to enter the name of a person, then select the spelling chosen for that name (without or with accents or name variations), access a page with the list of lawsuits found and in this list you can access the page of each case.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException, NoSuchElementException
from selenium.webdriver.common.keys import Keys
import time
import re
Name that will be searched
name = 'JOSE ROBERTO ARRUDA'
Create path, search start link, and empty list to store information
firefoxPath="/home/abraji/Documentos/Code/geckodriver"
link = 'https://ww2.stj.jus.br/processo/pesquisa/?aplicacao=processos.ea'
processos = []
Call driver and go to first search page
driver = webdriver.Firefox(executable_path=firefoxPath)
driver.get(link)
Position cursor, fill and click
WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.CSS_SELECTOR, '#idParteNome'))).click()
time.sleep(1)
driver.find_element_by_xpath('//*[@id="idParteNome"]').send_keys(name)
time.sleep(6)
WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.CSS_SELECTOR, '#idBotaoPesquisarFormularioExtendido'))).click()
Mark all spelling possibilities for searching
WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.CSS_SELECTOR, '#idBotaoMarcarTodos'))).click()
WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.CSS_SELECTOR, '#idBotaoPesquisarMarcados'))).click()
time.sleep(1)
Check how many pages of data there are - to be used in "for range"
capta = driver.find_element_by_xpath('//*[@id="idDivBlocoPaginacaoTopo"]/div/span/span[2]').text
print(capta)
paginas = int(re.search(r'\d+', capta).group(0))
paginas = int(paginas) + 1
print(paginas)
Capture routine
for acumula in range(1, paginas):
    # Fill the field with the page number and press enter
    driver.find_element_by_xpath('//*[@id="idDivBlocoPaginacaoTopo"]/div/span/span[2]/input').send_keys(acumula)
    driver.find_element_by_xpath('//*[@id="idDivBlocoPaginacaoTopo"]/div/span/span[2]/input').send_keys(Keys.RETURN)
    time.sleep(2)
    # Capture the number of processes found on the current page - qt
    qt = driver.find_element_by_xpath('//*[@id="idDivBlocoMensagem"]/div/b').text
    qt = int(qt) + 2
    print(qt)
    # Iterate over the found number of processes
    for item in range(2, qt):
        # Build the XPath of each process link - starts at number 2
        vez = '//*[@id="idBlocoInternoLinhasProcesso"]/div[' + str(item) + ']/span[1]/span[1]/span[1]/span[2]/a'
        print(vez)
        # Access the direct link and click
        element = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, vez)))
        element.click()
        # Run tests to get data
        try:
            num_unico = driver.find_element_by_xpath('//*[@id="idProcessoDetalhesBloco1"]/div[6]/span[2]/a').text
        except NoSuchElementException:
            num_unico = "sem_numero_unico"
        try:
            nome_proc = driver.find_element_by_xpath('//*[@id="idSpanClasseDescricao"]').text
        except NoSuchElementException:
            nome_proc = "sem_nome_encontrado"
        try:
            data_autu = driver.find_element_by_xpath('//*[@id="idProcessoDetalhesBloco1"]/div[5]/span[2]').text
        except NoSuchElementException:
            data_autu = "sem_data_encontrada"
        # Fill dictionary and list
        dicionario = {"num_unico": num_unico,
                      "nome_proc": nome_proc,
                      "data_autu": data_autu
                      }
        processos.append(dicionario)
        # Go back a page to click on the next process
        driver.execute_script("window.history.go(-1)")
# Close driver
driver.quit()
In this case I captured the number of link pages (3) and the total number of links (84). So my initial idea was to run the outer "for" three times and split the 84 links among those iterations.
The direct address of each link is the XPath (//*[@id="idBlocoInternoLinhasProcesso"]/div[41]/span[1]/span[1]/span[1]/span[2]/a), in which I substitute "item" before clicking.
For example, when it reaches number 42 I get an error, because the first page only goes up to 41.
My problem is how to go to the second page and then restart only the inner "for".
I think the ideal would be to know the exact number of links on each of the three pages.
Anyone have any ideas?
The code below replaces the "Capture routine":
wait = WebDriverWait(driver, 20)
# ...
while True:
    links = wait.until(EC.presence_of_all_elements_located((By.XPATH, "//span[contains(@class,'classSpanNumeroRegistro')]")))
    print("links len", len(links))
    for i in range(1, len(links) + 1):
        # Access the direct link and click
        wait.until(EC.element_to_be_clickable((By.XPATH, f"(//span[contains(@class,'classSpanNumeroRegistro')])[{i}]//a"))).click()
        # Run tests to get data
        try:
            num_unico = driver.find_element_by_xpath('//*[@id="idProcessoDetalhesBloco1"]/div[6]/span[2]/a').text
        except NoSuchElementException:
            num_unico = "sem_numero_unico"
        try:
            nome_proc = driver.find_element_by_xpath('//*[@id="idSpanClasseDescricao"]').text
        except NoSuchElementException:
            nome_proc = "sem_nome_encontrado"
        try:
            data_autu = driver.find_element_by_xpath('//*[@id="idProcessoDetalhesBloco1"]/div[5]/span[2]').text
        except NoSuchElementException:
            data_autu = "sem_data_encontrada"
        # Fill dictionary and list
        dicionario = {"num_unico": num_unico,
                      "nome_proc": nome_proc,
                      "data_autu": data_autu
                      }
        processos.append(dicionario)
        # Go back a page to click on the next process
        driver.execute_script("window.history.go(-1)")
    # wait.until(EC.presence_of_element_located((By.CLASS_NAME, "classSpanPaginacaoImagensDireita")))
    next_page = driver.find_elements_by_css_selector(".classSpanPaginacaoProximaPagina")
    if len(next_page) == 0:
        break
    next_page[0].click()
You can run the loop until the next-page button is no longer present on the screen. The logic will look like this:
try:
    next_page = driver.find_element_by_class_name('classSpanPaginacaoProximaPagina')
    if next_page.is_displayed():
        next_page.click()
except NoSuchElementException:
    print('next page does not exist')

How to find a row and click a table element across the table with selenium?

I want to click the link tied to the label for this td.
I can use the onclick to find one item link, but the name changes from HemoGlobin A1C to HGB A1c, etc., and the onclick has no unique ID to search for every time.
I'm using this now:
testname = 'A1c'
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//td[contains(@onclick, '%s')]" % testname))).click()
Please try this:
testname = "a1c"
try:
    elem = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//td[contains(translate(text(), "AC", "ac"), "{}")]/following-sibling::td[3]//td'.format(testname))))
except TimeoutException:
    print("Element not found")
else:
    elem.click()
Explanation:
//td[contains(translate(text(), "AC", "ac"), "a1c")]: First find a td element which contains the text 'A1C' or 'A1c' (or 'a1C' or 'a1c'). Here translate() is an XPath function which replaces every 'A' and 'C' with 'a' and 'c'.
/following-sibling::td[3]//td: Then we go to a sibling of that td element, which in your case is the third sibling of the same type, and then we find the child td element inside it.
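XPath 1.0 has no lower-case() function, which is why translate() is the usual trick for case-insensitive matching. Its behaviour can be modelled in plain Python to sanity-check the expression (a standalone illustration, not part of the answer above):

```python
def xpath_translate(s, src, dst):
    # Model of XPath 1.0 translate(s, src, dst): each character of s that
    # occurs in src is replaced by the character at the same index in dst;
    # if src is longer than dst, the surplus src characters are deleted.
    out = []
    for ch in s:
        i = src.find(ch)  # first occurrence wins, per the spec
        if i == -1:
            out.append(ch)        # not in src: copied through unchanged
        elif i < len(dst):
            out.append(dst[i])    # mapped to the corresponding dst character
        # else: in src but beyond the end of dst -> removed
    return "".join(out)

# Both spellings from the question normalise to contain "a1c":
print(xpath_translate("HemoGlobin A1C", "AC", "ac"))  # HemoGlobin a1c
print(xpath_translate("HGB A1c", "AC", "ac"))         # HGB a1c
```

So contains(translate(text(), "AC", "ac"), "a1c") matches a td no matter which of the label variants the page happens to use.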
Please try this one to check whether it works:
testname = "A1c"
element = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//td[text()[contains(.,'" + testname + "')]]")))
element.click()
