Why does the for loop stop when an exception is raised? - python

The for loop stops in the except case, when this line runs:
page.close()
from selenium import webdriver

page = webdriver.Chrome("chromedriver")
page.maximize_window()

def test():
    for i in range(10):
        page.execute_script("window.open()")
        page.switch_to.window(page.window_handles[i + 1])
        page.get(f"https://haraj.com.sa/119174396{i}")
        try:
            Object = page.find_element_by_class_name("contact")
            Object.click()
        except:
            page.close()
            print("Element not found")

test()
If the element ("contact") is found, it is clicked and the page stays open in a browser tab.
If the element ("contact") is not found, the page should be closed and the for loop should continue.
If I comment out #page.close(), the for loop continues, the page I want to close stays open in a browser tab, and print("Element not found") is executed.
Are there other ways to close the page that does not contain the element ("contact") and continue the for loop?

The main problem in your code is that you are closing the window and then trying to locate elements with the same driver, which generates an error.
There are two solutions below.
Solution 1:
from selenium import webdriver

page = webdriver.Chrome("chromedriver")
page.maximize_window()

def test():
    global page  # page is reassigned below, so it must be declared global
    for i in range(10):
        page.execute_script("window.open()")
        page.switch_to.window(page.window_handles[i + 1])
        page.get(f"https://haraj.com.sa/119174396{i}")
        try:
            Object = page.find_element_by_class_name("contact")
            Object.click()
        except:
            page.close()
            print("Element not found")
            page = webdriver.Chrome("chromedriver")
            page.maximize_window()

test()
When the element ('contact') is not found, the code above closes the browser, reopens it, and continues execution.
Solution 2:
from selenium import webdriver

page = webdriver.Chrome("chromedriver")
page.maximize_window()

def test():
    for i in range(10):
        page.execute_script("window.open()")
        page.switch_to.window(page.window_handles[i + 1])
        page.get(f"https://haraj.com.sa/119174396{i}")
        try:
            Object = page.find_element_by_class_name("contact")
            Object.click()
        except:
            print("Element not found")

test()
The code above does not close the browser, so the state remains the same as before the exception and execution continues.
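A third option, not shown above, is to close only the failing tab and then switch the driver back to a window that is still open; after page.close() the driver has no current window, which is exactly what breaks the loop. A minimal sketch of that approach (untested against the site, and keeping the question's find_element_by_class_name call):

from selenium import webdriver

page = webdriver.Chrome("chromedriver")
page.maximize_window()

def test():
    for i in range(10):
        page.execute_script("window.open()")
        page.switch_to.window(page.window_handles[-1])  # newest tab
        page.get(f"https://haraj.com.sa/119174396{i}")
        try:
            page.find_element_by_class_name("contact").click()
        except Exception:
            page.close()  # closes only the current tab, not the browser
            print("Element not found")
            # after close() the driver has no current window,
            # so switch back to one that is still open
            page.switch_to.window(page.window_handles[0])

test()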

Related

How do I click on the "Next" button until it disappears in Playwright (Python)

Here is the code I am using to click the next button. The problem is that after the first page is loaded, it closes the browser rather than clicking the next button until it disappears. (I know it is a simple HTML website, but I am learning Playwright, so I am starting light.)
I am using the get_by_text() function; I have used this kind of loop to achieve similar results with Selenium in Python.
Any suggestions on how to make this happen?
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.firefox.launch(headless=False)
    page = browser.new_page()
    page.goto("https://books.toscrape.com/")
    while True:
        try:
            next = page.get_by_text("Next")  ## next clicker
            next.click()
        except:
            break
Maybe try putting a short pause in each iteration of the loop:
from playwright.sync_api import sync_playwright
from time import sleep

with sync_playwright() as p:
    browser = p.firefox.launch(headless=False)
    page = browser.new_page()
    page.goto("https://books.toscrape.com/")
    while True:
        try:
            next = page.get_by_text("Next")  # next clicker
            next.click()
            sleep(2)
        except Exception:
            break
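Relying on the exception to end the loop works, but Playwright can also check for the link directly instead of waiting for a failed click. A sketch using a locator count (assuming the pager markup on books.toscrape.com, where the link sits inside an <li class="next"> element):

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.firefox.launch(headless=False)
    page = browser.new_page()
    page.goto("https://books.toscrape.com/")
    while True:
        next_link = page.locator("li.next a")  # pager link on this site
        if next_link.count() == 0:  # the last page has no "next" link
            break
        next_link.click()
        page.wait_for_load_state()
    browser.close()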

How do I convert this Selenium next-button-clicking code into a loop so I can get the URLs of all the pages until the next button disappears

Hello, I wrote this Selenium code to click the Next button and give me the URL of the next page.
The question is:
I want to convert this code into a loop so I can click the next button and collect all the URLs until the next button disappears.
How do I put all the collected URLs in a list?
next = driver.find_element(By.LINK_TEXT, "Next")
next.click()
urrl = driver.current_url
print(urrl)
driver.quit()
I tried a while True loop for this.
while True:
    try:
        urrl = driver.current_url  ## I tried this line after clicking the next button as well
        next = driver.find_element(By.LINK_TEXT, "Next")
        next.click()
    except:
        break
I was able to click the next button until the end, but I cannot figure out how to collect the URL of each page and append it to a list.
I tried append, but I think I am doing something wrong.
You can write a function to test if the element exists:
def is_element_exists(xpath, id_flag=False):
    try:
        if id_flag:
            driver.find_element_by_id(xpath)
        else:
            driver.find_element_by_xpath(xpath)
        return True
    except Exception as e:
        # print("Exception:[%s][%s]" % (e, traceback.format_exc()))
        print('do not find the node')
        return False
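For example, the helper can drive the paging loop directly (a sketch; the XPath for the Next link is an assumption about your page):

urls = []
# keep paging while a "Next" link exists
while is_element_exists("//a[text()='Next']"):
    urls.append(driver.current_url)
    driver.find_element_by_xpath("//a[text()='Next']").click()
urls.append(driver.current_url)  # URL of the last page
print(urls)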
You can define a list and append the collected URLs to it as follows.
The list should be defined before, and outside of, the loop.
urls = []
while True:
    try:
        urrl = driver.current_url
        urls.append(urrl)
        next = driver.find_element(By.LINK_TEXT, "Next")
        next.click()
    except:
        break
print(urls)
The code above is generic. You will probably need to scroll to the "Next" button, wait for it to become clickable, etc.
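A sketch of that scrolling and waiting with explicit waits (imports included; the 10-second timeout is an arbitrary choice):

from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

urls = []
while True:
    urls.append(driver.current_url)
    try:
        # wait until the "Next" link is present and clickable
        next_btn = WebDriverWait(driver, 10).until(
            EC.element_to_be_clickable((By.LINK_TEXT, "Next"))
        )
        # scroll it into view before clicking
        driver.execute_script("arguments[0].scrollIntoView(true);", next_btn)
        next_btn.click()
    except TimeoutException:
        break
print(urls)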

Handling website errors with selenium python

I am scraping a website with Selenium and send an alert if something specific happens. Generally, my code works fine, but sometimes the website doesn't load the elements, or the website shows an error message like: "Sorry, something went wrong! Please refresh the page and try again!" In both cases, my script waits for the elements to load, but they never do, and then my program doesn't do anything. I usually use requests and BeautifulSoup for web scraping, so I am not that familiar with Selenium, and I am not sure how to handle these errors: my code doesn't raise an error message and just waits for the elements to load, which will likely never happen. If I manually refresh the page, the program continues to work. My idea would be something like: if it takes more than 10 seconds to load, refresh the page and try again.
My code looks somewhat like this:
def get_data():
    data_list = []
    while len(data_list) < 3:
        try:
            data = driver.find_elements_by_class_name('text-color-main-secondary.text-sm.font-bold.text-left')
            count = len(data)
            data_list.append(data)
            driver.implicitly_wait(2)
            time.sleep(.05)
            driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
            WebDriverWait(driver, 3).until(EC.visibility_of_element_located(
                (By.CLASS_NAME, 'text-color-main-secondary.text-sm.font-bold.text-left'.format(str(count + 1)))))
        except TimeoutException:
            break
    text = []
    elements = []
    for i in range(len(data_list)):
        for j in range(len(data_list[i])):
            t = data_list[i][j].text
            elements.append(data_list[i][j])
            for word in t.split():
                if '#' in word:
                    text.append(word)
    return text, elements
option = webdriver.ChromeOptions()
option.add_extension('')
path = ''
driver = webdriver.Chrome(executable_path=path, options=option)
driver.get('')
login(passphrase)
driver.switch_to.window(driver.window_handles[0])

while True:
    try:
        infos, elements = get_data()
        data, message = check_data(infos, elements)
        if data:
            send_alert(message)
        time.sleep(600)
        driver.refresh()
    except Exception as e:
        exception_type, exception_object, exception_traceback = sys.exc_info()
        line_number = exception_traceback.tb_lineno
        print("an exception occurred - {}".format(e) + " in line: " + str(line_number))
You can use try and except to overcome this problem. First, wait for the element with a 10-second timeout; if the element is not present, refresh the page. Here is a basic version of the code:
try:
    # wait 10 s for the element to load; if it does not, control jumps to the except block
    WebDriverWait(driver, 10).until(EC.visibility_of_element_located(
        (By.CLASS_NAME, 'text-color-main-secondary.text-sm.font-bold.text-left'.format(str(count + 1)))))
except TimeoutException:
    driver.refresh()
    # locate the element here again
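Wrapped up as a small retry helper (a sketch; wait_or_refresh is a made-up name, and the class name is the one from the question):

from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_or_refresh(driver, locator, timeout=10, retries=3):
    """Wait for an element; refresh the page and retry if it never appears."""
    for _ in range(retries):
        try:
            return WebDriverWait(driver, timeout).until(
                EC.visibility_of_element_located(locator)
            )
        except TimeoutException:
            driver.refresh()
    raise TimeoutException("element never became visible: {}".format(locator))

element = wait_or_refresh(
    driver, (By.CLASS_NAME, 'text-color-main-secondary.text-sm.font-bold.text-left')
)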

Unable to fetch all the necessary links during Iteration - Selenium Python

I am a newbie to Selenium with Python. I am trying to fetch profile URLs, which come 10 per page. Without using while, I am able to fetch all 10 URLs, but only for the first page. When I use while, it iterates through the pages but fetches only 3 or 4 URLs per page.
I need to fetch all 10 links and keep iterating through the pages. I think I must do something about StaleElementReferenceException.
Kindly help me solve this problem.
The code is given below.
def test_connect_fetch_profiles(self):
    driver = self.driver
    search_data = driver.find_element_by_id("main-search-box")
    search_data.clear()
    search_data.send_keys("Selenium Python")
    search_submit = driver.find_element_by_name("search")
    search_submit.click()
    noprofile = driver.find_elements_by_xpath("//*[text() = 'Sorry, no results containing all your search terms were found.']")
    self.assertFalse(noprofile)
    while True:
        wait = WebDriverWait(driver, 150)
        try:
            profile_links = wait.until(EC.presence_of_all_elements_located((By.XPATH, "//*[contains(@href,'www.linkedin.com/profile/view?id=')][text()='LinkedIn Member' or contains(@href,'Type=NAME_SEARCH')][contains(@class,'main-headline')]")))
            for each_link in profile_links:
                page_links = each_link.get_attribute('href')
                print(page_links)
                driver.implicitly_wait(15)
                appendFile = open("C:\\Users\\jayaramb\\Documents\\profile-links.csv", 'a')
                appendFile.write(page_links + "\n")
                appendFile.close()
                driver.implicitly_wait(15)
            next = wait.until(EC.visibility_of(driver.find_element_by_partial_link_text("Next")))
            if next.is_displayed():
                next.click()
            else:
                print("End of Page")
                break
        except ValueError:
            print("It seems no values to fetch")
        except NoSuchElementException:
            print("No Elements to Fetch")
        except StaleElementReferenceException:
            print("No Change in Element Location")
        else:
            break
Please let me know if there are any other effective ways to fetch the required profile URLs and keep iterating through the pages.
I created a similar setup that works all right for me. I've had some problems with Selenium trying to click on the next button but throwing a WebDriverException instead, likely because the next button is not in view. Hence, instead of clicking the next button, I get its href attribute and load the new page with driver.get(), thus avoiding an actual click and making the test more stable.
def test_fetch_google_links():
    links = []
    # Setup driver
    driver = webdriver.Firefox()
    driver.implicitly_wait(10)
    driver.maximize_window()
    # Visit google
    driver.get("https://www.google.com")
    # Enter search query
    search_data = driver.find_element_by_name("q")
    search_data.send_keys("test")
    # Submit search query
    search_button = driver.find_element_by_xpath("//button[@type='submit']")
    search_button.click()
    while True:
        # Find and collect all anchors
        anchors = driver.find_elements_by_xpath("//h3//a")
        links += [a.get_attribute("href") for a in anchors]
        try:
            # Find the next page button
            next_button = driver.find_element_by_xpath("//a[@id='pnnext']")
            location = next_button.get_attribute("href")
            driver.get(location)
        except NoSuchElementException:
            break
    # Do something with the links
    for l in links:
        print(l)
    print("Found {} links".format(len(links)))
    driver.quit()

Web element not detected in Selenium in a for loop

I'm trying to fetch some information from specific web elements. When I try to fetch the information without a for loop, the program works like a charm. But when I put the same code in a for loop, it does not detect the web elements. Here's the code I have been trying:
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
import time
from lxml import html
import requests
import xlwt

browser = webdriver.Firefox()  # Get local session of firefox
# wait until the pages are loaded
browser.implicitly_wait(3)  # 3 secs should be enough. if not, increase it
browser.get("http://ae.bizdirlib.com/taxonomy/term/1493")  # Load page
links = browser.find_elements_by_css_selector("h2 > a")

def test():  # test function
    elems = browser.find_elements_by_css_selector("div.content.clearfix > div > fieldset > div > ul > li > span")
    print(elems)
    for elem in elems:
        print(elem.text)
    elem1 = browser.find_elements_by_css_selector("div.content.clearfix > div > fieldset > div > ul > li > a")
    for elems21 in elem1:
        print(elems21.text)
    return 0

for link in links:
    link.send_keys(Keys.CONTROL + Keys.RETURN)
    link.send_keys(Keys.CONTROL + Keys.PAGE_UP)
    time.sleep(5)
    test()  # Want to call test function
    link.send_keys(Keys.CONTROL + 'w')
The output I get when I print the object is an empty array: []. Can somebody help me enhance it? I'm a newbie to Selenium.
In a previous question I asked about printing, but the real problem is that the element itself is not being detected, so this question is totally different.
I couldn't open the page, but as I understand it, you want to open the links sequentially and do something on each. Your links open in a new tab, and with link.send_keys(Keys.CONTROL + 'w') you are closing that newly opened tab. In this situation you must switch to the new window so that you can reach the elements in it. You can query the windows via driver.window_handles, switch to the last window with driver.switch_to_window(driver.window_handles[-1]), and after you close the window, switch back to the first window with driver.switch_to_window(driver.window_handles[0]).
for link in links:
    link.send_keys(Keys.CONTROL + Keys.RETURN)
    # switch to new window
    driver.switch_to_window(driver.window_handles[-1])
    link.send_keys(Keys.CONTROL + Keys.PAGE_UP)  # don't know why
    time.sleep(5)
    test()  # Want to call test function
    link.send_keys(Keys.CONTROL + 'w')
    # switch back to the first window
    driver.switch_to_window(driver.window_handles[0])
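Note that switch_to_window is deprecated in newer Selenium releases in favor of switch_to.window (the form the first question above already uses). A sketch of the same loop with the modern call, using the browser variable from the question and closing the tab through the driver instead of a keystroke:

for link in links:
    link.send_keys(Keys.CONTROL + Keys.RETURN)
    # modern API: switch_to.window replaces the deprecated switch_to_window
    browser.switch_to.window(browser.window_handles[-1])
    time.sleep(5)
    test()  # scrape the elements in the new tab
    browser.close()  # close the current tab
    browser.switch_to.window(browser.window_handles[0])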
