I have some Selenium code that inputs search terms into a website's search fields:
browser = webdriver.Chrome()
browser.get(url)
search_box1 = browser.find_element_by_id('searchText-0')
search_box2 = browser.find_element_by_id('searchText-2-dateInput')
search_box1.send_keys("Foobars")
search_box1.send_keys("2013")
search_box1.submit()
and then I have more code written to grab the number of hits that result from the given search query. However, for some values of "Foobars" in particular years, there are no hits and the query results in a page like this:
<body class="search">
<div id="skip">...</div>
<div style = "display:non;">...</div>
<div id="container" class="js">
<div id="header">...</div>
<div id="search">...</div>
<div id="helpContent">...</div>
<div id="main-body" class="noBg">
<div class="error">
<div>Sorry. There are no articles that contain all the keywords you entered.</div>
<p>Possible reasons:
</p>
<ul></ul>
<p></p>
<p></p>
</div>
</div>
How can I detect that the result is this error page rather than the page I get when the search returns hits? I planned to use an if statement to check whether the query returned anything, but I can't figure out the right syntax to locate the error element. I've tried things like:
Error=browser.find_element_by_name('error')
or
Error = browser.find_element_by_xpath("//div[@class='error']")
But I keep getting the error:
selenium.common.exceptions.NoSuchElementException: Message: u'no such element\n
I want to identify the error element so I can do something like:
if Error == "There are no articles that contain all the keywords you entered":
    # do something
else:
    # do something else
or, even better, something that simply tells me whether the error element exists, which I could use for the conditional. Any help would be much appreciated.
Perhaps you are getting there too fast? Try waiting for the element explicitly:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
....
Error = WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.XPATH, "//div[@class='error']")))
I found a solution that seems to work pretty well. You need to import the Selenium common exceptions module and then use try/except. In the example code I navigate to the page given by url, enter Foobars and 2013 into the two search fields, and then recover the number of hits that result, which is stored in navBreadcrumb.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import NoSuchElementException
browser = webdriver.Chrome()
browser.get(url)
search_box1 = browser.find_element_by_id('searchText-0')
search_box2 = browser.find_element_by_id('searchText-2-dateInput')
search_box1.send_keys("Foobars")
search_box1.send_keys("2013")
search_box1.submit()
try:
    Hits = browser.find_element_by_id('navBreadcrumb').text
    Hits = int(Hits)
except NoSuchElementException:
    Hits = 0
browser.quit()
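An alternative that avoids try/except: the plural find_elements_by_* methods return an empty list instead of raising NoSuchElementException, so the list's truthiness can serve as the existence check. A minimal sketch along the same lines as the code above:
# find_elements returns [] when nothing matches, so no exception handling is needed
errors = browser.find_elements_by_class_name('error')
if errors:
    Hits = 0  # the "no articles" error page was shown
else:
    Hits = int(browser.find_element_by_id('navBreadcrumb').text)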
Trying to identify a JavaScript button on a website and press it to extend the page.
The website in question is the Tencent appstore after performing a basic search. At the bottom of the page is a button with the class "load-more-btn-new" which, when pressed, extends the page with more apps.
The HTML is as follows:
<div data-v-33600cb4="" class="load-more-btn-new" style="">
<a data-v-33600cb4="" href="javascript:void(0);">加载更多
<i data-v-33600cb4="" class="load-more-icon">
</i>
</a>
</div>
At first I thought I could identify the button using BeautifulSoup, but all calls to find() came back empty.
from selenium import webdriver
from bs4 import BeautifulSoup
import time

url = 'https://webcdn.m.qq.com/webapp/homepage/index.html#/appSearch?kw=%25E7%2594%25B5%25E5%25BD%25B1'
driver = webdriver.Chrome('/chromedriver')
driver.get(url)
time.sleep(5)

# Find using BeautifulSoup
soup = BeautifulSoup(driver.page_source, 'lxml')
button = soup.find('div', {'class': 'load-more-btn-new'})
# [0] button comes back empty
After looking around here, it became apparent that even if I could find it in BeautifulSoup, it would not help in pressing the button. Next I tried to find the element with the driver and use .click():
driver.find_element_by_class_name('div.load-more-btn-new').click()
# [1] NoSuchElementException
driver.find_element_by_css_selector('.load-more-btn-new').click()
# [2] NoSuchElementException
driver.find_element_by_class_name('a.load-more-new.load-more-btn-new[data-v-33600cb4]').click()
# [3] NoSuchElementException
but all return the same error: NoSuchElementException
Your selectors won't work because they do not point at the <a>.
This one selects by class name, but passes a tag plus class, and would in any case click the <div> that holds your <a>:
driver.find_element_by_class_name('div.load-more-btn-new').click()
This one is very close but is missing the a in the selection:
driver.find_element_by_css_selector('.load-more-btn-new').click()
This one calls find_element_by_class_name but passes a wild mix of tag, attribute, and classes:
driver.find_element_by_class_name('a.load-more-new.load-more-btn-new[data-v-33600cb4]').click()
How to fix?
Select your element more specifically, much like in your second approach:
driver.find_element_by_css_selector('.load-more-btn-new a').click()
or
driver.find_element_by_css_selector('a[data-v-33600cb4]').click()
Note:
With newer Selenium versions you will get: DeprecationWarning: find_element_by_* commands are deprecated. Please use find_element() instead
from selenium.webdriver.common.by import By
driver.find_element(By.CSS_SELECTOR, '.load-more-btn-new a').click()
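If the goal is to keep extending the page until all apps are loaded, the click can be wrapped in a loop. A rough sketch, assuming the button stops being clickable (or disappears) once everything has loaded:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

while True:
    try:
        # wait for the "load more" link to become clickable, then click it
        button = WebDriverWait(driver, 5).until(
            EC.element_to_be_clickable((By.CSS_SELECTOR, '.load-more-btn-new a')))
        button.click()
    except TimeoutException:
        break  # button gone or no longer clickable: nothing more to load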
I'm trying to get the content of a tag, but it raises NoSuchElementException even though getting it from another tag at the same level succeeds.
This is the link to the website: https://soundcloud.com/pastlivesonotherplanets/sets/spell-jars-from-mars
This is the HTML that I'm accessing:
<div class="fullHero__tracksSummary">
<div class="playlistTrackCount sc-font">
<div class="genericTrackCount sc-font large m-active" title="16 tracks">
<div class="genericTrackCount__title">16</div>
<div class="genericTrackCount__subtitle"> Tracks </div>
<div class="genericTrackCount__duration sc-type-small sc-text-body sc-type-light sc-text-secondary">56:07</div>
</div>
</div>
</div>
I'm trying to get the playlist's duration with this code:
try:
    tmp = driver.find_element_by_xpath("//div[@class='fullHero__tracksSummary']")
    duration = tmp.find_element_by_class_name("genericTrackCount__duration sc-type-small sc-text-body sc-type-light sc-text-secondary").get_attribute('textContent')
    print(duration)
except:
    print("None")
It raised NoSuchElementException even though the other two tags were fetched successfully.
What is the problem and how can I fix it?
Thank you for your time.
I think you can try directly using the xpath //div[contains(@class, 'duration')] OR
//div[contains(@class, 'playlistTrackCount')]/descendant::div[contains(@class, 'duration')]
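For instance, a minimal use of the first XPath (assuming the driver has already loaded the playlist page):
duration = driver.find_element_by_xpath("//div[contains(@class, 'duration')]").text
print(duration)  # "56:07" for the page above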
Without looking at the page, you probably need to wait for the element to load.
You can use either time.sleep(5) (5 being the number of seconds to wait) or WebDriverWait(driver, 20) with an expected condition,
so your code would look like:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

duration = WebDriverWait(driver, 20).until(
    EC.visibility_of_element_located((By.CLASS_NAME, 'genericTrackCount__duration'))).text
Note that By.CLASS_NAME accepts a single class name, which is also why the original call with several space-separated classes fails.
Also, maybe get_attribute('textContent') is failing; you can just use .text instead.
I'm trying to iterate through a certain column of rows on a table/grid of an HTML page, which I assume is a dynamic Angular element.
I tried to iterate through the rows by creating a list of XPaths common to each row, but this only got me 32 rows, not the full 332. I also tried waiting to see whether the page would finish loading and expose the full set of web elements. Then I tried looping over similar XPaths while scrolling down to the last element in the list. None of these approaches let me iterate through all the rows. Also, I won't be able to share the website, since it is private.
webelement = []
driver.implicitly_wait(20)
ranSleep()
for webelement in driver.find_elements_by_xpath('//a[@class="ng-pristine ng-untouched ng-valid ng-binding ng-scope ng-not-empty"]'):
    driver.implicitly_wait(20)
The HTML for the rows:
<a ng-model="row.entity.siteCode"
ng-click="grid.appScope.openSite(row.entity)"
style="cursor:pointer"
class="ng-pristine ng-untouched ng-valid ng-binding ng-scope ng-not-empty">
Albuquerque
<span title="Open defect(s) on site"
ng-show="row.entity.openDeficiencies"
style="background-color:yellow; color:#000;"
class="ng-hide">
!
</span>
</a>
I expect to be able to click all the links in each row once this is solved.
Here is a snippet of the HTML code:
<div id="table1" class="container-fluid">
<div ui-i18n="en"
class="grid advanceSearch ui-grid ng-isolate-scope grid1554731599680"
id="grid1" ui-grid="gridOptions"
ui-grid-expandable="" ui-grid-rowedit=""
ui-grid-resize-columns="" ui-grid-selection=""
ui-grid-edit="" ui-grid-move-columns="">
<!-- TODO (c0bra): add "scoped" attr here, eventually? -->
<style ui-grid-style="" class="ng-binding">
.grid1554731599680 {
/* Styles for the grid */
}
Here is how the page looks with the table format:
Here are the rows that I want to click through:
You might still be able to increment through each link by appending to the class name, since the class names seem to be nearly unique and use a letter of the alphabet as their last character. Perhaps something like the code below could work. Extending the class name's last character, in case the row count grows, should solve the problem of there being more than 26.
Steps taken: increment class names > append successes to a list > move to each link in the list > click the link
import string
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

alpha = string.ascii_uppercase
successfulIncs = []
for char in alpha:
    # the class is compound (contains dots), so use a CSS selector;
    # find_elements returns an empty list rather than raising, so test for matches
    selector = '.ng-pristine.ng-scope.ui-grid-coluiGrid-000' + char
    if driver.find_elements_by_css_selector(selector):
        successfulIncs.append(selector)

for selector in successfulIncs:
    # first move to the element, then click it
    link = WebDriverWait(driver, 3).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, selector)))
    ActionChains(driver).move_to_element(link).click(link).perform()
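Separately, ui-grid virtualizes rows: only the rows currently scrolled into view exist in the DOM, which would explain why only 32 of the 332 rows are ever found. Below is a rough sketch that scrolls the grid's viewport to force the remaining rows to render; the '.ui-grid-viewport' selector and the half-second pause are assumptions rather than details taken from the page.
import time

seen = set()
# '.ui-grid-viewport' is the standard ui-grid scroll container; adjust if the page differs
viewport = driver.find_element_by_css_selector('.ui-grid-viewport')
while True:
    rows = driver.find_elements_by_css_selector('a[ng-click*="openSite"]')
    before = len(seen)
    seen.update(row.text for row in rows)
    if len(seen) == before:
        break  # no new rows rendered; we've reached the bottom
    # scroll the grid's viewport (not the window) so ui-grid renders the next batch
    driver.execute_script('arguments[0].scrollTop += arguments[0].clientHeight;', viewport)
    time.sleep(0.5)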
What is wrong with the code below?
import os
import time
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("http://x.x.x.x/html/load.jsp")
elm1 = driver.find_element_by_link_text("load")
time.sleep(10)
elm1.click()
time.sleep(30)
driver.close()
The page source is
<body>
<div class="formcenterdiv">
<form class="form" action="../load" method="post">
<header class="formheader">Loader</header>
<div align="center"><button class="formbutton">load</button></div>
</form>
</div>
</body></html>
I want to click the load button. When I run the above code I get this error:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: load
As the documentation says, find_element_by_link_text only works on <a> tags:
Use this when you know link text used within an anchor tag. With this
strategy, the first element with the link text value matching the
location will be returned. If no element has a matching link text
attribute, a NoSuchElementException will be raised.
The solution is to use a different selector like find_element_by_class_name:
elm1 = driver.find_element_by_class_name('formbutton')
Did you try using XPath?
As noted above, find_element_by_link_text works on <a> tags only.
The code below might help you out:
driver.find_element_by_xpath("/html/body/div/form/div/button")
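For something less brittle than an absolute path, an XPath keyed on the button's visible text should also work here (a sketch based on the page source above):
elm1 = driver.find_element_by_xpath("//button[text()='load']")
elm1.click()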
I'm trying to fetch data from a Link. I want to fetch the name/email/location/etc. content from the web page, but whenever I run my code it just stores a blank list. Please help me copy these data from the web page.
I want to fetch the company name, email, and phone number from this Link and put them in an Excel file, and then do the same for all pages of the website. I have the logic to fetch the links in the browser and switch between them, but I'm unable to fetch the data itself. Can anybody suggest an enhancement to the code I have written?
Below is the code I have written:
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
import time
from lxml import html
import requests
import xlwt
browser = webdriver.Firefox()  # Get local session of firefox
# wait until the pages are loaded
browser.implicitly_wait(3)  # 3 secs should be enough. if not, increase it
browser.get("http://ae.bizdirlib.com/taxonomy/term/1493")  # Load page
links = browser.find_elements_by_css_selector("h2 > a")
#print link
for link in links:
    link.send_keys(Keys.CONTROL + Keys.RETURN)
    link.send_keys(Keys.CONTROL + Keys.PAGE_UP)
    #tree = html.fromstring(link.text)
    time.sleep(5)
    companyNameElement = browser.find_elements_by_css_selector(".content.clearfix>div>fieldset>div>ul>li").text
    companyName = companyNameElement
    print companyNameElement
The HTML code is given below:
<div class="content">
<div id="node-946273" class="node node-country node-promoted node-full clearfix">
<div class="content clearfix">
<div itemtype="http://schema.org/Corporation" itemscope="">
<fieldset>
<legend>Company Information</legend>
<div style="width:100%;">
<div style="float:right; width:340px; vertical-align:top;">
<br/>
<ul>
<li>
<strong>Company Name</strong>
:
<span itemprop="name">Sabbro - F.Z.C</span>
</li>
</ul>
When I use it, it gives me the error "'list' object has no attribute 'text'". Can somebody help me enhance the code and make it work? I'm stuck on this issue.
companyNameElement = browser.find_elements_by_css_selector(".content.clearfix>div>fieldset>div>ul>li").text
companyName = companyNameElement
print companyNameElement
find_elements_by... returns a list; you can either access the first element of that list or use the equivalent find_element_by... method, which returns just the first matching element.
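A minimal sketch of both fixes, using the selector from the question:
# option 1: index into the list returned by find_elements_by...
companyName = browser.find_elements_by_css_selector(".content.clearfix>div>fieldset>div>ul>li")[0].text
# option 2: find_element_by... (singular) returns the first match directly
companyName = browser.find_element_by_css_selector(".content.clearfix>div>fieldset>div>ul>li").text
print companyName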