I'm trying to scrape the "10 Year Australian bond" prices table on this website:
https://www2.asx.com.au/markets/trade-our-derivatives-market/derivatives-market-prices/bond-derivatives
The following code works fine:
url = 'https://www2.asx.com.au/markets/trade-our-derivatives-market/derivatives-market-prices/bond-derivatives'
driver.get(url)
time.sleep(2)
# accept the cookie banner
driver.find_element_by_id('onetrust-accept-btn-handler').click()
time.sleep(2)
# click the "10 Year Australian bond" tab
driver.find_element_by_xpath('/html/body/div[1]/div/div[3]/div/div[1]/div/ul/li[3]').click()
time.sleep(2)
tbl = driver.find_element_by_id('tab-panel_17').get_attribute('outerHTML')
soup = BeautifulSoup(tbl, 'html.parser')
aus_10y_future = pd.read_html(str(soup))[0]
In order to click on the "10 Year Australian bond" tab, I tried to use a relative XPath instead of an absolute one.
So, instead of
driver.find_element_by_xpath('/html/body/div[1]/div/div[3]/div/div[1]/div/ul/li[3]').click()
I tried:
driver.find_element_by_xpath('//*[contains(@class_name,"cmp-tabs__tabtitle")]/li[3]').click()
but I get an error. What am I doing wrong?
Thanks
class_name is not a valid element attribute; it should be class.
Also, you are targeting the wrong element.
This should work:
driver.find_element_by_xpath("//ul[contains(@class,'cmp-tabs__tablist')]//li[3]").click()
A CSS selector is often a sturdier locator than an XPath.
You can further use this CSS selector:
li[data-f2-context-code='XT']
in code :
driver.find_element_by_css_selector("li[data-f2-context-code='XT']").click()
Or you can construct a better XPath like this:
//li[@data-f2-context-code='XT']
and use it like this:
driver.find_element_by_xpath("//li[@data-f2-context-code='XT']").click()
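As a side note, the attribute selector above is just the tag[attr='value'] pattern, and the same click translates directly to Selenium 4, where the find_element_by_* helpers have been removed. A purely illustrative sketch (no browser needed to see the selector itself):

```python
# Selenium 4 removes the find_element_by_* helpers; the equivalent call there
# would be: driver.find_element(By.CSS_SELECTOR, "li[data-f2-context-code='XT']").click()
# The selector itself is just the tag[attr='value'] attribute pattern:
tag, attr, value = "li", "data-f2-context-code", "XT"
selector = f"{tag}[{attr}='{value}']"
print(selector)  # → li[data-f2-context-code='XT']
```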
I am trying to print an element's title attribute using Selenium. As far as I can tell, "a" is the tag and "title" is the attribute. See the HTML below.
When I run the following code:
print(driver.find_elements(By.TAG_NAME, "a")[0].get_attribute('title'))
I get the output:
Zero Tolerance
So I'm getting the first attribute correctly. When I increment the code above:
print(driver.find_elements(By.TAG_NAME, "a")[1].get_attribute('title'))
My expected output is:
Aaliyah Love
However, I'm just getting a blank line, with no errors. What am I doing incorrectly? Please don't suggest using XPath or CSS; I'm trying to learn Selenium tag locators.
HTML:
<a class=" Link ScenePlayer-ChannelName-Link styles_1lHAYbZZr4 Link ScenePlayer-ChannelName-Link styles_1lHAYbZZr4" href="/en/channel/ztfilms" title="Zero Tolerance" rel="">Zero Tolerance</a>
...
<a class=" Link ActorThumb-ActorImage-Link styles_3dXcTxVCON Link ActorThumb-ActorImage-Link styles_3dXcTxVCON" href="/[RETRACTED]/Aaliyah-Love/63565" title="Aaliyah Love"
Selenium locators are a toolbox and you're saying you only want to use a screwdriver (By.TAG_NAME) for all jobs. We aren't saying that you shouldn't use By.TAG_NAME, we're saying that you should use the right tool for the right job and sometimes (most times) By.TAG_NAME is not the right tool for the job. CSS selectors are WAY more powerful locators because they can search for not only tags but also classes, properties, etc.
It's hard to say for sure what's going on without access to the site/page. It could be that the entire page isn't loaded and you need to add a wait for the page to finish loading (maybe count links expected on the page?). It could be that your locator isn't specific enough and is catching other A tags that don't have a title attribute.
I would start by doing some debugging.
links = driver.find_elements(By.TAG_NAME, "a")
for link in links:
print(link.get_attribute('title'))
What does this print?
If it prints some blank lines sprinkled throughout the actual titles, your locator is probably not specific enough. Try a CSS selector
links = driver.find_elements(By.CSS_SELECTOR, "a[title]")
for link in links:
print(link.get_attribute('title'))
If instead it returns some titles and then nothing but blank lines, the page is probably not fully loaded. Try something like
count = 20  # the number of expected links on the page
link_locator = (By.TAG_NAME, "a")
# find_elements takes (by, value) as separate arguments, so unpack the locator tuple
WebDriverWait(driver, 10).until(lambda wd: len(wd.find_elements(*link_locator)) == count)
links = driver.find_elements(*link_locator)
for link in links:
print(link.get_attribute('title'))
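To quantify which of the two cases you're in before choosing a fix, a small helper (hypothetical, not part of Selenium) can split the scraped titles into real values and blanks:

```python
def title_report(titles):
    """Split scraped title attributes into non-blank titles and a blank count."""
    real = [t for t in titles if t]
    return real, len(titles) - len(real)

# e.g. run it on the values collected by the loops above:
real, blanks = title_report(["Zero Tolerance", "", None, "Aaliyah Love"])
print(real, blanks)  # → ['Zero Tolerance', 'Aaliyah Love'] 2
```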
I'm trying to click a Javascript link, but I can't get it to work.
First I'm getting list of Links using this code:
links = driver.find_elements_by_xpath("(//div[@class='market-box-wp collapse in'])[1]//a[@class='truncate']")
then trying to click some of them
links[3].click() #Doesn't work
I found this solution online for Javascript links, but it's using xPath, not sure how to pass links[3] to it:
WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.XPATH,"Xpath of Element"))).click()
You can use XPath indexing.
This is the XPath:
(//div[@class='market-box-wp collapse in'])[1]//a[@class='truncate']
Now, to locate the third item, you could do this:
((//div[@class='market-box-wp collapse in'])[1]//a[@class='truncate'])[3]
and use it like this:
WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.XPATH, "((//div[@class='market-box-wp collapse in'])[1]//a[@class='truncate'])[3]"))).click()
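One thing to keep in mind when mixing the two approaches: XPath positions are 1-based while Python list indices are 0-based, so links[3] from the question is the fourth link, i.e. [4] in XPath. A hypothetical helper makes the translation explicit:

```python
def indexed_xpath(base_xpath, list_index):
    """Build an XPath matching the same element as python_list[list_index].

    XPath positions are 1-based, Python indices are 0-based, hence the +1.
    """
    return f"({base_xpath})[{list_index + 1}]"

base = "(//div[@class='market-box-wp collapse in'])[1]//a[@class='truncate']"
print(indexed_xpath(base, 3))  # the same element as links[3]
```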
I am trying to create a bot to pay some bills automatically. The issue is that I can't extract the amount (text) under a div class; the error is that the element is not found.
I used driver.find_element_by_xpath and WebDriverWait. Can you please indicate how to get the highlighted text (see the attached page-inspection screenshot)? Thanks in advance.
I believe there was some issue with your XPath. Try the below; it should work:
amount = WebDriverWait(self.driver, self.timeout).until(
    EC.presence_of_element_located((By.XPATH, '//div[starts-with(@class,"bill-summary-total")]//div[contains(@data-ng-bind-html,"vm.productList.totalAmt")]')))
print('Your amount is: {}'.format(amount.text))
return float(amount.text)
You can use:
driver.find_element_by_xpath("//div[@data-ng-bind-html='vm.productList.totalAmt']").text
I have written this XPath on the basis of your attached image. If it matches several divs, use indexing to get the target one. For example:
driver.find_element_by_xpath("(//div[@data-ng-bind-html='vm.productList.totalAmt'])[1]").text
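One caveat with float(amount.text): if the scraped string includes a currency symbol or thousands separators (e.g. "$1,234.56"), float() will raise ValueError. A sketch of a cleanup helper (hypothetical; the exact text format depends on your page):

```python
import re

def parse_amount(text):
    """Strip currency symbols, spaces, and thousands separators before float()."""
    cleaned = re.sub(r"[^\d.\-]", "", text)
    return float(cleaned)

print(parse_amount("$1,234.56"))  # → 1234.56
```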
I'm using Selenium with python to make a spider.
A part of the web page is like this:
text - <span class="sr-keyword">name</span> text
I need to find the href and click.
I've tried as below:
target = driver.find_element_by_class_name('_j_search_link')
target.click()
target is not None but it doesn't seem that it could be clicked because after target.click(), nothing happened.
Also, I've read this question: Click on hyperlink using Selenium Webdriver
But it can't help me because in my case, there is a <span class>, not just a simple text Google.
You can look for an element with class _j_search_link that contains the text Wat Chedi Luang:
driver.find_element_by_xpath('//a[@class="_j_search_link" and contains(., "Wat Chedi Luang")]')
or, using the link text (note find_element, singular — find_elements returns a list, which has no click method):
driver.find_element_by_partial_link_text("Wat Chedi Luang").click()
I think you didn't find the right element. Use CSS or XPath to target the <a> element, then click it, like:
//a[@class='_j_search_link']
or
//a[@class='_j_search_link']/span[@class='sr-keyword']
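If the anchor's class alone isn't unique on the page, another option is to anchor on the span's text. A hypothetical helper that builds such an XPath (the class names below are the ones from the question; substitute the real keyword text):

```python
def anchor_by_span_text(anchor_class, span_class, text):
    """XPath for an <a> of a given class whose child <span> contains the given text."""
    return (f"//a[@class='{anchor_class}']"
            f"[span[@class='{span_class}'][contains(., '{text}')]]")

print(anchor_by_span_text("_j_search_link", "sr-keyword", "name"))
# → //a[@class='_j_search_link'][span[@class='sr-keyword'][contains(., 'name')]]
```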
I am trying to get the Nasdaq "Most Advanced" list of stocks from here: http://www.nasdaq.com/extended-trading/premarket-mostactive.aspx (click on Most Advanced tab)
What is the best way using Selenium to loop through all the Symbols and put them into a Python list? I have figured out the XPATH to the first Symbol:
/html/body/div[4]/div[3]/div/div[7]/div[2]/table/tbody/tr[2]/td/div/h3/a
but am not sure where to go from there.. I tried:
element=driver.find_elements_by_xpath("/html/body/div[4]/div[3]/div/div[7]/div[2]/table/tbody/tr[2]/td/div/h3/a")
print element.text
..as a start just to see if I can get a value but it obviously doesn't work. Sorry for the stupid question :(
These XPaths spelling out the full absolute path to the element are very fragile.
Rely on the class name instead (//div[@class="symbol_links"]):
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('http://www.nasdaq.com/extended-trading/premarket-mostactive.aspx')
# choose "Most Advanced" tab
advanced_link = driver.find_element_by_id('most-advanced')
advanced_link.click()
# get the symbols
print([symbol.text for symbol in driver.find_elements_by_xpath('//div[@class="symbol_links"]') if symbol.text])
driver.close()
prints:
['RNA', 'UBIC', 'GURE', 'DRTX', 'DSLV', 'YNDX', 'QIWI', 'NXPI', 'QGEN', 'ZGNX']
Hope that helps.