beautifulsoup unable to extract href link - python

So I am using Selenium with PhantomJS as my webdriver, and BeautifulSoup.
Currently I want to extract all of the links which sit underneath the title attribute on the site I am scraping (the URL is in the code below).
However, it does not seem to pick up these links at all! What is going on?
# The standard library modules
import os
import sys
import re
# The wget module
import wget
# The BeautifulSoup module
from bs4 import BeautifulSoup
# The selenium module
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
def getListLinks(link):
    # setup drivers
    driver = webdriver.PhantomJS(service_args=['--ignore-ssl-errors=true'])
    driver.get(link)  # load the web page
    src = driver.page_source
    # Get text and split it
    soup = BeautifulSoup(src, 'html5lib')
    print soup
    links = soup.find_all('a')
    print links
    driver.close()

getListLinks("http://www.bursamalaysia.com/market/listed-companies/company-announcements/#/?category=FA&sub_category=FA1&alphabetical=All&company=9695&date_from=01/01/2012&date_to=31/12/2016")
Here is an example of a link I want to extract:
Quarterly rpt on consolidated results for the financial period ended 31/03/2017

What I really don't understand is why you are mixing BeautifulSoup with Selenium. Selenium has its own API for extracting DOM elements; you don't need to bring BS4 into the picture. Besides, BS4 can only work with the static HTML string you hand it and cannot wait for dynamically generated content, which your Selenium instance is capable of handling.
Just do
driver.find_elements_by_tag_name('a')
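For example, a minimal Selenium-only sketch of what this answer describes, collecting every href on the page (note that the announcements are loaded dynamically, so an explicit wait may still be needed):
from selenium import webdriver
driver = webdriver.PhantomJS(service_args=['--ignore-ssl-errors=true'])
driver.get("http://www.bursamalaysia.com/market/listed-companies/company-announcements/#/?category=FA&sub_category=FA1&alphabetical=All&company=9695&date_from=01/01/2012&date_to=31/12/2016")
# find_elements (plural) returns every matching <a>; get_attribute pulls each href
hrefs = [a.get_attribute('href') for a in driver.find_elements_by_tag_name('a')]
print(hrefs)
driver.quit()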

You want the links under the Title column, which is the 4th column of the table. You can use an nth-of-type selector to restrict matches to the table cells (td elements) in the 4th column of each row of the target table. A wait condition is added to ensure the elements are present.
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
d = webdriver.Chrome()
url = 'http://www.bursamalaysia.com/market/listed-companies/company-announcements/#/?category=all'
d.get(url)
links = [link.get_attribute('href') for link in WebDriverWait(d, 10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, 'tr td:nth-of-type(4) a')))]
print(links)
d.quit()

Related

web scraping table with selenium gets only html elements but no content

I am trying to scrape tables using Selenium and BeautifulSoup from these 3 websites:
https://www.erstebank.hr/hr/tecajna-lista
https://www.otpbanka.hr/tecajna-lista
https://www.sberbank.hr/tecajna-lista/
For all 3 websites the result is the HTML code for the table, but without any text.
My code is below:
import requests
from bs4 import BeautifulSoup
import pyodbc
import datetime
from selenium import webdriver
PATH = r'C:\Users\xxxxxx\AppData\Local\chromedriver.exe'
driver = webdriver.Chrome(PATH)
driver.get('https://www.erstebank.hr/hr/tecajna-lista')
driver.implicitly_wait(10)
soup = BeautifulSoup(driver.page_source, 'lxml')
table = soup.find_all('table')
print(table)
driver.close()
Please help, what am I missing?
Thank you
The website is taking time to load the data into the table.
Either apply time.sleep:
import time
driver.get('https://www.erstebank.hr/hr/tecajna-lista')
time.sleep(10)...
Or apply an explicit wait so that the rows are loaded in the table:
import requests
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
driver = webdriver.Chrome(executable_path="path to chromedriver.exe")
driver.maximize_window()
driver.get('https://www.erstebank.hr/hr/tecajna-lista')
wait = WebDriverWait(driver,30)
wait.until(EC.presence_of_all_elements_located((By.XPATH, "//table/tbody/tr[@class='ng-scope']")))
# driver.find_element_by_id("popin_tc_privacy_button_2").click() # Cookie setting pop-up. Works fine even without dealing with this pop-up.
soup = BeautifulSoup(driver.page_source, 'html5lib')
table = soup.find_all('table')
print(table)
BeautifulSoup will not find the table because it doesn't exist from its reference point. Here, you tell Selenium to pause the Selenium driver matcher if it notices that an element is not present yet:
# This only works for the Selenium element matcher
driver.implicitly_wait(10)
Then, right after that, you get the current HTML state (table still does not exist) and put it into BeautifulSoup's parser. BS4 will not be able to see the table, even if it loads in later, because it will use the current HTML code you just gave it:
# You now move the CURRENT STATE OF THE HTML PAGE to BeautifulSoup's parser
soup = BeautifulSoup(driver.page_source, 'lxml')
# As this is now in BS4's hands, it will parse it immediately (won't wait 10 seconds)
table = soup.find_all('table')
# BS4 finds no tables as, when the page first loads, there are none.
To fix this, you can ask Selenium to try to get the HTML table itself. As Selenium will use the implicitly_wait you specified earlier, it will wait until the table exists, and only then allow the rest of the code execution to proceed. At that point, when BS4 receives the HTML code, the table will be there.
driver.implicitly_wait(10)
# Selenium will wait until the element is found
# I used XPath, but you can use any other matching sequence to get the table
driver.find_element_by_xpath("/html/body/div[2]/main/div/section/div[2]/div[1]/div/div/div/div/div/div/div[2]/div[6]/div/div[2]/table/tbody/tr[1]")
soup = BeautifulSoup(driver.page_source, 'lxml')
table = soup.find_all('table')
However, this is a bit overkill. Yes, you can use Selenium to parse the HTML, but you could also just use the requests module (which, from your code, I see you already have imported) to get the table data directly.
The data is asynchronously loaded from this endpoint (you can use the Chrome DevTools to find it yourself). You can pair this with the json module to turn it into a nicely formatted dictionary. Not only is this method faster, but it is also much less resource intensive (Selenium has to open a whole browser window).
from requests import get
from json import loads
# Get data from URL
data_as_text = get("https://local.erstebank.hr/rproxy/webdocapi/fx/current").text
# Turn to dictionary
data_dictionary = loads(data_as_text)
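If you are unsure what the payload looks like before deciding which fields to pull, a quick sketch that simply pretty-prints the parsed response (the exact keys depend on whatever the endpoint actually returns):
from json import dumps
# Pretty-print the parsed response so you can see which keys hold the rate data
print(dumps(data_dictionary, indent=2, ensure_ascii=False))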
You can use this as the foundation for further work:
from bs4 import BeautifulSoup as BS
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
TDCLASS = 'ng-binding'

options = webdriver.ChromeOptions()
options.add_argument('--headless')

with webdriver.Chrome(options=options) as driver:
    driver.get('https://www.erstebank.hr/hr/tecajna-lista')
    try:
        # There may be a cookie request dialogue which we need to click through
        WebDriverWait(driver, 5).until(EC.presence_of_element_located(
            (By.ID, 'popin_tc_privacy_button_2'))).click()
    except Exception:
        pass  # Probably timed out so ignore on the basis that the dialogue wasn't presented
    # The relevant <td> elements all seem to be of class 'ng-binding' so look for those
    WebDriverWait(driver, 5).until(
        EC.presence_of_element_located((By.CLASS_NAME, TDCLASS)))
    soup = BS(driver.page_source, 'lxml')
    for td in soup.find_all('td', class_=TDCLASS):
        print(td)
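If you only need the cell text rather than the whole td tag, you could swap the final print for something like this (using BeautifulSoup's get_text):
for td in soup.find_all('td', class_=TDCLASS):
    print(td.get_text(strip=True))  # text content only, whitespace stripped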

Scrape href/URL

My code goes into a webpage which contains multiple entries, gets their URLs and then puts them into a list.
Then it navigates through each URL in the list one by one, and does a scrape per presentation.
Right now I scrape the title of each presentation (you can see this if you run the code), but within the title there is another URL/href that I also want.
Is there a way to scrape this?
Thanks
from selenium import webdriver
import pandas as pd
from bs4 import BeautifulSoup
import requests
import time
val = []
driver = webdriver.Chrome()

for x in range(1, 3):
    driver.get(f'https://www.abstractsonline.com/pp8/#!/9325/sessions/#sessiontype=Advances%20in%20Diagnostics%20and%20Therapeutics/{x}')
    time.sleep(9)
    page_source = driver.page_source

    eachrow = ["https://www.abstractsonline.com/pp8/#!/9325/session/" + x.get_attribute('data-id')
               for x in driver.find_elements_by_xpath('//*[@id="results"]/li//h1[@class="name"]')]
    for row in eachrow:
        val.append(row)
        print(row)

for b in val:
    driver.get(b)
    time.sleep(3)
    page_source1 = driver.page_source
    soup = BeautifulSoup(page_source1, 'html.parser')

    productlist = soup.find_all('a', class_='title color-primary')
    for item in productlist:
        presentationTitle = item.text.strip()
        print(presentationTitle)
I think you want some wait conditions in there, and then to extract the href attribute for each presentation within a page:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
base = 'https://www.abstractsonline.com/pp8/#!/9325/session/'

for x in range(1, 3):
    driver.get(f'https://www.abstractsonline.com/pp8/#!/9325/sessions/#sessiontype=Advances%20in%20Diagnostics%20and%20Therapeutics/{x}')
    links = [base + i.get_attribute('data-id') for i in
             WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "li .name")))]

    for link in links:
        driver.get(link)
        print(WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "spnSessionTitle"))).text)

        for presentation in driver.find_elements_by_css_selector('.title'):
            print(presentation.text.strip())
            print('https://www.abstractsonline.com/pp8' + presentation.get_attribute('href'))
links = driver.find_elements_by_partial_link_text('https://yourlinks.com/?action=')

for link in links:
    print(link.get_attribute("href"))

Selenium and BeautifulSoup can't fetch all HTML content

I'm scraping the bottom table labeled "Capacity : Operationally Available - Evening" on https://lngconnection.cheniere.com/#/ccpl
I am able to get all of the HTML, and everything shows up when I prettify() print it, but the parser can't find the specific information I need when I ask for it.
Here's my script:
cc_driver = webdriver.Chrome('/Users/.../Desktop/chromedriver')
cc_driver.get('https://lngconnection.cheniere.com/#/ccpl')
cc_html = cc_driver.page_source
cc_content = soup(cc_html, 'html.parser')
cc_driver.close()
cc_table = cc_content.find('table', class_='k-selectable')
#print(cc_content.prettify())
print(cc_table.prettify())
now when I do the
print(cc_table.prettify())
The output is everything except the actual table data. Is there some error in my code, or something in their HTML that is hiding the actual table values? I'm able to see the data when I print everything Selenium captures on the page. The HTML also doesn't have specific ID tags for any of the cell values.
You are looking at HTML that is not yet complete; not all of the elements have been returned by the JavaScript yet. So you can use a WebDriverWait.
from selenium import webdriver
from bs4 import BeautifulSoup as soup
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
cc_driver = webdriver.Chrome(r"path for driver")
cc_driver.get('https://lngconnection.cheniere.com/#/ccpl')
WebDriverWait(cc_driver, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR,
'#capacityGrid > table > tbody')))
cc_html = cc_driver.page_source
cc_content = soup(cc_html, 'html.parser')
cc_driver.close()
cc_table = cc_content.find('table', class_='k-selectable')
#print(cc_content.prettify())
print(cc_table.prettify())
This will wait for the element to be present.
This should help you get the table HTML:
from selenium import webdriver
from bs4 import BeautifulSoup as bs
cc_driver = webdriver.Chrome('../chromedriver_win32/chromedriver.exe')
cc_driver.get('https://lngconnection.cheniere.com/#/ccpl')
cc_html = cc_driver.page_source
cc_content = bs(cc_html, 'html.parser')
cc_driver.close()
cc_table = cc_content.find('table', attrs={'class':'k-selectable'})
#print(cc_content.prettify())
print(cc_table.prettify())

Locating an element in bs4

Trying to scrape all of the information for every dozer item on this page.
I have just started and have only a fair idea about scraping, but I'm not sure how to do it.
driver=webdriver.Firefox()
driver.get('https://www.rbauction.com/dozers?keywords=&category=21261693092')
soup=BeautifulSoup(driver.page_source,'html.parser')
# tried a few different ways but am only getting NoneType or no element
get = soup.findAll('div', attrs={'class': 'sc-gisBJw eHFfwj'})
get2 = soup.findAll('div', attrs={'id': 'searchResultsList'})
get3 = soup.find('div.searchResultsList').find_all('a')
I have to get into each class/id, loop over a['href'], and get the information for each dozer.
Please help.
You need to wait for the data you are looking for to load before reading it into the BeautifulSoup object. Use WebDriverWait in Selenium to wait for the page to load, as it takes a while to render fully:
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
driver = webdriver.Firefox()
driver.get('https://www.rbauction.com/dozers?keywords=&category=21261693092')
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, 'searchResultsList')))
soup = BeautifulSoup(driver.page_source,'html.parser')
This line should then return the hrefs from the page:
hrefs = [el.attrs.get('href') for el in soup.find('div', attrs={'id': 'searchResultsList'}).find_all('a')]
You can just use requests
import requests
headers = {'Referrer':'https://www.rbauction.com/dozers?keywords=&category=21261693092'}
data = requests.get('https://www.rbauction.com/rba-msapi/search?keywords=&searchParams=%7B%22category%22%3A%2221261693092%22%7D&page=0&maxCount=48&trackingType=2&withResults=true&withFacets=true&withBreadcrumbs=true&catalog=ci&locale=en_US', headers = headers).json()
for item in data['response']['results']:
    print(item['name'], item['url'])

BeautifulSoup, Selenium and Python, parsing by a tag

I'm trying to parse data from this website
https://findrulesoforigin.org/home/compare?reporter=392&partner=036&product=020130010
In particular, I am trying to get the data under Criterion(ITC). The text I want says CC+ECT.
The information I want appears in the HTML as
<a class="js-glossary" data-leg="CC+ECT">
I'm new to web scraping and I tried the techniques taught in the tutorial but they didn't work. I heard about Selenium and tried this out too. However, this code didn't work either.
from selenium import webdriver
from bs4 import BeautifulSoup
import requests
driver = webdriver.Firefox(executable_path = r"D:\Python work\driver\geckodriver.exe")
driver.get(r"https://findrulesoforigin.org/home/compare?reporter=392&partner=036&product=020130010")
html = driver.page_source
soup = BeautifulSoup(html, 'lxml')
data = soup.find_all("a", attrs= {"class":"js-glossary"})
The code results in an empty list. I also read that I can pull the data out by treating the soup tag like a dictionary, in this case:
data["data-leg"]
Am I on the right track or am I way off?
The text you're trying to get is generated dynamically by JavaScript. To get it you need to wait for its appearance:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
driver = webdriver.Firefox(executable_path = r"D:\Python work\driver\geckodriver.exe")
driver.get(r"https://findrulesoforigin.org/home/compare?reporter=392&partner=036&product=020130010")
text = WebDriverWait(driver, 5).until(lambda driver: driver.find_element_by_xpath('//div[.="criterion(itc)"]/following-sibling::div').text)
print(text)
# 'CC + ECT'
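If it is specifically the data-leg attribute you are after rather than the visible text, you can read it from the anchor itself once it is present (a small sketch, assuming the anchor really carries the js-glossary class shown in the question; if there are several glossary links the first match may not be the one you want):
elem = WebDriverWait(driver, 5).until(
    lambda driver: driver.find_element_by_css_selector('a.js-glossary'))
print(elem.get_attribute('data-leg'))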
Seems you were pretty close. You may not even require BeautifulSoup if you are using Selenium. Using Selenium, you need to induce a WebDriverWait for the desired element to be visible, and you can use the following solution:
Code Block:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Firefox(executable_path = r'C:\Utility\BrowserDrivers\geckodriver.exe')
driver.get(r"https://findrulesoforigin.org/home/compare?reporter=392&partner=036&product=020130010")
print(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//div[#class='lbl' and text()='criterion(itc)']//following::div[1]/a"))).get_attribute("innerHTML"))
Console Output:
CC + ECT
