Python script to extract from HTML - python

I'm writing a script that scans through a set of links. Within each link, the script searches a table for a row. Once found, it increments the variable total_rank, which is the sum of the ranks found on each web page. The rank is equal to the row number.
The code looks like this and is outputting zero:
import requests
from bs4 import BeautifulSoup
import time

url_to_scrape = 'https://www.teamrankings.com/ncb/stats/'
r = requests.get(url_to_scrape)
soup = BeautifulSoup(r.text, "html.parser")

stat_links = []
for a in soup.select(".chooser-list ul"):
    list_entry = a.findAll('li')
    relative_link = list_entry[0].find('a')['href']
    link = "https://www.teamrankings.com" + relative_link
    stat_links.append(link)

total_rank = 0
for link in stat_links:
    r = requests.get(link)
    soup = BeautifulSoup(r.text, "html.parser")
    team_rows = soup.select(".tr-table.datatable.scrollable.dataTable.no-footer table")
    for row in team_rows:
        if row.findAll('td')[1].text.strip() == 'Oklahoma':
            rank = row.findAll('td')[0].text.strip()
            total_rank = total_rank + rank
    # time.sleep(1)
print(total_rank)
Debugging shows that team_rows is empty after the select() call. The thing is, I've also tried different tags. For example, I've tried soup.select(".scroll-wrapper div") and soup.select("#DataTables_Table_0_wrapper div"); all return nothing.

The selector
".tr-table datatable scrollable dataTable no-footer tr"
Selects a <tr> element anywhere under a <no-footer> element anywhere under a <dataTable> element....etc.
I think "datatable scrollable dataTable no-footer" are actually classes on your .tr-table element. In that case, they should be joined to the first class with periods. So I believe the final correct selector is:
".tr-table.datatable.scrollable.dataTable.no-footer tr"
UPDATE: the new selector looks like this:
".tr-table.datatable.scrollable.dataTable.no-footer table"
The problem here is that the first part, .tr-table.datatable... refers to the table itself. Assuming you're trying to get the rows of this table:
<table class="tr-table datatable scrollable dataTable no-footer" id="DataTables_Table_0" role="grid">
The proper selector remains the one I originally suggested.
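The difference between chained and space-separated class selectors can be demonstrated on a minimal document (the HTML below is a made-up example, not the actual teamrankings.com markup):

```python
from bs4 import BeautifulSoup

# Hypothetical markup (not the real teamrankings.com HTML): one table
# carrying several classes, with rows inside it.
html = """
<table class="tr-table datatable scrollable">
  <tr><td>1</td><td>Oklahoma</td></tr>
  <tr><td>2</td><td>Kansas</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

# ".tr-table.datatable" (no spaces) matches a single element that has ALL
# of those classes -- here, the table itself.
print(len(soup.select(".tr-table.datatable")))     # 1

# ".tr-table datatable" (with a space) looks for a <datatable> *tag* nested
# inside a .tr-table element, which does not exist.
print(len(soup.select(".tr-table datatable")))     # 0

# Chained classes plus a descendant "tr" selects the rows.
print(len(soup.select(".tr-table.datatable tr")))  # 2
```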

@audiodude's answer is correct, though the suggested selector is not working for me.
You don't need to check every single class of the table element. Here is the working selector:
team_rows = soup.select("table.datatable tr")
Also, if you need to find Oklahoma inside the table, you don't have to iterate over every row and cell. Just search directly for the specific cell and get the previous sibling cell containing the rank:
rank = soup.find("td", {"data-sort": "Oklahoma"}).find_previous_sibling("td").get_text()
total_rank += int(rank) # it is important to convert the row number to int
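A minimal, self-contained sketch of the same sibling lookup (the HTML is invented for illustration; the real page's attributes may differ):

```python
from bs4 import BeautifulSoup

# Invented snippet mimicking one ranking table: in each row, a rank cell
# is followed by a team cell carrying a data-sort attribute.
html = """
<table>
  <tr><td>12</td><td data-sort="Oklahoma">Oklahoma</td></tr>
  <tr><td>13</td><td data-sort="Kansas">Kansas</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

# Jump straight to the cell identified by its attribute, then step back one
# sibling <td> to read the rank -- no row-by-row loop needed.
cell = soup.find("td", {"data-sort": "Oklahoma"})
rank = int(cell.find_previous_sibling("td").get_text())
print(rank)  # 12
```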
Also note that you are extracting more stats links than you should - looks like the Player Stats links should not be followed since you are focused specifically on the Team Stats. Here is one way to get Team Stats links only:
links_list = soup.find("h2", text="Team Stats").find_next_sibling("ul")
stat_links = ["https://www.teamrankings.com" + a["href"]
              for a in links_list.select("ul.expand-content li a[href]")]


Finding first row of a table with Beautiful Soup

I'm working on an assignment for class. I need to write something that will return the first row in the table on this webpage (the Barr v. Lee row): https://www.supremecourt.gov/opinions/slipopinion/19
I've seen other questions that some might consider similar, but they don't look like they're answering my question. In most other questions, it looks like they already have the table on hand, rather than pulling it down from a website first.
Or maybe I just can't see the resemblance. I've been scraping for about a week now.
Right now, I'm trying to build a loop that will go through all the div elements with an increment counter, and have the counter return a number that tells me which div corresponds to that row so I can drill into it.
This is what I have so far:
for divs in soup_doc:
    div_counter = 0
    soup_doc.find_all('div')[div_counter]
    div_counter = div_counter + 1
    print(div_counter)
But right now, it's only returning 1 which I know isn't right. What should I do to fix this? Or is there a better way to go about getting this information?
My output should be:
63
7/14/20
20A8
Barr v. Lee
PC
591/2
In your example, the div_counter = 0 has to go in front of your loop, like this:
div_counter = 0
for divs in soup_doc:
    soup_doc.find_all('div')[div_counter]
    div_counter = div_counter + 1
    print(div_counter)
You always get 1 because you set div_counter to 0 inside your for-loop at the beginning of each iteration and then add 1.
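Moving the initialisation out fixes the count, but Python's built-in enumerate removes the manual counter entirely. A minimal sketch, with a plain list standing in for soup_doc.find_all('div'):

```python
# Stand-in list for the tags returned by soup_doc.find_all('div').
divs = ['<div>a</div>', '<div>b</div>', '<div>c</div>']

# enumerate() yields (index, item) pairs, so there is no counter to reset
# or increment by hand.
for div_counter, div in enumerate(divs):
    print(div_counter, div)

# After the loop, div_counter holds the index of the last element,
# so the total count is div_counter + 1.
print(div_counter + 1)  # 3
```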
To get the first row, you can use the CSS selector .in tr:nth-of-type(2) td:
import requests
from bs4 import BeautifulSoup

URL = "https://www.supremecourt.gov/opinions/slipopinion/19"
soup = BeautifulSoup(requests.get(URL).content, "html.parser")
for tag in soup.select('.in tr:nth-of-type(2) td'):
    print(tag.text)
Output:
63
7/14/20
20A8
Barr v. Lee
 
PC
591/2
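The tr:nth-of-type(2) part selects the second row because the header occupies the first. A small made-up table (not the actual supremecourt.gov markup) shows the pattern:

```python
from bs4 import BeautifulSoup

# Made-up table (not the real supremecourt.gov markup): the first <tr>
# is the header row, the second <tr> is the first row of data.
html = """
<table class="in">
  <tr><th>No.</th><th>Name</th></tr>
  <tr><td>63</td><td>Barr v. Lee</td></tr>
  <tr><td>62</td><td>Other case</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

# tr:nth-of-type(2) picks the second <tr>, i.e. the first data row.
first_row = [td.text for td in soup.select(".in tr:nth-of-type(2) td")]
print(first_row)  # ['63', 'Barr v. Lee']
```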

Scrapy download data from links where certain other condition is fulfilled

I am extracting data from IMDb lists and it is working fine. I provide a link for all lists related to an IMDb title; the code opens all the lists and can extract the data I want.
class lisTopSpider(scrapy.Spider):
    name = 'ImdbListsSpider'
    allowed_domains = ['imdb.com']
    start_urls = [
        'https://www.imdb.com/lists/tt2218988'
    ]

    # lists related to given title
    def parse(self, response):
        # Grab list link section
        listsLinks = response.xpath('//div[2]/strong')
        for link in listsLinks:
            list_url = response.urljoin(link.xpath('.//a/@href').get())
            yield scrapy.Request(list_url, callback=self.parse_list, meta={'list_url': list_url})
Now the issue is that I want this code to skip all lists that have more than 50 titles and only get data from lists with 50 or fewer titles.
The problem is that the list link is in one XPath block and the number of titles is in another.
So I tried the following.
for link in listsLinks:
    list_url = response.urljoin(link.xpath('.//a/@href').get())
    numOfTitlesString = response.xpath('//div[@class="list_meta"]/text()[1]').get()
    numOfTitles = int(''.join(filter(lambda i: i.isdigit(), numOfTitlesString)))
    print('numOfTitles', numOfTitles)
    if numOfTitles < 51:
        yield scrapy.Request(list_url, callback=self.parse_list, meta={'list_url': list_url})
But it gives me an empty csv file. When I try to print numOfTitles in the for loop, it gives me the result of the very first xpath match on every round of the loop.
Please suggest a solution for this.
As Gallaecio mentioned, it's just an xpath issue. It's normal that you keep getting the same number, because you're executing the exact same xpath against the exact same response object. In the code below we get the whole block (instead of just the part that contains the url), and for every block we get the url and the number of titles.
list_blocks = response.xpath('//*[has-class("list-preview")]')
for block in list_blocks:
    list_url = response.urljoin(block.xpath('./*[@class="list_name"]//@href').get())
    number_of_titles_string = block.xpath('./*[@class="list_meta"]/text()').get()
    number_of_titles = int(''.join(filter(lambda i: i.isdigit(), number_of_titles_string)))
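The underlying XPath pitfall can be reproduced outside Scrapy with lxml (the HTML below is a made-up stand-in for IMDb's list blocks):

```python
from lxml import html

# Two invented list blocks standing in for IMDb's markup.
doc = html.fromstring("""
<html><body>
  <div class="list-block"><span class="list_meta">10 titles</span></div>
  <div class="list-block"><span class="list_meta">75 titles</span></div>
</body></html>
""")
blocks = doc.xpath('//div[@class="list-block"]')

# An absolute '//' path ignores the loop variable: every iteration queries
# the whole document and returns the FIRST match again and again.
wrong = [b.xpath('//span[@class="list_meta"]/text()')[0] for b in blocks]
print(wrong)  # ['10 titles', '10 titles']

# A relative './/' path searches only inside the current block.
right = [b.xpath('.//span[@class="list_meta"]/text()')[0] for b in blocks]
print(right)  # ['10 titles', '75 titles']
```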

How to access a specific object in a class HTML while web scraping with python

Now, I understand that this may be a simple question, but I don't know anything about HTML and I'm new to web scraping with Python. I was wondering if anyone could tell me how to access a specific element of a class on this website (https://sky.lea.moe/stats/Igris/Apple). The element I want to access is shown in the HTML below.
'''
Average Skill Level: 32.5
'''
My current code looks like this and prints out an empty list, and even if it did print, I only want it to print out everything from this specific line of code shown above.
import requests
import bs4

res = requests.get('https://sky.lea.moe/stats/Igris/Apple')
soup = bs4.BeautifulSoup(res.text, 'lxml')
skillAverageList = []
for i in soup.select('.stat-value'):
    skillAverageList.append(i.text)
Any help would be appreciated, hopefully this will further help me understand HTML and python as a whole. Thanks in advance.
import requests
from bs4 import BeautifulSoup
res = requests.get('https://sky.lea.moe/stats/Igris/Apple')
soup = BeautifulSoup(res.text, 'lxml')
print(soup.find("div", {"id":"additional_stats_container"}).find_all("div",class_="additional-stat")[-2].get_text(strip=True))
Output:
Average Skill Level:32.5
Alternatively, you can match the stat name to its value by index:
elements = soup.find_all("span", class_="stat-name")
skill = [i for i in elements if "Average Skill" in i.text]  # getting the element that has "Average Skill" in its text
idx = elements.index(skill[0])  # getting its index to get the value at the same index from the values list
values = soup.find_all("span", class_="stat-value")
value = values[idx]  # as told earlier, the index of the name is the same for the value
print(skill[0].text + value.text)

Get value next row based on value current row Selenium

Set-up
I need to obtain the population data for all NUTS3 regions on this Wikipedia page.
I have obtained all URLs per NUTS3 region and will let Selenium loop over them to obtain each region's population number as displayed on its page.
That is to say, for each region I need to get the population displayed in its infobox geography vcard element. E.g. for this region, the population would be 591680.
Code
Before writing the loop, I'm trying to obtain the population for one individual region,
url = 'https://en.wikipedia.org/wiki/Arcadia'
browser.get(url)
vcard_element = browser.find_element_by_css_selector('#mw-content-text > div > table.infobox.geography.vcard').find_element_by_xpath('tbody')
for row in vcard_element.find_elements_by_xpath('tr'):
try:
if 'Population' in row.find_element_by_xpath('th').text:
print(row.find_element_by_xpath('th').text)
except Exception:
pass
Issue
The code works. That is, it prints the row containing the word 'Population'.
Question: How do I tell Selenium to get next row – the row containing the actual population number?
Use ./following::tr[1] or ./following-sibling::tr[1]
url = 'https://en.wikipedia.org/wiki/Arcadia'
browser = webdriver.Chrome()
browser.get(url)
vcard_element = browser.find_element_by_css_selector('#mw-content-text > div > table.infobox.geography.vcard').find_element_by_xpath('tbody')
for row in vcard_element.find_elements_by_xpath('tr'):
    try:
        if 'Population' in row.find_element_by_xpath('th').text:
            print(row.find_element_by_xpath('th').text)
            print(row.find_element_by_xpath('./following::tr[1]').text)     # whole row
            print(row.find_element_by_xpath('./following::tr[1]/td').text)  # only the number
    except Exception:
        pass
Output on Console:
Population (2011)
• Total 86,685
86,685
While you can certainly do this with selenium, I would personally recommend using requests and lxml, as they are much lighter weight than selenium and can get the job done just as well. I found the below to work for a few regions I tested:
import requests
from lxml import html

try:
    response = requests.get(url)
    infocard_rows = html.fromstring(response.content).xpath("//table[@class='infobox geography vcard']/tbody/tr")
except:
    print('Error retrieving information from ' + url)

try:
    population_row = 0
    for i in range(len(infocard_rows)):
        if infocard_rows[i].findtext('th') == 'Population':
            population_row = i + 1
            break
    population = infocard_rows[population_row].findtext('td')
except:
    print('Unable to find population')
In essence, html.fromstring().xpath() gets all of the rows from the infobox geography vcard table on the page. The next try-except then just tries to locate the th whose inner text is Population and pulls the text from the next row's td (which is the population number).
Hopefully this is helpful, even if it isn't selenium like you were asking! Usually you'd use Selenium if you want to recreate browser behavior or inspect javascript elements. You can certainly use it here as well though.
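The th-then-next-row lookup can be sketched on a toy infobox (invented markup; Wikipedia's real pages nest the rows under tbody as in the answer above):

```python
from lxml import html

# Invented two-row fragment mimicking the infobox layout: a header row
# containing "Population", followed by a row holding the number.
doc = html.fromstring("""
<html><body>
  <table class="infobox">
    <tr><th>Population</th></tr>
    <tr><td>591,680</td></tr>
  </table>
</body></html>
""")
rows = doc.xpath("//table[@class='infobox']/tr")

population = None
for i, row in enumerate(rows):
    if row.findtext('th') == 'Population':
        # The number lives in the <td> of the NEXT row.
        population = rows[i + 1].findtext('td')
        break
print(population)  # 591,680
```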

Using Beautiful Soup to Find Links Before a Certain Letter

I have a BeautifulSoup problem that hopefully you can help me out with.
Currently, I have a website with a lot of links on it. The links lead to pages that contain the data of that item that is linked. If you want to check it out, it's this one: http://ogle.astrouw.edu.pl/ogle4/ews/ews.html. What I ultimately want to accomplish is to print out the links of the data that are labeled with an 'N'. It may not be apparent at first, but if you look closely on the website, some of the data have 'N' after their Star No, and others do not. Afterwards, I use that link to download a file containing the information I need on that data. The website is very convenient because the download URLs only change a bit from data to data, so I only need to change a part of the URL, as you'll see in the code below.
I currently have accomplished the data downloading part. However, this is where you come in. Currently, I need to put in the identification number of the BLG event that I desire. (This will become apparent after you view the code below.) However, the website is constantly updating, and having to manually search for 'N' events takes up unnecessary time. I want the Python code to be able to do it for me. My original thought was that I could have BeautifulSoup search through the text for all N's, but I ran into some issues accomplishing that. I feel like I am not familiar enough with BeautifulSoup to get done what I wish to get done. Some help would be appreciated.
The code I have currently is below. I have put in a range of BLG events that have the 'N' label as an example.
#Retrieve .gz files from URLs
from urllib.request import urlopen
import urllib.request
from bs4 import BeautifulSoup

#Access website
URL = 'http://ogle.astrouw.edu.pl/ogle4/ews/ews.html'
soup = BeautifulSoup(urlopen(URL))

#Select the desired data numbers
numbers = list(range(974, 998))
x = 0
for i in numbers:
    numbers[x] = str(i)
    x += 1
print(numbers)

#Get all links and put into list
allLinks = []
for link in soup.find_all('a'):
    list_links = link.get('href')
    allLinks.append(list_links)

#Remove None datatypes from link list
while None in allLinks:
    allLinks.remove(None)
#print(allLinks)

#Remove all links but links to data pages and get rid of the '.html'
list_Bindices = [i for i, s in enumerate(allLinks) if 'b' in s]
print(list_Bindices)
bLinks = []
for x in list_Bindices:
    bLinks.append(allLinks[x])
bLinks = [s.replace('.html', '') for s in bLinks]
#print(bLinks)

#Create a list of indices for accessing those pages
list_Nindices = []
for x in numbers:
    list_Nindices.append([i for i, s in enumerate(bLinks) if x in s])
#print(type(list_Nindices))
#print(list_Nindices)
nindices_corrected = []
place = 0
while place < (len(list_Nindices)):
    a = list_Nindices[place]
    nindices_corrected.append(a[0])
    place = place + 1
#print(nindices_corrected)

#Get the page names (without the .html) from the indices
nLinks = []
for x in nindices_corrected:
    nLinks.append(bLinks[x])
#print(nLinks)

#Form the URLs for those pages
final_URLs = []
for x in nLinks:
    y = "ftp://ftp.astrouw.edu.pl/ogle/ogle4/ews/2017/" + x + "/phot.dat"
    final_URLs.append(y)
#print(final_URLs)

#Retrieve the data from the URLs
z = 0
for x in final_URLs:
    name = nLinks[z] + ".dat"
    #print(name)
    urllib.request.urlretrieve(x, name)
    z += 1
#hrm = urllib.request.urlretrieve("ftp://ftp.astrouw.edu.pl/ogle/ogle4/ews/2017/blg-0974.tar.gz", "practice.gz")
This piece of code has taken me quite some time to write, as I am not a professional programmer, nor an expert in BeautifulSoup or URL manipulation in any way. In fact, I use MATLAB more than Python. As such, I tend to think in terms of MATLAB, which translates into less efficient Python code. However, efficiency is not what I am searching for in this problem. I can wait the extra five minutes for my code to finish if it means that I understand what is going on and can accomplish what I need to accomplish. Thank you for any help you can offer! I realize this is a fairly multi-faceted problem.
This should do it:
from urllib.request import urlopen
import urllib.request
from bs4 import BeautifulSoup
#Access website
URL = 'http://ogle.astrouw.edu.pl/ogle4/ews/ews.html'
soup = BeautifulSoup(urlopen(URL), 'html5lib')
Here, I'm using html5lib to parse the URL content.
Next, we'll look through the table, extracting links where the star names have an 'N' in them:
table = soup.find('table')
links = []
for tr in table.find_all('tr', {'class': 'trow'}):
    td = tr.findChildren()
    if 'N' in td[4].text:
        links.append('http://ogle.astrouw.edu.pl/ogle4/ews/' + td[1].a['href'])
print(links)
Output:
['http://ogle.astrouw.edu.pl/ogle4/ews/blg-0974.html', 'http://ogle.astrouw.edu.pl/ogle4/ews/blg-0975.html', 'http://ogle.astrouw.edu.pl/ogle4/ews/blg-0976.html', 'http://ogle.astrouw.edu.pl/ogle4/ews/blg-0977.html', 'http://ogle.astrouw.edu.pl/ogle4/ews/blg-0978.html',
...
]
