I want to grab the age, place of birth, and previous occupation of senators.
Information for each individual senator is available on Wikipedia, on their respective pages, and there is another page with a table that lists all senators by name.
How can I go through that list, follow the links to each senator's page, and grab the information I want?
Here is what I've done so far.
1. (no Python) Found out that DBpedia exists and wrote a query to search for senators. Unfortunately, DBpedia hasn't categorized most (if any) of them:
SELECT ?senator ?country WHERE {
    ?senator rdf:type <http://dbpedia.org/ontology/Senator> .
    ?senator <http://dbpedia.org/ontology/nationality> ?country
}
Query results are unsatisfactory.
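(For completeness, a query like this can also be run from Python, e.g. with the SPARQLWrapper package; a sketch, not what I originally did:)
# Sketch: running the query above against the public DBpedia endpoint.
# Requires SPARQLWrapper (pip install sparqlwrapper).
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper('http://dbpedia.org/sparql')
sparql.setQuery('''
    SELECT ?senator ?country WHERE {
        ?senator rdf:type <http://dbpedia.org/ontology/Senator> .
        ?senator <http://dbpedia.org/ontology/nationality> ?country
    }
''')
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for b in results['results']['bindings']:
    print(b['senator']['value'], b['country']['value'])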
2. Found out that there is a Python module called wikipedia that allows me to search and retrieve information from individual wiki pages. I used it to get a list of senator names from the table by looking at the hyperlinks.
import wikipedia as w

w.set_lang('pt')
# Grab the page with the table of senator names.
s = w.page(w.search('Lista de Senadores do Brasil da 55 legislatura')[0])
# Get links to senator pages by removing links of no interest.
# For each link in the page, check if it's a link to a senator page:
# senator names don't contain digits or commas, and full names
# always contain spaces.
senators = [name for name in s.links
            if not (any(char.isdigit() or char == ',' for char in name)
                    or ' ' not in name)]
At this point I'm a bit lost. The list senators contains all senator names, but also other names, e.g., party names. The wikipedia module (at least from what I could find in the API documentation) also doesn't implement functionality to follow links or search through tables.
I've seen two related entries here on StackOverflow that seem helpful, but they both (here and here) extract information from a single page.
Can anyone point me towards a solution?
Thanks!
Ok, so I figured it out (thanks to a comment pointing me to BeautifulSoup).
There is actually no big secret to achieving what I wanted. I just had to go through the list with BeautifulSoup and store all the links, then open each stored link with urllib2, call BeautifulSoup on the response, and... done. Here is the solution:
import urllib2 as url
import wikipedia as w
from bs4 import BeautifulSoup as bs
import re

# A dictionary to store the data we'll retrieve.
d = {}

# 1. Grab the list from Wikipedia.
w.set_lang('pt')
s = w.page(w.search('Lista de Senadores do Brasil da 55 legislatura')[0])
html = url.urlopen(s.url).read()
soup = bs(html, 'html.parser')

# 2. Names and links are in the second column of the second table.
table2 = soup.findAll('table')[1]
for row in table2.findAll('tr'):
    for colnum, col in enumerate(row.find_all('td')):
        if (colnum + 1) % 5 == 2:
            a = col.find('a')
            link = 'https://pt.wikipedia.org' + a.get('href')
            d[a.get('title')] = {}
            d[a.get('title')]['link'] = link
# 3. Now that we have the links, we can iterate through them
#    and grab the info from the tables.
for senator, data in d.iteritems():
    page = bs(url.urlopen(data['link']).read(), 'html.parser')
    # (flatten-list trick: [a for b in nested for a in b])
    rows = [item for table in
            [item.find_all('td') for item in page.find_all('table')[0:3]]
            for item in table]
    for rownumber, row in enumerate(rows):
        if row.get_text() == 'Nascimento':
            birthinfo = rows[rownumber + 1].getText().split('\n')
            try:
                d[senator]['birthplace'] = birthinfo[1]
            except IndexError:
                d[senator]['birthplace'] = ''
            birth = re.search(r'(.*\d{4}).*\((\d{2}).*\)', birthinfo[0])
            d[senator]['birthdate'] = birth.group(1)
            d[senator]['age'] = birth.group(2)
        if row.get_text() == 'Partido':
            d[senator]['party'] = rows[rownumber + 1].getText()
        if 'Profiss' in row.get_text():
            d[senator]['profession'] = rows[rownumber + 1].getText()
Pretty simple. BeautifulSoup works wonders =)
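A caveat for anyone reusing this today: the code above is Python 2 (urllib2, iteritems, print statements). A rough, untested Python 3 sketch of the link-collection step would be:
# Python 3 sketch: urllib2 becomes urllib.request, and d.iteritems()
# becomes d.items() in the second loop. `s` is the wikipedia page
# object from step 1 above.
from urllib.request import urlopen
from bs4 import BeautifulSoup as bs

html = urlopen(s.url).read()
soup = bs(html, 'html.parser')
d = {}
table2 = soup.find_all('table')[1]
for row in table2.find_all('tr'):
    for colnum, col in enumerate(row.find_all('td')):
        if (colnum + 1) % 5 == 2:
            a = col.find('a')
            d[a.get('title')] = {'link': 'https://pt.wikipedia.org' + a.get('href')}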
Related
I am trying to scrape the content of this website: https://public.era.nih.gov/pubroster/roster.era?CID=102353 and I am able to do it for the names beginning with ANANDASABAPATHY, which are contained inside a "p" tag:
from selenium import webdriver
from bs4 import BeautifulSoup

url = 'https://public.era.nih.gov/pubroster/roster.era?CID=102353'
driver = webdriver.Chrome()  # any Selenium WebDriver
driver.get(url)
content = driver.page_source.encode('utf-8').strip()
soup = BeautifulSoup(content, "html.parser")
column = soup.find_all("p")
and then playing with the length of the element:
for bullet in column:
    if len(bullet.find_all("br")) == 4:
        person = {}
        person["NAME"] = bullet.contents[0].strip()
        person["PROFESSION"] = bullet.contents[2].strip()
        person["DEPARTMENT"] = bullet.contents[4].strip()
        person["INSTITUTION"] = bullet.contents[6].strip()
        person["LOCATION"] = bullet.contents[8].strip()
However, I have 2 issues.
I am unable to scrape the information for the chairperson (GUDJONSSON), which is not contained inside a "p" tag. I was trying something like:
soup.find("b").findNext('br').findNext('br').findNext('br').contents[0].strip()
but it is not working
I am unable to differentiate between the last 2 persons (WONDRAK and GERSCH) because they are both contained inside the same "p" tag.
Any help would be extremely useful! Thanks in advance!
This is a case where it may be easier to process the data as plain text rather than as HTML, after initially extracting the element you're looking for. The reason is that the HTML is not well formatted for parsing: it doesn't follow a very uniform pattern. The html5lib package generally handles poorly formatted html better than html.parser, but it didn't help significantly in this case.
import re
from typing import Collection, Iterator

from bs4 import BeautifulSoup


def iter_lines(soup: BeautifulSoup, ignore: Collection[str] = ()) -> Iterator[str]:
    for sibling in soup.find('b').next_siblings:
        for block in sibling.stripped_strings:
            block_str = ' '.join(filter(None, (line.strip() for line in block.split('\n'))))
            if block_str and block_str not in ignore:
                yield block_str


def group_people(soup: BeautifulSoup, ignore: Collection[str] = ()) -> list[list[str]]:
    zip_code_pattern = re.compile(r', \d+$')
    people = []
    person = []
    for line in iter_lines(soup, ignore):
        person.append(line)
        if zip_code_pattern.search(line):
            people.append(person)
            person = []
    return people


def normalize_person(raw_person: list[str]) -> dict[str, str | None]:
    return {
        'NAME': raw_person[0],
        'PROFESSION': raw_person[1] if len(raw_person) > 4 else None,
        'DEPARTMENT': next((line for line in raw_person if 'DEPARTMENT' in line), None),
        'INSTITUTION': raw_person[-2],
        'LOCATION': raw_person[-1],
    }


raw_people = group_people(soup, ignore={'SCIENTIFIC REVIEW OFFICER'})
normalized = [normalize_person(person) for person in raw_people]
This works with both BeautifulSoup(content, 'html.parser') and BeautifulSoup(content, 'html5lib').
The iter_lines function finds the first <b> tag like you did before, and then yields a single string for each line that is displayed in a browser.
The group_people function groups the lines into separate people, using the zip code at the end to indicate that a person's entry is complete. It may be possible to combine this function with iter_lines and skip the regex, but this was slightly easier. Better formatted html would be more conducive to that approach.
The ignore parameter was used to skip the SCIENTIFIC REVIEW OFFICER header above the last person on that page.
Lastly, the normalize_person function attempts to interpret what each line for a given person means. The name, institution, and location appear to be fairly consistent, but I took some liberties with profession and department to use None when it seemed like a value did not exist. Those decisions were only made based on the particular page you linked to - you may need to adjust those for other pages. It uses negative indexes for the institution and location because the number of lines that existed for each person's data was variable.
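To tie it together, the soup in that last snippet can be built the same way as in the question. A minimal driver might look like this (a sketch: it fetches the page with urllib rather than Selenium, which appeared sufficient for this static roster page):
# Sketch: fetch the roster page and run the functions above.
from urllib.request import urlopen
from bs4 import BeautifulSoup

url = 'https://public.era.nih.gov/pubroster/roster.era?CID=102353'
soup = BeautifulSoup(urlopen(url).read(), 'html.parser')

raw_people = group_people(soup, ignore={'SCIENTIFIC REVIEW OFFICER'})
for person in map(normalize_person, raw_people):
    print(person['NAME'], '|', person['INSTITUTION'], '|', person['LOCATION'])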
final edit: so here's the solution -
list_c = [[x, y] for x, y in zip(titleList, linkList)]
Original post: I used bs4 to scrape a recipe website where the title of each recipe is not saved within the link tag. So I've extracted the titles of the recipes from one part of the code and the links from another part, and now I have these two lists (recipes, links), but I'm not sure of the best way to pair each title with its corresponding link.
(The end goal is to have the titles be hyperlinked in an HTML file that I will put on my eventual recipe aggregator website).
I was considering saving them to a dictionary as key-value pairs, or something else(?), so that I can call them into the HTML doc later on.
Suggestions?
EDIT:
here's the code, which works fine:
soup = BeautifulSoup(htmlText, 'lxml')
links = soup.find_all('article')
linkList = []
titleList = []
for link in links[0:12]:
    hyperL = link.find('header', class_='entry-header').a['href']
    linkList.append(hyperL)
for title in links:
    x = title.get('aria-label')
    titleList.append(x)
linkList prints out something like
['www.recipe.com/ham', 'www.recipe.com/curry', 'www.recipe.com/etc']
and
titleList is ['Ham', 'Curry', 'etc']
I want to print a list from these 2 like this:
[['Ham', 'www.recipe.com/ham'],['Curry', 'www.recipe.com/curry']]
The final goal for my website, I would want to have the following for each pair:
<a href='www.recipe.com/ham'>Ham</a>
If you only anticipate looking up titles, and then using the result links, dictionaries are great for that.
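A minimal sketch, reusing titleList and linkList from the question:
# Sketch: pair titles with links in a dict, then render the anchors
# for the eventual recipe aggregator page. Note that duplicate titles
# would silently overwrite earlier entries, which may matter here.
recipes = dict(zip(titleList, linkList))

for title, link in recipes.items():
    print(f"<a href='{link}'>{title}</a>")
# e.g. <a href='www.recipe.com/ham'>Ham</a>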
I'm attempting to scrape actor/actress IDs from an IMDB movie page. I only want actors and actresses (I don't want to get any of the crew), and this question is specifically about getting the person's internal ID. I already have peoples' names, so I don't need help getting those. I'm starting with this webpage (https://www.imdb.com/title/tt0084726/fullcredits?ref_=tt_cl_sm#cast) as a hard-coded url to get the code right.
On examination of the links, I was able to find that the links for the actors look like this:
William Shatner
Leonard Nimoy
Nicholas Guest
while the ones for other contributors look like this:
Nicholas Meyer
Gene Roddenberry
This should allow me to differentiate actors/actresses from crew like the director or writer by checking whether the end of the href matches "t[0-9]+$", rather than the same pattern with "dr" or "wr".
Here's the code I'm running.
import urllib.request
from bs4 import BeautifulSoup
import re

movieNumber = 'tt0084726'
url = 'https://www.imdb.com/title/' + movieNumber + '/fullcredits?ref_=tt_cl_sm#cast'

def clearLists(n):
    return [[] for _ in range(n)]

def getSoupObject(urlInput):
    page = urllib.request.urlopen(urlInput).read()
    soup = BeautifulSoup(page, features="html.parser")
    return(soup)

def getPeopleForMovie(soupObject):
    listOfPeopleNames, listOfPeopleIDs, listOfMovieIDs = clearLists(3)
    #get all the tags with links in them
    link_tags = soupObject.find_all('a')
    #get the ids of people
    for linkTag in link_tags:
        link = str(linkTag.get('href'))
        #print(link)
        p = re.compile('t[0-9]+$')
        q = p.search(link)
        if link.startswith('/name/') and q != None:
            id = link[6:15]
            #print(id)
            listOfPeopleIDs.append(id)
    #return the names and IDs
    return listOfPeopleNames, listOfPeopleIDs
newSoupObject = getSoupObject(url)
pNames, pIds = getPeopleForMovie(newSoupObject)
The above code returns an empty list for the IDs. If you uncomment the print statement, you can see why: the value that ends up in the "link" variable looks like what's below (with variations for the specific people):
/name/nm0583292/
/name/nm0000638/
That won't do. I want the IDs only for the actors and actresses so that I can use those IDs later.
I've tried to find other answers on Stack Overflow, but I haven't been able to find this particular issue addressed.
This question (Beautifulsoup: parsing html – get part of href) is close to what I want to do, but it gets the info from the text part between tags rather than from the href part in the tag attribute.
How can I make sure I get only the name IDs that I want (just the actor ones) from the page?
(Also, feel free to offer suggestions to tighten up the code)
It appears that the links you are trying to match have either been modified by JavaScript after loading, or perhaps get loaded differently based on other variables than the URL alone (like cookies or headers).
However, since you're only after links of people in the cast, an easier way would be to simply match the ids of people in the cast section. This is actually fairly straightforward, since they are all in a single element, <table class="cast_list">
So:
import urllib.request
from bs4 import BeautifulSoup
import re

# it's Python, so use Python conventions, no uppercase in function or variable names
movie_number = 'tt0084726'
# an f-string is often more readable than a + concatenation
url = f'https://www.imdb.com/title/{movie_number}/fullcredits?ref_=tt_cl_sm#cast'

# this is overly fancy for something as simple as initialising some variables
# how about:
# a, b, c = [], [], []
# def clearLists(n):
#     return [[] for _ in range(n)]


# in an object-oriented program, assuming something is an object is the norm
def get_soup(url_input):
    page = urllib.request.urlopen(url_input).read()
    soup = BeautifulSoup(page, features="html.parser")
    # removed needless parentheses - arguably, even `soup` is superfluous:
    # return BeautifulSoup(page, features="html.parser")
    return soup


# keep two empty lines between functions, it's standard and for good reason:
# it's easier to spot where a function starts and stops
# try using an editor or IDE that highlights your PEP8 mistakes, like PyCharm
# (that's just my opinion there, other IDEs than PyCharm will do as well)
def get_people_for_movie(soup_object):
    # removed unused variables, also 'list_of_people_ids' is needlessly verbose
    # since they go together, why not return people as a list of tuples, or a dictionary?
    # I'd prefer a dictionary as it automatically gets rid of duplicates as well
    people = {}
    # (put a space at the start of your comment blocks!)
    # get all the anchor tags inside the `cast_list` table
    link_tags = soup_object.find('table', class_='cast_list').find_all('a')
    # the whole point of compiling the regex is to only have to do it once,
    # so do it outside the loop
    id_regex = re.compile(r'/name/nm(\d+)/')
    # get the ids and names of people
    for link_tag in link_tags:
        # the href attribute is already a string, so casting with str() serves no purpose
        href = link_tag.get('href')
        # matching and extracting part of the match can all be done in one step:
        match = id_regex.search(href)
        if match:
            # don't shadow Python built-ins like `id` with variable names!
            identifier = match.group(1)
            name = link_tag.text.strip()
            # just ignore the ones with no text, they're the thumbs
            if name:
                people[identifier] = name
    # return the names and IDs
    return people


def main():
    # don't do stuff globally, it'll just cause problems when reusing names in functions
    soup = get_soup(url)
    people = get_people_for_movie(soup)
    print(people)


# not needed here, but a good habit, allows you to import stuff without running the main
if __name__ == '__main__':
    main()
Result:
{'0000638': 'William Shatner', '0000559': 'Leonard Nimoy', '0001420': 'DeForest Kelley', etc.
And the code with a few more tweaks and without the commentary on your code:
import urllib.request
from bs4 import BeautifulSoup
import re


def get_soup(url_input):
    page = urllib.request.urlopen(url_input).read()
    return BeautifulSoup(page, features="html.parser")


def get_people_for_movie(soup_object):
    people = {}
    link_tags = soup_object.find('table', class_='cast_list').find_all('a')
    id_regex = re.compile(r'/name/nm(\d+)/')
    # get the ids and names of the cast
    for link_tag in link_tags:
        match = id_regex.search(link_tag.get('href'))
        if match:
            name = link_tag.text.strip()
            if name:
                people[match.group(1)] = name
    return people


def main():
    movie_number = 'tt0084726'
    url = f'https://www.imdb.com/title/{movie_number}/fullcredits?ref_=tt_cl_sm#cast'
    people = get_people_for_movie(get_soup(url))
    print(people)


if __name__ == '__main__':
    main()
I'm looking for a function that, given a pair of HTML tags, returns the text inside them. Ideally I would like it to be recursive:
Examples:
Given
<a href="/wiki/Asset_management" title="Asset management">Asset management</a>
returns
Asset management
Given
<p>Recursive Asset management</p>
returns
Recursive Asset management
Given
<p><a href="/wiki/Asset_management" title="Asset management">Again Asset management</a></p>
returns
Again Asset management
Here is the code I have:
list_of_table_rows = tbl.findAll('tr')
for tr in list_of_table_rows[1:]:
    th_list = tr.find("th")
    td_list = tr.find("td")
    if th_list is None or td_list is None:
        continue
    th_str = th_list.text
    td_str = td_list.contents
    # Now the problem is td_str is a list of a bunch of things:
    # plain text, br tags, links, paragraphs, etc.
    # I want to be able to get that plain text for links and paragraphs.
    for element in td_str:
        if element == "<br/>":
            continue
        # here...
The input should be a String, not a Tag or any other object. My trouble is the recursion.
UPDATE: This is an example of the data I am actually working with. The goal is to pull information from Wikipedia Infoboxes. The problem is that some of the information in the Infobox consists of links or paragraphs. For example, on this page: https://en.wikipedia.org/wiki/Goldman_Sachs
<tr><th scope="row" style="padding-right:0.5em;">Founders</th><td
class="agent" style="line-height:1.35em;">Marcus Goldman .
<br /><a href="/wiki/Samuel_Sachs" title="Samuel Sachs">Samuel
Sachs</a></td></tr><tr>
Let's say we want to find who the Founders are. I only want the text in the elements, in this case a list containing Marcus Goldman and Samuel Sachs. I have also tried read_html from Pandas, but that concatenates the strings together, and I don't want that to happen (its output is "Marcus GoldmanSamuel Sachs").
Here's an example of using .findChildren. It's not the full solution, but you can possibly use this to add on to @Bitto Bennichan's solution.
import bs4

html = '''<tr><th scope="row" style="padding-right:0.5em;">Founders</th><td
class="agent" style="line-height:1.35em;">Marcus Goldman .
<br /><a href="/wiki/Samuel_Sachs" title="Samuel Sachs">Samuel
Sachs</a></td></tr><tr>'''

soup = bs4.BeautifulSoup(html, 'html.parser')
rows = soup.find_all('tr')
founders = []
for row in rows:
    children = row.findChildren("a", recursive=True, text=True)
    for child in children:
        child_text = child.text.split('\n')
        child_text = [x.strip() for x in child_text]
        child_text = ' '.join(child_text)
        founders.append(child_text)
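One caveat: findChildren("a", ...) only picks up linked names, so a plain-text founder like Marcus Goldman would be missed. A sketch of a variant that keeps every text fragment in the value cell, linked or not (reusing the soup from the snippet above):
# Sketch: take every string inside the <td>, whether it sits inside
# an <a> tag or is bare text. stripped_strings yields each fragment
# with surrounding whitespace removed; the join collapses the
# newline inside "Samuel\nSachs".
td = soup.find('td')
founders = [' '.join(s.split()) for s in td.stripped_strings]
print(founders)  # roughly ['Marcus Goldman .', 'Samuel Sachs']
The stray trailing " ." on the first entry comes from the source HTML and could be trimmed with .rstrip(' .') if needed.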
I am trying to scrape the BBC football results website to get teams, shots, goals, cards and incidents.
I am writing the script in Python, using the BeautifulSoup package. The code provided only retrieves the first entry of the incidents table; yet when the incidents table is printed to screen, the full table with all the data is there.
The table I am scraping from is stored in incidents:
from bs4 import BeautifulSoup
import urllib2

url = 'http://www.bbc.co.uk/sport/football/result/partial/EFBO815155?teamview=false'
inner_page = urllib2.urlopen(url).read()
soupb = BeautifulSoup(inner_page, 'lxml')

for incidents in soupb.find_all('table', class_="incidents-table"):
    print incidents.prettify()
    home_inc_tag = incidents.find('td', class_='incident-player-home')
    home_inc = home_inc_tag and ''.join(home_inc_tag.stripped_strings)
    type_inc_tag = incidents.find('td', 'span', class_='incident-type goal')
    type_inc = type_inc_tag and ''.join(type_inc_tag.stripped_strings)
    time_inc_tag = incidents.find('td', class_='incident-time')
    time_inc = time_inc_tag and ''.join(time_inc_tag.stripped_strings)
    away_inc_tag = incidents.find('td', class_='incident-player-away')
    away_inc = away_inc_tag and ''.join(away_inc_tag.stripped_strings)
    print home_inc, time_inc, type_inc, away_inc
I am just focusing on one match at the moment to get this correct (EFBO815155) before I add a regular expression into the URL to get the details of all matches.
So, the incidents for loop is not getting all the data, just the first entry in the table.
Thanks in advance. I am new to Stack Overflow, so if anything is wrong with this post, formatting etc., please let me know.
Thanks!
First, get the incidents table:
incidentsTable = soupb.find_all('table', class_='incidents-table')[0]
Then loop through all 'tr' tags within that table.
for incidents in incidentsTable.find_all('tr'):
    # your code as it is
    print incidents.prettify()
    home_inc_tag = incidents.find('td', class_='incident-player-home')
    home_inc = home_inc_tag and ''.join(home_inc_tag.stripped_strings)
    ...
Gives Output:
Bradford Park Avenue 1-2 Boston United
None None
2' Goal J.Rollins
36' None C.Piergianni
N.Turner 42' None
50' Goal D.Southwell
C.King 60' Goal
This is close to what you want. Hope this helps!
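For reference, the completed loop might look like this (still Python 2, to match the question; the BBC URL above is no longer live, so treat this as an untested sketch):
# Sketch: find the incidents table once, then visit every <tr> inside
# it, so every incident row is processed instead of only the first.
from bs4 import BeautifulSoup
import urllib2

url = 'http://www.bbc.co.uk/sport/football/result/partial/EFBO815155?teamview=false'
soupb = BeautifulSoup(urllib2.urlopen(url).read(), 'lxml')

incidentsTable = soupb.find_all('table', class_='incidents-table')[0]
for incidents in incidentsTable.find_all('tr'):
    home_inc_tag = incidents.find('td', class_='incident-player-home')
    home_inc = home_inc_tag and ''.join(home_inc_tag.stripped_strings)
    time_inc_tag = incidents.find('td', class_='incident-time')
    time_inc = time_inc_tag and ''.join(time_inc_tag.stripped_strings)
    away_inc_tag = incidents.find('td', class_='incident-player-away')
    away_inc = away_inc_tag and ''.join(away_inc_tag.stripped_strings)
    print home_inc, time_inc, away_inc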