I am trying to scrape the BBC football results website to get teams, shots, goals, cards and incidents.
I am writing the script in Python using the BeautifulSoup package. The code below only retrieves the first entry of the incidents table, yet when the incidents table is printed to screen, the full table with all the data is there.
The table I am scraping from is stored in incidents:
from bs4 import BeautifulSoup
import urllib2

url = 'http://www.bbc.co.uk/sport/football/result/partial/EFBO815155?teamview=false'
inner_page = urllib2.urlopen(url).read()
soupb = BeautifulSoup(inner_page, 'lxml')

for incidents in soupb.find_all('table', class_="incidents-table"):
    print incidents.prettify()
    home_inc_tag = incidents.find('td', class_='incident-player-home')
    home_inc = home_inc_tag and ''.join(home_inc_tag.stripped_strings)
    type_inc_tag = incidents.find('td', 'span', class_='incident-type goal')
    type_inc = type_inc_tag and ''.join(type_inc_tag.stripped_strings)
    time_inc_tag = incidents.find('td', class_='incident-time')
    time_inc = time_inc_tag and ''.join(time_inc_tag.stripped_strings)
    away_inc_tag = incidents.find('td', class_='incident-player-away')
    away_inc = away_inc_tag and ''.join(away_inc_tag.stripped_strings)
    print home_inc, time_inc, type_inc, away_inc
I am just focusing on one match at the moment (EFBO815155) to get this correct, before I add a regular expression into the URL to get the details for all matches.
So, the incidents for loop is not getting all the data, just the first entry in the table.
Thanks in advance. I am new to Stack Overflow, so if anything is wrong with this post (formatting etc.), please let me know. Thanks!
First, get the incidents table:
incidentsTable = soupb.find_all('table', class_='incidents-table')[0]
Then loop through all 'tr' tags within that table.
for incidents in incidentsTable.find_all('tr'):
    # your code as it is
    print incidents.prettify()
    home_inc_tag = incidents.find('td', class_='incident-player-home')
    home_inc = home_inc_tag and ''.join(home_inc_tag.stripped_strings)
    ...
Gives Output:
Bradford Park Avenue 1-2 Boston United
None None
2' Goal J.Rollins
36' None C.Piergianni
N.Turner 42' None
50' Goal D.Southwell
C.King 60' Goal
This is close to what you want. Hope this helps!
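For completeness, here is a minimal consolidated sketch of the fixed loop, rewritten for Python 3 with requests (the class names are taken from the question; the live BBC page structure is assumed and may have changed since this was written):

import requests
from bs4 import BeautifulSoup

url = 'http://www.bbc.co.uk/sport/football/result/partial/EFBO815155?teamview=false'
soup = BeautifulSoup(requests.get(url).text, 'lxml')

# Take the first incidents table, then walk its rows, as suggested above.
table = soup.find('table', class_='incidents-table')
for row in table.find_all('tr'):
    cells = {
        'home': row.find('td', class_='incident-player-home'),
        'time': row.find('td', class_='incident-time'),
        'type': row.find('td', class_='incident-type'),
        'away': row.find('td', class_='incident-player-away'),
    }
    # Join each cell's strings, leaving None where a row has no such cell.
    values = {key: ' '.join(tag.stripped_strings) if tag else None
              for key, tag in cells.items()}
    print(values['home'], values['time'], values['type'], values['away'])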
I'm practicing with BeautifulSoup and HTML requests in general for the first time. The goal of the programme is to load a webpage and its HTML, then search through the webpage (in this case a recipe) to get a substring of its ingredients. I've managed to get it working with the following code:
import requests

url = "https://www.bbcgoodfood.com/recipes/healthy-tikka-masala"
result = requests.get(url)
myHTML = result.text
index1 = myHTML.find("recipeIngredient")
index2 = myHTML.find("recipeInstructions")
ingredients = myHTML[index1:index2]
But when I try and use BeautifulSoup here:
url = "https://www.bbcgoodfood.com/recipes/healthy-tikka-masala"
result = requests.get(url)
doc = BeautifulSoup(result.text, "html.parser")
ingredients = doc.find(text = "recipeIngredient")
print(ingredients)
I understand that the code above (even if I could get it working) would produce a different output of just ["recipeIngredient"], but that's all I'm focused on for now whilst I get to grips with BS. Instead, the code above just outputs None. I printed doc to the terminal and it would only output what appears to be the second half of the HTML (or at least not all of it), whereas the text file does contain all the HTML. I assume that's where the problem lies, but I'm not sure how to fix it.
Thank you.
You need to use:
class_="recipe__ingredients"
For example:
import requests
from bs4 import BeautifulSoup
url = "https://www.bbcgoodfood.com/recipes/healthy-tikka-masala"
doc = (
    BeautifulSoup(requests.get(url).text, "html.parser")
    .find(class_="recipe__ingredients")
)
ingredients = "\n".join(
    ingredient.getText() for ingredient in doc.find_all("li")
)
print(ingredients)
Output:
1 large onion , chopped
4 large garlic cloves
thumb-sized piece of ginger
2 tbsp rapeseed oil
4 small skinless chicken breasts, cut into chunks
2 tbsp tikka spice powder
1 tsp cayenne pepper
400g can chopped tomatoes
40g ground almonds
200g spinach
3 tbsp fat-free natural yogurt
½ small bunch of coriander , chopped
brown basmati rice , to serve
It outputs None because find(text=...) looks for a tag whose text content is 'recipeIngredient', which does not exist; there is no such text in the HTML content. That string is an attribute value of an HTML tag.
What you are actually trying to do with bs4 is find the specific tags and/or attributes that hold the data/content you want. For example, as #baduker points out, the ingredients in the HTML are within the tag with the class attribute "recipe__ingredients".
The string 'recipeIngredient' that you pull out in that first block of code is actually from within the <script> tag in the HTML, which has the ingredients in JSON format.
from bs4 import BeautifulSoup
import requests
import json
url = "https://www.bbcgoodfood.com/recipes/healthy-tikka-masala"
result = requests.get(url)
doc = BeautifulSoup(result.text, "html.parser")
ingredients = doc.find('script', {'type':'application/ld+json'}).text
jsonData = json.loads(ingredients)
print(jsonData['recipeIngredient'])
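Note that sites frequently serve several ld+json blocks, or wrap the data in a @graph list, so a slightly more defensive version can help (a sketch under that assumption; this particular page may not need it):

# Iterate over every JSON-LD block and handle both a bare object and a
# @graph list; 'recipeIngredient' is the same schema.org key used above.
for script in doc.find_all('script', {'type': 'application/ld+json'}):
    data = json.loads(script.text)
    candidates = data.get('@graph', [data]) if isinstance(data, dict) else data
    for item in candidates:
        if isinstance(item, dict) and 'recipeIngredient' in item:
            print(item['recipeIngredient'])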
Dec 2020 update:
I have:
Achieved full automation, with minute-level data collection for the entire FnO universe.
Auto-adapts to the changing FnO universe (exits and new entries).
Shuts down in non-market hours.
Shuts down on holidays, including newly declared holidays.
Starts automatically for the yearly Muhurat Trading session (a sketch of such a gate follows below).
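A minimal sketch of what such a scheduling gate could look like (the hours, holiday dates and session times below are illustrative assumptions, not the actual implementation):

from datetime import datetime, time

# Illustrative values only: NSE-style regular hours, a hypothetical holiday
# list, and a hypothetical special-session table (e.g. Muhurat Trading).
MARKET_OPEN, MARKET_CLOSE = time(9, 15), time(15, 30)
HOLIDAYS = {'2020-11-16', '2020-11-30'}
SPECIAL_SESSIONS = {'2020-11-14': (time(18, 15), time(19, 15))}

def market_is_open(now=None):
    now = now or datetime.now()
    day = now.strftime('%Y-%m-%d')
    if day in SPECIAL_SESSIONS:  # special sessions override everything else
        start, end = SPECIAL_SESSIONS[day]
        return start <= now.time() <= end
    if now.weekday() >= 5 or day in HOLIDAYS:  # weekend or declared holiday
        return False
    return MARKET_OPEN <= now.time() <= MARKET_CLOSE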
I am a bit new to web scraping and not used to the 'tr' & 'td' stuff, hence this doubt. I am trying to replicate this Python 2.7 code in Python 3, from this thread: https://www.quantinsti.com/blog/option-chain-extraction-for-nse-stocks-using-python.
This old code uses .ix for indexing, which I can correct easily using .iloc. However, the line tr = tr.replace(',' , '') shows the error 'a bytes-like object is required, not 'str'', even if I write it before tr = utf_string.encode('utf8').
I have checked this other link from Stack Overflow and couldn't solve my problem.
I think I have spotted why this is happening: it is because of the earlier for loop that defines the variable tr. If I omit the replace line, I get a DataFrame of the numbers with some text attached. I could filter that with a loop over the entire DataFrame, but a better way must be to use the replace() function properly. I can't figure this bit out.
Here is my full code. I have marked the critical sections I refer to with ######################### on its own line, so they can be found quickly (even with Ctrl + F):
import requests
import pandas as pd
from bs4 import BeautifulSoup

Base_url = ("https://nseindia.com/live_market/dynaContent/"+
            "live_watch/option_chain/optionKeys.jsp?symbolCode=2772&symbol=UBL&"+
            "symbol=UBL&instrument=OPTSTK&date=-&segmentLink=17&segmentLink=17")
page = requests.get(Base_url)
#page.status_code
#page.content
soup = BeautifulSoup(page.content, 'html.parser')
#print(soup.prettify())

table_it = soup.find_all(class_="opttbldata")
table_cls_1 = soup.find_all(id = "octable")
col_list = []

# Pulling heading out of the Option Chain Table
#########################
for mytable in table_cls_1:
    table_head = mytable.find('thead')
    try:
        rows = table_head.find_all('tr')
        for tr in rows:
            cols = tr.find_all('th')
            for th in cols:
                er = th.text
                #########################
                ee = er.encode('utf8')
                col_list.append(ee)
    except:
        print('no thread')

col_list_fnl = [e for e in col_list if e not in ('CALLS', 'PUTS', 'Chart', '\xc2\xa0')]
#print(col_list_fnl)

table_cls_2 = soup.find(id = "octable")
all_trs = table_cls_2.find_all('tr')
req_row = table_cls_2.find_all('tr')
new_table = pd.DataFrame(index=range(0,len(req_row)-3),columns = col_list_fnl)

row_marker = 0
for row_number, tr_nos in enumerate(req_row):
    if row_number <= 1 or row_number == len(req_row)-1:
        continue # To insure we only choose non empty rows
    td_columns = tr_nos.find_all('td')
    # Removing the graph column
    select_cols = td_columns[1:22]
    cols_horizontal = range(0,len(select_cols))
    for nu, column in enumerate(select_cols):
        utf_string = column.get_text()
        utf_string = utf_string.strip('\n\r\t": ')
        #########################
        tr = tr.replace(',' , '') # Commenting this out makes code partially work, getting numbers + text attached to the numbers in the table
        # That is obtained by commenting out the above line with tr variable & running the entire code.
        tr = utf_string.encode('utf8')
        new_table.iloc[row_marker,[nu]] = tr
    row_marker += 1

print(new_table)
For the first section:
er = th.text should be er = th.get_text()
Link to get_text documentation
For the latter section:
Looking at it, your tr variable at this point is the last tr tag found by for tr in rows, i.e. a bs4 element rather than a plain string.
tr = tr.get_text().replace(',' , '') would work for the first iteration, but because tr = utf_string.encode('utf8') rebinds tr to a bytes object at the end of each pass, the next call to tr.replace(',' , '') with str arguments raises exactly the 'a bytes-like object is required' error you are seeing.
Additionally, thank you for the depth of your question. While you did not pose it as a question, the length you went to describe the trouble you are having as well as the code you have tried is greatly appreciated.
If you replace the below lines of codes
tr = tr.replace(',' , '')
tr = utf_string.encode('utf8')
new_table.iloc[row_marker,[nu]] = tr
with the following code then it should work.
new_table.iloc[row_marker,[nu]] = utf_string.replace(',' , '')
The replace call fails because the value has already been encoded to bytes, and calling replace with str arguments on a bytes object raises the error you saw; working with the plain string avoids this. You can also consider using the code below to decode the column names:
col_list_fnl = [e.decode('utf8') for e in col_list if e not in ('CALLS', 'PUTS', 'Chart', '\xc2\xa0')]
col_list_fnl
I hope this helps.
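Putting both answers together, the inner loop can work purely with str values (a sketch; the variable names follow the question's code):

for nu, column in enumerate(select_cols):
    # Renamed so the tr tag from the heading loop is no longer shadowed.
    cell_text = column.get_text().strip('\n\r\t": ')
    new_table.iloc[row_marker, [nu]] = cell_text.replace(',', '')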
I'm fairly new to web scraping in Python, and after reading most of the online tutorials on the topic I decided to give it a shot. I finally got one site working, but the output is not formatted properly.
import requests
import bs4
from bs4 import BeautifulSoup
import pandas as pd
import time

page = requests.get("https://leeweebrothers.com/our-food/lunch-boxes/#")
soup = BeautifulSoup(page.text, "html.parser")

for div in soup.find_all('h2'): # prints the name of the food
    print(div.text)

for a in soup.find_all('span', {'class' : 'amount'}): # prints price of the food
    print(a.text)
Output: (screenshot in the original post; it shows all the food names printed first, followed separately by all the prices)
I want the name of each food to be printed side by side with its corresponding price, joined by a "-". Would appreciate any help given, thanks!
Edit: After #Reblochon Masque's comments below, I've run into another problem. As you can see, there is a $0.00 value that comes from the inbuilt shopping cart on the website. How would I exclude this outlier and continue down the loop, while ensuring that the other prices "move up" to correspond to the correct food?
Best practice is to use the zip function in the for loop, but it can also be done by indexing the two lists, which is what this shows:
names = soup.find_all('h2')
rest = soup.find_all('span', {'class' : 'amount'})

for index in range(len(names)):
    print('{} - {}'.format(names[index].text, rest[index].text))
You could maybe zip the two results:
names = soup.find_all('h2')
rest = soup.find_all('span', {'class' : 'amount'})

for div, a in zip(names, rest):
    print('{} - {}'.format(div.text, a.text))
    # print(f"{div.text} - {a.text}") # for Python >= 3.6
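Neither snippet above handles the stray $0.00 that the edit mentions; one hedged fix is to drop zero-valued amounts before pairing, assuming the cart widget is the only zero price on the page:

names = soup.find_all('h2')
# Skip the shopping cart's placeholder amount before zipping.
prices = [a for a in soup.find_all('span', {'class': 'amount'})
          if a.text.strip() not in ('$0.00', '0.00')]

for name, price in zip(names, prices):
    print('{} - {}'.format(name.text, price.text))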
I need to scrape the answers to the questions from the following link, including the check boxes.
Here's what I have so far:
from bs4 import BeautifulSoup
import selenium.webdriver as webdriver

url = 'https://www.adviserinfo.sec.gov/IAPD/content/viewform/adv/Sections/iapd_AdvPrivateFundReportingSection.aspx?ORG_PK=161227&FLNG_PK=05C43A1A0008018C026407B10062D49D056C8CC0'
driver = webdriver.Firefox()
driver.get(url)
soup = BeautifulSoup(driver.page_source, 'lxml') # an explicit parser avoids bs4's "no parser specified" warning
The following gives me all the written answers, if there are any:
soup.find_all('span', {'class':'PrintHistRed'})
and I think I can piece together all the checkbox answers from this:
soup.find_all('img')
but these aren't going to be ordered correctly, because this doesn't pick up the "No Information Filed" answers that aren't written in red.
I also feel like there's a much better way to be doing this. Ideally I want (for the first 6 questions) to return:
['APEX INVESTMENT FUND, V, L.P',
'805-2054766781',
'Delaware',
'United States',
'APEX MANAGEMENT V, LLC',
'X',
'O',
'No Information Filed',
'NO',
'NO']
EDIT
Martin's answer below seems to do the trick; however, when I put it in a loop, the results begin to change after the 3rd iteration. Any ideas how to fix this?
from bs4 import BeautifulSoup
import requests
import re

for x in range(5):
    url = 'https://www.adviserinfo.sec.gov/IAPD/content/viewform/adv/Sections/iapd_AdvPrivateFundReportingSection.aspx?ORG_PK=161227&FLNG_PK=05C43A1A0008018C026407B10062D49D056C8CC0'
    html = requests.get(url)
    soup = BeautifulSoup(html.text, "lxml")

    tags = list(soup.find_all('span', {'class':'PrintHistRed'}))
    tags.extend(list(soup.find_all('img', alt=re.compile('Radio|Checkbox')))[2:]) # 2: skip "are you an adviser" at the top
    tags.extend([t.parent for t in soup.find_all(text="No Information Filed")])

    output = []
    for entry in sorted(tags):
        if entry.name == 'img':
            alt = entry['alt']
            if 'Radio' in alt:
                output.append('NO' if 'not selected' in alt else 'YES')
            else:
                output.append('O' if 'not checked' in alt else 'X')
        else:
            output.append(entry.text)

    print output[:9]
The website does not generate any of the required HTML via Javascript, so I have chosen to use just requests to get the HTML (which should be faster).
One approach to solving your problem is to store all the tags for your three different types into a single array. If this is then sorted, it will result in the tags being in tree order.
The first search simply uses your PrintHistRed to get the matching span tags. Secondly it finds all img tags that have alt text containing either the word Radio or Checkbox. Lastly it searches for all locations where No Information Filed is found and returns the parent tag.
The tags can now be sorted and a suitable output array built containing the information in the required format:
from bs4 import BeautifulSoup
import requests
import re

url = 'https://www.adviserinfo.sec.gov/IAPD/content/viewform/adv/Sections/iapd_AdvPrivateFundReportingSection.aspx?ORG_PK=161227&FLNG_PK=05C43A1A0008018C026407B10062D49D056C8CC0'
html = requests.get(url)
soup = BeautifulSoup(html.text, "lxml")

tags = list(soup.find_all('span', {'class':'PrintHistRed'}))
tags.extend(list(soup.find_all('img', alt=re.compile('Radio|Checkbox')))[2:]) # 2: skip "are you an adviser" at the top
tags.extend([t.parent for t in soup.find_all(text="No Information Filed")])

output = []
for entry in sorted(tags):
    if entry.name == 'img':
        alt = entry['alt']
        if 'Radio' in alt:
            output.append('NO' if 'not selected' in alt else 'YES')
        else:
            output.append('O' if 'not checked' in alt else 'X')
    else:
        output.append(entry.text)

print output[:9] # Display the first 9 entries
Giving you:
[u'APEX INVESTMENT FUND V, L.P.', u'805-2054766781', u'Delaware', u'United States', 'X', 'O', u'No Information Filed', 'NO', 'YES']
I've looked fairly carefully at the HTML. I doubt there is an utterly simple way of scraping pages like this.
I would begin with an analysis, looking for similar questions. For instance, 11 through 16 inclusive can likely be handled in the same way. 19 and 21 appear to be similar. There may or may not be others.
I would work out how to handle each type of similar question as given by the rows containing them. For example, how would I handle 19 and 21? Then I would write code to identify the rows for the questions noting the question number for each. Finally I would use the appropriate code using the row number to winkle out information from it. In other words, when I encountered question 19 I'd use the code meant for either 19 or 21.
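A minimal sketch of that dispatch idea (the question numbers, grouping and handler logic here are purely illustrative, not derived from the actual form):

import re

# Hypothetical handlers, one per row "shape".
def handle_text_row(row):
    span = row.find('span', {'class': 'PrintHistRed'})
    return span.text if span else 'No Information Filed'

def handle_checkbox_row(row):
    return ['O' if 'not checked' in img['alt'] else 'X'
            for img in row.find_all('img', alt=re.compile('Checkbox'))]

# Map question numbers to the handler for their row shape.
HANDLERS = {11: handle_text_row, 12: handle_text_row,
            19: handle_checkbox_row, 21: handle_checkbox_row}

def scrape_question(number, row):
    return HANDLERS[number](row)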
I want to grab the age, place of birth, and previous occupation of senators.
Information for each individual senator is available on Wikipedia, on their respective pages, and there is another page with a table that lists all senators by name.
How can I go through that list, follow links to the respective pages of each senator, and grab the information I want?
Here is what I've done so far.
1. (no Python) Found out that DBpedia exists and wrote a query to search for senators. Unfortunately, DBpedia hasn't categorized most (if any) of them:

SELECT ?senator, ?country WHERE {
    ?senator rdf:type <http://dbpedia.org/ontology/Senator> .
    ?senator <http://dbpedia.org/ontology/nationality> ?country
}
Query results are unsatisfactory.
2. Found out that there is a Python module called wikipedia that allows me to search and retrieve information from individual wiki pages. Used it to get a list of senator names from the table by looking at the hyperlinks.
import wikipedia as w
w.set_lang('pt')

# Grab page with table of senator names.
s = w.page(w.search('Lista de Senadores do Brasil da 55 legislatura')[0])

# Get links to senator names by removing links of no interest.
# For each link in the page, check if it's a link to a senator page.
senators = [name for name in s.links if not
            # Senator names don't contain digits nor ','
            (any(char.isdigit() or char == ',' for char in name) or
             # And full names always contain spaces.
             ' ' not in name)]
At this point I'm a bit lost. The list senators contains all the senator names, but also other names, e.g., party names. The wikipedia module (at least from what I could find in its API documentation) also doesn't implement functionality to follow links or search through tables.
I've seen two related entries here on StackOverflow that seem helpful, but they both (here and here) extract information from a single page.
Can anyone point me towards a solution?
Thanks!
Ok, so I figured it out (thanks to a comment pointing me to BeautifulSoup).
There is actually no big secret to achieving what I wanted. I just had to go through the list with BeautifulSoup and store all the links, then open each stored link with urllib2, call BeautifulSoup on the response, and... done. Here is the solution:
import urllib2 as url
import wikipedia as w
from bs4 import BeautifulSoup as bs
import re

# A dictionary to store the data we'll retrieve.
d = {}

# 1. Grab the list from wikipedia.
w.set_lang('pt')
s = w.page(w.search('Lista de Senadores do Brasil da 55 legislatura')[0])
html = url.urlopen(s.url).read()
soup = bs(html, 'html.parser')

# 2. Names and links are on the second column of the second table.
table2 = soup.findAll('table')[1]
for row in table2.findAll('tr'):
    for colnum, col in enumerate(row.find_all('td')):
        if (colnum+1) % 5 == 2:
            a = col.find('a')
            link = 'https://pt.wikipedia.org' + a.get('href')
            d[a.get('title')] = {}
            d[a.get('title')]['link'] = link

# 3. Now that we have the links, we can iterate through them,
# and grab the info from the table.
for senator, data in d.iteritems():
    page = bs(url.urlopen(data['link']).read(), 'html.parser')
    # (flatten list trick: [a for b in nested for a in b])
    rows = [item for table in
            [item.find_all('td') for item in page.find_all('table')[0:3]]
            for item in table]
    for rownumber, row in enumerate(rows):
        if row.get_text() == 'Nascimento':
            birthinfo = rows[rownumber+1].getText().split('\n')
            try:
                d[senator]['birthplace'] = birthinfo[1]
            except IndexError:
                d[senator]['birthplace'] = ''
            birth = re.search('(.*\d{4}).*\((\d{2}).*\)', birthinfo[0])
            d[senator]['birthdate'] = birth.group(1)
            d[senator]['age'] = birth.group(2)
        if row.get_text() == 'Partido':
            d[senator]['party'] = rows[rownumber + 1].getText()
        if 'Profiss' in row.get_text():
            d[senator]['profession'] = rows[rownumber + 1].getText()
Pretty simple. BeautifulSoup works wonders =)