Edit content in BeautifulSoup ResultSet - python

My goal in the end is to add up the numbers within the BeautifulSoup ResultSet below:
[<span class="u">1,677</span>, <span class="u">114</span>, <span class="u">15</span>]
<class 'BeautifulSoup.ResultSet'>
Therefore, end up with:
sum = 1806
But it seems like the usual techniques for iterating through a list do not work here.
In the end I know that I have to extract the numbers, remove the commas, and then add them up, but I am a bit stuck, especially with extracting the numbers.
Would really appreciate some help. Thanks

Seems like the usual iterating techniques are working for me. Here is my code:
from bs4 import BeautifulSoup
# or `from BeautifulSoup import BeautifulSoup` if you are using BeautifulSoup 3
text = "<html><head><title>Test</title></head><body><span>1</span><span>2</span></body></html>"
soup = BeautifulSoup(text)
spans = soup.findAll('span')
total = sum(int(span.string) for span in spans)
print(total)
# 3
What have you tried? Do you have any error message we might be able to work with?
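A follow-up for the commas in the question's numbers: int() will choke on "1,677", so strip the separators first. A minimal sketch, assuming the ResultSet holds the three spans shown above (this uses bs4; with BeautifulSoup 3 use findAll('span', {'class': 'u'}) instead):
from bs4 import BeautifulSoup

text = '<span class="u">1,677</span><span class="u">114</span><span class="u">15</span>'
soup = BeautifulSoup(text, 'html.parser')
spans = soup.find_all('span', class_='u')
# strip the thousands separators before converting to int
total = sum(int(span.string.replace(',', '')) for span in spans)
print(total)
# 1806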


How to pull quotes and authors from a website in Python?

I have written the following code to pull the quotes from the webpage:
#importing python libraries
from bs4 import BeautifulSoup as bs
import pandas as pd
pd.set_option('display.max_colwidth', 500)
import time
import requests
import random
from lxml import html
#collect first page of quotes
page = requests.get("https://www.kdnuggets.com/2017/05/42-essential-quotes-data-science-thought-leaders.html")
#create a BeautifulSoup object
soup=BeautifulSoup(page.content, 'html.parser')
soup
print(soup.prettify())
#find all quotes on the page
soup.find_all('ol')
#pull just the quotes and not the superfluous data
Quote=soup.find(id='post-')
Quote_list=Quote.find_all('ol')
Quote_list
At this point, I want to show just the text in a list, without the <li> or <ol> tags.
I've tried using the .get_text() method, but I get an error saying:
ResultSet object has no attribute 'get_text'
How can I get only the text to return?
This is only for the first page of quotes - there is a second page which I am going to need to pull the quotes from. I will also need to present the data in a table with a column for the quotes and a column for the author from both pages.
Help is greatly appreciated... I'm still new to learning python and I've been working to this point for 8 hours on this code and feel so stuck/discouraged.
The 'html.parser' parser seems to be having a bit of a problem even with what I think is now the correct code, but after switching to 'lxml', which this code was not using, it now seems to be working:
from bs4 import BeautifulSoup as bs
import requests
page = requests.get("https://www.kdnuggets.com/2017/05/42-essential-quotes-data-science-thought-leaders.html")
soup=bs(page.content, 'lxml')
quotes = []
post_id = soup.find(id='post-')
ordered_lists = post_id.find_all('ol')
quotes.extend([li.get_text()
               for ordered_list in ordered_lists
               for li in ordered_list.find_all('li')])
print(len(quotes))
print(quotes[0]) # Get the first quote
print('-' * 80)
print(quotes[-1]) #print last quote
Prints:
22
“By definition all scientists are data scientists. In my opinion, they are half hacker, half analyst, they use data to build products and find insights. It’s Columbus meet Columbo―starry-eyed explorers and skeptical detectives.”
--------------------------------------------------------------------------------
“Once you have a certain amount of math/stats and hacking skills, it is much better to acquire a grounding in one or more subjects than in adding yet another programming language to your hacking skills, or yet another machine learning algorithm to your math/stats portfolio…. Clients will rather work with some data scientist A who understands their specific field than with another data scientist B who first needs to learn the basics―even if B is better in math/stats/hacking.”
Alternate Coding
from bs4 import BeautifulSoup as bs
import requests
page = requests.get("https://www.kdnuggets.com/2017/05/42-essential-quotes-data-science-thought-leaders.html")
soup=bs(page.content, 'lxml')
quotes = []
post_id = soup.find(id='post-')
ordered_lists = post_id.find_all('ol')
for ordered_list in ordered_lists:
    for li in ordered_list.find_all('li'):
        quotes.append(li.get_text())
print(len(quotes))
print(quotes[0]) # Get the first quote
print('-' * 80)
print(quotes[-1]) #print last quote
The find_all() method can take a list of elements to search for, or a function that determines which elements should be returned. For printing the result from the tags, you need get_text(), but it only works on a single element, so you have to loop over everything find_all() returns and call get_text() on each element to extract its text.
Use this code to get all your quotes (this is updated and working):
from bs4 import BeautifulSoup as bs
import requests
from lxml import html
page = requests.get("https://www.kdnuggets.com/2017/05/42-essential-quotes-data-science-thought-leaders.html")
soup = bs(page.text, 'html.parser')
# Q returns all of the quote lists with their HTML tags, but you can loop over it and call get_text()
Q = soup.find('div', {"id": "post-"}).find_all('ol')
quotes = []
for every_quote in Q:
    quotes.append(every_quote.get_text())
print(quotes[0]) # Get the first quote
Use quotes[0], quotes[1], ... to get 1st, 2nd, and so on quotes!
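For the quotes-and-authors table the question also asks about, one option is to wrap the scraping in a helper and load the results into the pandas DataFrame the question already imports. A sketch only: the second page would be scraped the same way once you have its URL, and the author column is left blank here because the question doesn't show how authors are marked up on the page:
from bs4 import BeautifulSoup as bs
import pandas as pd
import requests

def scrape_quotes(url):
    # fetch one page and return the text of every <li> inside the post's <ol> lists
    page = requests.get(url)
    soup = bs(page.content, 'lxml')
    post = soup.find(id='post-')
    return [li.get_text()
            for ordered_list in post.find_all('ol')
            for li in ordered_list.find_all('li')]

quotes = scrape_quotes("https://www.kdnuggets.com/2017/05/42-essential-quotes-data-science-thought-leaders.html")
# repeat scrape_quotes() for the second page's URL and concatenate the lists
df = pd.DataFrame({'quote': quotes, 'author': ''})  # fill in the author column once you know its markup
print(df.head())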

Parsing figure name on Beautiful Soup

This is my first time posting, so please be gentle.
I'm extracting data from TripAdvisor. The ratings are rendered as a figure that is represented like this:
<span class="ui_bubble_rating bubble_40"></span>
As you can see, there is a "40" at the end that represents 4 stars. The same happens with "20" (2 stars), etc.
How can I obtain the "ui_bubble_rating bubble_40"?
Thank you in advance...
I'm not sure if this is the most efficient way of doing that, but here's how I'd do it:
import re
tags = soup.find_all(class_=re.compile(r"bubble_\d\d"))
The tags variable will then include every tag on the page with a class that matches the regex bubble_\d\d (note that the keyword argument is class_, since class is a reserved word in Python). After that, you just need to pull the number out of the tag's class attribute, which BeautifulSoup exposes as a list of class names:
bubble_class = [c for c in tags[0]["class"] if c.startswith("bubble_")][0]
stars = bubble_class.split("_")[1]
If you want to be fancy, you can use a list comprehension to extract the number from every tag:
stars = [[c for c in tag["class"] if c.startswith("bubble_")][0].split("_")[1] for tag in tags]
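And since the question says 40 means 4 stars and 20 means 2, turning the extracted suffix into a star count is just a division by ten; a tiny follow-up using the stars list above:
# e.g. "40" -> 4.0 stars, "20" -> 2.0 stars
star_counts = [int(s) / 10 for s in stars]
print(star_counts)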
I am not sure what kind of data you are trying to scrape,
but you can obtain that span tag like so (I tested it and left some prints in):
from urllib import urlopen
from bs4 import BeautifulSoup
import re
html = urlopen("YOUR_REVIEWS_URL")
bs1=BeautifulSoup(html, 'lxml')
for s in bs1.findAll("span", {"class": "ui_bubble_rating bubble_40"}):
    print(s)
More generic way (scrape all ratings (bubble_[0-9]{2})):
toFind = re.compile("(bubble_[0-9]{2})+")
for s in bs1.findAll("span", {"class": toFind}):
    print(s)
Hope that answers your question

Python Web Scraping Problems

I am using Python to scrape AAPL's stock price from Yahoo Finance, but the program always returns []. I would appreciate it if someone could point out why the program is not working. Here is my code:
import urllib
import re
htmlfile=urllib.urlopen("https://ca.finance.yahoo.com/q?s=AAPL&ql=0")
htmltext=htmlfile.read()
regex='<span id=\"yfs_l84_aapl\" class="">(.+?)</span>'
pattern=re.compile(regex)
price=re.findall(pattern,htmltext)
print price
The original source is like this:
<span id="yfs_l84_aapl" class>112.31</span>
Here I just want the price, 112.31. When I copy and paste the source, I find that 'class' changes to 'class=""'. I also tried the regex
regex='<span id=\"yfs_l84_aapl\" class="">(.+?)</span>'
But it does not work either.
Well, the good news is that you are getting the data. You were nearly there. I would recommend that you work out your regex problems in a tool that helps, e.g. regex101.
Anyway, here is your working regex:
regex='<span id="yfs_l84_aapl">(\d*\.\d\d)'
You are collecting only digits, so don't do a general catch-all; be specific where you can. This matches multiple digits, then a literal decimal point, then two more digits.
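As a quick sanity check, the pattern can be exercised against the sample markup from the question before pointing it at the live page (this assumes the span really is served without a class attribute, as shown below):
import re

sample = '<span id="yfs_l84_aapl">112.31</span>'
print(re.findall(r'<span id="yfs_l84_aapl">(\d*\.\d\d)', sample))
# ['112.31']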
When I went to the Yahoo site you provided, I saw a span tag without a class attribute.
<span id="yfs_l84_aapl">112.31</span>
Not sure what you are trying to do with "class." Without it I get 112.31:
import urllib
import re
htmlfile=urllib.urlopen("https://ca.finance.yahoo.com/q?s=AAPL&ql=0")
htmltext=htmlfile.read()
regex='<span id=\"yfs_l84_aapl\">(.+?)</span>'
pattern=re.compile(regex)
price=re.findall(pattern,htmltext)
print price
I am using BeautifulSoup to get the text from the span tag:
import urllib
from BeautifulSoup import BeautifulSoup
response =urllib.urlopen("https://ca.finance.yahoo.com/q?s=AAPL&ql=0")
html = response.read()
soup = BeautifulSoup(html)
# find all the spans have id = 'yfs_l84_aapl'
target = soup.findAll('span',{'id':"yfs_l84_aapl"})
# target is a list
print(target[0].string)
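If you need the price as a number rather than a string (to compare or compute with it), one more line does it, assuming target[0].string holds text like '112.31':
price = float(target[0].string)
print(price)  # 112.31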

Python: Reading a webpage and extracting text from that page

I'm writing in Python to try and get exchange rates from the website:
xe.com/currency/converter (I can't post another link, sorry - I'm at limit)
I want to be able to get rates from this page, for example, for the conversion between GBP and USD:
Therefore, I would request the URL "http://www.xe.com/currencyconverter/convert/?Amount=1&From=GBP&To=USD", then get the value printed, "1.56371 USD" (the rate at the time I was writing this message), and assign that value as a number to a variable, like rate_usd.
At the moment, I was thinking about using the BeautifulSoup module and urllib.request module, and request the url ("http://www.xe.com/currencyconverter/convert/?Amount=1&From=GBP&To=USD") and search through it using BeautifulSoup. At the moment, I'm at this stage in the coding:
import urllib.request
from bs4 import BeautifulSoup

def rates_fetcher(url):
    html = urllib.request.urlopen(url).read()
    soup = BeautifulSoup(html)
    # code to search through soup and fetch the converted value
    # e.g. 1.56371
    # How would I extract this value?
    # I have inspected the page element and found the value I want to be in the class:
    # <td width="47%" align="left" class="rightCol">1.56371
    # I'm thinking about searching through the class: class="rightCol"
    # and extracting the value that way, but how?
url1 = "http://www.xe.com/currencyconverter/convert/?Amount=1&From=GBP&To=USD"
rates_fetcher(url1)
Any help would be much appreciated, and thank you whoever took the time to read this.
p.s. Sorry in advance if I have made any typos, I'm kinda' in a hurry :s
It sounds like you've got the right idea.
def rates_fetcher(url):
    html = urllib.request.urlopen(url).read()
    soup = BeautifulSoup(html)
    return [item.text for item in soup.find_all(class_='rightCol')]
That should do it... This will return a list of the text inside any tag with the class 'rightCol'.
If you haven't read through the Beautiful Soup documentation, you really ought to. It's straightforward and very useful.
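To go from that list to the single rate_usd value the question asks for, here is a minimal sketch; it assumes the first 'rightCol' cell's text looks like "1.56371 USD" as in the question, and xe.com's markup may well have changed since this was written:
import urllib.request
from bs4 import BeautifulSoup

def rates_fetcher(url):
    html = urllib.request.urlopen(url).read()
    soup = BeautifulSoup(html, 'html.parser')
    return [item.text for item in soup.find_all(class_='rightCol')]

url1 = "http://www.xe.com/currencyconverter/convert/?Amount=1&From=GBP&To=USD"
cells = rates_fetcher(url1)
# e.g. cells[0] == "1.56371 USD"; keep the numeric part and convert it
rate_usd = float(cells[0].split()[0])
print(rate_usd)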
Try pyquery. It's a lot better than Soup.
PS: For urllib, try Requests: HTTP for Humans.
PS2: Actually, these days I use Node and jQuery/jQuery-like libraries for HTML scraping.
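For reference, the same fetch with the requests library mentioned above only swaps the transport; the BeautifulSoup part is unchanged:
import requests

html = requests.get("http://www.xe.com/currencyconverter/convert/?Amount=1&From=GBP&To=USD").text
# pass `html` to BeautifulSoup exactly as before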

Python 3 Beautiful Soup Data type incompatibility issue

Hello there stack community!
I'm having an issue that I can't seem to resolve since it looks like most of the help out there is for Python 2.7.
I want to pull a table from a webpage and then just get the link text and not the whole anchor.
Here is the code:
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
url = 'http://www.craftcount.com/category.php?cat=5'
html = urlopen(url).read()
soup = BeautifulSoup(html)
alltables = soup.findAll("table")
## This bit captures the input from the previous sequence
results=[]
for link in alltables:
    rows = link.findAll('a')
    ## Find just the names
    top100 = re.findall(r">(.*?)<\/a>",rows)
    print(top100)
When I run it, I get: "TypeError: expected string or buffer"
Up to the second to the last line, it does everything correctly (when I swap out 'print(top100)' for 'print(rows)').
As an example of the response I get:
<a href="…">michellechangjewelry</a>
And I just need to get:
michellechangjewelry
According to pythex.org, my (ir)regular expression should work, so I wanted to see if anyone out there knew how to do that. As an additional issue, it looks like most people like to go the other way, that is, from having the full text and only wanting the URL part.
Finally, I'm using BeautifulSoup out of "convenience", but I'm not beholden to it if you can suggest a better package to narrow down the parsing to the linktext.
Many thanks in advance!!
BeautifulSoup results are not strings; they are Tag objects, mostly.
To get the text of the <a> tags, use the .string attribute:
for table in alltables:
    link = table.find('a')
    top100 = link.string
    print(top100)
This finds the first <a> link in each table. To get the text of all the links:
for table in alltables:
    links = table.find_all('a')
    top100 = [link.string for link in links]
    print(top100)
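Putting the fix back into the original script gives a minimal, self-contained sketch; .get_text() is used as a fallback here because .string returns None when an anchor wraps other tags, and the site's table layout may have changed since the question was asked:
from urllib.request import urlopen
from bs4 import BeautifulSoup

url = 'http://www.craftcount.com/category.php?cat=5'
html = urlopen(url).read()
soup = BeautifulSoup(html, 'html.parser')

results = []
for table in soup.findAll('table'):
    for link in table.findAll('a'):
        # .get_text() returns the link text even when the anchor wraps other tags
        results.append(link.get_text())

print(results)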
