Python text parsing between two words

I'm using beautifulsoup and want to extract all text from between two words on a webpage.
For example, imagine the following website text:
This is the text of the webpage. It is just a string of a bunch of stuff and maybe some tags in between.
I want to pull out everything on the page that starts with text and ends with bunch.
In this case I'd want only:
text of the webpage. It is just a string of a bunch
However, there's a chance there could be multiple instances of this on a page.
What is the best way to do this?
This is my current setup:
#!/usr/bin/env python
import re

from mechanize import Browser
from BeautifulSoup import BeautifulSoup

mech = Browser()
urls = [
    'http://ca.news.yahoo.com/forget-phoning-business-app-sends-text-instead-100143774--sector.html'
]

def visible(element):
    if element.parent.name in ['style', 'script', '[document]', 'head', 'title']:
        # If the parent of the element is one of these tags, ignore it
        return False
    elif re.match('<!--.*-->', str(element)):
        # If the element is an HTML comment, ignore it
        return False
    else:
        # Otherwise keep it: these are the elements we need
        return True

for url in urls:
    page = mech.open(url)
    html = page.read()
    soup = BeautifulSoup(html)
    text = soup.prettify()
    texts = soup.findAll(text=True)
    # filter() keeps only the items in texts for which visible() returns True;
    # we use those to build our final list
    visible_texts = filter(visible, texts)
    for line in visible_texts:
        print line

Since you're just parsing text, all you need is a regex:
import re
result = re.findall("text.*?bunch", text_from_web_page)
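As a quick check, running it on the sample sentence from the question yields exactly the span the asker wants; the non-greedy .*? stops at the nearest "bunch", and passing re.DOTALL lets matches span line breaks on a real page:

import re

text_from_web_page = ("This is the text of the webpage. It is just a string "
                      "of a bunch of stuff and maybe some tags in between.")
print(re.findall("text.*?bunch", text_from_web_page, re.DOTALL))
# ['text of the webpage. It is just a string of a bunch']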

Related

Parsing HTML data with BeautifulSoup - cannot extract the 'href' out in one string

I'm trying to parse the HTML to get the 'href' link.
My code is splitting the href into separate characters, but I'm hoping to get the complete string.
Here is my code:
import requests
from bs4 import BeautifulSoup

data = requests.get("https://www.chewy.com/b/food_c332_p2",
                    auth=('user', 'pass'),
                    headers={'User-Agent': user_agent})

with open("dogfoodpage/dg2.html", "w+") as f:
    f.write(data.text)

with open("dogfoodpage/dg2.html") as f:
    page = f.read()

soup = BeautifulSoup(page, "html.parser")
test = soup.find('a', class_="kib-product-title")
productlink = []
for items in test:
    for link in items.get("href"):
        productlink.append(link)
My output (the href split into single characters) and the HTML structure for test were shown in screenshots.
test = soup.find_all('a', class_="kib-product-title")
productlink = []
for link in test:
    productlink.append(link.get('href'))
This should work. The find method returns a single element, so looping over its href string yields individual characters; find_all returns every matching link, and iterating over that list gives you each URL as a complete string.
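For reference, the same collection written as a one-line list comprehension over the same soup object:

productlink = [a.get("href") for a in soup.find_all("a", class_="kib-product-title")]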

How do I get the first 3 sentences of a webpage in python?

I have an assignment where one of the things I can do is find the first 3 sentences of a webpage and display them. Finding the webpage text is easy enough, but I'm having problems figuring out how to find the first 3 sentences.
import requests
from bs4 import BeautifulSoup

url = 'https://www.troyhunt.com/the-773-million-record-collection-1-data-reach/'
res = requests.get(url)
html_page = res.content
soup = BeautifulSoup(html_page, 'html.parser')
text = soup.find_all(text=True)

output = ''
blacklist = [
    '[document]',
    'noscript',
    'header',
    'html',
    'meta',
    'head',
    'input',
    'script'
]

for t in text:
    if t.parent.name not in blacklist:
        output += '{} '.format(t)

tempout = output.split('.')
for i in range(len(tempout)):
    if i >= 3:
        tempout.remove(i)
output = '.'.join(tempout)
print(output)
Finding sentences in text is difficult. Normally you would look for characters that can complete a sentence, such as '.' and '!'. But a period can also appear in the middle of a sentence, for example in the abbreviation of a person's name. I use a regular expression that looks for a period followed by either a single space or the end of the string, which works for the first three sentences here, but not for any arbitrary sentence.
import requests
from bs4 import BeautifulSoup
import re

url = 'https://www.troyhunt.com/the-773-million-record-collection-1-data-reach/'
res = requests.get(url)
html_page = res.content
soup = BeautifulSoup(html_page, 'html.parser')
paragraphs = soup.select('section.article_text p')

sentences = []
for paragraph in paragraphs:
    # a sentence ends with '.' or '!' followed by a space or the end of the string
    matches = re.findall(r'(.+?[.!])(?: |$)', paragraph.text)
    needed = 3 - len(sentences)
    found = len(matches)
    n = min(found, needed)
    for i in range(n):
        sentences.append(matches[i])
    if len(sentences) == 3:
        break
print(sentences)
Prints:
['Many people will land on this page after learning that their email address has appeared in a data breach I\'ve called "Collection #1".', "Most of them won't have a tech background or be familiar with the concept of credential stuffing so I'm going to write this post for the masses and link out to more detailed material for those who want to go deeper.", "Let's start with the raw numbers because that's the headline, then I'll drill down into where it's from and what it's composed of."]
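If you need reliable sentence boundaries on arbitrary text, a trained tokenizer handles abbreviations better than a hand-written regex. A minimal sketch with NLTK's punkt tokenizer (a different library than the answers above use; requires a one-time model download):

import nltk
from nltk.tokenize import sent_tokenize

nltk.download('punkt')  # one-time download of the sentence tokenizer model
text = " ".join(p.text for p in paragraphs)  # paragraphs from the snippet above
print(sent_tokenize(text)[:3])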
To scrape the first three sentences, just add these lines to your code:
section = soup.find('section', class_="article_text post")  # finds the section tag with class "article_text post"
txt = section.p.text  # gets the text within the first p tag of that section
print(txt)
Output:
Many people will land on this page after learning that their email address has appeared in a data breach I've called "Collection #1". Most of them won't have a tech background or be familiar with the concept of credential stuffing so I'm going to write this post for the masses and link out to more detailed material for those who want to go deeper.
Hope that this helps!
Looking at the page source, you can filter by the class "article_text post" using Beautiful Soup:
myData = soup.find('section', class_="article_text post")
print(myData.p.text)
This gets the inner text of the first p element; use this targeted lookup in place of scanning every text node after soup = BeautifulSoup(html_page, 'html.parser').

Extracting data from HTML bulleted lists in Python

I have an html document with the following bulleted list:
Body=<ul><li>Preconditions<ul><li>PC1</li><li>PC2</li></ul></li><li>Use Case Triggers<ul><li>T1</li><li>T2</li></ul></li><li>Postconditions<ul><li>PO1</li><li>PO2</li></ul></li></ul>
(Alternative View):
PreconditionsPC1PC2Use Case TriggersT1T2PostconditionsPO1PO2
I'm trying to write a function in Python that will dissect this list and pull out groups of data. The goal is to put this data in a matrix that would look like the following:
[[Preconditions, PC1],[Preconditions, PC2],[Use Case Triggers, T1],[Use Case Triggers, T2],[Postconditions, PO1],[Postconditions,PO2]]
The other hurdle is that this matrix needs to be generated regardless of the number of ul and li elements.
Any guidance is appreciated!
You can write a function that takes raw HTML and deletes all HTML tags:
import re

def cleanhtml(raw_html):
    cleanr = re.compile("<.*?>|&([a-z0-9]+|#[0-9]{1,6}|#x[0-9a-f]{1,6});")
    cleantext = re.sub(cleanr, " ", raw_html)
    return cleantext
Some other cleanr options:
cleanr = re.compile("<[A-Za-z\/][^>]*>")
cleanr = re.compile("<[^>]*>")
cleanr = re.compile("<\/?\w+\s*[^>]*?\/?>")
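Applied to the question's Body string, cleanhtml flattens the markup into roughly the "Alternative View" text, with a space wherever a tag used to be:

body = ("<ul><li>Preconditions<ul><li>PC1</li><li>PC2</li></ul></li>"
        "<li>Use Case Triggers<ul><li>T1</li><li>T2</li></ul></li>"
        "<li>Postconditions<ul><li>PO1</li><li>PO2</li></ul></li></ul>")
print(cleanhtml(body))
# ' Preconditions PC1  PC2    Use Case Triggers T1  T2    Postconditions PO1  PO2    ' (whitespace approximate)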
But there is a better and easier way with BeautifulSoup:
import requests
from bs4 import BeautifulSoup

def clean_with_soup(url: str) -> str:
    r = requests.get(url).text
    soup = BeautifulSoup(r, "html.parser")
    return soup.get_text()
A good library for parsing HTML is BeautifulSoup. Code example:
html = "<ul><li>Preconditions<ul><li>PC1</li><li>PC2</li></ul></li><li>Use Case Triggers<ul><li>T1</li><li>T2</li></ul></li><li>Postconditions<ul><li>PO1</li><li>PO2</li></ul></li></ul>"
from bs4 import BeautifulSoup
bs = BeautifulSoup(html, "html.parser")
uls = bs.findAll("ul")
for ul in uls:
print(ul.findAll("li"))
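Neither snippet above produces the matrix the question asks for. Here is a minimal sketch (assuming the two-level structure shown in the question) that pairs each top-level label with the items of its nested list:

from bs4 import BeautifulSoup

html = "<ul><li>Preconditions<ul><li>PC1</li><li>PC2</li></ul></li><li>Use Case Triggers<ul><li>T1</li><li>T2</li></ul></li><li>Postconditions<ul><li>PO1</li><li>PO2</li></ul></li></ul>"
soup = BeautifulSoup(html, "html.parser")

matrix = []
# each direct child li of the outer ul starts with a text label, followed by a nested ul
for li in soup.ul.find_all("li", recursive=False):
    label = li.contents[0].strip()      # the text node that precedes the nested ul
    for item in li.ul.find_all("li"):   # items of the nested list
        matrix.append([label, item.get_text()])

print(matrix)
# [['Preconditions', 'PC1'], ['Preconditions', 'PC2'], ['Use Case Triggers', 'T1'], ...]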

Beautifulsoup remove pages numbers at bottom

I'm trying to remove the page numbers from this HTML. They seem to follow the pattern '\n', 'number', '\n' if you look at the list texts. Would I be able to do it with BeautifulSoup? If not, how do I remove that pattern from the list?
import requests
from bs4 import BeautifulSoup
from bs4.element import Comment

def tag_visible(element):
    if element.parent.name in ['sup']:
        return False
    if isinstance(element, Comment):
        return False
    return True

url = 'https://www.sec.gov/Archives/edgar/data/1318605/000156459018019254/tsla-10q_20180630.htm'
html = requests.get(url)
soup = BeautifulSoup(html.text, 'html.parser')
texts = soup.findAll(text=True)
### could remove ['\n','number','\n'] here
visible_texts = filter(tag_visible, texts)
You can try extracting the tags that contain the page numbers from the soup before getting the text; in this filing, each page number sits in a p tag just before an hr.
soup = BeautifulSoup(html.text, 'html.parser')
for hr in soup.select('hr'):
    # the page-number paragraph immediately precedes each horizontal rule
    hr.find_previous('p').extract()
texts = soup.findAll(text=True)
This removes the page-number tags, which look like this in the source:
<p style="text-align:center;margin-top:12pt;margin-bottom:0pt;text-indent:0%;font-size:10pt;font-family:Times New Roman;font-weight:normal;font-style:normal;text-transform:none;font-variant: normal;">57</p>
<p style="text-align:center;margin-top:12pt;margin-bottom:0pt;text-indent:0%;font-size:10pt;font-family:Times New Roman;font-weight:normal;font-style:normal;text-transform:none;font-variant: normal;">58</p>
... etc.
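If you'd rather clean the extracted list itself, as the question suggests, a post-filter that drops digit-only text nodes is a simple sketch (it assumes standalone digit-only nodes in this filing are always page numbers):

import re

visible_texts = [t for t in filter(tag_visible, texts)
                 if not re.fullmatch(r'\d+', t.strip())]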

Parsing HTML with lxml (python)

I'm trying to save the content of an HTML page to a .html file, but I only want to save the content under the tag "table". In addition, I'd like to remove all empty tags like <b></b>.
I did all these things already with BeautifulSoup:
import urllib2
from bs4 import BeautifulSoup

f = urllib2.urlopen('http://test.xyz')
html = f.read()
f.close()

soup = BeautifulSoup(html)
txt = ""
for text in soup.find_all("table", {'class': 'main'}):
    txt += str(text)

text = BeautifulSoup(text)
empty_tags = text.find_all(lambda tag: tag.name == 'b' and tag.find(True) is None
                           and (tag.string is None or tag.string.strip() == ""))
[empty_tag.extract() for empty_tag in empty_tags]
My question is: is this also possible with lxml? If yes, roughly how would it look?
Thanks a lot for any help.
import lxml.html

# lxml can download pages directly
root = lxml.html.parse('http://test.xyz').getroot()

# use a CSS selector for class="main",
# or use root.xpath('//table[@class="main"]')
tables = root.cssselect('table.main')

# extract HTML content from all tables;
# use lxml.html.tostring(t, method="text", encoding=unicode)
# to get text content without tags
"\n".join([lxml.html.tostring(t) for t in tables])

# remove only specific empty tags, here <b></b> and <i></i>
for empty in root.xpath('//*[self::b or self::i][not(node())]'):
    empty.getparent().remove(empty)

# remove all empty tags (tags that have no child nodes)
for empty in root.xpath('//*[not(node())]'):
    empty.getparent().remove(empty)

# root no longer contains those empty tags
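To actually save the result to a .html file, as the question asks, you can join the serialized tables and write them out; a small addition to the answer above (note that lxml.html.tostring returns bytes by default):

content = b"\n".join(lxml.html.tostring(t) for t in tables)
with open("tables.html", "wb") as f:
    f.write(content)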
