Parsing multiple news articles - Python

I have built a program for summarization that uses a parser to parse multiple websites at a time. I extract only the <p> tags from each article.
This pulls in a lot of random content that is unrelated to the article. I've seen several people who can parse any article perfectly. How can I do that? I am using Beautiful Soup.

It might be worth trying an existing package like python-goose, which does what it sounds like you're asking for: extracting article content from web pages.
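A minimal sketch (the URL is a placeholder; the maintained fork goose3 exposes the same interface via from goose3 import Goose):
import requests  # not needed; goose fetches the page itself
from goose import Goose

g = Goose()
# Hypothetical article URL; goose downloads the page and strips boilerplate
article = g.extract(url='https://example.com/some-article')
print(article.title)
print(article.cleaned_text)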

Your solution is really going to be specific to each website you want to scrape, so, without knowing the websites of interest, the only thing I can really suggest is to inspect the page source of each page you want to scrape. Look for whether the article is contained in some HTML element with a specific attribute (a unique class, id, or even summary attribute), and then use Beautiful Soup to get the inner text from that element.
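For instance, a minimal sketch, assuming inspection of a hypothetical site showed the article body inside a div with class "article-body":
import requests
from bs4 import BeautifulSoup

# 'article-body' is a made-up class name; substitute whatever the real site uses
soup = BeautifulSoup(requests.get('https://example.com/article').content, 'html.parser')
body = soup.find('div', class_='article-body')
if body is not None:
    print(body.get_text(separator='\n', strip=True))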

Crawling the text under a specific heading of any web page in Python

I have searched and gotten a little introduced to some of the web-crawling libraries in Python, like Scrapy and BeautifulSoup. Using these libraries, I want to crawl all of the text under a specific heading in a document. I have seen tutorials on how to get links under a specific class name (via the view-source option) using Beautiful Soup, but how can I get plain text, not links, under a specific heading class? Any help would be highly appreciated.
import requests
from bs4 import BeautifulSoup

r = requests.get('https://patents.google.com/patent/US6886010B2/en')
# print(r.content)  # dumps the raw page for debugging
soup = BeautifulSoup(r.content, 'html.parser')  # pass a parser explicitly
for link in soup.find_all("div", class_="claims"):
    print(link)
Here I have extracted the claims, but the output also includes the other divs nested inside the claims (divs within divs). I just want to extract the text of the claims only.
By links, I assume you mean the entire contents of the div elements. If you'd like to print just the text contained within them, use the .text attribute or the .get_text() method. The entire text of the claims is wrapped inside a unique section element, so you might want to try this:
print(soup.find('section', attrs={'id': 'claims'}).text)
The get_text method gives you a bit more flexibility such as joining bits of text together with a separator and stripping the text of extra newlines.
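For example, to join the claim text with spaces and strip surrounding whitespace:
print(soup.find('section', attrs={'id': 'claims'}).get_text(separator=' ', strip=True))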
Also, take a look at the BeautifulSoup documentation and spend some time reading it.

Scraping webpage with "nonsensical" tags

I am attempting to build a web scraper to aggregate information on state-level House and Senate bills. I am using Python, and I can pull the HTML from the page, but parsing it is giving me difficulty. For example, the New Hampshire bill page wraps information in "nonsensically" named tags. Here is an example page: http://www.gencourt.state.nh.us/bill_status/billText.aspx?sy=2017&id=14&txtFormat=html. How would I go about pulling, for example, the number of the bill from the long list of tags?
If I had to guess, I'd say that markup was generated by some sort of WYSIWYG editor. (The presence of invalid CSS properties like tab-stops suggests that it might be output from a word processor.) If this is the case, the exact usage of classes in the output is unlikely to be consistent between documents.
With this in mind, your best bet will probably be to ignore the markup entirely and parse the text.
Open the page in a browser, right-click on something you want to be able to pull, and use Inspect to see the class name used for that element. For instance, if you inspect the bill number, you'll see that it's
<span class="cs4904F745">76</span>
So in your web scraping code, search for the class cs4904F745 to get the bill number. These things may look random, but I've checked a few documents and they're consistent.
You can use the BeautifulSoup library to parse the HTML and search for what you want.
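A minimal sketch, assuming (as checked above) that the class name stays consistent across documents:
import requests
from bs4 import BeautifulSoup

url = 'http://www.gencourt.state.nh.us/bill_status/billText.aspx?sy=2017&id=14&txtFormat=html'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')
# cs4904F745 is the WYSIWYG-generated class observed on the bill number
bill_number = soup.find('span', class_='cs4904F745')
if bill_number is not None:
    print(bill_number.get_text(strip=True))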

How to extract article contents from websites with different layouts

I have a list of 1,000 URLs of articles published by different agencies, and of course each has its own HTML layout.
I am writing Python code to extract ONLY the article body from each URL. Can this be done by looking only at the <p></p> paragraph tags?
Will I be missing some content, or including irrelevant content, with this approach?
Thanks
For some articles you will be missing content, and for others you will include irrelevant content. There is really no way to grab just the article body from a URL since each site layout will likely vary significantly.
One thing you could try is grabbing text contained in multiple consecutive p tags inside the body tag, but there is still no guarantee you will get just the body of the article.
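A rough sketch of that consecutive-<p> heuristic, assuming article bodies tend to appear as runs of sibling paragraphs (min_run is an arbitrary threshold):
import requests
from bs4 import BeautifulSoup

def paragraph_runs(url, min_run=3):
    """Return text from runs of at least min_run consecutive sibling <p> tags."""
    soup = BeautifulSoup(requests.get(url).content, 'html.parser')
    runs = []
    for p in soup.find_all('p'):
        prev = p.find_previous_sibling()
        if prev is not None and prev.name == 'p':
            continue  # not the first <p> of its run; already handled
        run = [p]
        nxt = p.find_next_sibling()
        while nxt is not None and nxt.name == 'p':
            run.append(nxt)
            nxt = nxt.find_next_sibling()
        if len(run) >= min_run:
            runs.append(' '.join(el.get_text(strip=True) for el in run))
    return runs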
It would be a lot easier if you broke the list of URLs into a separate list for each distinct site; that way you could define what the article body is case by case.
To answer your question: it's highly unlikely you can get ONLY the article content by targeting <p></p> tags. You WILL get a lot of unnecessary content that will take a ton of effort to filter through, guaranteed.
Try to find an RSS feed for these websites. That will make scraping target data much easier than parsing an entire HTML page.
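For example, a sketch using the feedparser package (my choice of library; the feed URL is hypothetical):
import feedparser

# Many news sites expose a feed at /rss or /feed
feed = feedparser.parse('https://example.com/rss')
for entry in feed.entries:
    print(entry.title)
    print(entry.link)
    # feed summaries are usually much cleaner than full page HTML
    print(entry.get('summary', ''))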

Beautiful Soup Classic Confusion

I'm working with Python and BeautifulSoup, and I'm a bit new to CSS markup, so I know I'm making some beginner mistakes; a specific example would go a long way in helping me understand.
I'm trying to scrape a page for links, but only certain links.
My attempt:
links = soup.find_all("a", class_="details-title")
The code you have will search for links with the details-title class, which don't exist in the sample you provided. It seems like you are trying to find links located inside divs with the details-title class. I believe the easiest way to do this is to search using CSS selectors, which you can do with Beautiful Soup's .select method.
Example: links = soup.select("div.details-title a")
The <tag>.<class> syntax searches for all tags with that class, and selectors separated by a space match descendants of the results before them. See the Beautiful Soup documentation on CSS selectors for more information.
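As an illustration, with hypothetical markup matching the structure described above:
from bs4 import BeautifulSoup

html = '''
<div class="details-title"><a href="/item/1">First</a></div>
<div class="details-title"><a href="/item/2">Second</a></div>
'''
soup = BeautifulSoup(html, 'html.parser')
links = soup.select("div.details-title a")
print([a['href'] for a in links])  # ['/item/1', '/item/2']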

How to read content from a hyperlink on a website

I'm looking for a method to read the content behind a hyperlink on a website. Is it possible?
For example:
website = "WEBSITE"
openwebsite = urllib2.urlopen(website)
hyperlink = _some_method_to_find_hyperlink(openwebsite)
get_content_from_hyperlink(hyperlink)
I was searching in Beautiful Soup, but I couldn't find anything useful.
I'm thinking I could use a loop to find the relevant hyperlink and then use urllib2 again, but the website is quite large and it would take ages.
You could try the Beautiful Soup package, which enables you to parse HTML and thus extract any tag you might be looking for.
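A minimal sketch matching the urllib2 style of the question ("WEBSITE" stays a placeholder, and the anchor-text criterion is hypothetical):
import urllib2
from urlparse import urljoin
from bs4 import BeautifulSoup

website = "WEBSITE"
html = urllib2.urlopen(website).read()
soup = BeautifulSoup(html, 'html.parser')

# Hypothetical criterion: pick the link by its anchor text
link = soup.find('a', string='Some link text')
if link is not None:
    # resolve relative URLs against the page we fetched
    target = urljoin(website, link['href'])
    linked_page = urllib2.urlopen(target).read()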
