So I'm making a little project as a beginner, and I'm doing some web scraping. I wanted to print the lyrics of a song, each line on its own line, using BeautifulSoup in Python, but instead it's printing like this:
I looked out this morning and the sun was goneTurned on some music to start my dayI lost myself in a familiar songI closed my eyes and I slipped awayIt's more than a feeling (more than a feeling)When I hear that old song they used to play (more than a feeling)And I begin dreaming (more than a feeling)Till I see Marianne walk awayI see my Marianne walkin' awaySo many people have come and goneTheir faces fade as the years go byYet I still recall as I wander onAs clear as the sun in the summer skyIt's more than a feeling (more than a feeling)When I hear that old song they used to play (more than a feeling)And I begin dreaming (more than a feeling)Till I see Marianne walk awayI see my Marianne walkin' awayWhen I'm tired and thinking coldI hide in my music, forget the dayAnd dream of a girl I used to knowI closed my eyes and she slipped awayShe slipped awayIt's more than a feeling (more than a feeling)When I hear that old song they used to play (more than a feeling)And I begin dreaming (more than a feeling)Till I see Marianne walk away
This is my code:
import urllib
from bs4 import BeautifulSoup
html = urllib.urlopen("http://www.metrolyrics.com/more-than-a-feeling-lyrics-boston.html")
bsObj = BeautifulSoup(html, "lxml")
namelist = bsObj.find_all("div", {"id": "lyrics-body-text"})
print("".join([p.get_text(strip=True) for p in namelist]))
You need to remove the strip=True parameter to get_text. That parameter strips the whitespace (including newlines) around each piece of text before joining, which is why everything runs together in the output you see.
By removing it:
print("".join([p.get_text() for p in namelist]))
It prints fine.
Try writing it as a simple for loop:
for p in namelist:
    print(p.get_text(strip=True))
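If you want to keep the stripping but still get one lyric line per line, get_text() also takes a separator argument; a minimal sketch, assuming the same namelist as above:
for p in namelist:
    # separator="\n" re-inserts a newline between text fragments,
    # while strip=True still trims whitespace around each one
    print(p.get_text(separator="\n", strip=True))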
I am new to Python, and am wondering if anyone can help me with some file loading.
The situation is that I have some text files and I'm trying to do sentiment analysis. Each line of the file is split into three fields: <department>, <user>, <review>.
Here is some sample data:
men peter123 the pants are too tight for my liking!
kids georgel i really like this toy, it keeps my kid entertained for days! It is affordable and comes on time, i strongly recommend it
health kksd1 the health pills is drowsy by nature, please take care and do not drive after you eat the pills
office ty7d1 the printer came on time, the only problem with it is with the duplex function which i suspect its not really working
I want to turn each line into this:
<department> <user> <review>
I have 50k lines of this data.
I tried loading it directly into numpy, but it raises an empty-separator error. I looked on Stack Overflow, but I couldn't find a case that deals with a varying number of delimiters. For instance, I can never know in advance how many spaces there are in my data set.
My biggest problem is: how do you count the delimiters and assign the columns? Is there a way to split each line into the three fields <department>, <user>, <review>? Bear in mind that the review text can contain arbitrary commas and spaces which I can't control, so the parsing has to be smart enough to cope with that.
Any ideas? Is there a way I can tell Python that after it reads the user field, everything that follows falls under review?
With data like this I'd just use split() with the maxsplit argument:
If maxsplit is given, at most maxsplit splits are done (thus, the list will have at most maxsplit+1 elements).
Example:
from StringIO import StringIO

s = StringIO("""men peter123 the pants are too tight for my liking!
kids georgel i really like this toy, it keeps my kid entertained for days! It is affordable and comes on time, i strongly recommend it
health kksd1 the health pills is drowsy by nature, please take care and do not drive after you eat the pills
office ty7d1 the printer came on time, the only problem with it is with the duplex function which i suspect its not really working""")

for line in s:
    category, user, review = line.split(None, 2)
    print("category: {} - user: {} - review: '{}'".format(category,
                                                          user,
                                                          review.strip()))
The output is:
category: men - user: peter123 - review: 'the pants are too tight for my liking!'
category: kids - user: georgel - review: 'i really like this toy, it keeps my kid entertained for days! It is affordable and comes on time, i strongly recommend it'
category: health - user: kksd1 - review: 'the health pills is drowsy by nature, please take care and do not drive after you eat the pills'
category: office - user: ty7d1 - review: 'the printer came on time, the only problem with it is with the duplex function which i suspect its not really working'
For reference:
https://docs.python.org/2/library/stdtypes.html#str.split
What about doing it sorta manually:
data = []
for line in input_data:
    tmp_split = line.split(" ")
    # Get the first part (dept)
    dept = tmp_split[0]
    # get the 2nd part
    user = tmp_split[1]
    # everything after is the review - put spaces in between each piece
    review = " ".join(tmp_split[2:])
    data.append([dept, user, review])
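Since the question mentions loading 50k lines for analysis, here's a minimal sketch of reading the whole file into a table this way, assuming the data lives in a file called reviews.txt (the filename and the use of pandas are my assumptions, not from the question):
import pandas as pd

rows = []
with open("reviews.txt") as f:
    for line in f:
        line = line.strip()
        if line:
            rows.append(line.split(None, 2))  # at most 3 fields per line

df = pd.DataFrame(rows, columns=["department", "user", "review"])
print(df.head())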
I've tested my regex with Pythex and it works as it's supposed to:
The HTML:
Something Very Important (SVI) 2013 Sercret Information, Big Company
Name (LBCN) Catalog Number BCN2013R18 and BSSN 3-55564-789-Y, was
developed as part of the SUP 2012 Something Task force was held in
conjunction with *SEM 2013, the second joint conference on study of
banana hand grenades and gorilla tactics (Association of Ape Warfare
Studies) interest groups BUDDY HOLLY and LION KING. It is comprised of
one hairy object containing 750 gross stories told in the voice of
Morgan Freeman and his trusty sidekick Michelle Bachman.
My regex:
,[\s\w()-]+,
When used with Pythex it selects the area I'm looking for, which is between the 2 commas in the paragraph:
Something Very Important (SVI) 2013 Sercret Information , Big
Company Name (LBCN) Catalog Number BCN2013R18 and BSSN
3-55564-789-Y, was developed as part of the SUP 2012 Something Task
force was held in conjunction with <a href="http://justaURL.com">*SEM
2013</a>, the second joint
conference on study of banana hand grenades and gorilla tactics
(Association of Ape Warfare Studies) interest groups BUDDY HOLLY and
LION KING. It is comprised of one hairy object containing 750 gross
stories told in the voice of Morgan Freeman and his trusty sidekick
Michelle Bachman.
However when I use BeautifulSoup's text regex:
print HTML.body.p.find_all(text=re.compile('\,[\s\w()-]+\,'))
I'm returned this instead of the area between the commas:
[u'Something Very Important (SVI) 2013 Sercret Information, Big Company Name (LBCN) Catalog Number BCN2013R18 and BSSN 3-55564-789-Y, was developed as part of the SUP 2012 Something Task force was held in conjunction with ']
I've also tried escaping the commas, but no luck. BeautifulSoup just wants to return the whole <p> instead of the part my regex specifies. Also, I noticed that it returns the paragraph only up until that link in the middle. Is this a problem with how I'm using BeautifulSoup, or is it a regex problem?
BeautifulSoup uses the regular expression to search for matching elements; a text node whose content matches is returned whole. (That's also why the result stops at the link: the <a> tag ends that text node.)
You still have to extract the part you want yourself; BeautifulSoup does not do this for you. You could just reuse your regex here:
import re

expression = re.compile(r',[\s\w()-]+,')
# find_all returns a list of matching text nodes, so loop over them
for textnode in HTML.body.p.find_all(text=expression):
    print expression.search(textnode).group(0)
I've written a Scrapy spider that extracts text from a page. The spider parses and outputs correctly on many of the pages, but is thrown off by a few. I'm trying to maintain line breaks and formatting in the document. Pages such as http://www.state.gov/r/pa/prs/dpb/2011/04/160298.htm are formatted properly like such:
April 7, 2011
Mark C. Toner
2:03 p.m. EDT
MR. TONER: Good afternoon, everyone. A couple of things at the top,
and then I’ll take your questions. We condemn the attack on innocent
civilians in southern Israel in the strongest possible terms, as well
as ongoing rocket fire from Gaza. As we have reiterated many times,
there’s no justification for the targeting of innocent civilians,
and those responsible for these terrorist acts should be held
accountable. We are particularly concerned about reports that indicate
the use of an advanced anti-tank weapon in an attack against civilians
and reiterate that all countries have obligations under relevant
United Nations Security Council resolutions to prevent illicit
trafficking in arms and ammunition. Also just a brief statement --
QUESTION: Can we stay on that just for one second?
MR. TONER: Yeah. Go ahead, Matt.
QUESTION: Apparently, the target of that was a school bus. Does that
add to your outrage?
MR. TONER: Well, any attack on innocent civilians is abhorrent, but
certainly the nature of the attack is particularly so.
While pages like http://www.state.gov/r/pa/prs/dpb/2009/04/121223.htm have output like this with no line breaks:
April 2, 2009
Robert Wood
11:53 a.m. EDTMR. WOOD: Good morning, everyone. I think it’s just
about still morning. Welcome to the briefing. I don’t have anything,
so – sir.QUESTION: The North Koreans have moved fueling tankers, or
whatever, close to the site. They may or may not be fueling this
missile. What words of wisdom do you have for the North Koreans at
this moment?MR. WOOD: Well, Matt, I’m not going to comment on, you
know, intelligence matters. But let me just say again, we call on the
North to desist from launching any type of missile. It would be
counterproductive. It’s provocative. It further inflames tensions in
the region. We want to see the North get back to the Six-Party
framework and focus on denuclearization.Yes.QUESTION: Japan has also
said they’re going to call for an emergency meeting in the Security
Council, you know, should this launch go ahead. Is this something that
you would also be looking for?MR. WOOD: Well, let’s see if this test
happens. We certainly hope it doesn’t. Again, calling on the North
not to do it. But certainly, we will – if that test does go forward,
we will be having discussions with our allies.
The code I'm using is as follows:
def parse_item(self, response):
    self.log('Hi, this is an item page! %s' % response.url)
    hxs = HtmlXPathSelector(response)
    speaker = hxs.select("//span[contains(@class, 'official_s_name')]") # gets the speaker
    speaker = speaker.select('string()').extract()[0] # extracts speaker text
    date = hxs.select('//*[@id="date_long"]') # gets the date
    date = date.select('string()').extract()[0] # extracts the date
    content = hxs.select('//*[@id="centerblock"]') # gets the content
    content = content.select('string()').extract()[0] # extracts the content
    texts = "%s\n\n%s\n\n%s" % (date, speaker, content) # puts everything together in a string
    filename = "/path/StateDailyBriefing-%s.txt" % date # creates a file name using the date
    # opens the file defined above and writes 'texts' using utf-8
    with codecs.open(filename, 'w', encoding='utf-8') as output:
        output.write(texts)
I think the problem lies in the formatting of the HTML of the pages. On the pages that output the text incorrectly, the paragraphs are separated by <br> <p></p>, while on the pages that output correctly the paragraphs are contained within <p align="left" dir="ltr">. So, while I've identified this, I'm not sure how to make everything output consistently in the correct form.
The problem is that when you get text() or string(), <br> tags are not converted to newlines.
Workaround: replace the <br> tags before doing the XPath queries. Code:
response = response.replace(body=response.body.replace('<br />', '\n'))
hxs = HtmlXPathSelector(response)
And let me give some advice: if you know there is only one node, you can use text() instead of string():
date = hxs.select('//*[@id="date_long"]/text()').extract()[0]
Try this xpath:
//*[#id="centerblock"]//text()
I'd like to write a script that gets the Wikipedia description section only. That is, when I say
/wiki bla bla bla
it will go to the Wikipedia page for bla bla bla, get the following, and return it to the chatroom:
"Bla Bla Bla" is the name of a song
made by Gigi D'Agostino. He described
this song as "a piece I wrote thinking
of all the people who talk and talk
without saying anything". The
prominent but nonsensical vocal
samples are taken from UK band
Stretch's song "Why Did You Do It"
How can I do this?
Here are a few different possible approaches; use whichever works for you. All my code examples below use requests for HTTP requests to the API; you can install requests with pip install requests if you have Pip. They also all use the Mediawiki API, and two use the query endpoint; follow those links if you want documentation.
1. Get a plain text representation of either the entire page or the page "extract" straight from the API with the extracts prop
Note that this approach only works on MediaWiki sites with the TextExtracts extension. This notably includes Wikipedia, but not some smaller MediaWiki sites like, say, http://www.wikia.com/
You want to hit a URL like
https://en.wikipedia.org/w/api.php?action=query&format=json&titles=Bla_Bla_Bla&prop=extracts&exintro&explaintext
Breaking that down, we've got the following parameters in there (documented at https://www.mediawiki.org/wiki/Extension:TextExtracts#query+extracts):
action=query, format=json, and titles=Bla_Bla_Bla are all standard MediaWiki API parameters
prop=extracts makes us use the TextExtracts extension
exintro limits the response to content before the first section heading
explaintext makes the extract in the response be plain text instead of HTML
Then parse the JSON response and extract the extract:
>>> import requests
>>> response = requests.get(
... 'https://en.wikipedia.org/w/api.php',
... params={
... 'action': 'query',
... 'format': 'json',
... 'titles': 'Bla Bla Bla',
... 'prop': 'extracts',
... 'exintro': True,
... 'explaintext': True,
... }
... ).json()
>>> page = next(iter(response['query']['pages'].values()))
>>> print(page['extract'])
"Bla Bla Bla" is the title of a song written and recorded by Italian DJ Gigi D'Agostino. It was released in May 1999 as the third single from the album, L'Amour Toujours. It reached number 3 in Austria and number 15 in France. This song can also be heard in an added remixed mashup with L'Amour Toujours (I'll Fly With You) in its US radio version.
2. Get the full HTML of the page using the parse endpoint, parse it, and extract the first paragraph
MediaWiki has a parse endpoint that you can hit with a URL like https://en.wikipedia.org/w/api.php?action=parse&page=Bla_Bla_Bla to get the HTML of a page. You can then parse it with an HTML parser like lxml (install it first with pip install lxml) to extract the first paragraph.
For example:
>>> import requests
>>> from lxml import html
>>> response = requests.get(
... 'https://en.wikipedia.org/w/api.php',
... params={
... 'action': 'parse',
... 'page': 'Bla Bla Bla',
... 'format': 'json',
... }
... ).json()
>>> raw_html = response['parse']['text']['*']
>>> document = html.document_fromstring(raw_html)
>>> first_p = document.xpath('//p')[0]
>>> intro_text = first_p.text_content()
>>> print(intro_text)
"Bla Bla Bla" is the title of a song written and recorded by Italian DJ Gigi D'Agostino. It was released in May 1999 as the third single from the album, L'Amour Toujours. It reached number 3 in Austria and number 15 in France. This song can also be heard in an added remixed mashup with L'Amour Toujours (I'll Fly With You) in its US radio version.
3. Parse wikitext yourself
You can use the query API to get the page's wikitext, parse it using mwparserfromhell (install it first using pip install mwparserfromhell), then reduce it down to human-readable text using strip_code. strip_code doesn't work perfectly at the time of writing (as shown clearly in the example below) but will hopefully improve.
>>> import requests
>>> import mwparserfromhell
>>> response = requests.get(
... 'https://en.wikipedia.org/w/api.php',
... params={
... 'action': 'query',
... 'format': 'json',
... 'titles': 'Bla Bla Bla',
... 'prop': 'revisions',
... 'rvprop': 'content',
... }
... ).json()
>>> page = next(iter(response['query']['pages'].values()))
>>> wikicode = page['revisions'][0]['*']
>>> parsed_wikicode = mwparserfromhell.parse(wikicode)
>>> print(parsed_wikicode.strip_code())
{{dablink|For Ke$ha's song, see Blah Blah Blah (song). For other uses, see Blah (disambiguation)}}
"Bla Bla Bla" is the title of a song written and recorded by Italian DJ Gigi D'Agostino. It was released in May 1999 as the third single from the album, L'Amour Toujours. It reached number 3 in Austria and number 15 in France. This song can also be heard in an added remixed mashup with L'Amour Toujours (I'll Fly With You) in its US radio version.
Background and writing
He described this song as "a piece I wrote thinking of all the people who talk and talk without saying anything". The prominent but nonsensical vocal samples are taken from UK band Stretch's song "Why Did You Do It"''.
Music video
The song also featured a popular music video in the style of La Linea. The music video shows a man with a floating head and no arms walking toward what appears to be a shark that multiplies itself and can change direction. This style was also used in "The Riddle", another song by Gigi D'Agostino, originally from British singer Nik Kershaw.
Chart performance
Chart (1999-00)PeakpositionIreland (IRMA)Search for Irish peaks23
References
External links
Category:1999 singles
Category:Gigi D'Agostino songs
Category:1999 songs
Category:ZYX Music singles
Category:Songs written by Gigi D'Agostino
Use the MediaWiki API, which runs on Wikipedia. You will have to do some parsing of the data yourself.
For instance:
http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=json&titles=Bla%20Bla%20Bla
means
fetch (action=query) the content (rvprop=content) of the most recent revision of Bla Bla Bla (titles=Bla%20Bla%20Bla) in JSON format (format=json).
You will probably want to search for the query and use the first result, to handle spelling errors and the like.
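A minimal sketch of that search step, assuming requests (list=search is part of the same query API; taking the first hit handles misspellings reasonably well):
import requests

response = requests.get(
    'https://en.wikipedia.org/w/api.php',
    params={
        'action': 'query',
        'list': 'search',
        'srsearch': 'bla bla bla',  # the user's raw input
        'format': 'json',
    },
).json()

best_title = response['query']['search'][0]['title']  # e.g. 'Bla Bla Bla'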
You can get wiki data in text format. If you need to access many titles' information, you can get all of them in a single call: use the pipe character (|) to separate the titles.
http://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exlimit=max&explaintext&exintro&titles=Yahoo|Google&redirects=
This API call returns both Google's and Yahoo's data.
explaintext => Return extracts as plain text instead of limited HTML.
exlimit=max (currently 20) => Otherwise only one result is returned.
exintro => Return only the content before the first section. If you want the full data, just remove this.
redirects= => Resolve redirects.
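If you'd rather make that call from Python, a minimal sketch with requests that mirrors the URL above:
import requests

response = requests.get(
    'https://en.wikipedia.org/w/api.php',
    params={
        'format': 'json',
        'action': 'query',
        'prop': 'extracts',
        'exlimit': 'max',
        'explaintext': True,
        'exintro': True,
        'titles': 'Yahoo|Google',
        'redirects': True,
    },
).json()

for page in response['query']['pages'].values():
    print(page['title'], '->', page['extract'][:80])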
You can fetch just the first section using the API:
http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvsection=0&titles=Bla%20Bla%20Bla&rvprop=content
This will give you raw wikitext; you'll have to deal with templates and markup.
Or you can fetch the whole page rendered into HTML which has its own pros and cons as far as parsing:
http://en.wikipedia.org/w/api.php?action=parse&prop=text&page=Bla_Bla_Bla
I can't see an easy way to get parsed HTML of the first section in a single call but you can do it with two calls by passing the wikitext you receive from the first URL back with text= in place of the page= in the second URL.
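A minimal sketch of that two-call flow, assuming requests (the parse endpoint accepts wikitext via text=; contentmodel=wikitext is my addition for newer MediaWiki versions):
import requests

API = 'https://en.wikipedia.org/w/api.php'

# Call 1: raw wikitext of section 0
r1 = requests.get(API, params={
    'action': 'query', 'prop': 'revisions', 'rvsection': 0,
    'rvprop': 'content', 'titles': 'Bla Bla Bla', 'format': 'json',
}).json()
page = next(iter(r1['query']['pages'].values()))
wikitext = page['revisions'][0]['*']

# Call 2: render that wikitext to HTML
r2 = requests.get(API, params={
    'action': 'parse', 'text': wikitext,
    'contentmodel': 'wikitext', 'format': 'json',
}).json()
section_html = r2['parse']['text']['*']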
UPDATE
Sorry, I neglected the "plain text" part of your question. Get the part of the article you want as HTML; it's much easier to strip HTML than to strip wikitext!
DBPedia is the perfect solution for this problem. Look here: http://dbpedia.org/page/Metallica, at the perfectly organised data in RDF. You can query for anything at http://dbpedia.org/sparql using SPARQL, the query language for RDF. There's always a way to find the pageID to get descriptive text, but this should do for the most part.
There will be a learning curve for RDF and SPARQL for writing any useful code but this is the perfect solution.
For example, a query run for Metallica returns an HTML table with the abstract in several different languages:
<table class="sparql" border="1">
<tr>
<th>abstract</th>
</tr>
<tr>
<td><pre>"Metallica is an American heavy metal band formed..."#en</pre></td>
</tr>
<tr>
<td><pre>"Metallica es una banda de thrash metal estadounidense..."#es</pre></td>
...
SPARQL QUERY :
PREFIX dbpedia-owl: <http://dbpedia.org/ontology/>
PREFIX dbpprop: <http://dbpedia.org/property/>
PREFIX dbres: <http://dbpedia.org/resource/>
SELECT ?abstract WHERE {
    dbres:Metallica dbpedia-owl:abstract ?abstract .
}
Change "Metallica" to any resource name (resource name as in wikipedia.org/resourcename) for queries pertaining to abstract.
Alternatively, you can load the raw text of any wiki page like this:
https://bn.wikipedia.org/w/index.php?title=User:ShohagS&action=raw&ctype=text
where you change bn to your wiki's language code and User:ShohagS to the page name. In your case, use:
https://en.wikipedia.org/w/index.php?title=Bla_bla_bla&action=raw&ctype=text
In a browser, this returns the page's raw wikitext as plain text.
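A minimal sketch of fetching that from Python, assuming requests:
import requests

wikitext = requests.get(
    'https://en.wikipedia.org/w/index.php',
    params={'title': 'Bla_bla_bla', 'action': 'raw', 'ctype': 'text'},
).text
print(wikitext[:200])  # raw wikitext; markup still needs stripping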
You can use the wikipedia package for Python, and specifically the content attribute for the given page.
From the documentation:
>>> import wikipedia
>>> print wikipedia.summary("Wikipedia")
# Wikipedia (/ˌwɪkɨˈpiːdiə/ or /ˌwɪkiˈpiːdiə/ WIK-i-PEE-dee-ə) is a collaboratively edited, multilingual, free Internet encyclopedia supported by the non-profit Wikimedia Foundation...
>>> wikipedia.search("Barack")
# [u'Barak (given name)', u'Barack Obama', u'Barack (brandy)', u'Presidency of Barack Obama', u'Family of Barack Obama', u'First inauguration of Barack Obama', u'Barack Obama presidential campaign, 2008', u'Barack Obama, Sr.', u'Barack Obama citizenship conspiracy theories', u'Presidential transition of Barack Obama']
>>> ny = wikipedia.page("New York")
>>> ny.title
# u'New York'
>>> ny.url
# u'http://en.wikipedia.org/wiki/New_York'
>>> ny.content
# u'New York is a state in the Northeastern region of the United States. New York is the 27th-most exten'...
I think the better option is to use the extracts prop that the MediaWiki API provides. It returns only some tags (b, i, h#, span, ul, li) and removes tables, infoboxes, references, etc.
http://en.wikipedia.org/w/api.php?action=query&prop=extracts&titles=Bla%20Bla%20Bla&format=xml
gives you something very simple:
<api><query><pages><page pageid="4456737" ns="0" title="Bla Bla Bla"><extract xml:space="preserve">
<p>"<b>Bla Bla Bla</b>" is the title of a song written and recorded by Italian DJ Gigi D'Agostino. It was released in May 1999 as the third single from the album, <i>L'Amour Toujours</i>. It reached number 3 in Austria and number 15 in France. This song can also be heard in an added remixed mashup with <i>L'Amour Toujours (I'll Fly With You)</i> in its US radio version.</p> <p></p> <h2><span id="Background_and_writing">Background and writing</span></h2> <p>He described this song as "a piece I wrote thinking of all the people who talk and talk without saying anything". The prominent but nonsensical vocal samples are taken from UK band Stretch's song <i>"Why Did You Do It"</i>.</p> <h2><span id="Music_video">Music video</span></h2> <p>The song also featured a popular music video in the style of La Linea. The music video shows a man with a floating head and no arms walking toward what appears to be a shark that multiplies itself and can change direction. This style was also used in "The Riddle", another song by Gigi D'Agostino, originally from British singer Nik Kershaw.</p> <h2><span id="Chart_performance">Chart performance</span></h2> <h2><span id="References">References</span></h2> <h2><span id="External_links">External links</span></h2> <ul><li>Full lyrics of this song at MetroLyrics</li> </ul>
</extract></page></pages></query></api>
You can then run it through a regular expression; in JavaScript it would be something like this (you may have to make some minor modifications):
/^.*<\s*extract[^>]*\s*>\s*((?:[^<]*|<\s*\/?\s*[^>hH][^>]*\s*>)*).*<\s*(?:h|H).*$/.exec(data)
Which gives you (only paragraphs, bold, and italic):
"Bla Bla Bla" is the title of a song written and recorded by Italian DJ Gigi D'Agostino. It was released in May 1999 as the third single from the album, L'Amour Toujours. It reached number 3 in Austria and number 15 in France. This song can also be heard in an added remixed mashup with L'Amour Toujours (I'll Fly With You) in its US radio version.
"...a script that gets the Wikipedia description section only..."
For your application you might want to look at the dumps, e.g.: http://dumps.wikimedia.org/enwiki/20120702/
The particular files you need are 'abstract' XML files, e.g., this small one (22.7MB):
http://dumps.wikimedia.org/enwiki/20120702/enwiki-20120702-abstract19.xml
The XML has a tag called 'abstract' which contains the first part of each article.
Otherwise wikipedia2text uses, e.g., w3m to download the page with templates expanded and formatted to text. From that you might be able to pick out the abstract via a regular expression.
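A minimal sketch of pulling titles and abstracts out of one of those dump files, assuming it has been downloaded and decompressed to enwiki-abstract.xml (the filename is mine; the <doc>/<title>/<abstract> layout follows these abstract dumps):
import xml.etree.ElementTree as ET

# iterparse streams the file, so even a large dump fits in memory
for _, elem in ET.iterparse('enwiki-abstract.xml'):
    if elem.tag == 'doc':
        print(elem.findtext('title'), '->', elem.findtext('abstract'))
        elem.clear()  # free memory for processed elements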
You can try WikiExtractor: http://medialab.di.unipi.it/wiki/Wikipedia_Extractor
It's for Python 2.7 and 3.3+.
First check here.
There are a lot of invalid syntaxes in MediaWiki's text markup (mistakes made by users...), and only MediaWiki itself can parse this hellish text. But there are still some alternatives to try in the link above. Not perfect, but better than nothing!
You can try the BeautifulSoup HTML parsing library for Python, but you'll have to write a simple parser.
There is also the option of consuming Wikipedia pages through a wrapper API like JSONpedia. It works both live (asking for the current JSON representation of a wiki page) and storage-based (querying multiple pages previously ingested into Elasticsearch and MongoDB).
The output JSON also includes the plain rendered page text.