I have a response from an API, returned in JSON format. It looks as follows:
page = requests.get(link)
page_dict = json.loads(page.content)
print page_dict
>> {u'sm_api_title': u'The Biggest Mysteries of Missing Malaysian Flight MH370', u'sm_api_keyword_array': [u'flight', u'plane', u'pilot', u'crash', u'passenger'], u'sm_api_content': u' Since the plane&#39;s disappearance early Saturday, revelations about the passenger list and plane&#39;s flight plan have left officials scrambling to decipher new complicated clues. The most dangerous parts of a flight are traditionally the takeoff and landing, but the missing jetliner disappeared about two hours into a six-hour flight, when it should have been cruising safely around 35,000 feet. The last plane to crash at altitude was Air France Flight 447, which crashed during a thunderstorm in the Atlantic Ocean en route from Rio De Janeiro to Paris. A day after the flight disappeared the biggest question authorities are asking is did the plane turn around and why? The first officer on the flight was identified as Fariq Hamid, 27, and had about 2,800 flight hours since 2007.', u'sm_api_limitation': u'Waited 0 extra seconds due to API limited mode, 89 requests left to make for today.', u'sm_api_character_count': u'773'}
As you can see, the response comes back with HTML-escaped characters like &#39; included in the text. What is the best way to clean this response?
I've used xmllib before and gotten it to work, but when I use it with django it gives me deprecation warnings.
Thank you for the help!
You need to unescape the strings in order to decode the HTML character references. In Python 2, you can unescape HTML strings using the standard library:
import HTMLParser
parser = HTMLParser.HTMLParser()
unescaped_string = parser.unescape(html_escaped_string)
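(On Python 3, where the HTMLParser module was renamed, the same functionality lives in the standard library's html module -- a minimal sketch:)
import html

unescaped_string = html.unescape("The plane&#39;s flight plan")
print(unescaped_string)  # The plane's flight plan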
I have strings as below:
content =
"b'MAJOR CONRAD A. PREEDOM\\n2354 Fairchild Dr., Suite 6H-126\\nUSAF Academy, CO
\\xe2\\x80\\x93 Present 160 Flight Hours/145 Instructor Pilot Hours in Diamond Star
DA-40 (USAF T-52)\\n2004 \\xe2\\x80\\x93 2007\\n442 Hours/45 Flight Lead Hours in
McDonnell Douglas F-15E Strike Eagle\\n2003 \\xe2\\x80\\x93 2004\\n19 Hours in
Northrop AT-38B\\n2000 \\xe2\\x80\\x93 2003\\n1,311 Flight Hours/1051 Instructor
Pilot Hours in Cessna T-37B\\n1999 \\xe2\\x80\\x93 2000\\n26 Flight Hours in Northrop
T-38A\\n1995 PA \\xe2\\x80\\x93 1999\\nDistinguished Graduate, United States Air
Force Academy, CO \\xe2\\x80\\x93 1998\\nOmega Rho Honor Society for Operations
Research, United States Air Force Academy, CO \\xe2\\x80\\x93 1998\\nAIR FORCE AWARDS
AND DECORATIONS\\nMeritorious Service Medal\\nAir Force Commendation Medal\\nAir
Force Achievement Medal\\nAir Force Outstanding Unit Award\\nAir Force Organizational
Excellence Award\\nCombat Readiness Medal\\nNational Defense Service Medal\\nGlobal
War on Terrorism Service Medal\\nKorean Defense Service Medal\\nAF Longevity
Service\\nSmall Arms Expert Marksmanship Ribbon (Pistol)\\nAF Training Ribbon'"
I want to get rid of the b' prefix and every \x escape with two trailing hex digits, like \xe2, \x80, and so on, but I don't know how.
I tried
content.decode("utf-8", errors="ignore")
But because content is already a str, I can't decode it. So I tried the following to turn it into bytes, strip what I want removed, and convert it back to a string, but it does not work at all.
new_content = content.encode("ascii").decode("utf-8", errors="ignore")
When I run the code below, the b' and \x** parts do go away, so I tried every possible thing to turn my string into a bytes literal like this one. I can convert content to bytes, but that doesn't get rid of the escapes.
b'\x80abc sadad dkfbkafaf /n \n \x80dajhbahsdsabj'.decode("utf-8", errors="ignore")
Do you have any idea how I can strip the b' and all of the \x** escapes from content?
You have a str value that contains the string representation of a bytes value, which itself is a UTF-8-encoded string. Use ast.literal_eval to get the actual bytes value, then decode it.
>>> import ast
>>> print(ast.literal_eval(content).decode())
MAJOR CONRAD A. PREEDOM
2354 Fairchild Dr., Suite 6H-126
USAF Academy, CO – Present 160 Flight Hours/145 Instructor Pilot Hours in Diamond Star DA-40 (USAF T-52)
2004 – 2007
[etc]
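As a self-contained sketch (with a shortened, made-up string in place of the full text above):
import ast

content = "b'Caf\\xc3\\xa9 \\xe2\\x80\\x93 menu'"  # a str holding the repr of a bytes value
raw = ast.literal_eval(content)  # the actual bytes: b'Caf\xc3\xa9 \xe2\x80\x93 menu'
print(raw.decode('utf-8'))       # Café – menu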
Hi~ I am having a problem while trying to tokenize Facebook comments that are in CSV format. I have my CSV data ready, and I have finished reading the file.
I am using Anaconda3; Python 3.5. (My CSV data has about 20k rows and 1 column.)
The code is:
import csv
from nltk import sent_tokenize, word_tokenize
with open('facebook_comments_samsung.csv', 'r') as f:
    reader = csv.reader(f)
    your_list = list(reader)
print(your_list)
The result is something like this:
[['comment_message'], ['b"Yet again been told a pack of lies by Samsung Customer services who have lost my daughters phone and couldn\'t care less. ANYONE WHO PURCHASES ANYTHING FROM THIS COMPANY NEEDS THEIR HEAD TESTED"'], ["b'You cannot really blame an entire brand worldwide for a problem caused by a branch. It is a problem yes, but address your local problem branch'"], ["b'Haha!! Sorry if they lost your daughters phone but I will always buy Samsung products no matter what.'"], ["b'Salim Gaji BEST REPLIE EVER \\xf0\\x9f\\x98\\x8e'"], ["b'<3 Bewafa zarge <3 \\r\\n\\n \\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x93\\r\\n\\xf0\\x9f\\x8e\\xad\\xf0\\x9f\\x91\\x89 AQIB-BOT.ML \\xf0\\x9f\\x91\\x88\\xf0\\x9f\\x8e\\xadMANUAL\\xe2\\x99\\xaaKing.Bot\\xe2\\x84\\xa2 \\r\\n\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x93\\xe2\\x80\\x94'"], ["b'\\xf0\\x9f\\x8c\\x90 LATIF.ML \\xf0\\x9f\\x8c\\x90'"], ['b"I\'m just waiting here patiently for you guys to say that you\'ll be releasing the s8 and s8+ a week early, for those who pre-ordered. Wishful thinking \\xf0\\x9f\\x98\\x86. Can\'t wait!"'], ['b"That\'s some good positive thinking there sir."'], ["b'(y) #NextIsNow #DoWhatYouCant'"], ["b'looking good'"], ['b"I\'ve always thought that when I first set eyes on my first born that I\'d like it to be on the screen of a cameraphone at arms length rather than eye-to-eye while holding my child. Thank you Samsung for improving our species."'], ["b'cool story'"], ["b'I believe so!'"], ["b'superb'"], ["b'Nice'"], ["b'thanks for the share'"], ["b'awesome'"], ["b'How can I talk to Samsung'"], ["b'Wow'"], ["b'#DoWhatYouCant siempre grandes innovadores Samsung Mobile'"], ["b'I had a problem with my s7 edge when I first got it all fixed now. However when I went to the Samsung shop they were useless and rude they refused to help and said there is nothing they could do no wonder the shop was dead quiet'"], ["b'Zeeshan Khan Masti Khel'"], ["b'I dnt had any problem wd my phn'"], ["b'I have maybe just had a bad phone to start with until it got fixed eventually. I had to go to carphone warehouse they were very helpful'"], ["b'awesome'"], ["b'Ch Shuja Uddin'"], ["b'akhheeerrr'"], ["b'superb'"], ["b'nice story'"], ["b'thanks for the share'"], ["b'superb'"], ["b'thanks for the share'"], ['b"On February 18th 2017 I sent my phone away to with a screen issue. The lower part of the screen was flickering bright white. The phone had zero physical damage to the screen\\n\\nI receive an email from Samsung Quotations with a picture of my SIM tray. Upon phoning I was told my SIM tray was stuck inside the phone and was handed a \\xc2\\xa392.14 repair bill. There is no way that my SIM tray was stuck in the phone as I removed my SIM and memory card before sending the phone away.\\n\\nAfter numerous calls I finally gave in and agreed to pay the \\xc2\\xa392.14 on the understanding that my screen repair would also be covered in this cost. This was confirmed to me by the person on the phone.\\n\\nOn
Sorry for the inconvenience in reading the result. My bad.
To continue, I added,
tokens = [word_tokenize(i) for i in your_list]
for i in tokens:
    print(i)
print(tokens)
This is the part where I get the following error:
File "C:\Program Files\Anaconda3\lib\site-packages\nltk\tokenize\punkt.py", line 1278, in _slices_from_text
TypeError: expected string or bytes-like object
What I want to do next is,
import nltk
en = nltk.Text(tokens)
print(len(en.tokens))
print(len(set(en.tokens)))
en.vocab()
en.plot(50)
en.count('galaxy s8')
And finally, I want to draw a wordcloud based on the data.
Knowing that every second of your time is precious, I am sorry to ask for your help. I have been working on this for a couple of days and cannot find the right solution for my problem. Thank you for reading.
The error you're getting is because your CSV file is turned into a list of lists -- one for each row in the file. The file only contains one column, so each of these lists has one element: the string containing the message you want to tokenize. To get past the error, unpack the sublists by using this line instead:
tokens = [word_tokenize(row[0]) for row in your_list]
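A minimal end-to-end sketch (assuming, as your output shows, that the first row is the 'comment_message' header and should be skipped):
import csv
from nltk import word_tokenize

with open('facebook_comments_samsung.csv', 'r') as f:
    your_list = list(csv.reader(f))

# skip the header row, then tokenize the single column of each remaining row
tokens = [word_tokenize(row[0]) for row in your_list[1:]]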
After that, you'll need to learn some more Python and learn how to examine your program and your variables.
I have the following text:
"It's the show your only friend and pastor have been talking about!
<i>Wonder Showzen</i> is a hilarious glimpse into the black
heart of childhood innocence! Get ready as the complete first season of MTV2's<i> Wonder Showzen</i> tackles valuable life lessons like birth,
nature, diversity, and history &ndash; all inside the prison of
your mind! Where else can you..."
What I want to do with this is remove the HTML tags and encode it into Unicode. I am currently doing:
import re
TAG_RE = re.compile(r'<[^>]+>')  # an assumed definition -- a simple tag-matching regex
def remove_tags(text):
    return TAG_RE.sub('', text)
which only strips the tags. How would I correctly encode the above for database storage?
You could try passing your text through an HTML parser. Here is an example using BeautifulSoup:
from bs4 import BeautifulSoup
text = '''It's the show your only friend and pastor have been talking about!
<i>Wonder Showzen</i> is a hilarious glimpse into the black
heart of childhood innocence! Get ready as the complete first season of MTV2's<i> Wonder Showzen</i> tackles valuable life lessons like birth,
nature, diversity, and history &ndash; all inside the prison of
your mind! Where else can you...'''
soup = BeautifulSoup(text, 'html.parser')
>>> soup.text
u"It's the show your only friend and pastor have been talking about! \nWonder Showzen is a hilarious glimpse into the black \nheart of childhood innocence! Get ready as the complete first season of MTV2's Wonder Showzen tackles valuable life lessons like birth, \nnature, diversity, and history \u2013 all inside the prison of \nyour mind! Where else can you..."
You now have a unicode string with the HTML entities converted to unicode characters, i.e. &ndash; was converted to \u2013.
This also removes the HTML tags.
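As for storing it: a minimal sketch, assuming your database layer expects UTF-8 bytes (many drivers also accept unicode strings directly):
clean_text = soup.text                    # unicode: tags stripped, entities decoded
utf8_bytes = clean_text.encode('utf-8')   # UTF-8 bytes, if your driver needs bytes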
I've written a Scrapy spider that extracts text from a page. The spider parses and outputs correctly on many of the pages, but is thrown off by a few. I'm trying to maintain line breaks and formatting in the document. Pages such as http://www.state.gov/r/pa/prs/dpb/2011/04/160298.htm are formatted properly like such:
April 7, 2011
Mark C. Toner
2:03 p.m. EDT
MR. TONER: Good afternoon, everyone. A couple of things at the top,
and then I’ll take your questions. We condemn the attack on innocent
civilians in southern Israel in the strongest possible terms, as well
as ongoing rocket fire from Gaza. As we have reiterated many times,
there’s no justification for the targeting of innocent civilians,
and those responsible for these terrorist acts should be held
accountable. We are particularly concerned about reports that indicate
the use of an advanced anti-tank weapon in an attack against civilians
and reiterate that all countries have obligations under relevant
United Nations Security Council resolutions to prevent illicit
trafficking in arms and ammunition. Also just a brief statement --
QUESTION: Can we stay on that just for one second?
MR. TONER: Yeah. Go ahead, Matt.
QUESTION: Apparently, the target of that was a school bus. Does that
add to your outrage?
MR. TONER: Well, any attack on innocent civilians is abhorrent, but
certainly the nature of the attack is particularly so.
While pages like http://www.state.gov/r/pa/prs/dpb/2009/04/121223.htm have output like this with no line breaks:
April 2, 2009
Robert Wood
11:53 a.m. EDTMR. WOOD: Good morning, everyone. I think it’s just
about still morning. Welcome to the briefing. I don’t have anything,
so – sir.QUESTION: The North Koreans have moved fueling tankers, or
whatever, close to the site. They may or may not be fueling this
missile. What words of wisdom do you have for the North Koreans at
this moment?MR. WOOD: Well, Matt, I’m not going to comment on, you
know, intelligence matters. But let me just say again, we call on the
North to desist from launching any type of missile. It would be
counterproductive. It’s provocative. It further inflames tensions in
the region. We want to see the North get back to the Six-Party
framework and focus on denuclearization.Yes.QUESTION: Japan has also
said they’re going to call for an emergency meeting in the Security
Council, you know, should this launch go ahead. Is this something that
you would also be looking for?MR. WOOD: Well, let’s see if this test
happens. We certainly hope it doesn’t. Again, calling on the North
not to do it. But certainly, we will – if that test does go forward,
we will be having discussions with our allies.
The code I'm using is as follows:
def parse_item(self, response):
    self.log('Hi, this is an item page! %s' % response.url)
    hxs = HtmlXPathSelector(response)
    speaker = hxs.select("//span[contains(@class, 'official_s_name')]") # gets the speaker
    speaker = speaker.select('string()').extract()[0] # extracts speaker text
    date = hxs.select('//*[@id="date_long"]') # gets the date
    date = date.select('string()').extract()[0] # extracts the date
    content = hxs.select('//*[@id="centerblock"]') # gets the content
    content = content.select('string()').extract()[0] # extracts the content
    texts = "%s\n\n%s\n\n%s" % (date, speaker, content) # puts everything together in a string
    filename = "/path/StateDailyBriefing-%s.txt" % date # creates a file name using the date
    # opens the file defined above and writes 'texts' using utf-8
    with codecs.open(filename, 'w', encoding='utf-8') as output:
        output.write(texts)
I think the problem lies in the formatting of the pages' HTML. On the pages that output the text incorrectly, the paragraphs are separated by <br> <p></p>, while on the pages that output correctly the paragraphs are contained within <p align="left" dir="ltr">. So, while I've identified this, I'm not sure how to make everything output consistently in the correct form.
The problem is that when you get text() or string(), <br> tags are not converted to newlines.
Workaround - replace <br> tags before doing XPath requests. Code:
response = response.replace(body=response.body.replace('<br />', '\n'))
hxs = HtmlXPathSelector(response)
And some advice: if you know there is only one node, you can use text() instead of string():
date = hxs.select('//*[@id="date_long"]/text()').extract()[0]
Try this xpath:
//*[@id="centerblock"]//text()
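For example, you could extract all the text nodes and join them yourself (a sketch using the same HtmlXPathSelector API as above):
parts = hxs.select('//*[@id="centerblock"]//text()').extract()  # list of unicode text nodes
content = ''.join(parts)  # the <br>-to-newline replacement above preserves the line breaks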
I have written the following trial code to retrieve the titles of legislative acts from the European Parliament.
import urllib2
from BeautifulSoup import BeautifulSoup
search_url = "http://www.europarl.europa.eu/sides/getDoc.do?type=REPORT&mode=XML&reference=A7-2010-%.4d&language=EN"
for number in xrange(1, 10):
    url = search_url % number
    page = urllib2.urlopen(url).read()
    soup = BeautifulSoup(page)
    title = soup.findAll("title")
    print title
However, whenever I run it I get the following error:
Traceback (most recent call last):
File "<stdin>", line 20, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2013' in position 70: ordinal not in range(128)
I have narrowed it down to BeautifulSoup not being able to read the fourth document in the loop. Can anyone explain to me what I am doing wrong?
With kind regards
Thomas
BeautifulSoup works in Unicode, so it's not responsible for that encoding error. More likely, your problem comes from the print statement -- your standard output seems to be in ascii (i.e., sys.stdout.encoding is 'ascii' or absent), and therefore you would indeed get such errors when trying to print a string containing non-ascii characters.
What's your OS? How is your console (AKA terminal) set up (e.g., if on Windows, which codepage)? Did you set PYTHONIOENCODING in the environment to control sys.stdout.encoding, or are you just hoping the encoding will be picked up automatically?
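(You can check what Python has picked up with a quick probe -- Python 2 syntax to match your code:)
import sys
print sys.stdout.encoding  # e.g. 'UTF-8' on a properly configured terminal; None or 'ascii' otherwise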
On my Mac, where the encoding is correctly detected, running your code (save for also printing the number together with each title, for clarity) works fine and shows:
$ python ebs.py
1 [<title>REPORT Report on the proposal for a Council regulation temporarily suspending autonomous Common Customs Tariff duties on imports of certain industrial products into the autonomous regions of Madeira and the Azores - A7-0001/2010</title>]
2 [<title>REPORT Report on the proposal for a Council directive concerning mutual assistance for the recovery of claims relating to taxes, duties and other measures - A7-0002/2010</title>]
3 [<title>REPORT Report on the proposal for a regulation of the European Parliament and of the Council amending Council Regulation (EC) No 1085/2006 of 17 July 2006 establishing an Instrument for Pre-Accession Assistance (IPA) - A7-0003/2010</title>]
4 [<title>REPORT on equality between women and men in the European Union – 2009 - A7-0004/2010</title>]
5 [<title>REPORT Report on the proposal for a Council decision on the conclusion by the European Community of the Convention on the International Recovery of Child Support and Other Forms of Family Maintenance - A7-0005/2010</title>]
6 [<title>REPORT on the proposal for a Council directive on administrative cooperation in the field of taxation - A7-0006/2010</title>]
7 [<title>REPORT Report on promoting good governance in tax matters - A7-0007/2010</title>]
8 [<title>REPORT Report on the proposal for a Council Directive amending Directive 2006/112/EC as regards an optional and temporary application of the reverse charge mechanism in relation to supplies of certain goods and services susceptible to fraud - A7-0008/2010</title>]
9 [<title>REPORT Recommendation on the proposal for a Council decision concerning the conclusion, on behalf of the European Community, of the Additional Protocol to the Cooperation Agreement for the Protection of the Coasts and Waters of the North-East Atlantic against Pollution - A7-0009/2010</title>]
$
Replacing
print title
with
for t in title:
    print(t)
or
print('\n'.join(t.string for t in title))
works. I'm not entirely sure why print <somelist> sometimes works and sometimes doesn't, however.
If you want to print the titles to a file, you need to specify some encoding that can represent the non-ascii characters; utf8 should work fine. To do this, you need to add:
import codecs
out = codecs.open('titles.txt', 'w', 'utf8')
at the top of the script
and print to the file:
print >> out, title
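Putting it together, a minimal Python 2 sketch of the file-writing variant (using t.string, as in the join example above):
import codecs

out = codecs.open('titles.txt', 'w', 'utf8')
for t in title:
    print >> out, t.string  # each title's text as unicode; the utf8 codec encodes it on write
out.close()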