I'm retrieving some data from a website via lxml xpath:
page = requests.get(url)
tree = html.fromstring(page.content)
titles_arr = tree.xpath("//span[@class='lister-item-header']/span/a/text()")
Some of the titles contain German umlauts (e.g. üöä), so I thought of encoding the returned text like so:
for title in titles_arr:
    title = title.encode('utf-8')
but the result still contains strings like Der Herr der Ringe - Die R\u00fcckkehr des K\u00f6nigs instead of the corresponding unicode characters. What am I doing wrong?
Thanks
You seem to be dealing with a bytestring in which the unicode characters have been escaped.
You can decode like this:
>>> bs = b'Die R\u00fcckkehr des K\u00f6nigs'
>>> bs.decode('raw-unicode-escape')
'Die Rückkehr des Königs'
If you are dealing with text, rather than bytes, you'll need to encode then decode:
>>> s = 'Die R\u00fcckkehr des K\u00f6nigs'
>>> s.encode('latin-1').decode('raw-unicode-escape')
'Die Rückkehr des Königs'
This kind of encoding is used to escape unicode characters in json, to restrict the json to ascii values:
>>> json.dumps('Die Rückkehr des Königs')
'"Die R\\u00fcckkehr des K\\u00f6nigs"'
so it's possible that whatever url you are fetching returns html with embedded json, or json with embedded html - it might be worth checking the response's json() method.
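If that is the case, a minimal sketch of the idea (the embedded JSON string below is a made-up example): json.loads resolves the \uXXXX escapes by itself, so no manual decoding is needed.

import json

raw = '{"title": "Die R\\u00fcckkehr des K\\u00f6nigs"}'  # hypothetical JSON embedded in the page
data = json.loads(raw)         # json.loads interprets the \uXXXX escapes
print(data["title"])           # prints: Die Rückkehr des Königs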
Related
I'm trying to use awdeorio's mailmerge. In the HTML template I have French accented characters in paragraph tags.
When I execute the mailing I get encoding errors:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf4 in position 81: invalid continuation byte
How should I encode those paragraphs so that they are handled correctly in Python?
TO: {{email}}
SUBJECT: Testing mailmerge
FROM: My Self <myself@mydomain.com>
Content-Type: text/html
<html>
<body>
<p>Hi, {{name}},</p>
<p>Your number is {{number}}.</p>
<p>Sent by Here is the paragraph. Ce texte est en francais. <b>Accentué</b>. L'ideal</p>
</body>
</html>
Hex f4 is latin1 for ô. If this was typed into Python, you needed this at the start of the source file:
# -*- coding: utf-8 -*-
If the data is coming from a database or from some other source, please provide some more details.
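One thing worth checking (an assumption on my part, since the template's origin isn't stated): byte 0xf4 suggests the template file itself was saved as latin-1/cp1252 rather than UTF-8. A minimal re-encoding sketch, with a hypothetical file name:

# Hypothetical fix: read the template with the codec it was actually saved in
# and write it back out as UTF-8, which is what the failing decoder expects.
with open('mailmerge_template.txt', 'r', encoding='latin-1') as src:
    text = src.read()
with open('mailmerge_template.txt', 'w', encoding='utf-8') as dst:
    dst.write(text)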
I have an issue parsing this website: http://fm4-archiv.at/files.php?cat=106
It contains special characters such as umlauts. See here:
My chrome browser displays the umlauts properly as you can see in the screenshot above. However on other pages (e.g.: http://fm4-archiv.at/files.php?cat=105) the umlauts are not displayed properly, as can be seen in the screenshot below:
The meta HTML tag defines the following charset on the pages:
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"/>
I use the python requests package to get the HTML and then use Beautifulsoup to scrape the desired data. My code is as follows:
r = requests.get(URL)
soup = BeautifulSoup(r.content,"lxml")
If I print the encoding (print(r.encoding)) the result is UTF-8. If I manually change the encoding to ISO-8859-1 or cp1252 by calling r.encoding = 'ISO-8859-1', nothing changes when I output the data on the console. This is also my main issue.
r = requests.get(URL)
r.encoding = 'ISO-8859-1'
soup = BeautifulSoup(r.content,"lxml")
still results in the following string shown in the console output of my Python IDE:
Der WildlÃ¶wenpfleger
instead it should be
Der Wildlöwenpfleger
How can I change my code to parse the umlauts properly?
In general, instead of using r.content, which is the received byte string, use r.text, which is the content decoded using the encoding determined by requests.
In this case requests will use UTF-8 to decode the incoming byte string because this is the encoding reported by the server in the Content-Type header:
import requests
r = requests.get('http://fm4-archiv.at/files.php?cat=106')
>>> type(r.content) # raw content
<class 'bytes'>
>>> type(r.text) # decoded to unicode
<class 'str'>
>>> r.headers['Content-Type']
'text/html; charset=UTF-8'
>>> r.encoding
'UTF-8'
>>> soup = BeautifulSoup(r.text, 'lxml')
That will fix the "Wildlöwenpfleger" problem; however, other parts of the page then begin to break, for example:
>>> soup = BeautifulSoup(r.text, 'lxml') # using decoded string... should work
>>> soup.find_all('a')[39]
Der Wildlöwenpfleger
>>> soup.find_all('a')[10]
<a href="files.php?cat=87" title="Stermann und Grissemann sind auf Sommerfrische und haben Hermes ihren Salon �bergeben. Auf Streifz�gen durch die Popliteratur st��t Hermes auf deren gro�e Themen und h�rt mit euch quer. In der heutige">Salon Hermes (6 files)
shows that "Wildlöwenpfleger" is fixed but now "übergeben" and others in the second link are broken.
It appears that multiple encodings are used in the one HTML document. The first link uses UTF-8 encoding:
>>> r.content[8013:8070].decode('iso-8859-1')
'Der WildlÃ¶wenpfleger'
>>> r.content[8013:8070].decode('utf8')
'Der Wildlöwenpfleger'
but the second link uses ISO-8859-1 encoding:
>>> r.content[2868:3132].decode('iso-8859-1')
'<a href="files.php?cat=87" title="Stermann und Grissemann sind auf Sommerfrische und haben Hermes ihren Salon übergeben. Auf Streifzügen durch die Popliteratur stößt Hermes auf deren große Themen und hört mit euch quer. In der heutige">Salon Hermes (6 files)</a>\r\n'
>>> r.content[2868:3132].decode('utf8', 'replace')
'<a href="files.php?cat=87" title="Stermann und Grissemann sind auf Sommerfrische und haben Hermes ihren Salon �bergeben. Auf Streifz�gen durch die Popliteratur st��t Hermes auf deren gro�e Themen und h�rt mit euch quer. In der heutige">Salon Hermes (6 files)</a>\r\n'
Obviously it is incorrect to use multiple encodings in the same HTML document. Other than contacting the document's author and asking for a correction, there is not much that you can easily do to handle the mixed encoding. Perhaps you can run chardet.detect() over the data as you process it, but it's not going to be pleasant.
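For what it's worth, a rough sketch of that per-fragment detection idea (chardet.detect returns a best guess plus a confidence value; the byte slice below is just one of the problem fragments from above):

import chardet

raw = r.content[2868:3132]                 # one of the problem fragments from above
guess = chardet.detect(raw)                # e.g. {'encoding': 'ISO-8859-1', 'confidence': ...}
text = raw.decode(guess['encoding'] or 'utf-8', errors='replace')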
I just found two solutions. Can you confirm?
Soup = BeautifulSoup(r.content.decode('utf-8','ignore'),"lxml")
and
Soup = BeautifulSoup(r.content,"lxml", fromEncoding='utf-8')
Both result in the following example output:
Der Wildlöwenpfleger
EDIT:
I just wonder why these work, because r.encoding is UTF-8 anyway. That tells me that requests already handled the data as UTF-8. So why do .decode('utf-8','ignore') and fromEncoding='utf-8' produce the desired output?
EDIT 2:
okay, I think I get it now. .decode('utf-8','ignore') and fromEncoding='utf-8' state that the actual data is encoded as UTF-8 and that BeautifulSoup should parse it as UTF-8-encoded data, which is actually the case.
I assume that requests handled it correctly as UTF-8, but BeautifulSoup did not, hence this extra decoding step is needed.
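In other words, a minimal sketch of the difference, using the same response object r (from_encoding is the bs4 spelling of the older fromEncoding argument):

from bs4 import BeautifulSoup

# Passing raw bytes makes BeautifulSoup guess the encoding itself (it may guess wrong);
# decoding first, or passing an explicit encoding, removes the guesswork.
soup_guessed = BeautifulSoup(r.content, "lxml")                               # BS guesses
soup_decoded = BeautifulSoup(r.content.decode('utf-8', 'ignore'), "lxml")     # already a str
soup_hinted  = BeautifulSoup(r.content, "lxml", from_encoding='utf-8')        # explicit hint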
I have a directory of xml files, where an xml file is of the form:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="CoreNLP-to-HTML.xsl" type="text/xsl"?>
<root>
<document>
<sentences>
<sentence id="1">
<tokens>
<token id="1">
<word>Brand</word>
<lemma>brand</lemma>
<CharacterOffsetBegin>0</CharacterOffsetBegin>
<CharacterOffsetEnd>5</CharacterOffsetEnd>
<POS>NN</POS>
<NER>O</NER>
</token>
<token id="2">
<word>Blogs</word>
<lemma>blog</lemma>
<CharacterOffsetBegin>6</CharacterOffsetBegin>
<CharacterOffsetEnd>11</CharacterOffsetEnd>
<POS>NNS</POS>
<NER>O</NER>
</token>
<token id="3">
<word>Capture</word>
<lemma>capture</lemma>
<CharacterOffsetBegin>12</CharacterOffsetBegin>
<CharacterOffsetEnd>19</CharacterOffsetEnd>
<POS>VBP</POS>
<NER>O</NER>
</token>
I am parsing each xml file, storing the word between the <word> tags, and then finding the top 100 words.
I am doing it like this:
def find_top_words(xml_directory):
    file_list = []
    temp_list = []
    file_list2 = []
    for dir_file in os.listdir(xml_directory):
        dir_file_path = os.path.join(xml_directory, dir_file)
        if os.path.isfile(dir_file_path):
            with open(dir_file_path) as f:
                page = f.read()
                soup = BeautifulSoup(page, "xml")
                for word in soup.find_all('word'):
                    file_list.append(str(word.string.strip()))
                f.close()
    for element in file_list:
        s = element.lower()
        file_list2.append(s)
    counts = Counter(file_list2)
    for w in sorted(counts, key=counts.get, reverse=True):
        temp_list.append(w)
    return temp_list[:100]
But, I'm getting this error:
File "prac31.py", line 898, in main
v = find_top_words('/home/xyz/xml_dir')
File "prac31.py", line 43, in find_top_words
file_list.append(str(word.string.strip()))
UnicodeEncodeError: 'ascii' codec can't encode character u'\xef' in position 2: ordinal not in range(128)
What does this mean and how to fix it?
Don't use BeautifulSoup, the old BeautifulSoup module is deprecated. Why not the standard lib? If you want something more complex for xml handling you have lxml (but I am pretty sure that you don't need it).
It will solve your problem easily.
edit:
forget the previous answer, it was bad -_-
Your problem is str(my_string) in Python 2 when my_string contains non-ascii characters, because calling str() on a unicode string in Python 2 is like trying to encode it as ascii. Use the encode('utf-8') method instead.
The str() function encodes with the ascii codec, and since word.string.strip() returns a non-ascii character somewhere in your xml file, you get this error. The solution is to use:
file_list.append(word.string.strip().encode('utf-8'))
and to get the value back you need to do something like:
for item in file_list:
    print item.decode('utf-8')
Hope it helps.
In this line of code:
file_list.append(str(word.string.strip()))
why are you using str? The data is Unicode, and you can append unicode strings to a list. If you need a bytestring, then you can use word.string.strip().encode('utf8') instead.
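Putting that together, a compact sketch of the counting step that keeps everything as unicode (Counter.most_common does the sorting for you) might look like this:

from collections import Counter
from bs4 import BeautifulSoup

def top_words(page, n=100):
    # page is the raw XML text of one file; the words stay unicode throughout
    soup = BeautifulSoup(page, "xml")
    words = [w.string.strip().lower() for w in soup.find_all('word') if w.string]
    return [word for word, count in Counter(words).most_common(n)]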
I'm trying to fetch a segment of some website. The script works, however it's a website that has accents such as á, é, í, ó, ú.
When I fetch the site using urllib or urllib2, the site source code is not encoded in utf-8, which I would like it to be, as utf-8 supports these accents.
I believe that the target site is encoded in utf-8 as it contains the following meta tag:
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
My python script:
opener = urllib2.build_opener()
opener.addheaders = [('Accept-Charset', 'utf-8')]
url_response = opener.open(url)
deal_html = url_response.read().decode('utf-8')
However, I keep getting results that look like they are not encoded in utf-8.
E.g: "Milán" on website = "Mil\xe1n" after urllib2 fetches it
Any suggestions?
Your script is working correctly. The "\xe1" string is the representation of the unicode object resulting from decoding. For example:
>>> "Mil\xc3\xa1n".decode('utf-8')
u'Mil\xe1n'
The "\xc3\xa1" sequence is the UTF-8 sequence for leter a with diacritic mark: á.
How do you convert HTML entities to Unicode and vice versa in Python?
As to the "vice versa" (which I needed myself, leading me to find this question, which didn't help, and subsequently another site which had the answer):
u'some string'.encode('ascii', 'xmlcharrefreplace')
will return a plain string with any non-ascii characters turned into XML (HTML) entities.
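For instance (the sample string is my own; note that plain ascii characters such as & pass through unchanged, only non-ascii characters become references):

>>> u'Süßes & Saures 10€'.encode('ascii', 'xmlcharrefreplace')
'S&#252;&#223;es & Saures 10&#8364;'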
You need to have BeautifulSoup.
from BeautifulSoup import BeautifulStoneSoup
import cgi
def HTMLEntitiesToUnicode(text):
    """Converts HTML entities to unicode. For example '&amp;' becomes '&'."""
    text = unicode(BeautifulStoneSoup(text, convertEntities=BeautifulStoneSoup.ALL_ENTITIES))
    return text
def unicodeToHTMLEntities(text):
    """Converts unicode to HTML entities. For example '&' becomes '&amp;'."""
    text = cgi.escape(text).encode('ascii', 'xmlcharrefreplace')
    return text
text = "&, ®, <, >, ¢, £, ¥, €, §, ©"
uni = HTMLEntitiesToUnicode(text)
htmlent = unicodeToHTMLEntities(uni)
print uni
print htmlent
# &, ®, <, >, ¢, £, ¥, €, §, ©
# &, ®, <, >, ¢, £, ¥, €, §, ©
Update for Python 2.7 and BeautifulSoup4
Unescape -- Unicode HTML to unicode with htmlparser (Python 2.7 standard lib):
>>> escaped = u'Monsieur le Cur&eacute; of the &laquo;Notre-Dame-de-Gr&acirc;ce&raquo; neighborhood'
>>> from HTMLParser import HTMLParser
>>> htmlparser = HTMLParser()
>>> unescaped = htmlparser.unescape(escaped)
>>> unescaped
u'Monsieur le Cur\xe9 of the \xabNotre-Dame-de-Gr\xe2ce\xbb neighborhood'
>>> print unescaped
Monsieur le Curé of the «Notre-Dame-de-Grâce» neighborhood
Unescape -- Unicode HTML to unicode with bs4 (BeautifulSoup4):
>>> html = '''<p>Monsieur le Cur&eacute; of the &laquo;Notre-Dame-de-Gr&acirc;ce&raquo; neighborhood</p>'''
>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup(html)
>>> soup.text
u'Monsieur le Cur\xe9 of the \xabNotre-Dame-de-Gr\xe2ce\xbb neighborhood'
>>> print soup.text
Monsieur le Curé of the «Notre-Dame-de-Grâce» neighborhood
Escape -- Unicode to unicode HTML with bs4 (BeautifulSoup4):
>>> unescaped = u'Monsieur le Curé of the «Notre-Dame-de-Grâce» neighborhood'
>>> from bs4.dammit import EntitySubstitution
>>> escaper = EntitySubstitution()
>>> escaped = escaper.substitute_html(unescaped)
>>> escaped
u'Monsieur le Cur&eacute; of the &laquo;Notre-Dame-de-Gr&acirc;ce&raquo; neighborhood'
As hekevintran's answer suggests, you may use cgi.escape(s) for encoding strings, but notice that escaping of quotes is off by default in that function, so it may be a good idea to pass the quote=True keyword argument along with your string. But even with quote=True, the function won't escape single quotes ("'"). (Because of these issues the function has been deprecated since version 3.2.)
It's been suggested to use html.escape(s) instead of cgi.escape(s). (New in version 3.2)
Also html.unescape(s) has been introduced in version 3.4.
So in python 3.4 you can:
Use html.escape(text).encode('ascii', 'xmlcharrefreplace').decode() to convert special characters to HTML entities.
And html.unescape(text) for converting HTML entities back to plain-text representations.
For python3 use html.unescape():
import html
s = "&"
decoded = html.unescape(s)
# &
$ python3 -c "
> import html
> print(
> html.unescape('&amp;&copy;&mdash;')
> )"
&©—
$ python3 -c "
> import html
> print(
> html.escape('&©—')
> )"
&©—
$ python2 -c "
> from HTMLParser import HTMLParser
> print(
> HTMLParser().unescape('&amp;&copy;&mdash;')
> )"
&©—
$ python2 -c "
> import cgi
> print(
> cgi.escape('&©—')
> )"
&©—
HTML only strictly requires & (ampersand) and < (left angle bracket / less-than sign) to be escaped. https://html.spec.whatwg.org/multipage/parsing.html#data-state
If someone like me is out there wondering why some entity numbers (codes), like &#153; (for the trademark symbol) and &#128; (for the euro symbol), are not converted properly, the reason is that in ISO-8859-1 (aka Windows-1252) those characters are not defined.
Also note that the default character set as of HTML5 is UTF-8, whereas it was ISO-8859-1 for HTML4.
So we will have to work around it somehow (find & replace those first).
Reference (starting point) from Mozilla's documentation
https://developer.mozilla.org/en-US/docs/Web/Guide/Localizations_and_character_encodings
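A rough sketch of that find-and-replace idea (the mapping below is my own assumption about which replacements are wanted; recent Python 3 versions of html.unescape already map these references the way browsers do):

import html

# Assumed mapping: numeric references that only make sense via the Windows-1252 table.
CP1252_REFS = {
    '&#128;': '\u20ac',  # euro sign
    '&#153;': '\u2122',  # trade mark sign
}

def unescape_with_cp1252(text):
    # Find & replace the problematic references first, then unescape the rest.
    for ref, char in CP1252_REFS.items():
        text = text.replace(ref, char)
    return html.unescape(text)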
I used the following function to convert unicode ripped from an xls file into an html file while conserving the special characters found in the xls file:
import html

def html_wr(f, dat):
    ''' write dat to file f as html
        . file is assumed to be opened in binary format
        . if dat is nul it is replaced with a non-breaking space
        . non-ascii characters are translated to xml
    '''
    if not dat:
        dat = '&nbsp;'
    try:
        f.write(dat.encode('ascii'))
    except UnicodeEncodeError:
        f.write(html.escape(dat).encode('ascii', 'xmlcharrefreplace'))
hope this is useful to somebody
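A hypothetical usage sketch (the file name and cell content are made up):

with open('report.html', 'wb') as f:
    html_wr(f, 'Coût: 10 € (Curaçao)')   # writes: Co&#251;t: 10 &#8364; (Cura&#231;ao)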
#!/usr/bin/env python3
import fileinput
import html
for line in fileinput.input():
    print(html.unescape(line.rstrip('\n')))