how to encode character '\xa0' in 'ascii' codec - python

I am trying to fetch data using HERE's REST API with Python, but I am receiving the following error:
1132
1133 # Non-ASCII characters should have been eliminated earlier
-> 1134 self._output(request.encode('ascii'))
1135
1136 if self._http_vsn == 11:
UnicodeEncodeError: 'ascii' codec can't encode character '\xa0' in position 86: ordinal not in range(128)
My Python code is:
import json
import urllib.request

import pandas as pd

df = pd.read_csv(r"data.csv", encoding='utf8', sep=",", engine="python")

def GoogPlac(auth_key, lat, lon):
    location = str(lat) + ',' + str(lon)
    MyUrl = ('https://places.ls.hereapi.com/places/v1/browse'
             '?apiKey=%s'
             '&in=%s'
             ';r=2000'
             '&cat=restaurant&pretty') % (auth_key, location)
    # grabbing the JSON result
    response = urllib.request.urlopen(MyUrl)
    jsonRaw = response.read()
    jsonData = json.loads(jsonRaw)
    return jsonData

# Function call
df['response'] = df.apply(lambda x: GoogPlac(auth_key, x['latitude'], x['longitude']), axis=1)
I want to avoid the error and continue my API fetch

You said you want to avoid the error, but how you avoid it matters.
Your title says you want to encode something to ASCII, but the thing you want to encode is not encodable in ASCII. There is no A0 character in 7-bit ASCII. You've asked the impossible.
You can decide among a few different things:
Encode with a lossy encode() parameter that says to throw away everything that doesn't fit in ASCII. This is dangerous and probably not very smart. If you can't trust your data, then why are you using your data?
Use a different encoding for output. You seem to know what encoding your text was in, because you could fetch it and render it to Unicode. (Or you are using ancient Python 2, where the default system encoding understands that page's encoding and there's a silent .decode(DEFAULT_ENCODING) right before your .encode("ascii").) This is by far the best option. Just don't use ASCII; UTF-8 is the present and future!
Specifically snip out A0 with .replace() before your .encode(). Also pretty bad. (This and option 2 are sketched just after this list.)
Get your page author to agree it should be ASCII and get himher to fix it. This is best of all.
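For what it's worth, here is a minimal sketch of options 2 and 3 applied to the question's URL-building code (auth_key, lat and lon come from the question; I'm assuming the NBSP sneaks in via the CSV coordinates):
from urllib.parse import quote

location = (str(lat) + ',' + str(lon)).replace('\xa0', '')  # option 3: snip out the NBSP
# option 2 (better): percent-encode any remaining non-ASCII as UTF-8 bytes,
# so the request line stays pure ASCII
MyUrl = ('https://places.ls.hereapi.com/places/v1/browse'
         '?apiKey=%s&in=%s;r=2000&cat=restaurant&pretty') % (auth_key, quote(location, safe=','))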

Related

Parsing JSON string with \u escapes

I have a Python service with an endpoint that passes data on to another service, gets back the result, and passes it to the requester. There is a field message in the form, and if I input a Unicode character - let's say 'GRINNING FACE WITH SMILING EYES' (U+1F601) - I see the following in the request form object
ImmutableMultiDict([('message', u'\U0001f601'),...
When I get response from the other service, I have this
{..., u'message': u'\xf0\x9f\x98\x81',...}
This is then JSONified using json.dumps into
{..."message": "\u00f0\u009f\u0098\u0081"...}
Finally, on client, the message string gets parsed into
ð
(If I'm not mistaken, Unicode code for that character is \u00f0)
So where does it go wrong? It looks like I have a string that gets returned from an external service with UTF-8 hex escapes. I tried UTF-8-decoding that string, but I get the following error
return codecs.utf_8_decode(input, errors, True)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-3: ordinal not in range(128)
To handle this correctly you need to fix the process that is creating the u'\xf0\x9f\x98\x81' mojibake. As noted, those bytes are correct, but they need to be in a plain string (in Python 3 that's a bytes string) not a Unicode string. We can't give further details without seeing the relevant code.
However, you can extract the byte codes from the mojibake by encoding it as Latin 1, and then decode those bytes as UTF-8 to create proper Unicode:
d = {u'message': u'\xf0\x9f\x98\x81'}
for k, v in d.items():
    # Extract bytes from mojibake Unicode
    b = v.encode('latin1')
    # Now decode the extracted bytes as UTF-8
    s = b.decode('UTF-8')
    print k, s
output
message 😁
Or in a more compact form:
v = u'\xf0\x9f\x98\x81'
s = v.encode('latin1').decode('utf-8')
print(s)
That will work in both Python 2 & 3.
You should seriously consider migrating to Python 3, where Unicode handling is a lot saner, and you're much less likely to create these kinds of mix-ups.

Python: difficulty converting ascii to unicode

My goal: get the page source from a url and count all instances of a keyword within that page source
How I am doing it: getting the page source via urllib2, looping through each char of the page source and comparing it to the keyword
My problem: my keyword is encoded in utf-8 while the page source is in ascii... I am running into errors whenever I try conversions.
getting the page source:
import urllib2
response = urllib2.urlopen(myUrl)
return response.read()
comparing page source and keyword:
pageSource[i] == keyWord[j]
I need to convert one of these strings to the other's encoding. Intuitively I felt that converting ascii (the page source) to utf-8 (the keyword) would be the best and easiest, so:
pageSource = unicode(pageSource)
UnicodeDecodeError: 'ascii' codec can't decode byte __ in position __: ordinal not in range(128)
When trying to work with text, don't leave your data as byte strings. Decode to Unicode early, encode back to bytes as late as possible.
Decode your downloaded network data:
import urllib2
response = urllib2.urlopen(myUrl)
# Latin-1 is the default for HTTP text/ responses, adjust as needed
codec = response.info().getparam('charset', 'latin1')
return response.read().decode(codec)
and do the same for your keyWord data. If it is encoded as UTF-8, decode it as such, or use Unicode string literals.
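For example, a sketch for the keyword side (Python 2, matching the question; the byte literal here is a made-up UTF-8-encoded example):
keyWord = 'caf\xc3\xa9'.decode('utf-8')  # or simply use a Unicode literal: u'café'
count = pageSource.count(keyWord)        # both sides are Unicode now, so comparison is safe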
You may want to read up on Python and Unicode:
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) by Joel Spolsky
Pragmatic Unicode by Ned Batchelder
The Python Unicode HOWTO
I'll assume your remote "source page" contains more than just ASCII, otherwise your comparison will already work as-is (ASCII is a subset of UTF-8; e.g. 'A' is 0x41 in ASCII, which is the same byte in UTF-8).
You may find the Python Requests library easier, as it will automatically decode remote content to Unicode strings based on the server's headers (Unicode strings are encoding-neutral, so they can be compared without worrying about encoding).
resp = requests.get("http://www.example.com/utf8page.html")
resp.text
>> u'My unicode data €'
You will then need to decode your reference data:
keyWord[j] = "€".decode("UTF-8")
keyWord[j]
>> u'€'
If you're embedding non-ASCII in your source code, you need to define the encoding you're using. For example, at the top of your source code/script:
# coding=UTF-8
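Pulling the pieces together, a minimal end-to-end sketch of the fetch-and-count task (the URL and keyword here are made-up examples):
# coding=UTF-8
import requests

resp = requests.get("http://www.example.com/utf8page.html")
page = resp.text            # requests hands back a Unicode string
keyword = u"€"              # Unicode literal, legal thanks to the coding declaration
print(page.count(keyword))  # both sides are Unicode, so counting just works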

UnicodeEncodeError: 'ascii' codec can't encode characters due to één from database

I have a field to fetch from a database that contains a string with the fragment één, and while fetching it I get this error:
"UnicodeEncodeError: 'ascii' codec can't encode characters in position 12-15: ordinal not in range(128)"
I have searched for this error, and other people were having the issue due to Unicode escapes that start something like u'\xa0', etc. But in my case, I think it's due to special characters. I cannot make changes in the database, as it's not under my access; I can only read it.
The code is here (actually it's a call to an external URL):
req = urllib2.Request(url)
req.add_header("Content-type", "application/json")
res = urllib2.urlopen(req,timeout = 50) #50 secs timeout
clientid = res.read()
result = json.loads(clientid)
Then I use the result variable to get the above-mentioned string, and I get the error on this line:
updateString +="name='"+str(result['product_name'])+"', "
You need to find the encoding that was used for your data before it was inserted into the database. Let's assume it's UTF-8, since that's the most common.
In that case you will want to decode as UTF-8 instead of ASCII. You didn't provide the decoding code, so I'm assuming you have "data".decode() somewhere. Try "data".decode("utf-8"); if your data was encoded with that encoding, it will work.
So it sounds to me like the string already was Unicode, then. Remove the str() and unicode() calls on that line.
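A sketch of that fix - in Python 2, json.loads() returns Unicode strings, and wrapping one in str() triggers an implicit ASCII encode, which is exactly what raises the error:
# Before (fails): str() implicitly encodes the Unicode value as ASCII
# updateString += "name='" + str(result['product_name']) + "', "
# After: keep everything Unicode end to end
updateString += u"name='" + result['product_name'] + u"', "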

Unicode error trying to call Google search API

I need to perform a Google search to retrieve the number of results for a query. I found the answer here - Google Search from a Python App
However, for a few queries I am getting the error below. I think the query has Unicode characters.
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 28: ordinal not in range(128)
I searched Google, found that I need to convert Unicode to ASCII, and found the code below.
import unicodedata

def convertToAscii(text, action):
    try:
        temp = unicode(text, "utf-8")
        fixed = unicodedata.normalize('NFKD', temp).encode('ASCII', action)
        return fixed
    except Exception, errorInfo:
        print errorInfo
        print "Unable to convert the Unicode characters to xml character entities"
        raise errorInfo
If I use the action ignore, it removes those characters, but if I use other actions, I am getting exceptions.
Any idea how to handle this?
Thanks
== Edit ==
I am using the code below to encode the query and then perform the search, and this is throwing the error.
query = urllib.urlencode({'q': searchfor})
You cannot urlencode raw Unicode strings. You need to encode them to UTF-8 first and then feed them to it:
query = urllib.urlencode({'q': u"München".encode('UTF-8')})
This returns q=M%C3%BCnchen which Google happily accepts.
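Applied to the question's code (a sketch; searchfor is the asker's Unicode query):
query = urllib.urlencode({'q': searchfor.encode('utf-8')})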
You can't safely convert Unicode to ASCII. Doing so involves throwing away information (specifically, it throws away non-English letters).
You should be doing the entire process in Unicode, so as not to lose any information.

Convert Unicode to ASCII without errors in Python

My code just scrapes a web page, then converts it to Unicode.
html = urllib.urlopen(link).read()
html.encode("utf8","ignore")
self.response.out.write(html)
But I get a UnicodeDecodeError:
Traceback (most recent call last):
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/webapp/__init__.py", line 507, in __call__
handler.get(*groups)
File "/Users/greg/clounce/main.py", line 55, in get
html.encode("utf8","ignore")
UnicodeDecodeError: 'ascii' codec can't decode byte 0xa0 in position 2818: ordinal not in range(128)
I assume that means the HTML contains some wrongly-formed attempt at Unicode somewhere. Can I just drop whatever code bytes are causing the problem instead of getting an error?
>>> u'aあä'.encode('ascii', 'ignore')
'a'
Decode the string you get back, using either the charset in the appropriate meta tag in the response or the one in the Content-Type header, then encode.
The encode(encoding, errors) method accepts custom handlers for errors. The built-in handlers, besides ignore, include:
>>> u'aあä'.encode('ascii', 'replace')
b'a??'
>>> u'aあä'.encode('ascii', 'xmlcharrefreplace')
b'aあä'
>>> u'aあä'.encode('ascii', 'backslashreplace')
b'a\\u3042\\xe4'
See https://docs.python.org/3/library/stdtypes.html#str.encode
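You can also register your own handler via codecs.register_error; a minimal sketch (the handler name 'underscore' is made up here):
import codecs

def underscore_handler(err):
    # Return a replacement string and the position at which to resume encoding
    return (u'_', err.end)

codecs.register_error('underscore', underscore_handler)
print(u'aあä'.encode('ascii', 'underscore'))  # b'a__'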
As an extension to Ignacio Vazquez-Abrams' answer
>>> u'aあä'.encode('ascii', 'ignore')
'a'
It is sometimes desirable to remove accents from characters and print the base form. This can be accomplished with
>>> import unicodedata
>>> unicodedata.normalize('NFKD', u'aあä').encode('ascii', 'ignore')
'aa'
You may also want to translate other characters (such as punctuation) to their nearest equivalents; for instance, the RIGHT SINGLE QUOTATION MARK Unicode character does not get converted to an ASCII APOSTROPHE when encoding:
>>> print u'\u2019'
’
>>> unicodedata.name(u'\u2019')
'RIGHT SINGLE QUOTATION MARK'
>>> u'\u2019'.encode('ascii', 'ignore')
''
# Note we get an empty string back
>>> u'\u2019'.replace(u'\u2019', u'\'').encode('ascii', 'ignore')
"'"
There are more efficient ways to accomplish this, though. See this question for more details: Where is Python's "best ASCII for this Unicode" database?
2018 Update:
As of February 2018, using compression like gzip has become quite popular (around 73% of all websites use it, including large sites like Google, YouTube, Yahoo, Wikipedia, Reddit, Stack Overflow and Stack Exchange Network sites).
If you do a simple decode like in the original answer with a gzipped response, you'll get an error like this:
UnicodeDecodeError: 'utf8' codec can't decode byte 0x8b in position 1: unexpected code byte
In order to decode a gzipped response you need to add the following modules (in Python 3):
import gzip
import io
Note: In Python 2 you'd use StringIO instead of io
Then you can parse the content out like this:
from urllib.request import urlopen  # Python 3; in Python 2 use urllib2.urlopen

response = urlopen("https://example.com/gzipped-ressource")
buffer = io.BytesIO(response.read())  # Use StringIO.StringIO(response.read()) in Python 2
gzipped_file = gzip.GzipFile(fileobj=buffer)
decoded = gzipped_file.read()
content = decoded.decode("utf-8")  # Replace utf-8 with the source encoding of your requested resource
This code reads the response and places the bytes in a buffer. The gzip module then reads the buffer via the GzipFile class. After that, the decompressed bytes can be read back out and finally decoded into normally readable text.
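For what it's worth, the third-party requests library handles gzip transparently, so a sketch of the same fetch is just:
import requests

resp = requests.get("https://example.com/gzipped-ressource")
content = resp.text  # requests decompresses gzip and decodes the declared charset for you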
Original Answer from 2010:
Can we get the actual value used for link?
In addition, we usually encounter this problem here when we are trying to .encode() an already encoded byte string. So you might try to decode it first as in
html = urllib.urlopen(link).read()
unicode_str = html.decode(<source encoding>)
encoded_str = unicode_str.encode("utf8")
As an example:
html = '\xa0'
encoded_str = html.encode("utf8")
Fails with
UnicodeDecodeError: 'ascii' codec can't decode byte 0xa0 in position 0: ordinal not in range(128)
While:
html = '\xa0'
decoded_str = html.decode("windows-1252")
encoded_str = decoded_str.encode("utf8")
Succeeds without error. Do note that "windows-1252" is something I used as an example. I got it from chardet, and it had 0.5 confidence that it is right! (Well, with a 1-character-long string, what do you expect?) You should change that to the encoding of the byte string returned from .urlopen().read(), i.e. whatever applies to the content you actually retrieved.
Another problem I see there is that the .encode() string method returns the modified string and does not modify the source in place. So it's kind of useless to have self.response.out.write(html), as html is not the encoded string from html.encode (if that is what you were originally aiming for).
As Ignacio suggested, check the source webpage for the actual encoding of the returned string from read(). It's either in one of the meta tags or in the Content-Type header of the response. Use that as the parameter for .decode().
Do note however that it should not be assumed that other developers are responsible enough to make sure the header and/or meta character set declarations match the actual content. (Which is a PITA, yeah, I should know, I was one of those before).
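When the declarations can't be trusted, a sketch of the chardet approach mentioned above (third-party: pip install chardet; link is the question's URL variable):
import urllib
import chardet

html = urllib.urlopen(link).read()  # byte string
guess = chardet.detect(html)        # e.g. {'encoding': 'windows-1252', 'confidence': 0.5}
unicode_html = html.decode(guess['encoding'] or 'utf-8', 'replace')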
Use unidecode - it converts weird characters to ASCII instantly, and even converts Chinese to phonetic ASCII.
$ pip install unidecode
then:
>>> from unidecode import unidecode
>>> unidecode(u'北京')
'Bei Jing'
>>> unidecode(u'Škoda')
'Skoda'
I use this helper function throughout all of my projects. If it can't convert the unicode, it ignores it. This ties into a django library, but with a little research you could bypass it.
from django.utils import encoding

def convert_unicode_to_string(x):
    """
    >>> convert_unicode_to_string(u'ni\xf1era')
    'niera'
    """
    return encoding.smart_str(x, encoding='ascii', errors='ignore')
I no longer get any unicode errors after using this.
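If you'd rather not pull in Django for this, a plain-Python 2 equivalent might look like this (a sketch with the same 'ignore' semantics):
def convert_unicode_to_string(x):
    # Best-effort ASCII version of x, silently dropping what won't fit
    if isinstance(x, unicode):
        return x.encode('ascii', 'ignore')
    return str(x)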
For broken consoles like cmd.exe and HTML output you can always use:
my_unicode_string.encode('ascii','xmlcharrefreplace')
This will preserve all the non-ascii chars while making them printable in pure ASCII and in HTML.
WARNING: If you use this in production code to avoid errors then most likely there is something wrong in your code. The only valid use case for this is printing to a non-unicode console or easy conversion to HTML entities in an HTML context.
And finally, if you are on Windows and use cmd.exe, you can type chcp 65001 to enable UTF-8 output (works with the Lucida Console font). You might need to add myUnicodeString.encode('utf8').
You wrote """I assume that means the HTML contains some wrongly-formed attempt at unicode somewhere."""
The HTML is NOT expected to contain any kind of "attempt at unicode", well-formed or not. It must of necessity contain Unicode characters encoded in some encoding, which is usually supplied up front ... look for "charset".
You appear to be assuming that the charset is UTF-8 ... on what grounds? The "\xA0" byte shown in your error message indicates that you may have a single-byte charset, e.g. cp1252.
If you can't get any sense out of the declaration at the start of the HTML, try using chardet to find out what the likely encoding is.
Why have you tagged your question with "regex"?
Update after you replaced your whole question with a non-question:
html = urllib.urlopen(link).read()
# html refers to a str object. To get unicode, you need to find out
# how it is encoded, and decode it.
html.encode("utf8","ignore")
# problem 1: will fail because html is a str object;
# encode works on unicode objects so Python tries to decode it using
# 'ascii' and fails
# problem 2: even if it worked, the result will be ignored; it doesn't
# update html in situ, it returns a function result.
# problem 3: "ignore" with UTF-n: any valid unicode object
# should be encodable in UTF-n; error implies end of the world,
# don't try to ignore it. Don't just whack in "ignore" willy-nilly,
# put it in only with a comment explaining your very cogent reasons for doing so.
# "ignore" with most other encodings: error implies that you are mistaken
# in your choice of encoding -- same advice as for UTF-n :-)
# "ignore" with decode latin1 aka iso-8859-1: error implies end of the world.
# Irrespective of error or not, you are probably mistaken
# (needing e.g. cp1252 or even cp850 instead) ;-)
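Putting those comments into practice, a corrected sketch (assuming you've established the page's charset really is, say, cp1252; self.response is the question's App Engine handler):
html = urllib.urlopen(link).read()                    # byte string
unicode_html = html.decode("cp1252")                  # decode with the page's actual charset
self.response.out.write(unicode_html.encode("utf8"))  # use the return value, don't discard it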
If you have a string line, you can use the .encode([encoding], [errors='strict']) method for strings to convert between encodings.
line = 'my big string'
line.encode('ascii', 'ignore')
For more information about handling ASCII and unicode in Python, this is a really useful site: https://docs.python.org/2/howto/unicode.html
I think the answer is there, but only in bits and pieces, which makes it difficult to quickly fix problems such as
UnicodeDecodeError: 'ascii' codec can't decode byte 0xa0 in position 2818: ordinal not in range(128)
Let's take an example. Suppose I have a file which has some data in the following form (containing ASCII and non-ASCII chars):
1/10/17, 21:36 - Land : Welcome ��
and we want to drop the non-ASCII characters, preserving only ASCII. This code will do it:
import unicodedata
fp = open(<FILENAME>)
for line in fp:
    rline = line.strip()
    rline = unicode(rline, "utf-8")
    rline = unicodedata.normalize('NFKD', rline).encode('ascii', 'ignore')
    if len(rline) != 0:
        print rline
and type(rline) will give you
>type(rline)
<type 'str'>
unicodestring = '\xa0'
decoded_str = unicodestring.decode("windows-1252")
encoded_str = decoded_str.encode('ascii', 'ignore')
Works for me
You can use the following piece of code as an example to avoid Unicode to ASCII errors:
from anyascii import anyascii
content = "Base Rent for – CC# 2100 Acct# 8410: $41,667.00 – PO – Lines - for Feb to Dec to receive monthly"
content = anyascii(content)
print(content)
Looks like you are using Python 2.x.
Python 2.x defaults to ASCII and doesn't know about Unicode. Hence the exception.
Just paste the line below after the shebang, and it will work:
# -*- coding: utf-8 -*-
