Why can't I convert a unicode string to a plain Python string?

url = u'/wiki/Category:%E6%89%93%E7%A3%9A%E5%A1%8A'
The decoded string is (through https://www.urldecoder.org/):
decoded_url = u'/wiki/Category:打磚塊'
In Python, I have the following code to do this conversion:
decoded_url = url.decode('utf-8')
This code doesn't change it at all. I also tried:
decoded_url = url.encode('utf-8')
The string remains the same. How do I convert it to the decoded string I want?

Here's Python 2.7 code that gives you the result you want from the original string in your question:
import urlparse
utfStr = u"/wiki/Category:%E6%89%93%E7%A3%9A%E5%A1%8A"
asciiStr = utfStr.encode('ascii')  # the percent-escaped URL is plain ASCII
result = urlparse.unquote(asciiStr)
print(result)
Result:
/wiki/Category:打磚塊
It appears that unquote does the wrong thing when given a unicode string. You have to convert it to a byte string first before unquote will do the right thing.
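For illustration, here is a minimal Python 2 sketch of that difference (the mojibake comes from unquote mapping each %XX escape to a Latin-1 codepoint when given unicode input):
# -*- coding: utf-8 -*-
import urlparse

quoted = u'%E6%89%93%E7%A3%9A%E5%A1%8A'

# on a unicode string, each %XX escape becomes a Latin-1 codepoint: wrong
print repr(urlparse.unquote(quoted))   # u'\xe6\x89\x93\xe7\xa3\x9a\xe5\xa1\x8a'

# on a byte string, the escapes become raw UTF-8 bytes, which decode cleanly
print urlparse.unquote(quoted.encode('ascii')).decode('utf-8')   # 打磚塊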

This is not UTF-8 encoding but URL escaping (also called URL quoting).
import urllib.parse
print( urllib.parse.unquote( u'/wiki/Category:%E6%89%93%E7%A3%9A%E5%A1%8A') )
Result
/wiki/Category:打磚塊
Python 3.x doc: urllib.parse
EDIT:
Python 2.7 has it in module urlparse
import urlparse
print( urlparse.unquote(u'/wiki/Category:%E6%89%93%E7%A3%9A%E5%A1%8A') )
Python 2.7 doc: urlparse
EDIT:
After testing with Python 2.7: you need encode() before unquote(), so that unquote() works on a str (plain byte string) instead of unicode:
#-*- coding: utf-8 -*-
import urlparse
url = u'/wiki/Category:%e6%89%93%E7%A3%9A%E5%A1%8A'
url = url.encode('utf-8') # convert `unicode` to `str`
url = urlparse.unquote(url) # convert `%e6%89%93%E7%A3%9A%E5%A1%8A` to `打磚塊`
print url
print type(url)
print '打磚塊' in url
Result
/wiki/Category:打磚塊
<type 'str'>
True
BTW: the same works in Python 3, which doesn't need encode():
import urllib.parse
url = u'/wiki/Category:%e6%89%93%E7%A3%9A%E5%A1%8A'
url = urllib.parse.unquote(url) # convert `%e6%89%93%E7%A3%9A%E5%A1%8A` to `打磚塊`
print(url)
print(type(url))
print('打磚塊' in url)
Result:
/wiki/Category:打磚塊
<class 'str'>
True
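As an aside, Python 3's unquote() also accepts encoding and errors arguments, which matter when the percent-escapes are not UTF-8; a small sketch:
from urllib.parse import unquote

# the default assumes the percent-escapes are UTF-8 bytes
print(unquote('/wiki/Category:%E6%89%93%E7%A3%9A%E5%A1%8A'))   # /wiki/Category:打磚塊

# for legacy encodings, pass encoding= explicitly;
# %E9 is 'é' in Latin-1 but not valid UTF-8 on its own
print(unquote('caf%E9', encoding='latin-1'))                   # café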

Related

Python3 error: initial_value must be str or None, with StringIO

While porting code from Python 2 to 3, I get this error when reading from a URL:
TypeError: initial_value must be str or None, not bytes.
import json
import gzip
from io import StringIO
from urllib.parse import urlencode
from urllib.request import Request, urlopen

service_url = 'https://babelfy.io/v1/disambiguate'
text = 'BabelNet is both a multilingual encyclopedic dictionary and a semantic network'
lang = 'EN'
Key = 'KEY'
params = {
    'text': text,
    'key': Key,
    'lang': lang
}
url = service_url + '?' + urlencode(params)
request = Request(url)
request.add_header('Accept-encoding', 'gzip')
response = urlopen(request)
if response.info().get('Content-Encoding') == 'gzip':
    buf = StringIO(response.read())
    f = gzip.GzipFile(fileobj=buf)
    data = json.loads(f.read())
The exception is thrown at this line
buf = StringIO(response.read())
If I use python2, it works fine.
response.read() returns an instance of bytes while StringIO is an in-memory stream for text only. Use BytesIO instead.
From What's new in Python 3.0 - Text Vs. Data Instead Of Unicode Vs. 8-bit
The StringIO and cStringIO modules are gone. Instead, import the io module and use io.StringIO or io.BytesIO for text and data respectively.
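A minimal sketch of the fix for the snippet above, swapping StringIO for BytesIO (the service URL and headers are taken from the question):
import gzip
import json
from io import BytesIO
from urllib.request import Request, urlopen

request = Request('https://babelfy.io/v1/disambiguate?lang=EN&key=KEY&text=BabelNet')
request.add_header('Accept-encoding', 'gzip')
response = urlopen(request)

if response.info().get('Content-Encoding') == 'gzip':
    buf = BytesIO(response.read())               # bytes need a binary stream
    f = gzip.GzipFile(fileobj=buf)
    data = json.loads(f.read().decode('utf-8'))  # decode bytes before parsing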
This looks like another Python 3 bytes vs. str problem. Your response is of type bytes (which is distinct from str in Python 3). You need to get it into a string first, using response.read().decode('utf-8') say, and then use StringIO on it. Or you may want to use BytesIO as suggested above; but if you expect str, the preferred way is to decode into str first.
Consider using six.StringIO instead of io.StringIO.
And if you are migrating code from Python 2 to Python 3 and using an old version of suds, use "suds-py3" for Python 3.

Python 3.4.0 -- xpath -- gets me empty list [duplicate]

Trying to retrieve some data from the web using urllib and lxml, I got an error and have no idea how to fix it.
url='http://sum.in.ua/?swrd=автор'
page = urllib.request.urlopen(url)
The error itself:
UnicodeEncodeError: 'ascii' codec can't encode characters in position 11-15: ordinal not in range(128)
I'm using Ukrainian in the URL this time, but when I use a URL without any Ukrainian letters in it:
url="http://www.toponymic-dictionary.in.ua/index.php?option=com_content&view=section&layout=blog&id=8&Itemid=9"
page = urllib.request.urlopen(url)
pageWritten = page.read()
pageReady = pageWritten.decode('utf-8')
xmldata = lxml.html.document_fromstring(pageReady)
text1 = xmldata.xpath('//p[@class="MsoNormal"]//text()')
it gets me the data in Ukrainian and everything works just fine.
URLs can only use a subset of printable ASCII codepoints; everything else must be properly encoded using URL percent encoding.
You can best achieve that by letting Python handle your parameters. The urllib.parse.urlencode() function can convert a dictionary (or a sequence of key-value pairs) for use in URLs:
import urllib.request
from urllib.parse import urlencode

url = 'http://sum.in.ua/'
parameters = {'swrd': 'автор'}
url = '{}?{}'.format(url, urlencode(parameters))
page = urllib.request.urlopen(url)
This will first encode the parameters to UTF-8 bytes, then convert those bytes to percent-encoding sequences:
>>> from urllib.parse import urlencode
>>> parameters = {'swrd': 'автор'}
>>> urlencode(parameters)
'swrd=%D0%B0%D0%B2%D1%82%D0%BE%D1%80'
If you did not construct this URL yourself, you'll need to 'repair' the encoding. You can split off the query string, parse it into a dictionary, then pass it to urlencode to put it back into the URL, using urllib.parse.urlparse() and urllib.parse.parse_qs():
from urllib.parse import urlparse, parse_qs, urlencode
url = 'http://sum.in.ua/?swrd=автор'
parsed_url = urlparse(url)
parameters = parse_qs(parsed_url.query)
url = parsed_url._replace(query=urlencode(parameters, doseq=True)).geturl()
This splits the URL into its constituent parts, parses out the query string, re-encodes and re-builds the URL afterwards:
>>> from urllib.parse import urlparse, parse_qs, urlencode
>>> url = 'http://sum.in.ua/?swrd=автор'
>>> parsed_url = urlparse(url)
>>> parameters = parse_qs(parsed_url.query)
>>> parsed_url._replace(query=urlencode(parameters, doseq=True)).geturl()
'http://sum.in.ua/?swrd=%D0%B0%D0%B2%D1%82%D0%BE%D1%80'
I believe you can do something like below:
import urllib
import requests

url = 'http://sum.in.ua/'
q = 'swrd=автор'
requests.get(url + "?" + urllib.quote(q, safe='='))
I think urllib.quote (with '=' marked safe, so the separator is kept) will transform "swrd=автор" into something like "swrd=%D0%B0%D0%B2%D1%82%D0%BE%D1%80",
which should be accepted just fine.
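Alternatively, requests can do the percent-encoding for you if you pass the parameters as a dict:
import requests

# requests URL-encodes the parameter values (UTF-8 percent-escapes) itself
r = requests.get('http://sum.in.ua/', params={'swrd': 'автор'})
print(r.url)   # http://sum.in.ua/?swrd=%D0%B0%D0%B2%D1%82%D0%BE%D1%80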

Regex on unicode string

I am trying to download a few hundred Korean pages like this one:
http://homeplusexpress.com/store/store_view.asp?cd_express=3
For each page, I want to use a regex to extract the "address" field, which in the above page looks like:
*주소 : 서울시 광진구 구의1동 236-53
So I do this:
>>> import requests
>>> resp=requests.get('http://homeplusexpress.com/store/store_view.asp?cd_express=3')
>>> resp.encoding
'ISO-8859-1'
>>> # I wonder why it's ISO-8859-1, since I thought that is for Latin text (Latin-1).
>>> html = resp.text
>>> type(html)
<type 'unicode'>
>>> html
(outputs a long string that contains a lot of characters like \xc3\xb7\xaf\xbd\xba \xc0\xcd\xbd\xba\xc7\xc1\xb7\xb9)
I then wrote a script. I set # -*- coding: utf-8 -*- on the .py file and put this:
address = re.search('주소', html)
However, re.search is returning None. I tried with and without the u prefix on the regex string.
Usually I can solve issues like this with a call to .encode or .decode but I tried a few things and am stuck. Any pointers on what I'm missing?
According to the tag in the html document header:
<meta http-equiv="Content-Type" content="text/html; charset=euc-kr">
the web page uses the euc-kr encoding.
I wrote this code:
# -*- coding: euc-kr -*-
import re
import requests
resp=requests.get('http://homeplusexpress.com/store/store_view.asp?cd_express=3')
html = resp.text
address = re.search('주소', html)
print address
Then I saved it in gedit using the euc-kr encoding.
I got a match.
But actually there is an even better solution! You can keep the utf-8 encoding for your files.
# -*- coding: utf-8 -*-
import re
import requests
resp=requests.get('http://homeplusexpress.com/store/store_view.asp?cd_express=3')
resp.encoding = 'euc-kr'
# we need to specify what the encoding is because the
# requests library couldn't detect it correctly
html = resp.text
# now the html variable is a proper unicode instance, decoded from euc-kr
print type(html)
# we use the re.search functions with unicode strings
address = re.search(u'주소', html)
print address
From the requests documentation: When you make a request, Requests makes educated guesses about the encoding of the response based on the HTTP headers.
If you check the server response for your website, you can see that no encoding is declared in it.
I think the only option in this case is directly specify what encoding to use:
# -*- coding: utf-8 -*-
import requests
import re
r = requests.get('http://homeplusexpress.com/store/store_view.asp?cd_express=3')
r.encoding = 'euc-kr'
print re.search(ur'주소', r.text, re.UNICODE)
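If you want to confirm what the server actually sent, you can inspect the response headers along with requests' guessed encodings; a small sketch:
import requests

r = requests.get('http://homeplusexpress.com/store/store_view.asp?cd_express=3')
print(r.headers.get('Content-Type'))   # no charset here, so requests falls back to ISO-8859-1
print(r.encoding)                      # 'ISO-8859-1' (the HTTP default, not the real encoding)
print(r.apparent_encoding)             # requests' best guess from the body bytes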

BeautifulSoup isn't recognizing UTF-8 characters even after using "fromEncoding=UTF-8"

I wrote a simple script that just takes a webpage and extracts its contents into a tokenized list. However, I'm running into an issue: when I convert the BeautifulSoup object to a string, the UTF-8 characters for quotes, apostrophes, etc. won't convert. Instead, they remain in Unicode escape format.
I'm defining the source as UTF-8 when I create the BeautifulSoup object, and I've even tried running a unicode conversion separately, but nothing works. Any have any idea why this is happening?
from urllib2 import urlopen
from bs4 import BeautifulSoup
import nltk, re, pprint
url = "http://www.bloomberg.com/news/print/2013-07-05/softbank-s-21-6-billion-bid-for- sprint-approved-by-u-s-.html"
raw = urlopen(url).read()
soup = BeautifulSoup(raw, fromEncoding="UTF-8")
result = soup.find_all(id="story_content")
str_result = str(result)
notag = re.sub("<.*?>", " ", str_result)
output = nltk.word_tokenize(notag)
print(output)
The characters you're having trouble with aren't " (U+0022) and ' (U+0027), they're curly quotes “ (U+201C) and ” (U+201D) and ’ (U+2019). Convert those to their straight versions first, and you should get the results you're expecting:
raw = urlopen(url).read()
original = raw.decode('utf-8')
replacement = original.replace(u'\u201c', u'"').replace(u'\u201d', u'"').replace(u'\u2019', u"'")
soup = BeautifulSoup(replacement) # Don't need fromEncoding if we're passing in Unicode
That should get the quote characters into the form you're expecting.

How to fetch a non-ascii url with urlopen?

I need to fetch data from a URL with non-ascii characters but urllib2.urlopen refuses to open the resource and raises:
UnicodeEncodeError: 'ascii' codec can't encode character u'\u0131' in position 26: ordinal not in range(128)
I know the URL is not standards compliant but I have no chance to change it.
What is the way to access a resource pointed by a URL containing non-ascii characters using Python?
edit: In other words, can urlopen open a URL like the following, and if so, how?
http://example.org/Ñöñ-ÅŞÇİİ/
Strictly speaking URIs can't contain non-ASCII characters; what you have there is an IRI.
To convert an IRI to a plain ASCII URI:
non-ASCII characters in the hostname part of the address have to be encoded using the Punycode-based IDNA algorithm;
non-ASCII characters in the path, and most of the other parts of the address have to be encoded using UTF-8 and %-encoding, as per Ignacio's answer.
So:
import re, urlparse

def urlEncodeNonAscii(b):
    return re.sub('[\x80-\xFF]', lambda c: '%%%02x' % ord(c.group(0)), b)

def iriToUri(iri):
    parts = urlparse.urlparse(iri)
    return urlparse.urlunparse(
        part.encode('idna') if parti == 1 else urlEncodeNonAscii(part.encode('utf-8'))
        for parti, part in enumerate(parts)
    )
>>> iriToUri(u'http://www.a\u0131b.com/a\u0131b')
'http://www.xn--ab-hpa.com/a%c4%b1b'
(Technically this still isn't quite good enough in the general case because urlparse doesn't split away any user:pass@ prefix or :port suffix on the hostname. Only the hostname part should be IDNA encoded. It's easier to encode using normal urllib.quote and .encode('idna') at the time you're constructing a URL than to have to pull an IRI apart.)
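A sketch of that construct-at-the-source approach in Python 2 (the host and path values are illustrative, taken from the example above):
# -*- coding: utf-8 -*-
import urllib

host = u'www.a\u0131b.com'   # illustrative hostname
path = u'/a\u0131b'          # illustrative path

url = 'http://%s%s' % (
    host.encode('idna'),                 # Punycode for the host part only
    urllib.quote(path.encode('utf-8')),  # UTF-8 bytes, then percent-escaping
)
print url   # http://www.xn--ab-hpa.com/a%C4%B1b (same URI, upper-case escapes)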
In python3, use the urllib.parse.quote function on the non-ascii string:
>>> from urllib.request import urlopen
>>> from urllib.parse import quote
>>> chinese_wikipedia = 'http://zh.wikipedia.org/wiki/Wikipedia:' + quote('首页')
>>> urlopen(chinese_wikipedia)
Python 3 has libraries to handle this situation. Use
urllib.parse.urlsplit to split the URL into its components, and
urllib.parse.quote to properly quote/escape the unicode characters
and urllib.parse.urlunsplit to join it back together.
>>> import urllib.parse
>>> url = 'http://example.com/unicodè'
>>> url = urllib.parse.urlsplit(url)
>>> url = list(url)
>>> url[2] = urllib.parse.quote(url[2])
>>> url = urllib.parse.urlunsplit(url)
>>> print(url)
http://example.com/unicod%C3%A8
It is more complex than the accepted answer from @bobince suggests:
netloc should be encoded using IDNA;
non-ascii URL path should be encoded to UTF-8 and then percent-escaped;
non-ascii query parameters should be encoded to the encoding of the page the URL was extracted from (or to the encoding the server uses), then percent-escaped.
This is how all browsers work; it is specified in https://url.spec.whatwg.org/ - see this example. A Python implementation can be found in w3lib (this is the library Scrapy is using); see w3lib.url.safe_url_string:
from w3lib.url import safe_url_string
url = safe_url_string(u'http://example.org/Ñöñ-ÅŞÇİİ/', encoding="<page encoding>")
An easy way to check if a URL escaping implementation is incorrect/incomplete is to check if it provides 'page encoding' argument or not.
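For reference, a minimal Python 3 sketch of the three rules above (it assumes a bare, non-empty hostname in netloc and a caller-supplied page_encoding; w3lib handles many more edge cases):
from urllib.parse import urlsplit, urlunsplit, quote

def iri_to_uri(iri, page_encoding='utf-8'):
    parts = urlsplit(iri)
    return urlunsplit((
        parts.scheme,
        parts.netloc.encode('idna').decode('ascii'),            # rule 1: IDNA host
        quote(parts.path, safe='/'),                            # rule 2: UTF-8 + percent-escapes
        quote(parts.query, safe='=&', encoding=page_encoding),  # rule 3: page encoding
        quote(parts.fragment),
    ))

print(iri_to_uri('http://国立極地研究所.jp/wiki/日本語?q=日本語'))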
Based on @darkfeline's answer:
from urllib.parse import urlsplit, urlunsplit, quote

def iri2uri(iri):
    """
    Convert an IRI to a URI (Python 3).
    """
    uri = ''
    if isinstance(iri, str):
        (scheme, netloc, path, query, fragment) = urlsplit(iri)
        scheme = quote(scheme)
        netloc = netloc.encode('idna').decode('utf-8')
        path = quote(path)
        query = quote(query)
        fragment = quote(fragment)
        uri = urlunsplit((scheme, netloc, path, query, fragment))
    return uri
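Example usage, reusing the URL from the urlsplit answer above:
>>> iri2uri('http://example.com/unicodè')
'http://example.com/unicod%C3%A8'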
For those not depending strictly on urllib, one practical alternative is requests, which handles IRIs "out of the box".
For example, with http://bücher.ch:
>>> import requests
>>> r = requests.get(u'http://b\u00DCcher.ch')
>>> r.status_code
200
Encode the unicode to UTF-8, then URL-encode.
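In Python 2 that looks like the following (a minimal sketch, reusing the 首页 string from the Python 3 example above):
>>> import urllib
>>> urllib.quote(u'首页'.encode('utf-8'))
'%E9%A6%96%E9%A1%B5'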
Use the iri2uri method of httplib2. It does the same thing as @bobince's answer (is he/she the author of that?).
Another option to convert an IRI to an ASCII URI is to use furl package:
gruns/furl: 🌐 URL parsing and manipulation made easy. - https://github.com/gruns/furl
Python's standard urllib and urlparse modules provide a number of URL
related functions, but using these functions to perform common URL
operations proves tedious. Furl makes parsing and manipulating URLs
easy.
Examples
Non-ASCII domain
http://国立極地研究所.jp/english/ (Japanese National Institute of Polar Research website)
import furl
url = 'http://国立極地研究所.jp/english/'
furl.furl(url).tostr()
'http://xn--vcsoey76a2hh0vtuid5qa.jp/english/'
Non-ASCII path
https://ja.wikipedia.org/wiki/日本語 ("Japanese" article in Wikipedia)
import furl
url = 'https://ja.wikipedia.org/wiki/日本語'
furl.furl(url).tostr()
'https://ja.wikipedia.org/wiki/%E6%97%A5%E6%9C%AC%E8%AA%9E'
Works! Finally.
I could not avoid these strange characters, but in the end I got through it.
import urllib.request
import os

url = "http://www.fourtourismblog.it/le-nuove-tendenze-del-marketing-tenere-docchio/"
with urllib.request.urlopen(url) as file:
    html = file.read()
with open("marketingturismo.html", "w", encoding='utf-8') as file:
    file.write(html.decode('utf-8'))
os.system("marketingturismo.html")
