How can I normalize a URL in Python?

I'd like to know how to normalize a URL in Python.
For example, if I have a URL string like "http://www.example.com/foo goo/bar.html",
I need a library in Python that will transform the extra space (or any other non-normalized character) into a proper URL.

Have a look at the werkzeug.utils module (the function now lives in werkzeug.urls).
The function you are looking for is called url_fix and works like this:
>>> from werkzeug.urls import url_fix
>>> url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffsklärung)')
'http://de.wikipedia.org/wiki/Elf%20%28Begriffskl%C3%A4rung%29'
It's implemented in Werkzeug as follows:
import urllib
import urlparse

def url_fix(s, charset='utf-8'):
    """Sometimes you get an URL by a user that just isn't a real
    URL because it contains unsafe characters like ' ' and so on. This
    function can fix some of the problems in a similar way browsers
    handle data entered by the user:

    >>> url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffsklärung)')
    'http://de.wikipedia.org/wiki/Elf%20%28Begriffskl%C3%A4rung%29'

    :param charset: The target charset for the URL if the url was
                    given as unicode string.
    """
    if isinstance(s, unicode):
        s = s.encode(charset, 'ignore')
    scheme, netloc, path, qs, anchor = urlparse.urlsplit(s)
    path = urllib.quote(path, '/%')
    qs = urllib.quote_plus(qs, ':&=')
    return urlparse.urlunsplit((scheme, netloc, path, qs, anchor))
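For Python 3, where urllib and urlparse merged into urllib.parse and str is always unicode, a minimal sketch of the same approach (my own port, not Werkzeug's current code) could look like this:

from urllib.parse import urlsplit, urlunsplit, quote, quote_plus

def url_fix(s):
    # Percent-encode only the path and query string, leaving
    # already-encoded sequences ('%') and the separators intact.
    scheme, netloc, path, qs, anchor = urlsplit(s)
    path = quote(path, '/%')
    qs = quote_plus(qs, ':&=')
    return urlunsplit((scheme, netloc, path, qs, anchor))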

The real fix for that problem in Python 2.7
The right solution was:
# percent-encode the URL, working around lame server errors
# such as a space within URL paths.
fullurl = quote(fullurl, safe="%/:=&?~#+!$,;'#()*[]")
For more information see Issue918368: "urllib doesn't correct server returned urls"
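On Python 3 the same call is available as urllib.parse.quote; a quick sketch against the URL from the question (the safe set is copied verbatim from the line above):

from urllib.parse import quote  # plain urllib.quote on Python 2

url = "http://www.example.com/foo goo/bar.html"
fixed = quote(url, safe="%/:=&?~#+!$,;'#()*[]")
# fixed == 'http://www.example.com/foo%20goo/bar.html'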

Use urllib.quote or urllib.quote_plus.
From the urllib documentation:
quote(string[, safe])
Replace special characters in string using the "%xx" escape. Letters, digits, and the characters "_.-" are never quoted. The optional safe parameter specifies additional characters that should not be quoted -- its default value is '/'.
Example: quote('/~connolly/') yields '/%7econnolly/'.
quote_plus(string[, safe])
Like quote(), but also replaces spaces by plus signs, as required for quoting HTML form values. Plus signs in the original string are escaped unless they are included in safe. It also does not default safe to '/'.
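A quick illustration of the difference (using the Python 3 names, where both functions live in urllib.parse; the strings are my own examples):
>>> from urllib.parse import quote, quote_plus
>>> quote('/el niño/')
'/el%20ni%C3%B1o/'
>>> quote_plus('a=1&b=el niño')
'a%3D1%26b%3Del+ni%C3%B1o'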
EDIT: Using urllib.quote or urllib.quote_plus on the whole URL will mangle it, as @ΤΖΩΤΖΙΟΥ points out:
>>> quoted_url = urllib.quote('http://www.example.com/foo goo/bar.html')
>>> quoted_url
'http%3A//www.example.com/foo%20goo/bar.html'
>>> urllib2.urlopen(quoted_url)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "c:\python25\lib\urllib2.py", line 124, in urlopen
    return _opener.open(url, data)
  File "c:\python25\lib\urllib2.py", line 373, in open
    protocol = req.get_type()
  File "c:\python25\lib\urllib2.py", line 244, in get_type
    raise ValueError, "unknown url type: %s" % self.__original
ValueError: unknown url type: http%3A//www.example.com/foo%20goo/bar.html
@ΤΖΩΤΖΙΟΥ provides a function that uses urlparse.urlparse and urlparse.urlunparse to parse the URL and encode only the path. This may be more useful for you, although if you're building the URL from a known protocol and host but with a suspect path, you could probably do just as well to skip urlparse and simply quote the suspect part of the URL, concatenating it with the known-safe parts.
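A minimal sketch of that last suggestion, assuming the base is trusted and only the path needs encoding (the variable names are my own):

from urllib.parse import quote  # urllib.quote on Python 2

base = 'http://www.example.com'      # known-safe scheme and host
suspect_path = '/foo goo/bar.html'   # user-supplied, may contain spaces etc.
url = base + quote(suspect_path)
# url == 'http://www.example.com/foo%20goo/bar.html'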

Because this page is a top result for Google searches on the topic, I think it's worth mentioning some work that has been done on URL normalization with Python that goes beyond urlencoding space characters. For example, dealing with default ports, character case, lack of trailing slashes, etc.
When the Atom syndication format was being developed, there was some discussion on how to normalize URLs into canonical format; this is documented in the article PaceCanonicalIds on the Atom/Pie wiki. That article provides some good test cases.
I believe that one result of this discussion was Mark Nottingham's urlnorm.py library, which I've used with good results on a couple of projects. That script doesn't work with the URL given in this question, however, so a better choice might be Sam Ruby's version of urlnorm.py, which handles that URL as well as all of the aforementioned test cases from the Atom wiki.

Py3
from urllib.parse import urlparse, urlunparse, quote

def myquote(url):
    parts = urlparse(url)
    return urlunparse(parts._replace(path=quote(parts.path)))

>>> myquote('https://www.example.com/~user/with space/index.html?a=1&b=2')
'https://www.example.com/~user/with%20space/index.html?a=1&b=2'
Py2
import urlparse, urllib

def myquote(url):
    parts = urlparse.urlparse(url)
    return urlparse.urlunparse(parts[:2] + (urllib.quote(parts[2]),) + parts[3:])

>>> myquote('https://www.example.com/~user/with space/index.html?a=1&b=2')
'https://www.example.com/%7Euser/with%20space/index.html?a=1&b=2'
This quotes only the path component.

Just FYI, urlnorm has moved to GitHub:
http://gist.github.com/246089

Valid for Python 3.5:
import urllib.parse
urllib.parse.quote([your_url], "\./_-:")
Example:
import urllib.parse
print(urllib.parse.quote("http://www.example.com/foo goo/bar.html", "\./_-:"))
The output will be http://www.example.com/foo%20goo/bar.html
Source: https://docs.python.org/3.5/library/urllib.parse.html?highlight=quote#urllib.parse.quote

I ran into a similar problem: I needed to quote only the spaces.
fullurl = quote(fullurl, safe="%/:=&?~#+!$,;'#()*[]") does help, but it's too complicated.
So I used a simple way: url = url.replace(' ', '%20'). It's not perfect, but it's the simplest way and it works for this situation.

A lot of answers here talk about quoting URLs, not about normalizing them.
The best tool to normalize URLs (for deduplication etc.) in Python, IMO, is w3lib's w3lib.url.canonicalize_url util.
Taken from the official docs:
Canonicalize the given url by applying the following procedures:
- sort query arguments, first by key, then by value
- percent-encode paths; non-ASCII characters are percent-encoded using UTF-8 (RFC 3986)
- percent-encode query arguments; non-ASCII characters are percent-encoded using the passed encoding (UTF-8 by default)
- normalize all spaces (in query arguments) to '+' (plus symbol)
- normalize percent-encoding case (%2f -> %2F)
- remove query arguments with blank values (unless keep_blank_values is True)
- remove fragments (unless keep_fragments is True)
The url passed can be bytes or unicode, while the url returned is always a native str (bytes in Python 2, unicode in Python 3).
>>> import w3lib.url
>>>
>>> # sorting query arguments
>>> w3lib.url.canonicalize_url('http://www.example.com/do?c=3&b=5&b=2&a=50')
'http://www.example.com/do?a=50&b=2&b=5&c=3'
>>>
>>> # UTF-8 conversion + percent-encoding of non-ASCII characters
>>> w3lib.url.canonicalize_url('http://www.example.com/r\u00e9sum\u00e9')
'http://www.example.com/r%C3%A9sum%C3%A9'
I've used this util with great success when broad-crawling the web, to avoid duplicate requests caused by minor URL differences (different parameter order, anchors, etc.).
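Continuing the session above, a further sketch of the flags mentioned in the quoted docs (keep_blank_values is an assumption based on that doc text; check your w3lib version's signature):
>>> # dropping blank-valued arguments while sorting
>>> w3lib.url.canonicalize_url('http://www.example.com/do?b=2&a=', keep_blank_values=False)
'http://www.example.com/do?b=2'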

Related

Python encode spaces in url only and not other special characters

I know this question has been asked many times but I can't seem to find the variation that I'm looking for specifically.
I have a URL, let's say it's:
https://somethingA/somethingB/somethingC/some spaces here
I want to convert it to:
https://somethingA/somethingB/somethingC/some%20spaces%20here
I know I can do it with the replace function like below:
url = 'https://somethingA/somethingB/somethingC/some spaces here'
url.replace(' ', '%20')
But I have a feeling that the best practice is probably to use the urllib.parse library. The problem is that when I use it, it encodes other special characters like : too.
So if I do:
url = 'https://somethingA/somethingB/somethingC/some spaces here'
urllib.parse.quote(url)
I get:
https%3A//somethingA/somethingB/somethingC/some%20spaces%20here
Notice the : also gets converted to %3A. So my question is, is there a way I can achieve the same thing as replace with urllib? I would rather use a tried and tested library designed specifically to encode URLs than reinvent the wheel, since I may be missing something that leads to a security loophole. Thank you.
quote() there is built to work on just the path portion of a URL, so you need to break things up a bit, like this:
from urllib.parse import urlparse, quote

url = 'https://somethingA/somethingB/somethingC/some spaces here'
parts = urlparse(url)
fixed_url = f"{parts.scheme}://{parts.netloc}{quote(parts.path)}"
(Note that parts.path already starts with a slash, so there's no need to add another one.)
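With the example URL above this yields, as a quick check:
>>> fixed_url
'https://somethingA/somethingB/somethingC/some%20spaces%20here'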

Parsing url string from '+' to '%2B'

I have a URL address whose extension needs to be in ASCII/UTF-8:
a='sAE3DSRAfv+HG='
I need to convert the above to this:
a='sAE3DSRAfv%2BHG%3D'
I searched but wasn't able to find it.
Please see the built-in function urllib.parse.quote().
A very important requirement for a URL is its safe transmission: its meaning must not change between the moment you create it and the moment it is received by the intended receiver. URL encoding was incorporated to achieve that end; see RFC 2396.
A URL might contain non-ASCII characters (like cafés or López), or it might contain symbols that have a different meaning when put in the context of a URL, for example # which signifies a bookmark. To ensure such characters are transmitted safely, the HTTP standards maintain that you quote the URL at the point of origin, so the URL is always seen in quoted form by everyone else.
I have put sample usage below.
>>> import urllib.parse
>>> a='sAE3DSRAfv+HG='
>>> urllib.parse.quote(a)
'sAE3DSRAfv%2BHG%3D'
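And to reverse it, urllib.parse.unquote does the round trip:
>>> from urllib.parse import unquote
>>> unquote('sAE3DSRAfv%2BHG%3D')
'sAE3DSRAfv+HG='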

URL Encoding yields two different results? Only one works

I'm writing a Python script to fetch Korean vocabulary pronunciation. I have a URL ready to go, and when I open the URL in Safari, it retrieves the expected JSON from the server.
When I use requests to get the JSON, the call fails and no results are found.
Using Charles, I can see that the URL with my original query, a Hangul word, is URL encoded after I paste the URL into Safari and hit enter. For example, the instance of 소식 in the URL string becomes %EC%86%8C%EC%8B%9D on its way out.
However, when I make that same request with requests, the word is encoded as %E1%84%89%E1%85%A9%E1%84%89%E1%85%B5%E1%86%A8. Both encodings can be decoded back to the original word 소식 (using a web app to confirm). The former encoding is accepted by the server, the latter is not.
Why would I be getting a different encoding from requests?
Edit
Query string comes into the script as 소식
query = sys.argv[1]
sys.stderr.write(query) -> 소식
Interpolating the query into the URL string yields ...json/word/소식... when printing it.
Going through Charles it now looks like this /json/word/%E1%84%89%E1%85%A9%E1%84%89%E1%85%B5%E1%86%A8/. Everything is default, no specified encoding.
These are both valid url-encodings of the "same" input text:
>>> from urllib.parse import unquote
>>> ulong = unquote('%E1%84%89%E1%85%A9%E1%84%89%E1%85%B5%E1%86%A8')
>>> ushort = unquote('%EC%86%8C%EC%8B%9D')
>>> ulong
'소식'
>>> ushort
'소식'
The strings are not actually equal, though; they have different forms in Unicode:
>>> from unicodedata import name
>>> [name(x) for x in ulong]
['HANGUL CHOSEONG SIOS',
'HANGUL JUNGSEONG O',
'HANGUL CHOSEONG SIOS',
'HANGUL JUNGSEONG I',
'HANGUL JONGSEONG KIYEOK']
>>> [name(x) for x in ushort]
['HANGUL SYLLABLE SO', 'HANGUL SYLLABLE SIG']
I do not know any Korean, but it looks like the long string is composed of combining characters (you can also see similar things with Latin characters and accents). If I perform a canonical decomposition and composition of the forms, I get equality:
>>> from unicodedata import normalize
>>> normalize('NFC', ulong) == ushort
True
So either you are using different input texts that just happen to look the same (even repr is not enough to see the difference; you have to examine the codepoints), or one of the methods you are using (probably the browser) is performing a normalization/transformation.
Since the short form of the text is what worked with the server, I suggest you normalize the inputs to your script into the NFC form.
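A minimal sketch of that suggestion (the URL template is hypothetical; the normalize-then-quote pattern is the point):

import sys
import unicodedata
from urllib.parse import quote

query = unicodedata.normalize('NFC', sys.argv[1])  # force the composed form
url = 'https://example.com/json/word/' + quote(query) + '/'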

Python: How to check if a string is a valid IRI?

Is there a standard function to check an IRI? To check a URL, apparently I can use:
parts = urlparse.urlsplit(url)
if not parts.scheme or not parts.netloc:
    '''apparently not an url'''
I tried the above with a URL containing Unicode characters:
import urlparse

url = "http://fdasdf.fdsfîășîs.fss/ăîăî"
parts = urlparse.urlsplit(url)
if not parts.scheme or not parts.netloc:
    print "not an url"
else:
    print "yes an url"
and what I get is yes an url. Does this mean I'm good and that this tests for a valid IRI? Is there another way?
Using urlparse is not sufficient to test for a valid IRI.
Use the rfc3987 package instead:
from rfc3987 import parse
parse('http://fdasdf.fdsfîășîs.fss/ăîăî', rule='IRI')
The only character-set-sensitive code in the implementation of urlparse is the requirement that the scheme contain only ASCII letters, digits and [+-.] characters; otherwise it's completely agnostic, so it will work fine with non-ASCII characters.
As this is undocumented behaviour, it's your responsibility to check that it continues to be the case (with tests in your project), but I don't imagine it would be changed in a way that breaks IRIs.
urllib provides quoting functions to convert IRIs to/from ASCII URIs, although the documentation still doesn't mention IRIs explicitly, and they are broken in some cases: see Is there a unicode-ready substitute I can use for urllib.quote and urllib.unquote in Python 2.6.5?
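For illustration, a rough sketch of such an IRI-to-URI conversion using only the standard library (my own code, not the rfc3987 package; Python's built-in 'idna' codec implements the older IDNA 2003 rules and this ignores userinfo/ports in the netloc, so treat it as a starting point):

from urllib.parse import urlsplit, urlunsplit, quote

def iri_to_uri(iri):
    parts = urlsplit(iri)
    # Punycode the host, percent-encode the remaining components.
    netloc = parts.netloc.encode('idna').decode('ascii')
    path = quote(parts.path, safe='/%')
    query = quote(parts.query, safe='=&%')
    fragment = quote(parts.fragment, safe='%')
    return urlunsplit((parts.scheme, netloc, path, query, fragment))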

How can I split a URL string up into separate parts in Python?

I decided that I'll learn Python tonight :)
I know C pretty well (I wrote an OS in it), so I'm not a noob at programming, and everything in Python seems pretty easy, but I don't know how to solve this problem:
let's say I have this address:
http://example.com/random/folder/path.html
Now how can I create two strings from this, one containing the "base" name of the server, so in this example it would be
http://example.com/
and another containing the thing without the last filename, so in this example it would be
http://example.com/random/folder/
Also, I of course know that I could just find the third and the last slash, respectively, but is there a better way?
Also it would be cool to have the trailing slash in both cases, but I don't care since it can be added easily.
So is there a good, fast, effective solution for this? Or is there only "my" solution, finding the slashes?
The urlparse module in Python 2.x (or urllib.parse in Python 3.x) would be the way to do it.
>>> from urllib.parse import urlparse
>>> url = 'http://example.com/random/folder/path.html'
>>> parse_object = urlparse(url)
>>> parse_object.netloc
'example.com'
>>> parse_object.path
'/random/folder/path.html'
>>> parse_object.scheme
'http'
>>>
If you wanted to do more work on the path of the file under the URL, you can use the posixpath module:
>>> from posixpath import basename, dirname
>>> basename(parse_object.path)
'path.html'
>>> dirname(parse_object.path)
'/random/folder'
After that, you can use posixpath.join to glue the parts together.
Note: on Windows, os.path would mangle these paths, since it uses the backslash as its separator. The posixpath module documentation has a special reference to URL manipulation, so all's good.
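To produce the two strings the question actually asks for, here is a small sketch gluing these pieces together (my own glue code, using the Python 3 module names):

from urllib.parse import urlparse
from posixpath import dirname

url = 'http://example.com/random/folder/path.html'
parts = urlparse(url)
base = parts.scheme + '://' + parts.netloc + '/'
# base == 'http://example.com/'
folder = parts.scheme + '://' + parts.netloc + dirname(parts.path) + '/'
# folder == 'http://example.com/random/folder/'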
If this is the extent of your URL parsing, Python's built-in str.rpartition will do the job:
>>> URL = "http://example.com/random/folder/path.html"
>>> Segments = URL.rpartition('/')
>>> Segments[0]
'http://example.com/random/folder'
>>> Segments[2]
'path.html'
From Pydoc, str.rpartition:
Splits the string at the last occurrence of sep, and returns a 3-tuple containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, returns a 3-tuple containing two empty strings, followed by the string itself.
What this means is that rpartition does the searching for you, and splits the string at the last (right-most) occurrence of the character you specify (in this case /). It returns a tuple containing:
(everything to the left of char, the character itself, everything to the right of char)
I have no experience with Python, but I found the urlparse module, which should do the job.
In Python, a lot of operations are done using lists. The urlparse module mentioned by Sebasian Dietz may well solve your specific problem, but if you're generally interested in Pythonic ways to find slashes in strings, for example, try something like this:
url = 'http://example.com/random/folder/path.html'
# Create a list of each bit between slashes
slashparts = url.split('/')
# Now join back the first three sections 'http:', '' and 'example.com'
basename = '/'.join(slashparts[:3]) + '/'
# All except the last one
dirname = '/'.join(slashparts[:-1]) + '/'
print 'slashparts = %s' % slashparts
print 'basename = %s' % basename
print 'dirname = %s' % dirname
The output of this program is this:
slashparts = ['http:', '', 'example.com', 'random', 'folder', 'path.html']
basename = http://example.com/
dirname = http://example.com/random/folder/
The interesting bits are split, join, the slice notation array[A:B] (including negatives for offsets-from-the-end) and, as a bonus, the % operator on strings to give printf-style formatting.
It seems like the posixpath module mentioned in sykora's answer is not available in my Python setup (Python 2.7.3).
As per this article, it seems that the "proper" way to do this would be using:
- urlparse.urlparse and urlparse.urlunparse to detach and reattach the base of the URL
- the functions of os.path to manipulate the path
- urllib.url2pathname and urllib.pathname2url to make the path-name manipulation portable, so it can work on Windows and the like
So for example (not including reattaching the base URL):
>>> import urlparse, urllib, os.path
>>> os.path.dirname(urllib.url2pathname(urlparse.urlparse("http://example.com/random/folder/path.html").path))
'/random/folder'
You can use Python's furl library:
import furl

f = furl.furl("http://example.com/random/folder/path.html")
print(str(f.path))             # '/random/folder/path.html'
print(str(f.path).split("/"))  # ['', 'random', 'folder', 'path.html']
To access the word after the first "/", use:
str(f.path).split("/")[1]  # 'random'
