Python encoding/decoding problems

How do I decode strings such as "weren\xe2\x80\x99t" back to their normal form,
so the word actually reads weren't and not "weren\xe2\x80\x99t"?
For example:
print "\xe2\x80\x9cThings"
string = "\xe2\x80\x9cThings"
print string.decode('utf-8')
print string.encode('ascii', 'ignore')
“Things
“Things
Things
But I actually want to get "Things.
or:
print "weren\xe2\x80\x99t"
string = "weren\xe2\x80\x99t"
print string.decode('utf-8')
print string.encode('ascii', 'ignore')
weren’t
weren’t
werent
But I actually want to get weren't.
How should I do this?

I mapped the most common problem characters, so this is a fairly complete answer building on Oliver W.'s answer below.
The function is by no means ideal, but it is a good place to start.
There are more character definitions here:
http://utf8-chartable.de/unicode-utf8-table.pl?start=8192&number=128&utf8=string
http://www.utf8-chartable.de/unicode-utf8-table.pl?start=128&number=128&names=-&utf8=string-literal
...
def unicodetoascii(text):
    # Map decoded Unicode code points to their closest ASCII equivalents
    # (Python 2: the byte-string keys are decoded to unicode first).
    uni2ascii = {
        ord('\xe2\x80\x99'.decode('utf-8')): ord("'"),
        ord('\xe2\x80\x9c'.decode('utf-8')): ord('"'),
        ord('\xe2\x80\x9d'.decode('utf-8')): ord('"'),
        ord('\xe2\x80\x9e'.decode('utf-8')): ord('"'),
        ord('\xe2\x80\x9f'.decode('utf-8')): ord('"'),
        ord('\xc3\xa9'.decode('utf-8')): ord('e'),
        ord('\xe2\x80\x93'.decode('utf-8')): ord('-'),
        ord('\xe2\x80\x92'.decode('utf-8')): ord('-'),
        ord('\xe2\x80\x94'.decode('utf-8')): ord('-'),
        ord('\xe2\x80\x98'.decode('utf-8')): ord("'"),
        ord('\xe2\x80\x9b'.decode('utf-8')): ord("'"),
        ord('\xe2\x80\x90'.decode('utf-8')): ord('-'),
        ord('\xe2\x80\x91'.decode('utf-8')): ord('-'),
        ord('\xe2\x80\xb2'.decode('utf-8')): ord("'"),
        ord('\xe2\x80\xb3'.decode('utf-8')): ord("'"),
        ord('\xe2\x80\xb4'.decode('utf-8')): ord("'"),
        ord('\xe2\x80\xb5'.decode('utf-8')): ord("'"),
        ord('\xe2\x80\xb6'.decode('utf-8')): ord("'"),
        ord('\xe2\x80\xb7'.decode('utf-8')): ord("'"),
        ord('\xe2\x81\xba'.decode('utf-8')): ord("+"),
        ord('\xe2\x81\xbb'.decode('utf-8')): ord("-"),
        ord('\xe2\x81\xbc'.decode('utf-8')): ord("="),
        ord('\xe2\x81\xbd'.decode('utf-8')): ord("("),
        ord('\xe2\x81\xbe'.decode('utf-8')): ord(")"),
    }
    return text.decode('utf-8').translate(uni2ascii).encode('ascii')

print unicodetoascii("weren\xe2\x80\x99t")
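On Python 3 the same table-driven idea needs no ord() gymnastics, because str.maketrans accepts a character-to-character mapping directly. A minimal sketch covering only a few of the mappings above (unicodetoascii3 is a hypothetical name for illustration):
uni2ascii3 = str.maketrans({'\u2019': "'", '\u201c': '"', '\u201d': '"',
                            '\u2013': '-', '\u2014': '-'})

def unicodetoascii3(raw):
    # bytes in, str out; characters missing from the table pass through unchanged
    return raw.decode('utf-8').translate(uni2ascii3)

print(unicodetoascii3(b"weren\xe2\x80\x99t"))  # weren't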

In Python 3 I would do it like this:
string = "\xe2\x80\x9cThings"
bytes_string = bytes(string, encoding="raw_unicode_escape")
happy_result = bytes_string.decode("utf-8", "strict")
print(happy_result)
No translation maps needed, just code :)
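Equivalently, since every stray character in such a string is below U+0100, you can round-trip through Latin-1, a common mojibake repair (sketch assuming Python 3):
string = "weren\xe2\x80\x99t"
print(string.encode("latin-1").decode("utf-8"))  # weren’t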

You should provide a translation map that maps Unicode characters to other Unicode characters (the latter should be within the ASCII range if you want to re-encode to it):
uni2ascii = {ord('\xe2\x80\x99'.decode('utf-8')): ord("'")}
print(yourstring.decode('utf-8').translate(uni2ascii).encode('ascii'))  # prints: weren't
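For accented letters like é (though not for the smart punctuation handled above), unicodedata offers a map-free alternative; a sketch, noting that NFKD leaves curly quotes untouched:
import unicodedata

def strip_accents(text):
    # NFKD splits 'é' into 'e' plus a combining accent, which the
    # ASCII encode with 'ignore' then drops
    return unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('ascii')

print(strip_accents(u'Zelen\xfd'))  # Zeleny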

Related

Encoding foreign alphabet characters

I am getting data from an XML feed provided by an API that, for some reason, serves Czech/Slovak characters in the wrong encoding (e.g. instead of the correct "ý" it uses "ý"). Therefore, instead of giving the user the
correct output -> "Zelený"
the output is -> "Zelený"
I went through multiple StackOverflow posts, other forums and tutorials, but I still cannot figure out how to turn "Zelený" into "Zelený" (this is just one of the garbled characters in the XML, so I cannot use str.replace).
I figured out that the correct encoding for Czech and Slovak is "windows-1250".
My code:
def change_encoding(what):
    what = what.encode("windows-1250")
    return what

clean_xml_input = change_encoding(xml_input)

This produces the error:
'charmap' codec can't encode characters in position 5-6: character maps to <undefined>
"Zelený".encode("Windows-1252").decode("utf-8") #'Zelený'
"Zelený".encode("windows-1254").decode("utf-8") #'Zelený'
"Zelený".encode("iso-8859-1").decode("utf-8") #'Zelený'
"Zelený".encode("iso-8859-9").decode("utf-8") #'Zelený'
In case it is helpful, here is a brute-force search over codec pairs:
from itertools import permutations
all_encoding = ['ASMO-708',
'big5',
'cp1025',
'cp866',
'cp875',
'csISO2022JP',
'DOS-720',
'DOS-862',
'EUC-CN',
'EUC-JP',
'euc-jp',
'euc-kr',
'GB18030',
'gb2312',
'hz-gb-2312',
'IBM00858',
'IBM00924',
'IBM01047',
'IBM01140',
'IBM01141',
'IBM01142',
'IBM01143',
'IBM01144',
'IBM01145',
'IBM01146',
'IBM01147',
'IBM01148',
'IBM01149',
'IBM037',
'IBM1026',
'IBM273',
'IBM277',
'IBM278',
'IBM280',
'IBM284',
'IBM285',
'IBM290',
'IBM297',
'IBM420',
'IBM423',
'IBM424',
'IBM437',
'IBM500',
'ibm737',
'ibm775',
'ibm850',
'ibm852',
'IBM855',
'ibm857',
'IBM860',
'ibm861',
'IBM863',
'IBM864',
'IBM865',
'ibm869',
'IBM870',
'IBM871',
'IBM880',
'IBM905',
'IBM-Thai',
'iso-2022-jp',
'iso-2022-jp',
'iso-2022-kr',
'iso-8859-1',
'iso-8859-13',
'iso-8859-15',
'iso-8859-2',
'iso-8859-3',
'iso-8859-4',
'iso-8859-5',
'iso-8859-6',
'iso-8859-7',
'iso-8859-8',
'iso-8859-8-i',
'iso-8859-9',
'Johab',
'koi8-r',
'koi8-u',
'ks_c_5601-1987',
'macintosh',
'shift_jis',
'us-ascii',
'utf-16',
'utf-16BE',
'utf-32',
'utf-32BE',
'utf-7',
'utf-8',
'windows-1250',
'windows-1251',
'Windows-1252',
'windows-1253',
'windows-1254',
'windows-1255',
'windows-1256',
'windows-1257',
'windows-1258',
'windows-874',
'x-Chinese-CNS',
'x-Chinese-Eten',
'x-cp20001',
'x-cp20003',
'x-cp20004',
'x-cp20005',
'x-cp20261',
'x-cp20269',
'x-cp20936',
'x-cp20949',
'x-cp50227',
'x-EBCDIC-KoreanExtended',
'x-Europa',
'x-IA5',
'x-IA5-German',
'x-IA5-Norwegian',
'x-IA5-Swedish',
'x-iscii-as',
'x-iscii-be',
'x-iscii-de',
'x-iscii-gu',
'x-iscii-ka',
'x-iscii-ma',
'x-iscii-or',
'x-iscii-pa',
'x-iscii-ta',
'x-iscii-te',
'x-mac-arabic',
'x-mac-ce',
'x-mac-chinesesimp',
'x-mac-chinesetrad',
'x-mac-croatian',
'x-mac-cyrillic',
'x-mac-greek',
'x-mac-hebrew',
'x-mac-icelandic',
'x-mac-japanese',
'x-mac-korean',
'x-mac-romanian',
'x-mac-thai',
'x-mac-turkish',
'x-mac-ukrainian']
for i, j in permutations(all_encoding, 2):
    try:
        if "Zelený".encode(i).decode(j) == 'Zelený':
            print(f'encode with `{i}` and decode with `{j}`')
    except (UnicodeError, LookupError):  # unknown codec name or failed round-trip
        pass
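Several pairs print because any encode-side codec that, like Latin-1, maps Ã to byte 0xC3 and ½ to byte 0xBD reconstructs the original UTF-8 byte sequence, which the UTF-8 decode side then reads back as ý.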

Using ctypes to call a function that takes char ** because it writes both the buffer and the pointer

I'm trying to call iconv(3) from a Python (3) program, using ctypes. The C type signature of iconv is
size_t iconv(iconv_t cd,
char **inptr, size_t *inbytesleft,
char **outptr, size_t *outbytesleft);
and you are expected to call it like this:
char *inp = "abcdef";
char outbuf[16];
char *outp = outbuf;
size_t ibytes = strlen(inp);
size_t obytes = sizeof outbuf;
size_t rv = iconv(cd, &inp, &ibytes, &outp, &obytes);
It will write to outbuf, obviously, and it will also modify all four of the variables inp, outp, ibytes, and obytes to indicate how far it got with the conversion before running into a problem (if any). It guarantees not to write to the input string, despite that not being const.
Now, naively, you reflect that in ctypes like this:
iconv = libc.iconv
iconv.restype = ctypes.c_size_t
iconv.argtypes = [ctypes.c_void_p,
                  ctypes.POINTER(ctypes.c_char_p),
                  ctypes.POINTER(ctypes.c_size_t),
                  ctypes.POINTER(ctypes.c_char_p),
                  ctypes.POINTER(ctypes.c_size_t)]
(iconv_t is a typedef for void * in the C library I'm testing on) but when I try to call that, I get errors:
>>> obuf = ctypes.create_string_buffer(16)
>>> obuflen = ctypes.c_size_t(16)
>>> iconv(utf8_to_utf16,
... ctypes.byref(ctypes.c_char_p(b"abcdef")),
... ctypes.byref(ctypes.c_size_t(6)),
... ctypes.byref(obuf),
... ctypes.byref(obuflen))
ArgumentError: argument 4: <class 'TypeError'>: expected LP_c_char_p
instance instead of pointer to c_char_Array_16
Trying to explicitly convert obuf to c_char_p doesn't work either:
>>> optr = ctypes.c_char_p(obuf)
TypeError: bytes or integer address expected instead of c_char_Array_16 instance
These type names it's using in the error messages don't appear in the manual, and I'm pretty well stumped. What's the right way to go about this?
(If you are wondering why I would want to do this instead of using Python's built-in encoding converters, the short version is because Python's converters don't support the same set of encodings as [GNU] iconv, nor do they have the //TRANSLIT feature.)
Below is the code I used to find characters that get converted to '"' with //TRANSLIT. Sorry it is not very polished, but I hope it can still be useful to somebody. :)
I needed to run it under Python 2.6, so I tried to write it a little more universally. (2.6 has some gotchas, so the code could be nicer and simpler under Python 3.)
from __future__ import print_function
import sys
import ctypes

libc = ctypes.cdll.LoadLibrary("libc.so.6")

if sys.version_info[0] > 2:
    def unichr(a):
        return chr(a)

LP_c_char2 = ctypes.POINTER(ctypes.c_char_p)
# pointer to the exact array type of a 16-byte buffer, so ctypes accepts it
LP_c_char = ctypes.POINTER(ctypes.create_string_buffer(16).__class__)

get_errno_loc = libc.__errno_location
get_errno_loc.restype = ctypes.POINTER(ctypes.c_int)

class MyError(OSError):
    def __init__(self, e):
        if sys.version_info[0] <= 2:
            super(MyError, self).__init__(e)
        else:
            super().__init__(e)

def errcheck(ret, func, args):
    if ret == -1 or ret == 2**64 - 1:
        e = get_errno_loc()[0]
        raise MyError(e)
    return ret

iconv_open = libc.iconv_open
iconv_open.restype = ctypes.c_void_p
ret = iconv_open(
    ctypes.c_char_p(b"ISO8859-2//TRANSLIT"),
    ctypes.c_char_p(b"UTF-8"))

iconv = libc.iconv
iconv.errcheck = errcheck
iconv.restype = ctypes.c_size_t
iconv.argtypes = [ctypes.c_void_p, LP_c_char2, ctypes.POINTER(ctypes.c_size_t),
                  ctypes.POINTER(LP_c_char), ctypes.POINTER(ctypes.c_size_t)]

obuf = ctypes.create_string_buffer(16)
obuflen = ctypes.c_size_t(16)
optr = LP_c_char(obuf)
inp = b'\xe2\x80\x9c'
r = iconv(ret, LP_c_char2(ctypes.c_char_p(inp)), ctypes.byref(ctypes.c_size_t(len(inp))),
          ctypes.byref(optr), ctypes.byref(obuflen))
print(obuf.value)

def func(inp=b"bbb"):
    assert len(inp) < 16, "too big input"
    obuf = ctypes.create_string_buffer(16)
    obuflen = ctypes.c_size_t(16)
    optr = LP_c_char(obuf)
    r = iconv(ret, LP_c_char2(ctypes.c_char_p(inp)), ctypes.byref(ctypes.c_size_t(len(inp))),
              ctypes.byref(optr), ctypes.byref(obuflen))
    return obuf, obuflen, r, obuf.value[:16 - obuflen.value]

oo, uu, r, vys = func(b'\xe2\x80\x9c\xe2\x80\x9c')
print(oo.raw, uu, r, vys)

for i in range(sys.maxunicode):
    try:
        oo, uu, r, vys = func(unichr(i).encode('utf-8'))
        if vys == b'"':
            print(i, unichr(i))
    except UnicodeEncodeError:
        pass
    except MyError as E:
        pass
        # print("MyError: {E} , {i}".format(E=E, i=i))

Is there a way to programmatically combine Korean unicode into one?

Using a Korean Input Method Editor (IME), it's possible to type 버리 + 어 and it will automatically become 버려.
Is there a way to programmatically do that in Python?
>>> x, y = '버리', '어'
>>> z = '버려'
>>> ord(z[-1])
47140
>>> ord(x[-1]), ord(y)
(47532, 50612)
Is there a way to compute that 47532 + 50612 -> 47140?
Here's some more examples:
가보 + 아 -> 가봐
끝나 + ㄹ -> 끝날
I'm Korean. First, if you type 버리 + 어, it becomes 버리어, not 버려. 버려 is an abbreviation of 버리어, and it is not generated automatically. For the same reason, 가보아 cannot become 가봐 automatically while typing.
Second, by contrast, 끝나 + ㄹ becomes 끝날 because 나 has no jongseong (종성). Note that one Hangul character is made of choseong (초성), jungseong (중성), and jongseong: choseong and jongseong are consonants, jungseong is a vowel. See more at Wikipedia. So only while a character has no jongseong during typing (like 끝나) is there a chance for it to gain a jongseong (ㄹ).
If you want to turn 버리 + 어 into 버려, you have to implement some Korean grammar, for this case specifically the abbreviation of jungseong: for example ㅣ + ㅓ = ㅕ and ㅗ + ㅏ = ㅘ, as you provided. 한글 맞춤법 chapter 4, section 5 (I can't find English pages right now) defines these abbreviations. It's possible, but not an easy job, especially for non-Koreans.
Next, if what you want is just to turn 끝나 + ㄹ into 끝날, it is a relatively easy job, since there are libraries that handle composition and decomposition of choseong, jungseong, and jongseong. For Python, I found hgtk. You can try something like this (non-practical code):
# hgtk methods take one character at a time
cjj1 = hgtk.letter.decompose('나')  # ('ㄴ', 'ㅏ', '')
cjj2 = hgtk.letter.decompose('ㄹ')  # ('ㄹ', '', '')
if cjj1[2] == '' and cjj2[1] == '':
    cjj = (cjj1[0], cjj1[1], cjj2[0])
    cjj2 = None
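If you'd rather avoid the dependency for the jongseong case, the same composition can be computed directly from the layout of the Unicode Hangul Syllables block, where every syllable is 0xAC00 + (choseong_index * 21 + jungseong_index) * 28 + jongseong_index. A minimal sketch (the string below lists the 27 jongseong in their standard order):
# index 0 means "no jongseong"
JONGSEONG = [''] + list('ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ')

def attach_jongseong(syllable, jamo):
    offset = ord(syllable) - 0xAC00
    # the input must be a Hangul syllable with no jongseong yet
    assert 0 <= offset < 11172 and offset % 28 == 0
    return chr(0xAC00 + offset + JONGSEONG.index(jamo))

print(attach_jongseong('나', 'ㄹ'))  # 날, so 끝나 + ㄹ -> 끝날
This also shows why ord('리') + ord('어') does not simply sum to ord('려'): composition works on jamo indices, not on code points.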
Still, without proper knowledge of Hangul, it will be very hard to get it done.
You could use your own translation table.
The drawback is that you have to enter all pairs manually, or get them from a file.
For instance:
# Sample Korean chars to map
k = [[('버리', '어'), ('버려')], [('가보', '아'), ('가봐')], [('끝나', 'ㄹ'), ('끝날')]]

class Korean(object):
    def __init__(self):
        self.map = {}
        for m in k:
            key = m[0][0] + m[0][1]
            self.map[hash(key)] = m[1]

    def __getitem__(self, item):
        return self.map[hash(item)]

    def translate(self, s):
        return [self.map[hash(token)] for token in s]

if __name__ == '__main__':
    k_map = Korean()
    k_chars = [m[0][0] + m[0][1] for m in k]
    print('Input: %s' % k_chars)
    print('Output: %s' % k_map.translate(k_chars))
    one_char_3 = k[0][0][0] + k[0][0][1]
    print('%s = %s' % (one_char_3, k_map[one_char_3]))
Input: ['버리어', '가보아', '끝나ㄹ']
Output: ['버려', '가봐', '끝날']
버리어 = 버려
Tested with Python 3.4.2.

Cipher program losing accuracy

I have a program in Python that takes two strings: one is the plain text, the other is the cipher key. It goes over the characters and XORs their bits with the cipher key's characters. But when converting back and forth, a few of the letters do not change back properly. Here is the code:
# turns int into bin string of length 8
def bitString(n):
    bin_string = bin(n)[2:]
    bin_string = ("0" * (8 - len(bin_string))) + bin_string
    return bin_string

# xors the bits
def bitXOR(b0, b1):
    nb = ""
    for x in range(min(len(b0), len(b1))):
        nb += "0" if b0[x] == b1[x] else "1"
    return nb

# takes 2 chars, turns them into bin strings, xors them, then returns the new char
def cypherChar(c0, c1):
    return chr(int(bitXOR(bitString(ord(c0)), bitString(ord(c1))), 2))

# takes s0 (the plaintext) and encrypts it using the cipher key (s1)
def cypherString(s0, s1):
    ns = ""
    for x in range(len(s0)):
        ns += cypherChar(s0[x], s1[x % len(s1)])
    return ns
For example, sometimes in a long string the word 'test' will decipher back to 'eest', and so on.
I have checked the code a dozen times and I can't figure out what's causing some of the characters to change. Is it possible some characters just behave strangely?
EDIT:
example:
This is a test
Due to the fact that in the last test
Some symbols: !##$%^&*()
were not changed properly
I am retesting
END
using the cipher key : 'cypher key'
translates back to :
This is a test
Due to toe aact that in the last sest
Some symbols: !##$%^&*()
were not changed properly
I am retestiig
END
Sorry if it's a little messy; I put it together quickly.
from binascii import hexlify, unhexlify
from sys import version_info

def bit_string(string):
    if version_info >= (3, 0):
        return bin(int.from_bytes(string.encode(), 'big'))
    else:
        return bin(int(hexlify(string), 16))

def bitXOR_encrypt(plain_text, key):
    encrypted_list = []
    for j in range(2, len(plain_text)):  # start at 2 to skip the '0b' prefix
        encrypted_list.append(int(plain_text[j]) ^ int(key[j]))  # assume the key and string are the same length
    return encrypted_list

def decrypt(cipher_text, key):
    decrypted_list = []
    for j in range(2, len(cipher_text)):  # or xrange
        decrypted_list.append(int(cipher_text[j]) ^ int(key[j]))  # again assumes key is the same length as the string
    decrypted_list = [str(i) for i in decrypted_list]
    add_binary = "0b" + "".join(decrypted_list)
    decrypted_string = int(add_binary, 2)
    if version_info >= (3, 0):
        message = decrypted_string.to_bytes((decrypted_string.bit_length() + 7) // 8, 'big').decode()
    else:
        message = unhexlify('%x' % decrypted_string)
    return message

def main():
    plain_text = "Hello"
    plain_text_to_bits = bit_string(plain_text)
    key_to_bits = bit_string("candy")
    # Encrypt
    cipher_text = bitXOR_encrypt(plain_text_to_bits, key_to_bits)
    # make strings
    cipher_text_string = "".join([str(i) for i in cipher_text])
    key_string = "".join([str(i) for i in key_to_bits])
    # Decrypt
    decrypted_string = decrypt("0B" + cipher_text_string, key_string)
    print("plain text: %s" % plain_text)
    print("plain text to bits: %s" % plain_text_to_bits)
    print("key string in bits: %s" % key_string)
    print("Ciphered Message: %s" % cipher_text_string)
    print("Decrypted String: %s" % decrypted_string)

main()
For more details or example code you can visit my repository on GitHub:
https://github.com/marcsantiago/one_time_pad_encryption
Also, I know that in this example the key is the same length as the string. If you want to use a key that is shorter than the string, try wrapping it as in a Vigenère cipher (http://en.wikipedia.org/wiki/Vigenère_cipher).
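A quick sketch of that key-wrapping idea (my own illustration, not code from the repository):
from itertools import cycle

def xor_with_repeating_key(text, key):
    # pair each character with the next key character, recycling the key as needed
    return ''.join(chr(ord(c) ^ ord(k)) for c, k in zip(text, cycle(key)))

scrambled = xor_with_repeating_key('This is a test', 'cypher key')
print(xor_with_repeating_key(scrambled, 'cypher key'))  # This is a test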
I think you are overcomplicating things:
def charxor(s1, s2):
    return chr(ord(s1) ^ ord(s2))

def wordxor(w1, w2):
    return ''.join(charxor(w1[i], w2[i]) for i in range(min(len(w1), len(w2))))

word = 'test'
key = 'what'
cyphered = wordxor(word, key)
uncyphered = wordxor(cyphered, key)
print(repr(cyphered))
print(uncyphered)
You get
'\x03\r\x12\x00'
test
There is a fairly good explanation of Python's bit arithmetic in How do you get the logical xor of two variables in Python?
I could find nothing wrong with the results of your functions when testing with your input data and key. To demonstrate, you could try this test code which should not fail:
import random

def random_string(n):
    return ''.join(chr(random.getrandbits(8)) for _ in range(n))

for i in range(1000):
    plaintext = random_string(500)
    key = random_string(random.randrange(1, 100))
    ciphertext = cypherString(plaintext, key)
    assert cypherString(ciphertext, key) == plaintext
If you can provide a definitive sample of plain text, key, and cipher text that fails, I can look further.

pretty print assertEqual() for HTML strings

I want to compare two strings in a python unittest which contain html.
Is there a method which outputs the result in a human friendly (diff like) version?
A simple method is to strip whitespace from the HTML and split it into a list. Python 2.7's unittest (or the backported unittest2) then gives a human-readable diff between the lists.
import re

def split_html(html):
    return re.split(r'\s*\n\s*', html.strip())

def test_render_html(self):  # a TestCase method; render_html() is the code under test
    expected = ['<div>', '...', '</div>']
    got = split_html(render_html())
    self.assertEqual(expected, got)
If I'm writing a test for working code, I usually first set expected = [], insert a self.maxDiff = None before the assert and let the test fail once. The expected list can then be copy-pasted from the test output.
You might need to tweak how whitespace is stripped depending on what your HTML looks like.
I submitted a patch to do this some years back. The patch was rejected, but you can still view it on the Python bug list.
I doubt you would want to hack your unittest.py to apply the patch (if it even still works after all this time), but here's the function for reducing two strings to a manageable size while still keeping at least part of what differs. As long as you don't need the complete differences, this might be what you want:
def shortdiff(x, y):
    '''shortdiff(x,y)
    Compare strings x and y and display differences.
    If the strings are too long, shorten them to fit
    in one line, while still keeping at least some difference.
    '''
    import difflib
    LINELEN = 79

    def limit(s):
        if len(s) > LINELEN:
            return s[:LINELEN-3] + '...'
        return s

    def firstdiff(s, t):
        span = 1000
        for pos in range(0, max(len(s), len(t)), span):
            if s[pos:pos+span] != t[pos:pos+span]:
                for index in range(pos, pos+span):
                    if s[index:index+1] != t[index:index+1]:
                        return index

    left = LINELEN // 4  # integer division so the slices below stay valid on Python 3
    index = firstdiff(x, y)
    if index > left + 7:
        x = x[:left] + '...' + x[index-4:index+LINELEN]
        y = y[:left] + '...' + y[index-4:index+LINELEN]
    else:
        x, y = x[:LINELEN+1], y[:LINELEN+1]
        left = 0
    cruncher = difflib.SequenceMatcher(None)
    xtags = ytags = ""
    cruncher.set_seqs(x, y)
    editchars = { 'replace': ('^', '^'),
                  'delete': ('-', ''),
                  'insert': ('', '+'),
                  'equal': (' ', ' ') }
    for tag, xi1, xi2, yj1, yj2 in cruncher.get_opcodes():
        lx, ly = xi2 - xi1, yj2 - yj1
        edits = editchars[tag]
        xtags += edits[0] * lx
        ytags += edits[1] * ly
    # Include ellipsis in edits line.
    if left:
        xtags = xtags[:left] + '...' + xtags[left+3:]
        ytags = ytags[:left] + '...' + ytags[left+3:]
    diffs = [x, xtags, y, ytags]
    if max([len(s) for s in diffs]) < LINELEN:
        return '\n'.join(diffs)
    diffs = [limit(s) for s in diffs]
    return '\n'.join(diffs)
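For example, with two made-up strings that differ in the middle:
a = 'x' * 40 + 'SAME' + 'y' * 40
b = 'x' * 40 + 'DIFF' + 'y' * 40
print(shortdiff(a, b))
# prints a four-line diff: each (shortened) string above a tag line
# with '^' marking the replaced characters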
Maybe this is quite a 'verbose' solution. You could add a new 'equality function' for your user-defined type (e.g. HTMLString), which you have to define first:
class HTMLString(str):
    pass
Now you have to define a type equality function:
def assertHTMLStringEqual(first, second):
    if first != second:
        message = ...  # TODO here: format your message, e.g. a diff
        raise AssertionError(message)
All you have to do is format your message as you like. You can also use a class method in your specific TestCase as a type equality function. This gives you more functionality to format your message, since unittest.TestCase does this a lot.
Now you have to register this equality function in your unittest.TestCase:
...
def __init__(self):
    self.addTypeEqualityFunc(HTMLString, assertHTMLStringEqual)
The same for a class method:
...
def __init__(self):
    self.addTypeEqualityFunc(HTMLString, 'assertHTMLStringEqual')
And now you can use it in your tests:
def test_something(self):
    htmlstring1 = HTMLString(...)
    htmlstring2 = HTMLString(...)
    self.assertEqual(htmlstring1, htmlstring2)
This should work well with python 2.7.
I (the one asking this question) use BeautifulSoup now:
def assertEqualHTML(string1, string2, file1='', file2=''):
    u'''
    Compare two unicode strings containing HTML.
    A human friendly diff goes to logging.error() if they
    are not equal, and an exception gets raised.
    '''
    from BeautifulSoup import BeautifulSoup as bs
    import difflib
    import logging

    def short(mystr):
        max = 20
        if len(mystr) > max:
            return mystr[:max]
        return mystr

    p = []
    for mystr, file in [(string1, file1), (string2, file2)]:
        if not isinstance(mystr, unicode):
            raise Exception(u'string is not unicode: %r %s' % (short(mystr), file))
        soup = bs(mystr)
        pretty = soup.prettify()
        p.append(pretty)
    if p[0] != p[1]:
        for line in difflib.unified_diff(p[0].splitlines(), p[1].splitlines(), fromfile=file1, tofile=file2):
            logging.error(line)
        raise Exception('Not equal %s %s' % (file1, file2))
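A hypothetical call; markup that still differs after prettifying logs a unified diff and raises:
assertEqualHTML(u'<p>one</p>', u'<p>two</p>', 'expected.html', 'got.html')
# -> diff lines go to logging.error(), then Exception('Not equal expected.html got.html')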
