python appengine form-posted utf8 file issue

I am trying to form-post a SQL file that consists of many INSERT statements, e.g.
INSERT INTO `TABLE` VALUES ('abcdé', 2759);
Then I use re.search to parse it and extract the fields to put into my own datastore. The problem is that, although the file contains accented characters (note the é in 'abcdé'), once uploaded they are lost and it either errors or stores a bytestring representation of them.
Here's what I am currently using (and I have tried loads of alternatives):
form = cgi.FieldStorage()
uFile = form['sql']
uSql = uFile.file.read()
lineX = uSql.split("\n") # to get each line
and so on.
Has anyone got a robust way of making this work? Remember I am on App Engine, so access to some libraries is restricted/forbidden.

You mention utf8 in the Q's title but then never again: what are you doing (in terms of setting headers and checking them) to verify what encoding is in use? There should be headers of the form
Content-Type: text/plain; charset=utf-8
and the charset= part is where the encoding is specified. So what are the values upon sending and receiving this? If charset is erroneous, you may have to manually perform some encoding and decoding. To help us gauge what the encoding seems to be, besides the headers, what's the ord value of that accented-e? E.g., if the encoding was actually iso-8859-1, that ord value would be 233 (in decimal; 0xE9 in hex).
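Not a definitive fix, but here is a minimal sketch of checking and decoding on the classic Python 2 runtime; the candidate encoding list is an assumption, so adjust it to whatever the charset header actually says:
import cgi
import logging

form = cgi.FieldStorage()
raw = form['sql'].file.read()            # raw bytes exactly as posted

# Inspect the accented character: '\xc3\xa9' suggests UTF-8, '\xe9' suggests latin-1/cp1252.
logging.info(repr(raw[:40]))

uSql = None
for enc in ('utf-8', 'iso-8859-1'):      # assumed candidates; check the charset= header first
    try:
        uSql = raw.decode(enc)           # a unicode object from here on
        break
    except UnicodeDecodeError:
        continue

if uSql is not None:
    for lineX in uSql.splitlines():
        pass                             # run re.search against unicode text, not bytes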

Related

Problems with unicode, beautifulsoup, cld2, and python [duplicate]


Bulletproof work with encoding in Python

This question is about unicode in Python 2.
As I understand it, I should always decode everything I read from outside (files, the net). decode converts outside bytes to internal Python strings, using the charset specified in its parameter. So decode("utf8") means that the outside bytes are a unicode string and they will be decoded to Python strings.
Also, I should always encode everything I write to the outside. I specify the encoding in the parameter of encode, and it converts to the proper encoding and writes it out.
These statements are right, aren't they?
But sometimes when I parse HTML documents I get decode errors. As I understand it, the document is in another encoding (for example cp1252) and the error happens when I try to decode it as utf8. So the question is: how do I write a bulletproof application?
I found that there is a good library for guessing encodings, chardet, and that using it is the only way to write bulletproof applications. Right?
... decode("utf8") means that the outside bytes are a unicode string and they will be decoded to Python strings.
...
These statements are right, aren't they?
No, outside bytes are binary data, they are not a unicode string. So <str>.decode("utf8") will produce a Python unicode object by interpreting the bytes in <str> as UTF-8; it may raise an error if the bytes cannot be decoded as UTF-8.
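A quick illustration of this point in Python 2: bytes go in, a unicode object comes out, or an error is raised if the bytes are not valid UTF-8.
utf8_bytes = '\xc3\xa9'                    # two bytes: the UTF-8 encoding of u'é'
print repr(utf8_bytes.decode('utf8'))      # u'\xe9' -- a unicode object

cp1252_bytes = '\xe9'                      # the same character encoded as cp1252
try:
    cp1252_bytes.decode('utf8')
except UnicodeDecodeError as exc:
    print exc                              # 'utf8' codec can't decode byte 0xe9 ...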
Determining the encoding of any given document is not necessarily a simple task. You either need to have some external source of information that tells you the encoding, or you need to know something about what is in the document. For example, if you know that it is an HTML document with its encoding specified internally, then you can parse the document using an algorithm like the one outlined in the HTML Standard to find the encoding and then use that encoding to parse the document (it's a two-pass operation). However, just because an HTML document specifies an encoding does not mean that it can be decoded with that encoding. You may still get errors if the data is corrupt or if the document was not encoded properly in the first place.
There are libraries such as chardet (I see you mentioned it already) that will try to guess the encoding of a document for you (it's only a guess, not necessarily correct). But they can have their own issues such as performance, and they may not recognize the encoding of your document.
Try wrapping your decoding in try/except blocks:
Try decoding as utf-8;
catch the exception if it is not utf-8;
if an exception is raised, try the next encoding;
etc., etc.
Make it a function that returns a string when (and if) it finds an encoding that doesn't raise an exception, and returns None or an empty string when it exhausts its list of encodings and the last exception is raised (a sketch of this follows below).
Like the others said, the encoding should be recorded somewhere, so check that first.
Not efficient, and frankly due to my skill level, may be way off, but to my newbie mind, it may alleviate some of the problems when dealing with unknown or undocumented encoding.
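Roughly, the function described above might look like this; the encoding list is an assumption (order it by likelihood for your data), and note that it returns a unicode object rather than a byte str:
def decode_with_fallback(raw, encodings=('utf-8', 'cp1252', 'iso-8859-1')):
    for enc in encodings:
        try:
            return raw.decode(enc)       # first encoding that works wins
        except UnicodeDecodeError:
            continue
    return None                          # every candidate raised; give up

text = decode_with_fallback(open('page.html', 'rb').read())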
Convert to unicode from cp437. Since cp437 assigns a character to every byte value, this way you get your bytes into unicode and back again losslessly.

dealing with multiple charset in python 3

I'm using python 3.3.0 in Windows 8.
requrl = urllib.request.Request(url)
response = urllib.request.urlopen(requrl)
source = response.read()
source = source.decode('utf-8')
It works fine if the website uses the utf-8 charset, but what if it uses iso-8859-1 or any other charset? In other words, I may have different website URLs with different charsets.
So, how do I deal with multiple charsets?
Now let me show you what I have tried to resolve this issue:
b1 = b'charset=iso-8859-1'
b1 = b1.decode('iso-8859-1')
if b1 in source:
    source = source.decode('iso-8859-1')
It gave me an error like TypeError: Type str doesn't support the buffer API
So, I'm assuming that it's treating b1 as a string, and that this is not the correct way! :(
Please don't tell me to manually change the charset in the source code, or ask whether I have read the Python docs!
I have already tried to dig into the Python 3 docs but still have no luck, or maybe I'm not picking the correct modules/content to read!
In Python 3, a str is actually a sequence of unicode characters (equivalent to u'mystring' syntax in Python 2). What you get back from response.read() is a byte string (a sequence of bytes).
The reason your b1 in source fails is you are trying to find a unicode character sequence inside a byte string. This makes no sense, so it fails. If you take out the line b1.decode('iso-8859-1'), it should work because you are now comparing two byte sequences.
Now back to your real underlying issue. To support multiple charsets, you need to determine the character set so you can decode it to a Unicode string. This is tricky to do. Normally you can examine the Content-Type header of the response. (See the rules below.) However, so many websites declare the wrong encoding in the header that other, complicated encoding-sniffing rules had to be developed for HTML. Please read that link so you realize what a difficult problem this is!
I recommend you either:
Use the requests library instead of urllib, because it automatically takes care of most unicode conversions properly. (It's also much easier to use.) If conversion to unicode at this layer fails:
Try to pass the bytes directly to an underlying library you are using (e.g. lxml or html5lib) and let them deal with determining the encoding. They often implement the right charset-sniffing algorithms for the document type.
If neither of these work, you can get more aggressive and use libraries like chardet to detect the encoding, but in my experience people who serve their web pages this incorrectly are so incompetent that they produce mixed-encoding documents, so you will end up with garbage characters no matter what you do!
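For illustration, a rough sketch of the first option; requests is a third-party package, and it picks the charset from the Content-Type header (guessing if it is missing) when it decodes response.text:
import requests

resp = requests.get("http://example.com/")
print(resp.encoding)        # charset taken from the Content-Type header (or guessed)
print(resp.text[:200])      # already decoded to a str for you
raw = resp.content          # raw bytes, if you'd rather hand them to lxml/html5lib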
Here are the rules for interpreting the charset declared in a content-type header.
With no explicit charset declared:
text/* (e.g., text/html) is in ASCII.
application/* (e.g. application/json, application/xhtml+xml) is utf-8.
With an explicit charset declared:
if type is text/html and charset is iso-8859-1, it's actually win-1252 (==CP1252)
otherwise use the charset declared.
(Note that the html5 spec willfully violates the w3c specs by looking for UTF8 and UTF16 byte markers in preference to the Content-Type header. Please read that encoding detection algorithm link and see why we can't have nice things...)
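As a rough sketch of applying these rules to a Content-Type header value (the function name and parsing details here are mine, not from any particular library):
def charset_from_content_type(content_type):
    mime, _, params = content_type.partition(';')
    mime = mime.strip().lower()
    charset = None
    for param in params.split(';'):
        name, _, value = param.partition('=')
        if name.strip().lower() == 'charset':
            charset = value.strip().strip('"').lower()
    if charset is None:
        return 'ascii' if mime.startswith('text/') else 'utf-8'
    if mime == 'text/html' and charset == 'iso-8859-1':
        return 'windows-1252'   # the special case above: declared iso-8859-1 really means cp1252
    return charset

print(charset_from_content_type('text/html; charset=ISO-8859-1'))   # windows-1252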
The big problem here is that in many cases you can't be sure about the encoding of a webpage, even if it declares a charset. I've seen enough pages declaring one charset but actually being in another, or having a different charset in their Content-Type header than in their meta tag or XML declaration.
In such cases chardet can be helpful.
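A hedged sketch of that fallback with the third-party chardet package; source stands for the raw bytes read from the response, as in the question:
import chardet

guess = chardet.detect(source)   # e.g. {'encoding': 'windows-1252', 'confidence': 0.7, ...}
text = source.decode(guess['encoding'] or 'utf-8', errors='replace')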
You're checking whether a str is contained within a bytes object:
>>> 'df' in b'df'
Traceback (most recent call last):
  File "<pyshell#107>", line 1, in <module>
    'df' in b'df'
TypeError: Type str doesn't support the buffer API
So, yes, it considers b1 a str, because you've decoded the bytes object into a str object with a certain encoding. Instead, you should check against the original (bytes) value of b1. It's not clear why you call .decode on it at all.
Have a look at the HTML standard, Parsing HTML documents, Determine character set (HTML5 is sufficient for our purposes).
There is an algorithm to follow. For your purpose it boils down to the following:
Check for identifying sequences for UTF-16 or UTF-8 (see provided link)
Use the character set supplied by HTTP (via the Content-Type header)
Apply the algorithm described a little later in Prescan a byte-stream to determine its encoding. This is basically searching for "charset=" in the document and extracting the value.
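For step 3, a much-simplified sketch of the prescan idea (this is not the full HTML5 algorithm, just a BOM check plus a search for charset= in the first few kilobytes):
import re

def sniff_charset(raw, default='utf-8'):
    if raw.startswith(b'\xef\xbb\xbf'):
        return 'utf-8'                       # UTF-8 BOM
    if raw[:2] in (b'\xff\xfe', b'\xfe\xff'):
        return 'utf-16'                      # UTF-16 BOM (either byte order)
    m = re.search(br'charset\s*=\s*["\']?([A-Za-z0-9_-]+)', raw[:4096])
    if m:
        return m.group(1).decode('ascii')
    return default

source = response.read()                     # raw bytes from urlopen, as in the question
text = source.decode(sniff_charset(source))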

What is the correct procedure to store a utf-16 encoded rss stream into sqlite3 using python

I have a Python WSGI script that attempts to extract RSS items that are posted to it and store the RSS in a sqlite3 db. I am using flup as the WSGIServer.
To obtain the posted content:
postData = environ["wsgi.input"].read(int(environ["CONTENT_LENGTH"]))
To attempt to store in the db:
from pysqlite2 import dbapi2 as sqlite
ldb = sqlite.connect("/var/vhost/mysite.com/db/rssharvested.db")
lcursor = ldb.cursor()
lcursor.execute("INSERT into rss(data) VALUES(?)", (postData,))
This results in only the first few characters of the rss being stored in the record:
ÿþ<
I believe the initial chars are the BOM of the rss.
I have tried every permutation I could think of, including first encoding the RSS as utf-8 and then attempting to store it, but the results were the same. I could not decode it because some characters could not be represented as unicode.
Running python 2.5.2
sqlite 3.5.7
Thanks in advance for any insight into this problem.
Here is a sample of the initial data contained in postData as modified by the repr function, written to a file and viewed with less:
'\xef\xbb\xbf
Thanks for the all the replies! Very helpful.
The sample I submitted didn't make it through the Stack Overflow HTML filters; I will try again, converting less-than and greater-than signs to entities (preview indicates this works).
\xef\xbb\xbf<?xml version="1.0" encoding="utf-16"?><rss xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><channel><item d3p1:size="0" xsi:type="tFileItem" xmlns:d3p1="http://htinc.com/opensearch-ex/1.0/">
Regarding the insertion encoding - in any decent database API, you should insert unicode strings and unicode strings only.
For the reading and parsing bit, I'd recommend Mark Pilgrim's Feed Parser. It properly handles BOM, and the license allows commercial use. This may be a bit too heavy handed if you are not doing any actual parsing on the RSS data.
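For illustration, a hedged sketch of that suggestion; feedparser is a third-party package, and it accepts the raw bytes and handles the BOM and declared encoding itself:
import feedparser

feed = feedparser.parse(postData)    # raw posted bytes; feedparser sniffs the encoding
for entry in feed.entries:
    print repr(entry.get('title'))   # values come back as unicode when present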
Are you sure your incoming data are encoded as UTF-16 (otherwise known as UCS-2)?
UTF-16 encoded unicode strings typically include lots of NUL bytes (certainly for all characters that also exist in ASCII), so UTF-16 data can hardly be stored in environment variables (env vars in POSIX are NUL-terminated).
Please provide samples of the postData variable contents. Output them using repr().
Until then, the solid advice is: in all DB interactions, your strings on the Python side should be unicode strings; the DB interface should take care of all translations/encodings/decodings necessary.
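Following that advice, a minimal sketch, assuming the posted bytes really are UTF-8 with a BOM as the later repr() sample ('\xef\xbb\xbf') suggests despite the utf-16 XML declaration; the utf-8-sig codec strips the BOM while decoding:
text = postData.decode("utf-8-sig")   # a unicode string from here on; the BOM is stripped

lcursor.execute("INSERT INTO rss(data) VALUES(?)", (text,))
ldb.commit()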
Before the SQL insertion you should convert the string to a unicode string. If that raises a UnicodeError exception, then encode it with string.encode("utf-8").
Or, you can auto-detect the encoding and decode it using the detected scheme. See: Auto detect encoding

how to tell if a string is base64 or not

I have many emails coming in from different sources.
They all have attachments, and many of them have attachment names in Chinese, so these names are converted to base64 by their email clients.
When I receive these emails, I wish to decode the name, but there are other names which are not base64. How can I differentiate whether a string is base64 or not, using the Jython programming language?
I.e.
First attachment:
------=_NextPart_000_0091_01C940CC.EF5AC860
Content-Type: application/vnd.ms-excel;
name="Copy of Book1.xls"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
filename="Copy of Book1.xls"
second attachment:
------=_NextPart_000_0091_01C940CC.EF5AC860
Content-Type: application/vnd.ms-excel;
name="=?gb2312?B?uLGxvmhlbrixsb5nLnhscw==?="
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
filename="=?gb2312?B?uLGxvmhlbrixsb5nLnhscw==?="
Please note that both "Content-Transfer-Encoding" headers have base64.
The header value tells you this:
=?gb2312?B?uLGxvmhlbrixsb5nLnhscw==?=
"=?" introduces an encoded value
"gb2312" denotes the character encoding of the original value
"B" denotes that B-encoding (equal to Base64) was used (the alternative
is "Q", which refers to something close to quoted-printable)
"?" functions as a separator
"uLG..." is the actual value, encoded using the encoding specified before
"?=" ends the encoded value
So splitting on "?" actually gets you this (JSON notation)
["=", "gb2312", "B", "uLGxvmhlbrixsb5nLnhscw==", "="]
In the resulting array, if "B" is on position 2, you face a base-64 encoded string on position 3. Once you decoded it, be sure to pay attention to the encoding on position 1, probably it would be best to convert the whole thing to UTF-8 using that info.
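A small sketch of the manual split described above (CPython is assumed; the stdlib email.header.decode_header shown further down is the more robust route, and note the later remark that the gb2312 codec may be missing on Jython):
import base64

header_value = "=?gb2312?B?uLGxvmhlbrixsb5nLnhscw==?="
parts = header_value.split("?")       # ['=', 'gb2312', 'B', 'uLG...==', '=']
if len(parts) == 5 and parts[2].upper() == "B":
    raw_name = base64.b64decode(parts[3])
    name = raw_name.decode(parts[1])  # decode using the charset from position 1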
Please note that both Content-Transfer-Encoding headers have base64
Not relevant in this case, the Content-Transfer-Encoding only applies to the body payload, not to the headers.
=?gb2312?B?uLGxvmhlbrixsb5nLnhscw==?=
That's an RFC2047-encoded header atom. The stdlib function to decode it is email.header.decode_header. It still needs a little post-processing to interpret the outcome of that function though:
import email.header
x = '=?gb2312?B?uLGxvmhlbrixsb5nLnhscw==?='
try:
    name = u''.join([
        unicode(b, e or 'ascii') for b, e in email.header.decode_header(x)
    ])
except email.Errors.HeaderParseError:
    pass  # leave name as it was
However...
Content-Type: application/vnd.ms-excel;
name="=?gb2312?B?uLGxvmhlbrixsb5nLnhscw==?="
This is simply wrong. What mailer created it? RFC2047 encoding can only happen in atoms, and a quoted-string is not an atom. RFC2047 §5 explicitly denies this:
An 'encoded-word' MUST NOT appear within a 'quoted-string'.
The accepted way to encode parameter headers when long string or Unicode characters are present is RFC2231, which is a whole new bag of hurt. But you should be using a standard mail-parsing library which will cope with that for you.
So, you could detect the '=?' in filename parameters if you want, and try to decode it via RFC2047. However, the strictly-speaking-correct thing to do is to take the mailer at its word and really call the file =?gb2312?B?uLGxvmhlbrixsb5nLnhscw==?=!
#gnud, #edg - Unless I misunderstand, he's asking about the filename, not the file content
#setori - the Content-Transfer-Encoding is telling you how the CONTENT of the file is encoded, not the "filename".
I'm not an expert, but this part here in the filename is telling him about the characters that follow:
=?gb2312?B?
I'm looking for the documentation in the RFCs... Ah! here it is: https://www.rfc-editor.org/rfc/rfc2047
The RFC says:
Generally, an "encoded-word" is a sequence of printable ASCII characters that begins with "=?", ends with "?=", and has two "?"s in between.
Something else to look at is the code in SharpMimeTools, a MIME parser (in C#) that I use in my bug tracking app, BugTracker.NET
There is a better way than bobince’s method to handle the output of decode_header. I found it here: http://mail.python.org/pipermail/email-sig/2007-March/000332.html
name = unicode(email.header.make_header(email.header.decode_header(x)))
Well, you parse the email header into a dictionary, and then you check whether Content-Transfer-Encoding is set and whether it equals "base64" or "base-64".
Question: """Also I actually need to know what type of file it is ie .xls or .doc so I do need to decode the filename in order to correctly process the attachment, but as above, seems gb2312 is not supported in jython, know any roundabouts?"""
Data:
Content-Type: application/vnd.ms-excel;
name="=?gb2312?B?uLGxvmhlbrixsb5nLnhscw==?="
Observations:
(1) The first line indicates Microsoft Excel, so .xls is looking better than .doc
(2)
>>> import base64
>>> base64.b64decode("uLGxvmhlbrixsb5nLnhscw==")
'\xb8\xb1\xb1\xbehen\xb8\xb1\xb1\xbeg.xls'
>>>
(a) The extension appears to be .xls -- no need for a gb2312 codec
(b) If you want a file-system-safe file name, you could use the "-_" variant of base64 OR you could percent-encode it
(c) For what it's worth, the file name is XYhenXYg.xls where X and Y are 2 Chinese characters that together mean "copy" and the remainder are literal ASCII characters.
