Decoding a WBXML SyncML message from an S60 device - python

I'm trying to decode a WBXML encoded SyncML message from a Nokia N95.
My first attempt was to use the python pywbxml module, which wraps calls to libwbxml. Decoding the message with this gave a lot of <unknown> tags and a big chunk of binary within a <Collection> tag. I tried running the contents of the <Collection> tag through the decoder by itself, but that failed too. Is there something I'm missing?
Also, does anyone know of a pure python implementation of a wbxml parser? Failing that, a command-line or online tool to decode these messages would be useful -- it would make it a lot easier for me to write my own...

Funnily enough I've been working on the same problem. I'm about halfway through writing my own pure-Python WBXML parser, but it's not yet complete enough to be useful, and I have very little time to work on it right now.
Those <Unknown> tags might be because pywbxml / libwbxml doesn't have the right tag vocabulary loaded. WBXML represents tags by an index number to avoid transmitting the same tag name hundreds of times, and the table that maps index numbers to tag names has to be supplied separately from the WBXML document itself. From a vague glance at the libwbxml source it seems like libwbxml has a bunch of tag tables hard coded. It has tables for SyncML 1.0-1.2; I think my Nokia E71 sends SyncML 1.3 (if so, your N95 probably does too), which it looks like libwbxml doesn't support yet.
Getting it to work might be as simple as adding a SyncML 1.3 table to libwbxml. That said, last time I tried, pywbxml didn't compile against the vanilla libwbxml source, so you have to apply some patches first... so "simple" may be a relative term.
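To make the role of the tag table concrete, here is a rough sketch of what the lookup amounts to -- the token values below are made up purely for illustration; the real SyncML tables live in libwbxml's wbxml_tables.c and in the OMA representation spec:
# Sketch of a WBXML tag table; token values are placeholders, not the real ones.
CODE_PAGE_0 = {
    0x05: "SyncML",
    0x06: "SyncHdr",
    0x07: "SyncBody",
}

def tag_name(token, table=CODE_PAGE_0):
    # Per the WBXML spec, the low 6 bits of a tag token identify the tag;
    # bit 0x40 flags "content follows" and bit 0x80 flags "attributes follow".
    return table.get(token & 0x3F, "Unknown")
If the decoder doesn't have the right table for the document's public ID, every lookup falls through to "Unknown", which matches the symptom above.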

I ended up writing a python parser myself. I managed to do it by following the spec here:
http://www.w3.org/TR/wbxml/
And then taking the code tables from the horde.org CVS.
The Open Mobile Alliance's site and documentation are terrible; this was a very trying project :(
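For anyone else going down the same road, the header layout from that W3C spec is small enough to sketch. This is a minimal sketch with no error handling (and it doesn't handle a public ID given as a string-table reference); data is expected to be a bytearray (or Python 3 bytes):
def read_mb_uint32(data, pos):
    # WBXML multi-byte integer: 7 bits per byte, high bit set on every byte
    # except the last.
    value = 0
    while True:
        byte = data[pos]
        pos += 1
        value = (value << 7) | (byte & 0x7F)
        if not byte & 0x80:
            return value, pos

def read_header(data):
    version = data[0]                         # e.g. 0x03 for WBXML 1.3
    publicid, pos = read_mb_uint32(data, 1)   # 0 would mean "see string table"
    charset, pos = read_mb_uint32(data, pos)  # IANA MIBenum of the charset
    strtbl_len, pos = read_mb_uint32(data, pos)
    string_table = data[pos:pos + strtbl_len]
    return version, publicid, charset, string_table, pos + strtbl_len
After the header comes the token stream itself (tags, attributes, inline strings and string-table references), which is where the code tables come in.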

I used pywbxml; it just needed one patch in pywbxml.pyx. In the wbxml2xml function (around line 25), set params.lang to:
params.lang = WBXML_LANG_UNKNOWN
It works like a charm. Changing the base class of WBXMLParseError to Exception also helps:
class WBXMLParseError(Exception):

saving back to a docx xml file with python

I am trying to perform find and replace on docx files, whilst still maintaining formatting.
From reading up on this, the best way seems to be performing the find/replace on the document's XML.
I can load in the XML and do the find/replace on it, but I'm unsure how to write it back.
docx:
Hello {text}!
python:
import zipfile
zip = zipfile.ZipFile(open('test.docx', 'rb'))
xmlString = zip.read('word/document.xml').decode('utf-8')
xmlString = xmlString.replace('{text}', 'world')  # str.replace returns a new string; it doesn't modify in place
What you are trying is dangerous, because you are processing a high-level docx file at a lower level. If you really want to do it, use the hints from overwriting file in ziparchive as suggested by @shahvishal.
But unless you fully know all the details of the docx format, my advice is: do not do it. Suppose that, for whatever reason, the string {text} also occurs in some internal field or attribute. You are likely to change the file in an unexpected way, breaking it immediately or, even worse, later (Word being no longer able to process it).
If you do your processing on a Windows machine with Word installed, you could try to use automation to process the file with Microsoft Word itself. Unfortunately, I only did that a long time ago and cannot give useful links. You will need:
general knowledge of the pywin32 module
sufficient knowledge of the Automation interface of MS Word. Fortunately, its documentation is good, with many examples, provided you have a full installation of Microsoft Office including the macro help
You're really going to want to use a library for reading/writing docx files rather than trying to just deal with them as raw XML. A cursory search came up with the pypi module docx but I haven't used this module so I can't endorse it:
https://pypi.python.org/pypi/docx/0.2.4
I've had the (unfortunate) experience of dealing with the manipulation of MS Office documents from other programming languages, and spending the time to find good libraries really paid off.
The old saying goes "don't reinvent the wheel" and I think that's definitely true when manipulating non-trivial file formats. If a somewhat mature library exists to do the job, use it!
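For what it's worth, here is a rough sketch of that approach using the current python-docx package (import name docx). Note that its API differs from the 0.2.4 release linked above, so treat this as an illustration of the "use a library" approach rather than a recipe for that exact version:
from docx import Document

doc = Document('test.docx')
for paragraph in doc.paragraphs:
    if '{text}' in paragraph.text:
        for run in paragraph.runs:
            # Replacing run by run keeps each run's formatting intact, but only
            # works when the placeholder isn't split across runs.
            run.text = run.text.replace('{text}', 'world')
doc.save('test-out.docx')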
You would need to replace the file in the zip archive. There is no "simple" way of achieving this. The following is a question that should help:
overwriting file in ziparchive
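The gist of that approach, sketched with the standard zipfile module (filenames are placeholders): zipfile can't modify an entry in place, so you copy every entry into a new archive and substitute word/document.xml along the way:
import zipfile

def replace_document_xml(docx_path, new_xml, out_path):
    # Copy every member to a new archive, swapping in the modified XML.
    with zipfile.ZipFile(docx_path, 'r') as zin, \
         zipfile.ZipFile(out_path, 'w', zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            if item.filename == 'word/document.xml':
                zout.writestr(item, new_xml)
            else:
                zout.writestr(item, zin.read(item.filename))

# e.g. replace_document_xml('test.docx', xmlString.encode('utf-8'), 'test-out.docx')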

xgoogle python library is not working any more?

I was using the xgoogle python library for one of my projects. It was working fine until recently, but now I am not getting the result set that I used to get. If anyone who has used this library (written by Peteris Krumins) has faced a similar situation, can you please suggest a workaround?
The presence of BeautifulSoup.py hints that this library uses web scraping to get its result.
A common problem with this is that it will easily break when the design/layout of the page being scraped changes. And the problem you see seems to coincide with the new search results layout that Google introduced just recently.
Another problem is that it often is against the terms of service of the site being scraped. And according to point 5.3 of the Google Terms Of Service it actually is:
You specifically agree not to access (or attempt to access) any of the Services through any automated means (including use of scripts or web crawlers) [...]
A better idea would be to use the Custom Search API.
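For reference, a minimal sketch of querying the Custom Search JSON API with the requests library; you need an API key and a custom search engine ID (cx) from the Google developer console, and both values below are placeholders:
import requests

def google_search(query, api_key, cx):
    # Query the Custom Search JSON API and return the result links.
    resp = requests.get(
        'https://www.googleapis.com/customsearch/v1',
        params={'key': api_key, 'cx': cx, 'q': query},
    )
    resp.raise_for_status()
    return [item['link'] for item in resp.json().get('items', [])]

print(google_search('python wbxml', 'YOUR_API_KEY', 'YOUR_CX_ID'))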
Peteris Krumins's product xgoogle looks to be extremely useful, both to me and, I imagine, to many others.
https://github.com/pkrumins/xgoogle
For me the current version, 1.3, is not working.
I tried a fresh install from GitHub, ran the examples, and nothing was returned.
After adding a debugger to the source code and tracing the data captured in a query until it disappears, I found that the problem occurs in search.py, in the subroutine _extract_results, at this parser call:
results = soup.findAll('li', {'class': 'g'})
The soup object has material in it, but findAll fails to return anything.
It looks like it's searching for list items and, if there are none, it returns nothing.
I am unsure what HTML you would now need to match to get a result.
If anyone knows how to make this work, I am very interested.
A little more googling and it appears that xgoogle is no longer supported and no longer works.
Part of the trouble is that Google changes the layout of its results pages every so often, so any scraping software that assumes a fixed layout is, in time, doomed to failure.
There are, however, other search engines that are installed locally and thus provide a results layout that is less likely to change with upgrades, and will not change at all if you don't upgrade.
I am currently investigating Yacy. It is easy to install and can be pointed at specific sites if you want.

Python pefile member question

Gang,
I apologize if this is a really dumb question... I want to use the super convenient python module pefile (http://code.google.com/p/pefile/), which parses an executable and lists particular information about the PE structure. My question is: where can I find information about how to access particular members of the executable? I've scoured the wiki and read the usage examples, but that documentation only covers 4-5 members. What I'm wondering is whether there is a list of members I can access, so I can display the information I care about. Specifically, if I wanted to list the stack commit size of an executable, would it look like this: pe.FILE_HEADER.StackCommitSize? Obviously I can run the code and figure it out, but is there an API doc floating around where I can find the members I need?
THANKS!
From the PE docstring:
Basic headers information will be available in the attributes:
DOS_HEADER
NT_HEADERS
FILE_HEADER
OPTIONAL_HEADER
All of them will contain among their attributes the members of the
corresponding structures as defined in WINNT.H
So, look at winnt.h and you'll see which attributes are available.
Or just read the source code for the module. It's big, but everything you need to know is in there.
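For example, a quick way to poke around and see what is actually exposed (the path here is arbitrary):
import pefile

pe = pefile.PE('C:\\Windows\\Notepad.exe')
print(pe.FILE_HEADER.NumberOfSections)
print(pe.OPTIONAL_HEADER.AddressOfEntryPoint)
print(pe.dump_info())   # dumps everything pefile parsed, as text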
You can generally find everything that is in a PE file in the Microsoft PE/COFF specification.
Once you've looked there, you know that the StackCommitSize is in the optional image header. Then all you have to do is to look for the corresponding structure in pefile, which usually bears a similar name, if not indeed the very same name. In this case:
pe = pefile.PE("C:\\Windows\\Notepad.exe")
print(pe.OPTIONAL_HEADER.SizeOfStackCommit)
Will give you what you want.
If you have trouble finding SizeOfStackCommit (after you've found it in the specification), just use your quick find on the source code. It's as easy to read as you can get, and I don't think you'll have any trouble finding the required structure.
Now, there probably aren't any API docs for pefile itself, but as you can see there's really no need for it, since it's just a nice Pythonic wrapper around the PE specification itself.

What's a good document standard to use programmatically?

I'm writing a program that requires input in the form of a document, it needs to replace a few values, insert a table, and convert it to PDF. It's written in Python + Qt (PyQt). Is there any well known document standard which can be easily used programmatically? It must be cross platform, and preferably open.
I have looked into Microsoft Doc and Docx, which are binary formats that I can't edit. Python has bindings for them, but they're Windows-only.
Open Office's ODT/ODF is a zip of XML files, so I can edit that one, but there are no command-line utilities or any other way to programmatically convert the file to a PDF. Open Office provides bindings, but you need to run Open Office from the command line, start a server, etc. And my clients may not have Open Office installed.
RTF is readable from Python, but I couldn't find any way/libraries to convert RTF documents to PDF.
At the moment I'm exporting from Microsoft Word to HTML, replacing the values and using PyQt to convert it to a PDF. However, it loses formatting features and looks awful. I'm surprised there isn't a well-known library which lets you edit a variety of document formats and convert them into other formats; am I missing something?
Update: Thanks for the advice, I'll have a look at using Latex.
Thanks,
Jackson
Have you looked into using LaTeX documents?
They are perfect for programmatic use (compiling documents? You gotta love that...), and there are several Python frameworks you can use, such as plasTeX and PyTex.
Exporting a LaTeX document to PDF is almost immediate.
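A minimal sketch of that workflow without any framework at all: fill a value into a template and shell out to pdflatex (this assumes a TeX distribution is installed and pdflatex is on the PATH):
import subprocess

template = r"""
\documentclass{article}
\begin{document}
Hello, %(name)s!
\end{document}
"""

with open('report.tex', 'w') as f:
    f.write(template % {'name': 'world'})

# -interaction=nonstopmode keeps pdflatex from stopping to prompt on errors.
subprocess.check_call(['pdflatex', '-interaction=nonstopmode', 'report.tex'])
# Produces report.pdf in the current directory.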
Since you're already using PyQt anyway, it might be worth looking at Qt's built-in RTF processing module which looks decent. Here's the documentation on detailed content manipulation including inserting tables. Also the QPrinter module's default print-to-file format happens to be PDF.
Without knowing more about your particular needs it's hard to say if these would do what you want, but since your application already has PyQt as a dependency, it seems silly to introduce any more dependencies without evaluating the functionality you've already got available.
The non-GUI parts of the Qt framework are often overlooked though.
edit: included more links.
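A rough sketch of that route with PyQt4 (in Qt5/PyQt5, QPrinter moves to the QtPrintSupport module): build a QTextDocument, insert a table with a QTextCursor, then print the document to a PDF file:
from PyQt4 import QtGui

app = QtGui.QApplication([])            # printing needs a QApplication

doc = QtGui.QTextDocument()
cursor = QtGui.QTextCursor(doc)
cursor.insertText('Hello, world!\n')
table = cursor.insertTable(2, 3)        # 2 rows, 3 columns
table.cellAt(0, 0).firstCursorPosition().insertText('cell 0,0')

printer = QtGui.QPrinter()
printer.setOutputFormat(QtGui.QPrinter.PdfFormat)
printer.setOutputFileName('out.pdf')
doc.print_(printer)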
You might want to try ReportLab. The open source version can write PDFs, and the commercial version has a lot of really nice abstractions to allow output to a variety of different formats from a single input.
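For instance, the open source part handles the "insert a table and get a PDF" case directly with the platypus layer (a minimal sketch):
from reportlab.platypus import SimpleDocTemplate, Table

# Build a one-table PDF; SimpleDocTemplate.build takes a list of flowables.
data = [['Name', 'Value'],
        ['foo', '1'],
        ['bar', '2']]
SimpleDocTemplate('out.pdf').build([Table(data)])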
I don't know your program's audience; TeX is good and I would go with it.
Another possible choice is the Excel format, parsing it with xlrd (a minimal read is sketched after the list below).
I've used it a couple of times and it's pretty straightforward.
An Excel file is a good choice for the following reasons:
It's a well-known format that is easy to edit
You can prepare a predefined template with constraints and tables
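Reading the values back out with xlrd is only a few lines (the path here is a placeholder):
import xlrd

book = xlrd.open_workbook('template.xls')
sheet = book.sheet_by_index(0)
for row in range(sheet.nrows):
    print(sheet.row_values(row))   # list of cell values for each row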
Another option is creating XML documents, transforming them to XSL-FO and rendering with FOP or RenderX. If you use DocBook as the primary input, there are freely available toolchains for converting that to PDF, RTF, HTML and so forth.
It is rather quirky to use and not my idea of fun, but it does deliver and can be embedded in an application, AFAICT.
Creating DocBook is very straightforward, as it has a wide range of semantic tags, table support, etc. to give a "meaningful" markup which can be reliably formatted. The XSL stylesheets are modular and allow parts to be customized or replaced to generate your own look and feel.
It works well for relatively free-flowing documents with lots of text.
For fill-in-the-blanks kinds of documents, a regular reporting engine may be a better fit, or some straightforward XSL stylesheets spitting out the XSL-FO directly.

ElementTree in Python 2.6.2 Processing Instructions support?

I'm trying to create XML using the ElementTree object structure in Python. It all works very well except when it comes to processing instructions. I can create a PI easily using the factory function ProcessingInstruction(), but it doesn't get added into the ElementTree. I can add it manually, but I can't figure out how to add it above the root element, where PIs are normally placed. Does anyone know how to do this? I know of plenty of alternative methods of doing it, but it seems that this must be built in somewhere that I just can't find.
Try the lxml library: it follows the ElementTree api, plus adds a lot of extras. From the compatibility overview:
ElementTree ignores comments and processing instructions when parsing XML, while etree will read them in and treat them as Comment or ProcessingInstruction elements respectively. This is especially visible where comments are found inside text content, which is then split by the Comment element.
You can disable this behaviour by passing the boolean remove_comments and/or remove_pis keyword arguments to the parser you use. For convenience and to support portable code, you can also use the etree.ETCompatXMLParser instead of the default etree.XMLParser. It tries to provide a default setup that is as close to the ElementTree parser as possible.
Not in the stdlib, I know, but in my experience it's the best bet when you need stuff that the standard ElementTree doesn't provide.
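For example, to get the stdlib-like behaviour the compatibility notes mention, you can pass those keywords when building the parser (the filename is a placeholder):
from lxml import etree

# Drop comments and processing instructions while parsing, mimicking the
# stdlib ElementTree behaviour described above.
parser = etree.XMLParser(remove_comments=True, remove_pis=True)
tree = etree.parse('input.xml', parser)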
With the lxml API it couldn't be easier, though it is a bit "underdocumented":
If you need a top-level processing instruction, create it like this:
from lxml import etree
root = etree.Element("anytagname")
root.addprevious(etree.ProcessingInstruction("anypi", "anypicontent"))
print(etree.tostring(root.getroottree()))  # serialize the whole tree so the PI is included
The resulting document will look like this:
<?anypi anypicontent?>
<anytagname />
They certainly should add this to their FAQ because IMO it is another feature that sets this fine API apart.
Yeah, I don't believe it's possible, sorry. ElementTree provides a simpler interface to (non-namespaced) element-centric XML processing than DOM, but the price for that is that it doesn't support the whole XML infoset.
There is no apparent way to represent the content that lives outside the root element (comments, PIs, the doctype and the XML declaration), and these are also discarded at parse time. (Aside: this appears to include any default attributes specified in the DTD internal subset, which makes ElementTree strictly-speaking a non-compliant XML processor.)
You can probably work around it by subclassing or monkey-patching the native Python ElementTree implementation's write() method to _write your extra PIs before _write-ing the _root, but it could be a bit fragile.
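A less invasive workaround on Python 2.7+/3 is to write the prolog (declaration plus PIs) yourself and then serialize the tree after it with the declaration suppressed -- a sketch, with a made-up stylesheet PI:
import xml.etree.ElementTree as ET

root = ET.Element('root')
tree = ET.ElementTree(root)

with open('out.xml', 'wb') as f:
    f.write(b'<?xml version="1.0" encoding="utf-8"?>\n')
    f.write(b'<?xml-stylesheet type="text/xsl" href="style.xsl"?>\n')
    # xml_declaration=False keeps write() from emitting a second declaration.
    tree.write(f, encoding='utf-8', xml_declaration=False)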
If you need support for the full XML infoset, probably best stick with DOM.
I don't know much about ElementTree. But it is possible that you might be able to solve your problem using a library I wrote called "xe".
xe is a set of Python classes designed to make it easy to create structured XML. I haven't worked on it in a long time, for various reasons, but I'd be willing to help you if you have questions about it, or need bugs fixed.
It has the bare bones of support for things like processing instructions, and with a little bit of work I think it could do what you need. (When I started adding processing instructions, I didn't really understand them, and I didn't have any need for them, so the code is sort of half-baked.)
Take a look and see if it seems useful.
http://home.avvanta.com/~steveha/xe.html
Here's an example of using it:
import xe
doc = xe.XMLDoc()
prefs = xe.NestElement("prefs")
prefs.user_name = xe.TextElement("user_name")
prefs.paper = xe.NestElement("paper")
prefs.paper.width = xe.IntElement("width")
prefs.paper.height = xe.IntElement("height")
doc.root_element = prefs
prefs.user_name = "John Doe"
prefs.paper.width = 8
prefs.paper.height = 10
c = xe.Comment("this is a comment")
doc.top.append(c)
If you ran the above code and then ran print doc, here is what you would get:
<?xml version="1.0" encoding="utf-8"?>
<!-- this is a comment -->
<prefs>
<user_name>John Doe</user_name>
<paper>
<width>8</width>
<height>10</height>
</paper>
</prefs>
If you are interested in this but need some help, just let me know.
Good luck with your project.
f = open(r'D:\Python\XML\test.xml', 'r+')  # raw string so \t in the path isn't treated as a tab escape
old = f.read()
f.seek(44, 0)  # place cursor just after the XML declaration (44 bytes in this file)
f.write(r'<?xml-stylesheet type="text/xsl" href="C:\Stylesheets\expand.xsl"?>' + old[44:])
f.close()
I was facing the same problem and came up with this crude solution after failing to insert the PI into the .xml file correctly. Even after using one of the Element methods (in my case root.insert(0, PI)) and trying multiple ways to cut and paste the inserted PI into the correct location, I only found data being deleted from unexpected locations.
