Parse XML with lxml, and then manipulate it with cElementTree - python

I have an app which repeatedly loads a large amount of XML data from a file, performs manipulations, and then writes it back to the file.
The lxml library has proven much faster for parsing and serializing XML, but cElementTree is much faster for certain kinds of manipulation. The two have almost identical APIs.
How can I parse an XML file with lxml, and then manipulate it with cElementTree?
This is what I've tried, but the objects produced by lxml's parse methods inherently use lxml's own manipulation methods.
import xml.etree.cElementTree as ET
from lxml import etree as lxmlET

This question is perhaps the Python equivalent of "My friend has a fast car and I just have a clunker. How can I make my car go as fast as hers?"
I'm not saying this couldn't be done, but I would call such an enterprise either ambitious or foolhardy, depending on your level of programming skill. The point is that each system has, as you have discovered, its own internal representation of the parsed XML.
While it might be possible to write code to take the parsed object produced by lxml and re-create or wrap it as ElementTree elements, it's probably going to a) take as long as parsing with ElementTree in the first place, and b) be a maintenance nightmare.
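For what it's worth, the naive bridge is just serialize-and-reparse, which makes the cost obvious. A minimal sketch (the file name is a placeholder):
import xml.etree.cElementTree as ET
from lxml import etree as lxmlET

# Parse with lxml, then serialize and re-parse to get ElementTree objects.
# The round trip costs a full serialization plus a full ElementTree parse,
# so it saves nothing over parsing with cElementTree in the first place.
lxml_tree = lxmlET.parse('data.xml')                 # placeholder file name
et_root = ET.fromstring(lxmlET.tostring(lxml_tree))

# ... manipulate et_root with the ElementTree API ...

# Going back the other way to use lxml's fast serialization:
lxml_root = lxmlET.fromstring(ET.tostring(et_root))
lxmlET.ElementTree(lxml_root).write('data.xml')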
So do yourself a favor: choose one technology and stick with it (at least within each individual program).
I would also point out that XML was intended primarily as a data interchange language. The fact that you seem to be using it as a structured data repository inevitably introduces large inefficiencies in the processing, particularly as data volumes go up. Might it be better to choose some more amenable representation and then only convert it to XML for output and usage by other systems?

Related

Retrieve only a portion of an XML feed

I'm using Scrapy's XMLFeedSpider to parse a big XML feed (60 MB) from a website, and I was wondering if there is a way to retrieve only a portion of it instead of all 60 MB, because right now the RAM consumption is pretty high. Maybe something to put in the link, like:
"http://site/feed.xml?limit=10". I've searched for something like this but haven't found anything.
Another option would be to limit the items parsed by Scrapy, but I don't know how to do that. Right now, once the XMLFeedSpider has parsed the whole document, the bot analyzes only the first ten items, but I suppose the whole feed is still in memory.
Do you have any idea how to improve the bot's performance and reduce its RAM and CPU consumption? Thanks
When you are processing large XML documents and you don't want to load the whole thing into memory as DOM parsers do, you need to switch to a SAX parser.
SAX parsers have some benefits over DOM-style parsers. A SAX parser
only needs to report each parsing event as it happens, and normally
discards almost all of that information once reported (it does,
however, keep some things, for example a list of all elements that
have not been closed yet, in order to catch later errors such as
end-tags in the wrong order). Thus, the minimum memory required for a
SAX parser is proportional to the maximum depth of the XML file (i.e.,
of the XML tree) and the maximum data involved in a single XML event
(such as the name and attributes of a single start-tag, or the content
of a processing instruction, etc.).
For a 60 MB XML document, this is likely to be very low compared to the requirements for creating a DOM. Most DOM-based systems actually use a SAX-style parser at a lower level to build up the tree.
To make use of SAX, subclass xml.sax.saxutils.XMLGenerator and override startElement, endElement and characters, then call xml.sax.parse with it. I don't have a detailed example at hand to share with you, but you will find plenty online.
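As a rough sketch of that approach (the feed's repeating element is assumed to be <item>, and the file names are placeholders), the subclass below echoes the document back out as XML but drops every item after the tenth:
import xml.sax
from xml.sax.saxutils import XMLGenerator

class FirstTenItems(XMLGenerator):
    """Re-emit the feed, skipping every <item> subtree after the tenth."""
    def __init__(self, out):
        XMLGenerator.__init__(self, out, encoding='utf-8')
        self.items_seen = 0
        self.skip_depth = 0       # > 0 while inside an item we are discarding

    def startElement(self, name, attrs):
        if self.skip_depth:
            self.skip_depth += 1
            return
        if name == 'item':        # assumed name of the repeating element
            self.items_seen += 1
            if self.items_seen > 10:
                self.skip_depth = 1
                return
        XMLGenerator.startElement(self, name, attrs)

    def characters(self, content):
        if not self.skip_depth:
            XMLGenerator.characters(self, content)

    def endElement(self, name):
        if self.skip_depth:
            self.skip_depth -= 1
            return
        XMLGenerator.endElement(self, name)

with open('trimmed.xml', 'w') as out:
    xml.sax.parse('feed.xml', FirstTenItems(out))   # placeholder file names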
You should set the iterator mode of your XMLFeedSpider to iternodes (see the XMLFeedSpider documentation):
It’s recommended to use the iternodes iterator for performance reasons
After doing so, you should be able to iterate over your feed and stop at any point.
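A minimal sketch of such a spider (the URL is the placeholder from the question and the itertag is an assumption about the feed; older Scrapy versions use slightly different import paths and selector methods):
from scrapy.spiders import XMLFeedSpider

class FeedSpider(XMLFeedSpider):
    name = 'feed'
    start_urls = ['http://site/feed.xml']    # placeholder URL from the question
    iterator = 'iternodes'                   # streaming iterator, recommended for performance
    itertag = 'item'                         # assumed name of the repeated node

    def parse_node(self, response, node):
        # Called once per node; the feed is never materialized as a full DOM tree.
        yield {'title': node.xpath('title/text()').get()}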

How to pretty print xml in python without generating a DOM tree?

Generating a DOM tree is too expensive for very large xml data. Is there a method to accomplish the printing without generating it? I am using python-2.7.
Whatever the language, the way to parse an XML document without generating a tree is to use an event-oriented parser. With this kind of parser, you give the parser event handlers that it will call at specific points in the processing: beginning of an element, end of an element, character data, and so on.
So you can use such a parser and start a new line each time a new node begins, increasing the indentation when you enter a node and decreasing it when you exit one.
Because of the way these parsers work, it is tricky to look ahead to see, for example, whether a node would fit on a single line, so the pretty-printing may not be as pretty as when working with a tree (it can be done, but it gets complicated).
In Python, there are three event-driven parsers that come with the standard library (in no particular order):
ElementTree.iterparse()
pyexpat
sax (SAX is a well-known event-driven XML parsing API)
I suggest you have a look at them and play around with each.
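For instance, here is a rough pretty-printer built on pyexpat; it simply indents on start tags and dedents on end tags, with no look-ahead, and it does not escape special characters or collapse empty elements:
import xml.parsers.expat

def pretty_print(path, indent='  '):
    depth = [0]   # a list so the nested handlers can update it (Python 2 has no nonlocal)

    def start(name, attrs):
        attr_text = ''.join(' %s="%s"' % item for item in attrs.items())
        print('%s<%s%s>' % (indent * depth[0], name, attr_text))
        depth[0] += 1

    def end(name):
        depth[0] -= 1
        print('%s</%s>' % (indent * depth[0], name))

    def data(text):
        text = text.strip()
        if text:
            print('%s%s' % (indent * depth[0], text))

    parser = xml.parsers.expat.ParserCreate()
    parser.StartElementHandler = start
    parser.EndElementHandler = end
    parser.CharacterDataHandler = data
    with open(path, 'rb') as f:
        parser.ParseFile(f)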

Parsing a large (~40GB) XML text file in python

I've got an XML file I want to parse with Python. What is the best way to do this? Loading the entire document into memory would be disastrous; I need to somehow read it a single node at a time.
Existing XML solutions I know of:
element tree
minidom
but I'm afraid they aren't quite going to work because of the problem I mentioned. Also, I can't open it in a text editor - any good tips in general for working with giant text files?
First, have you tried ElementTree (either the built-in pure-Python or C versions, or, better, the lxml version)? I'm pretty sure none of them actually read the whole file into memory.
The problem, of course, is that, whether or not it reads the whole file into memory, the resulting parsed tree ends up in memory.
ElementTree has a nifty solution that's pretty simple, and often sufficient: iterparse.
import xml.etree.ElementTree as ET
for event, elem in ET.iterparse(xmlfile, events=('end',)):
    ...
The key here is that you can modify the tree as it's built up (by replacing the contents with a summary containing only what the parent node will need). By throwing out all the stuff you don't need to keep in memory as it comes in, you can stick to parsing things in the usual order without running out of memory.
The linked page gives more details, including some examples for modifying XML-RPC and plist as they're processed. (In those cases, it's to make the resulting object simpler to use, not to save memory, but they should be enough to get the idea across.)
This only helps if you can think of a way to summarize as you go. (In the most trivial case, where the parent doesn't need any info from its children, this is just elem.clear().) Otherwise, this won't work for you.
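Here is a hedged sketch of that trivial case (the 'record' tag and 'category' child are assumptions about the document's structure):
import xml.etree.ElementTree as ET

counts = {}
for event, elem in ET.iterparse('huge.xml', events=('end',)):
    if elem.tag == 'record':                 # assumed repeating element
        key = elem.findtext('category')      # assumed child element
        counts[key] = counts.get(key, 0) + 1
        elem.clear()                         # discard the subtree we no longer need
A common refinement is to also capture the root element from a 'start' event and periodically delete its already-processed children, since the root keeps references to elements even after clear().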
The standard solution is SAX, which is a callback-based API that lets you operate on the tree a node at a time. You don't need to worry about truncating nodes as you do with iterparse, because the nodes don't exist after you've parsed them.
Most of the best SAX examples out there are for Java or Javascript, but they're not too hard to figure out. For example, if you look at http://cs.au.dk/~amoeller/XML/programming/saxexample.html you should be able to figure out how to write it in Python (as long as you know where to find the documentation for xml.sax).
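To give a flavor of it in Python (the 'name' tag and file name here are hypothetical), a handler that collects one field from each record without ever building a tree might look like this:
import xml.sax

class NameCollector(xml.sax.ContentHandler):
    """Collect the text of every <name> element, one node at a time."""
    def __init__(self):
        xml.sax.ContentHandler.__init__(self)
        self.in_name = False
        self.parts = []
        self.names = []

    def startElement(self, tag, attrs):
        if tag == 'name':                  # hypothetical element of interest
            self.in_name = True
            self.parts = []

    def characters(self, content):
        if self.in_name:
            self.parts.append(content)

    def endElement(self, tag):
        if tag == 'name':
            self.names.append(''.join(self.parts))
            self.in_name = False

handler = NameCollector()
xml.sax.parse('huge.xml', handler)         # placeholder path
print(len(handler.names))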
There are also some DOM-based libraries that work without reading everything into memory, but there aren't any that I know of that I'd trust to handle a 40GB file with reasonable efficiency.
The best solution will depend in part on what you are trying to do, and how free your system resources are. Converting it to a PostgreSQL or similar database might not be a bad first goal; on the other hand, if you just need to pull data out once, it's probably not needed. When I have to parse large XML files, especially when the goal is to process the data for graphs or the like, I usually convert the XML to S-expressions, and then use an S-expression interpreter (implemented in Python) to analyse the tags in order and build the tabulated data. Since it reads the file one line at a time, the length of the file doesn't matter, so long as the resulting tabulated data all fits in memory.

XML object serialization in python, are there any alternatives to Gnosis?

For a while I've been using a package called "gnosis-utils" which provides an XML pickling service for Python. This package works reasonably well; however, it seems to have been neglected by its developer for the last four years.
At the time we originally selected Gnosis, it was the only XML serialization tool for Python. The advantage of Gnosis was that it provided a set of classes whose function was very similar to the built-in Python pickler. It produced XML which Python developers found easy to read, but which non-Python developers found confusing.
Now that the project has grown, we have a new requirement: we need to be able to exchange XML with our colleagues who prefer Java or .Net. These non-Python developers will not be using Python - they intend to produce XML directly, hence we need to simplify the format of the XML.
So, are there any alternatives to Gnosis? Our requirements:
Must work on Python 2.4 / Windows x86 32bit
Output must be XML, as simple as possible
API must resemble Pickle as closely as possible
Performance is not hugely important
Of course we could adapt Gnosis; however, we'd prefer to simply use a component which already provides the functions we require (assuming that it exists).
So what you're looking for is a python library that spits out arbitrary XML for your objects? You don't need to control the format, so you can't be bothered to actually write something that iterates over the relevant properties of your data and generates the XML using one of the existing tools?
This seems like a bad idea. Arbitrary XML serialization doesn't sound like a good way to move forward. Any format that includes all of pickle's features is going to be ugly, verbose, and very nasty to use. It will not be simple. It will not translate well into Java.
What does your data look like?
If you tell us precisely what aspects of pickle you need (and why lxml.objectify doesn't fulfill those), we will be better able to help you.
Have you considered using JSON for your serialization? It's easy to parse, natively supports python-like data structures, and has wide-reaching support. As an added bonus, it doesn't open your code to all kinds of evil exploits the way the native pickle module does.
Honestly, you need to bite the bullet and define a format, and build a serializer using the standard XML tools, if you absolutely must use XML. Consider JSON.
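A hedged sketch of the JSON route (the class and fields are made up, and on Python 2.4 you would need the third-party simplejson package, since json only entered the standard library in 2.6):
import json   # use the simplejson package on Python 2.4/2.5

class Foo(object):
    def __init__(self):
        self.bar = 'baz'
        self.count = 3

foo = Foo()

# Dump the instance's attribute dict; Java or .Net consumers just see a plain object.
text = json.dumps(foo.__dict__)
print(text)            # e.g. {"bar": "baz", "count": 3}

# Loading gives back a plain dict, which you map onto your own classes explicitly.
data = json.loads(text)
restored = Foo()
restored.__dict__.update(data)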
There is xml_marshaller which provides a simple way of dumping arbitrary Python objects to XML:
>>> from xml_marshaller import xml_marshaller
>>> class Foo(object): pass
>>> foo = Foo()
>>> foo.bar = 'baz'
>>> dump_str = xml_marshaller.dumps(foo)
Pretty printing the above with lxml (which is a dependency of xml_marshaller anyway):
>>> from lxml.etree import fromstring, tostring
>>> print tostring(fromstring(dump_str), pretty_print=True)
You get output like this:
<marshal>
<object id="i2" module="__main__" class="Foo">
<tuple/>
<dictionary id="i3">
<string>bar</string>
<string>baz</string>
</dictionary>
</object>
</marshal>
I did not check for Python 2.4 compatibility since this question was asked long ago, but a solution for dumping arbitrary Python objects to XML remains relevant.

Python and Memory Consumption

I am searching for a way to keep a memory-hungry program from overloading the RAM and CPU... I would like to process a LARGE amount of data contained in files: I read the files and process the data therein. The problem is that there are many nested for loops, and a root XML file is being created from all the data processed.
The program easily consumes a couple gigs of RAM after half hour or so of run-time.
Is there something I can do to keep the RAM usage from getting so big, or a way to work around it?
Do you really need to keep all the data from the XML file in memory at once?
Most (all?) XML libraries out there allow you to do iterative parsing, meaning that you keep in memory just a few nodes of the XML file, not the whole file. That is unless you are making a string containing the XML file yourself without any library, but that is a bit insane. If that is the case, use a library ASAP.
The specific code samples presented here might not apply to your project, but consider a few principles—borne out by testing and the lxml documentation—when faced with XML data measured in gigabytes or more:
Use an iterative parsing strategy to incrementally process large documents (a brief sketch follows this list).
If searching the entire document in random order is required, move to an indexed XML database.
Be extremely conservative in the data that you select. If you are only interested in particular nodes, use methods that select by those names. If you require predicate syntax, try one of the XPath classes and methods available.
Consider the task at hand and the comfort level of the developer. Object models such as lxml's objectify or Amara might be more natural for Python developers when speed is not a consideration. cElementTree is faster when only parsing is required.
Take the time to do even simple benchmarking. When processing millions of records, small differences add up, and it is not always obvious which methods are the most efficient.
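As a brief illustration of the iterative, selective-parsing points above (the 'record' tag and file name are assumptions about the document), lxml's iterparse can restrict events to the tags you care about and free memory as it goes:
from lxml import etree

count = 0
# Only end-events for <record> elements are delivered; everything else is skipped.
for event, elem in etree.iterparse('big.xml', events=('end',), tag='record'):
    count += 1
    elem.clear()
    # Also drop already-processed siblings that the root still references.
    while elem.getprevious() is not None:
        del elem.getparent()[0]
print(count)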
If you need to do complex operations on the data, why don't you just put it in a relational database and operate on the data from there? That will give better performance.
