I'm struggling to parse an ATOM XML file, coming from an API, into a common data structure like a dict, a Pandas DataFrame, or JSON.
I understand that XML files are more complex than JSON files, so there won't be a very simple, generic solution to this. I'm hoping that the fact that I'm dealing with an ATOM structure helps with parsing the file into a more general data structure.
The structure of the XML data: http://opendata.cbs.nl/ODataFeed/OData/70266ned/TypedDataSet
And a similar feed for JSON here: http://opendata.cbs.nl/ODataFeed/OData/70266ned/TypedDataSet
The reason I can't use the JSON file is that it is often not available.
I played around with libraries like xml.etree, xmltodict, lxml, xmljson and feedparser, but I keep getting errors.
For example, using requests and ElementTree:
import requests
from xml.etree import ElementTree

r = requests.get('http://opendata.cbs.nl/ODataFeed/OData/70266ned/TypedDataSet')
tree = ElementTree.fromstring(r.content)
Yields the error
xml.etree.ElementTree.ParseError: not well-formed (invalid token): line 1, column 0
Help would be highly appreciated!
I don't know if you've solved it yet, but have you tried using this?
tree = ElementTree.fromstring(r.text)
r.content returns the content as bytes, while r.text returns it as a decoded string (see: http://docs.python-requests.org/en/master/api/#requests.Response).
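If it helps, here is a minimal sketch of turning the ATOM entries into a list of dicts with ElementTree. The namespace below is the standard ATOM one; which properties each entry carries depends on the feed, so the key extraction is deliberately generic:

import requests
from xml.etree import ElementTree

NS = {'atom': 'http://www.w3.org/2005/Atom'}

r = requests.get('http://opendata.cbs.nl/ODataFeed/OData/70266ned/TypedDataSet')
root = ElementTree.fromstring(r.text)  # r.text is a decoded str, not bytes

# One dict per <entry>, keyed by each descendant element's local tag name.
rows = []
for entry in root.findall('atom:entry', NS):
    row = {}
    for elem in entry.iter():
        if elem.text and elem.text.strip():
            row[elem.tag.split('}')[-1]] = elem.text.strip()
    rows.append(row)

print(rows[:2])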
Python's lxml can be used to extract text (e.g., with XPath) from XML files without having to fully parse the XML. For example, I can do the following, which is faster than BeautifulSoup, especially for large input. I'd like to have some equivalent code for JSON.
from lxml import etree

tree = etree.XML('<foo><bar>abc</bar></foo>')
print(type(tree))

r = tree.xpath('/foo/bar')
print([x.tag for x in r])
I see http://goessner.net/articles/JsonPath/, but I don't see any example Python code that extracts some text from a JSON file without having to use json.load(). Could anybody show me an example? Thanks.
I'm assuming you don't want to load the entire JSON for performance reasons.
If that's the case, perhaps ijson is what you need. I used it to search huge JSON files (>8 GB) and it works well.
However, you will have to implement the search code yourself.
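For example, here is a minimal sketch with ijson. The file name and key path are hypothetical; 'items.item' means "each element of the top-level items array":

import ijson

# Stream objects one at a time instead of loading the whole file.
with open('huge.json', 'rb') as f:
    for obj in ijson.items(f, 'items.item'):
        if obj.get('name') == 'target':  # your own search logic goes here
            print(obj)
            break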
I collected some tweets from the Twitter API and stored them in MongoDB. I exported the data to a JSON file without any issues, but when I wrote a Python script to read the JSON and convert it to a CSV, I got this traceback error:
json.decoder.JSONDecodeError: Extra data: line 367 column 1 (char 9745)
So, after digging around the internet, I was pointed to check the actual JSON data in an online validator, which I did. It gave me this error:
Multiple JSON root elements
from the site https://jsonformatter.curiousconcept.com/
Now, the problem is that I haven't found anything on the internet about how to handle that error. I'm not sure if it's a problem with the data I collected or exported, or if I just don't know how to work with it.
My end game with these tweets is to make a network graph. I was looking at either NetworkX or Gephi, which is why I'd like to get a CSV file.
Robert Moskal is right. If you can address the issue at the source and use the --jsonArray flag with mongoexport, the problem becomes much easier. If you can't fix it at the source, read the points below.
The code below will extract the individual JSON objects from the given file and convert them to Python dictionaries.
You can then apply your CSV logic to each individual dictionary.
If you are using the csv module on Python 2, I would suggest the unicodecsv module instead, as it handles the Unicode data in your JSON objects.
import json

with open('path_to_your_json_file', 'r') as infile:
    json_block = []
    for line in infile:
        json_block.append(line)
        if line.startswith('}'):
            # a '}' at the start of a line closes one top-level object
            json_dict = json.loads(''.join(json_block))
            json_block = []
            print(json_dict)
If you want to convert it to CSV using pandas, you can use the code below:

import json
import pandas as pd

with open('path_to_your_json_file', 'r') as infile:
    json_block = []
    dictlist = []
    for line in infile:
        json_block.append(line)
        if line.startswith('}'):
            json_dict = json.loads(''.join(json_block))
            dictlist.append(json_dict)
            json_block = []

df = pd.DataFrame(dictlist)
df.to_csv('out.csv', encoding='utf-8')
If you want to flatten out the JSON objects, you can use the pandas.io.json.json_normalize() method (pd.json_normalize() in newer pandas versions).
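For example, a minimal sketch of flattening a nested record (the field names are hypothetical):

import pandas as pd

# One tweet-like record with a nested "user" object (hypothetical fields).
records = [{"text": "hello", "user": {"name": "a", "followers": 10}}]

df = pd.json_normalize(records)
print(df)  # one row, with columns: text, user.name, user.followers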
Elaborating on @MYGz's suggestion to use --jsonArray:
Your post doesn't show how you exported the data from Mongo. If you run the following in the terminal, you will get valid JSON from mongoexport:
mongoexport --collection=somecollection --db=somedb --jsonArray --out=validfile.json
Replace somecollection, somedb and validfile.json with your target collection, target database, and desired output filename respectively.
The following will NOT give you the results you are looking for:
mongoexport --collection=somecollection --db=somedb --out=validfile.json
because, by default, "mongoexport writes data using one JSON document for every MongoDB document" (Ref).
A bit of a late reply, and I am not sure this was available at the time the question was posted. Anyway, there is now a simple way to import the mongoexport JSON data:
df = pd.read_json(filename, lines=True)
mongoexport provides each line as a JSON object itself, instead of the whole file as one JSON document; lines=True tells pandas to expect exactly that.
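So, for the CSV goal in the question, the whole pipeline reduces to this (file names are hypothetical):

import pandas as pd

# mongoexport emits one JSON document per line ("JSON Lines"),
# which read_json handles directly with lines=True.
df = pd.read_json('tweets.json', lines=True)
df.to_csv('tweets.csv', index=False, encoding='utf-8')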
How can I convert a pickle object to an XML document?
For example, I have a pickle like this:
cpyplusplus_test
Coordinate
p0
(I23
I-11
tp1
Rp2
.
I want to get something like:
<Coordinate>
<x>23</x>
<y>-11</y>
</Coordinate>
The Coordinate class has x and y attributes, of course. I can supply an XML schema for the conversion.
I tried the gnosis.xml module. It can objectify XML documents into Python objects, but it cannot serialize objects to XML documents like the one above.
Any suggestion?
Thanks.
gnosis.xml does support pickling to XML:
import gnosis.xml.pickle
xml_str = gnosis.xml.pickle.dumps(obj)
To deserialize the XML, use loads:
o2 = gnosis.xml.pickle.loads(xml_str)
Of course, this will not directly convert existing pickles to XML: you have to first deserialize them into live objects and then dump those to XML.
Having said that, I must warn you that gnosis.xml is quite slow, somewhat fragile, and most likely unmaintained (the last release was over six years ago). It is also very bloated, containing a huge number of subpackages with lots of features that you not only won't need, but that are untested and buggy. We tried to use it in our development and, after a lot of effort wasted on trying to debug and improve it, we ended up writing a simple XML pickler of ~500 lines of code, and never looked back.
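For reference, the core of such a hand-rolled pickler can be tiny. Here is a minimal sketch (the Coordinate class is recreated from the question; the attribute handling is naive and only covers flat objects with scalar attributes):

import pickle
from xml.etree.ElementTree import Element, SubElement, tostring

class Coordinate:
    def __init__(self, x, y):
        self.x, self.y = x, y

def to_xml(obj):
    # One element named after the class, one child element per instance attribute.
    root = Element(type(obj).__name__)
    for name, value in vars(obj).items():
        SubElement(root, name).text = str(value)
    return tostring(root, encoding='unicode')

data = pickle.dumps(Coordinate(23, -11))  # stand-in for the existing pickle
print(to_xml(pickle.loads(data)))         # <Coordinate><x>23</x><y>-11</y></Coordinate>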
First you need to unpickle the data with pickle.load or pickle.loads, then generate the XML snippet. If you have a pickle in the tmpStr variable, simply do this:
import pickle

c = pickle.loads(tmpStr)
print('<Coordinate>\n<x>%d</x>\n<y>%d</y>\n</Coordinate>' % (c.x, c.y))
Writing to file is left as an exercise to the reader.
I've been trying for some hours to grab the response from the imgur API. I got the XML in the terminal, but I don't know how to grab it and parse it. Here's my code.
import pycurl

c = pycurl.Curl()
values = [
    ("key", "Super Secret API Number"),
    ("image", (c.FORM_FILE, "pic.jpg")),
]
c.setopt(c.URL, "http://api.imgur.com/2/upload.xml")
c.setopt(c.HTTPPOST, values)
c.perform()
c.close()
I'm a big noob with Python; this is my first time using it. I read that you can parse the XML with ElementTree, but I can't find any good documentation.
Hope you can help me. Thanks.
Store the response from the Imgur API into a file, then use an XML parser to parse the XML response you are getting from the API.
There are lots of options available, like lxml or BeautifulSoup.
Here is an example of how to use lxml with XPath expressions.
>>> from lxml import etree
>>> xml = """<foo>baz!</foo>"""
>>> xp = etree.fromstring(xml)
>>> values = xp.xpath("//foo/text()")
>>> values
['baz!']
If you need to parse an XML file:
# parse from file
et = etree.parse(source_xml)
value = et.xpath("your xpath xpr here")
If you need to parse directly from a URL:
# parse from URL
etree.parse("http://example.com/somefile.xml")
For XPath, use Firefox's Firebug extension or install FirePath.
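If you'd rather keep the response in memory than write it to a file, here is a minimal sketch combining the question's upload code with lxml (the XPath at the end is a guess; inspect the actual response first to pick the right element names):

import io

import pycurl
from lxml import etree

buf = io.BytesIO()
c = pycurl.Curl()
values = [
    ("key", "Super Secret API Number"),
    ("image", (c.FORM_FILE, "pic.jpg")),
]
c.setopt(c.URL, "http://api.imgur.com/2/upload.xml")
c.setopt(c.HTTPPOST, values)
c.setopt(c.WRITEFUNCTION, buf.write)  # collect the response body in memory
c.perform()
c.close()

root = etree.fromstring(buf.getvalue())
# Element names below are guesses; print etree.tostring(root) to see the real structure.
print(root.xpath("//original/text()"))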
When I started using the included ElementTree module, I found the documentation lacking good examples (currently there are only three, and only one of them shows anything immediately practical).
I've answered a couple of questions here on SO related to lxml/ElementTree, and I usually see people getting stuck trying to write these weird list comprehensions to deal with something XPath handles in one line much more clearly:
Parsing lxml.etree._Element contents
lxml classic: Get text content except for that of nested tags?
If you have a more specific question, please post some source XML and desired effect.
I hope this helps,
I need to convert an activity diagram in XMI format to XML format. Is this conversion possible using Python? Are there any tools to convert XMI files to XML?
Converting XML to XML is usually called XML transformation. For Python, you can use libxsltmod to perform XML transformations using XSLT 'stylesheets'.
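lxml also bundles XSLT support. A minimal sketch, assuming you have written a stylesheet (transform.xslt is a hypothetical name) that maps your XMI elements to the target XML:

from lxml import etree

# Compile the stylesheet once, then apply it to the XMI document.
transform = etree.XSLT(etree.parse('transform.xslt'))
result = transform(etree.parse('diagram.xmi'))
print(str(result))  # or write it out: result.write_output('diagram.xml')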
As Ignacio says, the problem may not be that the target tool expects XML, but that it probably expects a different XMI format.
Unfortunately, each tool follows its own interpretation of the XMI standard, so two modeling tools will most likely generate two incompatible XMI files for the same model. See an example in this "model once, open anywhere: not true" post.
You can get the information that you need (classes, attributes, ...) from any .xmi file. This example may help:
from xml.dom import minidom

xmldoc = minidom.parse('file.xmi')
for element in xmldoc.getElementsByTagName("UML:Class"):
    print(" -> UML:Class ", element.getAttribute('name'))
    for a in element.getElementsByTagName("UML:Attribute"):
        print(" -> UML:Attr : ", a.getAttribute('name'))