How to use python to parse XML to required custom fields - python

I've got a directory full of Salesforce objects in XML format. I'd like to identify the <fullName> and parent file of all the custom <fields> where <required> is true. Here is some truncated sample data; let's call it "Custom_Object__c":
<?xml version="1.0" encoding="UTF-8"?>
<CustomObject xmlns="http://soap.sforce.com/2006/04/metadata">
    <deprecated>false</deprecated>
    <description>descriptiontext</description>
    <fields>
        <fullName>custom_field1</fullName>
        <required>false</required>
        <type>Text</type>
        <unique>false</unique>
    </fields>
    <fields>
        <fullName>custom_field2</fullName>
        <deprecated>false</deprecated>
        <visibleLines>5</visibleLines>
    </fields>
    <fields>
        <fullName>custom_field3</fullName>
        <required>false</required>
    </fields>
    <fields>
        <fullName>custom_field4</fullName>
        <deprecated>false</deprecated>
        <description>custom field 4 description</description>
        <externalId>true</externalId>
        <required>true</required>
        <scale>0</scale>
        <type>Number</type>
        <unique>false</unique>
    </fields>
    <fields>
        <fullName>custom_field5</fullName>
        <deprecated>false</deprecated>
        <description>Creator of this log message. Application-specific.</description>
        <externalId>true</externalId>
        <label>Origin</label>
        <length>255</length>
        <required>true</required>
        <type>Text</type>
        <unique>false</unique>
    </fields>
    <label>App Log</label>
    <nameField>
        <displayFormat>LOG-{YYYYMMDD}-{00000000}</displayFormat>
        <label>Entry ID</label>
        <type>AutoNumber</type>
    </nameField>
</CustomObject>
The desired output would be a dictionary with a format something like:
required_fields = {'Custom_Object__1': 'custom_field4', 'Custom_Object__1': 'custom_field5', ... etc. for all the required fields in all files in the folder}
or anything similar.
I've already gotten my list of objects through glob.glob, and I can get a list of all the children and their attributes with ElementTree, but I'm struggling past there. I feel like I'm very close, but I'd love a hand finishing this task off. Here is my code so far:
import os
import glob
import xml.etree.ElementTree as ET

os.chdir("/Users/paulsallen/workspace/fforce/FForce Dev Account/config/objects/")

objs = []
for file in glob.glob("*.object"):
    objs.append(file)

fields_dict = {}
for obj in objs:
    root = ET.parse(obj).getroot()  # parse each file, not the whole list
    ....
and once I get the XML data parsed I don't know where to take it from there.

You really want to switch to using lxml here, because then you can use an XPath query:
import os
import glob
from lxml import etree as ET

os.chdir("/Users/paulsallen/workspace/fforce/FForce Dev Account/config/objects/")

objs = glob.glob("*.object")
fields_dict = {}
for filename in objs:
    root = ET.parse(filename).getroot()
    required = root.xpath('.//n:fullName[../n:required/text()="true"]/text()',
                          namespaces={'n': root.nsmap[None]})
    fields_dict[os.path.splitext(filename)[0]] = required
With that code you end up with a dictionary of lists: each key is a filename (without the extension), and each value is a list of required field names.
The XPath query looks for fullName elements in the default namespace that have a required sibling element containing the text 'true'. It then takes the contained text of each matching element, which gives us a list we can store in the dictionary.
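Run against the sample above saved as, say, Custom_Object__c.object (a hypothetical filename), the result would look like:
fields_dict = {'Custom_Object__c': ['custom_field4', 'custom_field5']}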

Use this function to find all required fields under a given root. It should also serve as an example/starting point for future parsing needs:
def find_required_fields(root):
    NS = {'soap': 'http://soap.sforce.com/2006/04/metadata'}
    required_fields = []
    for field in root.findall('soap:fields', namespaces=NS):
        required = field.findtext('soap:required', namespaces=NS) == "true"
        name = field.findtext('soap:fullName', namespaces=NS)
        if required:
            required_fields.append(name)
    return required_fields
Example usage:
>>> import xml.etree.ElementTree as ET
>>> root = ET.parse('objects.xml') # where objects.xml contains the example in the question
>>> print find_required_fields(root)
['custom_field4', 'custom_field5']
>>>
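To build the per-file dictionary the question asks for, the function can be dropped into the existing glob loop. A minimal sketch, reusing the question's setup:
import os
import glob
import xml.etree.ElementTree as ET

fields_dict = {}
for filename in glob.glob("*.object"):
    root = ET.parse(filename).getroot()
    # map each file (minus extension) to its list of required field names
    fields_dict[os.path.splitext(filename)[0]] = find_required_fields(root)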

Related

xml parsing in python with XPath

I am trying to parse an XML file in Python with the built-in xml module and ElementTree, but whatever I try according to the documentation does not give me what I need.
I am trying to extract all the value tags into a list:
<?xml version="1.0" encoding="UTF-8"?>
<CustomField xmlns="http://soap.sforce.com/2006/04/metadata">
    <fullName>testPicklist__c</fullName>
    <externalId>false</externalId>
    <label>testPicklist</label>
    <required>false</required>
    <trackFeedHistory>false</trackFeedHistory>
    <type>Picklist</type>
    <valueSet>
        <restricted>true</restricted>
        <valueSetDefinition>
            <sorted>false</sorted>
            <value>
                <fullName>a 32</fullName>
                <default>false</default>
                <label>a 32</label>
            </value>
            <value>
                <fullName>23 432;:</fullName>
                <default>false</default>
                <label>23 432;:</label>
            </value>
And here is the example code that I can't get to work. It's very basic; the only issue I have is the XPath.
from xml.etree.ElementTree import ElementTree

field_filepath = "./testPicklist__c.field-meta.xml"
mydoc = ElementTree()
mydoc.parse(field_filepath)
root = mydoc.getroot()
print(root.findall(".//value"))
print(root.findall(".//*/value"))
print(root.findall("./*/value"))
Since the root element has the attribute xmlns="http://soap.sforce.com/2006/04/metadata", every element in the document belongs to this namespace. So you're actually looking for {http://soap.sforce.com/2006/04/metadata}value elements.
To search for all <value> elements in this document, you have to pass the namespaces argument to findall():
from xml.etree.ElementTree import ElementTree

field_filepath = "./testPicklist__c.field-meta.xml"
mydoc = ElementTree()
mydoc.parse(field_filepath)
root = mydoc.getroot()

# get the namespace of root
ns = root.tag.split('}')[0][1:]

# create a dictionary with the namespace
ns_d = {'my_ns': ns}

# get all the values
values = root.findall('.//my_ns:value', namespaces=ns_d)

# print the values
for value in values:
    print(value)
Outputs:
<Element '{http://soap.sforce.com/2006/04/metadata}value' at 0x7fceea043ba0>
<Element '{http://soap.sforce.com/2006/04/metadata}value' at 0x7fceea043e20>
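If you want the field values themselves rather than Element objects, the same namespace map works for reading child text; a small sketch continuing from the code above:
# pull the <fullName> text out of each <value> element
names = [v.findtext('my_ns:fullName', namespaces=ns_d) for v in values]
print(names)  # ['a 32', '23 432;:']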
Alternatively, you can just search for the fully qualified name {http://soap.sforce.com/2006/04/metadata}value directly:
# get all the values
values = root.findall('.//{http://soap.sforce.com/2006/04/metadata}value')
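If you are on Python 3.8 or newer (an assumption; older versions lack this), ElementTree also accepts a {*} namespace wildcard, which avoids looking up the namespace at all:
# {*} matches elements in any namespace (Python 3.8+)
values = root.findall('.//{*}value')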

python, xml: how to access the 3rd child by element's name

Would you help me, please, to get access to the element named 'id' using the following construction in Python (I have the lxml and xml.etree.ElementTree libraries)?
Desired result: '0000000'
Desired method:
1. Search the XML document for a child named fcsProtocolEF3.
2. Search fcsProtocolEF3 for an element named 'id'.
It is crucial to search by element name, not by ordinal position.
I tried to use something like this: tree.findall('{http://zakupki.gov.ru/oos/export/1}fcsProtocolEF3')[0].findall('{http://zakupki.gov.ru/oos/types/1}id')[0].text
It works, but it requires hard-coding the namespaces. The XML documents have different namespaces and I don't know how to define them beforehand.
Thank you.
It would be great to use something like XQuery in SQL:
value('(/*:export/*:fcsProtocolEF3/*:id)[1]', 'nvarchar(21)')) AS [id],
XML-document:
<?xml version="1.0" encoding="UTF-8" standalone="true"?>
<ns2:export xmlns:ns3="http://zakupki.gov.ru/oos/common/1" xmlns:ns4="http://zakupki.gov.ru/oos/base/1" xmlns:ns2="http://zakupki.gov.ru/oos/export/1" xmlns:ns10="http://zakupki.gov.ru/oos/printform/1" xmlns:ns11="http://zakupki.gov.ru/oos/control99/1" xmlns:ns9="http://zakupki.gov.ru/oos/SMTypes/1" xmlns:ns7="http://zakupki.gov.ru/oos/pprf615types/1" xmlns:ns8="http://zakupki.gov.ru/oos/EPtypes/1" xmlns:ns5="http://zakupki.gov.ru/oos/TPtypes/1" xmlns:ns6="http://zakupki.gov.ru/oos/CPtypes/1" xmlns="http://zakupki.gov.ru/oos/types/1">
<ns2:fcsProtocolEF3 schemeVersion="10.2">
<id>0000000</id>
<purchaseNumber>0000000000000000</purchaseNumber>
</ns2:fcsProtocolEF3>
</ns2:export>
lxml solution:
xml = '''<?xml version="1.0"?>
<ns2:export xmlns:ns3="http://zakupki.gov.ru/oos/common/1" xmlns:ns4="http://zakupki.gov.ru/oos/base/1" xmlns:ns2="http://zakupki.gov.ru/oos/export/1" xmlns:ns10="http://zakupki.gov.ru/oos/printform/1" xmlns:ns11="http://zakupki.gov.ru/oos/control99/1" xmlns:ns9="http://zakupki.gov.ru/oos/SMTypes/1" xmlns:ns7="http://zakupki.gov.ru/oos/pprf615types/1" xmlns:ns8="http://zakupki.gov.ru/oos/EPtypes/1" xmlns:ns5="http://zakupki.gov.ru/oos/TPtypes/1" xmlns:ns6="http://zakupki.gov.ru/oos/CPtypes/1" xmlns="http://zakupki.gov.ru/oos/types/1">
<ns2:fcsProtocolEF3 schemeVersion="10.2">
<id>0000000</id>
<purchaseNumber>0000000000000000</purchaseNumber>
</ns2:fcsProtocolEF3>
</ns2:export>'''
from lxml import etree as et
root = et.fromstring(xml)
text = root.xpath('//*[local-name()="export"]/*[local-name()="fcsProtocolEF3"]/*[local-name()="id"]/text()')[0]
print(text)
Below is an ElementTree-based solution that uses the namespaces explicitly.
import xml.etree.ElementTree as ET
xml = '''<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ns2:export xmlns:ns3="http://zakupki.gov.ru/oos/common/1" xmlns:ns4="http://zakupki.gov.ru/oos/base/1" xmlns:ns2="http://zakupki.gov.ru/oos/export/1" xmlns:ns10="http://zakupki.gov.ru/oos/printform/1" xmlns:ns11="http://zakupki.gov.ru/oos/control99/1" xmlns:ns9="http://zakupki.gov.ru/oos/SMTypes/1" xmlns:ns7="http://zakupki.gov.ru/oos/pprf615types/1" xmlns:ns8="http://zakupki.gov.ru/oos/EPtypes/1" xmlns:ns5="http://zakupki.gov.ru/oos/TPtypes/1" xmlns:ns6="http://zakupki.gov.ru/oos/CPtypes/1" xmlns="http://zakupki.gov.ru/oos/types/1">
<ns2:fcsProtocolEF3 schemeVersion="10.2">
<id>0000000</id>
<purchaseNumber>0000000000000000</purchaseNumber>
</ns2:fcsProtocolEF3>
</ns2:export>
'''
def get_id_text():
    root = ET.fromstring(xml)
    fcs = root.find('{http://zakupki.gov.ru/oos/export/1}fcsProtocolEF3')
    # assuming there is one fcs element and one id under fcs
    return fcs.find('{http://zakupki.gov.ru/oos/types/1}id').text
print(get_id_text())
output
0000000
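Since the real constraint in the question is not knowing the namespaces beforehand, it may be worth noting that on Python 3.8+ (an assumption about your environment) ElementTree's {*} wildcard sidesteps them entirely:
import xml.etree.ElementTree as ET

root = ET.fromstring(xml)
# {*} matches any namespace, so no URIs need to be known in advance
print(root.find('{*}fcsProtocolEF3/{*}id').text)  # 0000000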

XML parsing specific values - Python

I've been attempting to parse a list of xml files. I'd like to print specific values such as the userName value.
<?xml version="1.0" encoding="utf-8"?>
<Drives clsid="{8FDDCC1A-0C3C-43cd-A6B4-71A6DF20DA8C}"
disabled="1">
<Drive clsid="{935D1B74-9CB8-4e3c-9914-7DD559B7A417}"
name="S:"
status="S:"
image="2"
changed="2007-07-06 20:57:37"
uid="{4DA4A7E3-F1D8-4FB1-874F-D2F7D16F7065}">
<Properties action="U"
thisDrive="NOCHANGE"
allDrives="NOCHANGE"
userName=""
cpassword=""
path="\\scratch"
label="SCRATCH"
persistent="1"
useLetter="1"
letter="S"/>
</Drive>
</Drives>
My script is working fine collecting a list of XML files etc. However, the function below is meant to print the relevant values. I'm trying to achieve this as suggested in this post, but I'm clearly doing something incorrectly, as I'm getting errors saying the elm object has no attribute text. Any help would be appreciated.
Current Code
from lxml import etree as ET

def read_files(files):
    for fi in files:
        doc = ET.parse(fi)
        elm = doc.find('userName')
        print elm.text
doc.find looks for a tag with the given name. You are looking for an attribute with the given name.
elm.text is giving you an error because doc.find doesn't find any tags, so it returns None, which has no text property.
Read the lxml.etree docs some more, and then try something like this:
doc = ET.parse(fi)
root = doc.getroot()
prop = root.find(".//Properties") # finds the first <Properties> tag anywhere
elm = prop.attrib['userName']
userName is an attribute, not an element. Attributes don't have text nodes attached to them at all.
for el in doc.xpath('//*[@userName]'):
    print el.attrib['userName']
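As a side note, lxml's xpath() can also select attribute nodes directly, in which case it returns their values as plain strings; a small sketch of that variant:
# selecting the attribute itself yields its string value
for user_name in doc.xpath('//Properties/@userName'):
    print user_name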
You can try to take the element by its tag name and then read its attribute (userName is an attribute of Properties). Note that getElementsByTagName and the attributes mapping come from xml.dom.minidom, not lxml, so import accordingly:
from xml.dom import minidom

def read_files(files):
    for fi in files:
        doc = minidom.parse(fi)
        props = doc.getElementsByTagName('Properties')
        elm = props[0].attributes['userName']
        print elm.value

Copy attribute information when different elements have the same name in XML with Python

So, here's my XML tree:
<?xml version="1.0"?>
<api>
    <query>
        <normalized>
            <n from="Brain_cancer" to="Brain cancer" />
        </normalized>
        <redirects>
            <r from="Brain cancer" to="Brain tumor" />
        </redirects>
        <pages>
            <page pageid="37284" ns="0" title="Brain tumor">
                <revisions>
                    <rev revid="412658600" parentid="412501243" user="Andycjp" userid="55014" timestamp="2011-02-08T03:35:27Z" size="59870" sha1="fe1ff25c27ebc86572aa4be8201cb813e1bf3d32" comment="/* Psychological and behavioral consequences */" contentformat="text/x-wiki" contentmodel="wikitext" xml:space="preserve">
                    </rev>
                </revisions>
            </page>
        </pages>
    </query>
    <warnings>
        <revisions xml:space="preserve">
        </revisions>
        <result xml:space="preserve">
        </result>
    </warnings>
    <query-continue>
        <revisions rvcontinue="456175380" />
    </query-continue>
</api>
So, as you can see, the "revisions" element appears in two different places, at different levels. My objective is to reach the attribute "rvcontinue" (whose path is api/query-continue/revisions) and copy its value into a new variable. It's probably just me not getting it right, but ElementTree and XPath haven't worked so far.
This is what I've done so far, but it's getting nowhere:
import xml.etree.ElementTree as ET

tree = ET.parse('Brain_tumor_5.xml')
for elem in tree.getiterator():
    if elem.tag == '{http://www.namespace.co.uk}query-continue':
        output = {}
        for elem1 in list(elem):
            if elem1.tag == '{http://www.namespace.co.uk}revisions':
                output['rvcontinue'] = elem1.text
                print output

p = tree.find("./api/query-continue/revisions[@rvcontinue=]")
q = p.attrib
print q
I have also mostly used lxml, so I don't know what's up with etree, but it appears that find from the tree doesn't work, while find from the root does:
>>> tree.getroot().find('query-continue/revisions[@rvcontinue]').attrib['rvcontinue']
'456175380'
Also: I don't know if it's just a typo above, but
p = tree.find("./api/query-continue/revisions[@rvcontinue=]")
will give a SyntaxError: invalid predicate.
Added note: it appears that tree.find('api') returns None, but tree.find('.') returns <Element 'api' at 0x1004e5f10>, so tree.find('./query-continue/revisions[@rvcontinue]') will also work.
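Putting that observation together, a minimal ElementTree sketch against the document shown above (assuming it is the content of the question's Brain_tumor_5.xml):
import xml.etree.ElementTree as ET

tree = ET.parse('Brain_tumor_5.xml')
root = tree.getroot()
# the root element *is* <api>, so paths are relative to it;
# searching for 'api/...' would look for an <api> inside <api>
print(root.find('query-continue/revisions[@rvcontinue]').attrib['rvcontinue'])  # 456175380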
This does not directly answer your question. However, I would use lxml.etree (which supposedly provides the same ElementTree interface) and the following code:
>>> import lxml.etree
>>> doc = lxml.etree.parse('doc.xml')
>>> node = doc.xpath('/api/query-continue/revisions[@rvcontinue]')
>>> node[0].attrib['rvcontinue']
'456175380'
I tried with xml.etree.ElementTree but it doesn't appear to work.

Parsing Solr XML into Python Dictionary

I am new to Python and am trying to parse an XML document (filled with documents for a Solr instance) into a Python dictionary. I am having trouble actually accomplishing this. I have tried using ElementTree and minidom, but I can't seem to get the right results.
Here is my XML Structure:
<add>
    <doc>
        <field name="genLatitude">45.639968</field>
        <field name="carOfficeHoursEnd">2000-01-01T09:00:00.000Z</field>
        <field name="genLongitude">5.879745</field>
    </doc>
    <doc>
        <field name="genLatitude">46.639968</field>
        <field name="carOfficeHoursEnd">2000-01-01T09:00:00.000Z</field>
        <field name="genLongitude">6.879745</field>
    </doc>
</add>
And from this I need to turn it into a dictionary that looks like:
doc {
    "genLatitude": '45.639968',
    "carOfficeHoursEnd": '2000-01-01T09:00:00.000Z',
    "genLongitude": '5.879745',
}
I am not too familiar with how dictionaries work, but is there also a way to get all the "docs" into one dictionary?
cheers.
import xml.etree.cElementTree as etree
from pprint import pprint

root = etree.fromstring(xmlstr)  # or etree.parse(filename_or_file).getroot()
docs = [{f.attrib['name']: f.text for f in doc.iterfind('field[@name]')}
        for doc in root.iterfind('doc')]
pprint(docs)
Output
[{'carOfficeHoursEnd': '2000-01-01T09:00:00.000Z',
  'genLatitude': '45.639968',
  'genLongitude': '5.879745'},
 {'carOfficeHoursEnd': '2000-01-01T09:00:00.000Z',
  'genLatitude': '46.639968',
  'genLongitude': '6.879745'}]
Where xmlstr is:
xmlstr = """
<add>
    <doc>
        <field name="genLatitude">45.639968</field>
        <field name="carOfficeHoursEnd">2000-01-01T09:00:00.000Z</field>
        <field name="genLongitude">5.879745</field>
    </doc>
    <doc>
        <field name="genLatitude">46.639968</field>
        <field name="carOfficeHoursEnd">2000-01-01T09:00:00.000Z</field>
        <field name="genLongitude">6.879745</field>
    </doc>
</add>
"""
Solr can return a Python dictionary if you add wt=python to the request parameters. To convert this text response into a Python object, use ast.literal_eval(text_response).
This is much simpler than parsing the XML.
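A rough sketch of that approach (the URL, host, and core name are hypothetical; adjust them and the query to your setup):
import ast
import urllib2

# wt=python asks Solr for a Python-literal response body
response = urllib2.urlopen(
    'http://localhost:8983/solr/select?q=*:*&wt=python').read()
data = ast.literal_eval(response)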
A possible solution using ElementTree, with output pretty formatted for sake of example:
>>> import xml.etree.ElementTree as etree
>>> root = etree.parse(document).getroot()
>>> docs = []
>>> for doc in root.findall('doc'):
...     fields = {}
...     for field in doc:
...         fields[field.attrib['name']] = field.text
...     docs.append(fields)
...
>>> print docs
[{'genLongitude': '5.879745',
  'genLatitude': '45.639968',
  'carOfficeHoursEnd': '2000-01-01T09:00:00.000Z'},
 {'genLongitude': '6.879745',
  'genLatitude': '46.639968',
  'carOfficeHoursEnd': '2000-01-01T09:00:00.000Z'}]
The XML document you show does not provide a way to distinguish each doc from the others, so I would maintain that a list is the best structure to collect the dictionaries.
Indeed, if you want to insert each doc's data into another dictionary you can, but you need to choose a suitable key for that dictionary. For example, using the id Python provides for each object, you could write:
>>> docs = {}
>>> for doc in root.findall('doc'):
...     fields = {}
...     for field in doc:
...         fields[field.attrib['name']] = field.text
...     docs[id(fields)] = fields
...
>>> print docs
{3076930796L: {'genLongitude': '6.879745',
               'genLatitude': '46.639968',
               'carOfficeHoursEnd': '2000-01-01T09:00:00.000Z'},
 3076905540L: {'genLongitude': '5.879745',
               'genLatitude': '45.639968',
               'carOfficeHoursEnd': '2000-01-01T09:00:00.000Z'}}
This example is designed just to let you see how to use the outer dictionary. If you decide to go down this path, I would suggest you find a meaningful and usable key instead of the object's memory address returned by id, which can change from run to run.
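For instance, if each doc carried a unique identifying field (a hypothetical 'id' field here; the sample data has none), you could key on that instead:
>>> docs = {}
>>> for doc in root.findall('doc'):
...     fields = {field.attrib['name']: field.text for field in doc}
...     # 'id' is hypothetical; use whatever field is unique in your data
...     docs[fields['id']] = fields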
It's risky to eval any string that comes from the outside directly into Python; who knows what's in there.
I'd suggest using the JSON interface instead. Something like:
import json
import urllib2

response_dict = json.loads(urllib2.urlopen('http://localhost:8080/solr/combined/select?wt=json&q=*&rows=1').read())

# to view the dict
print json.dumps(response_dict, indent=1)
