regular expression match methods with python

There is a string of text:
<R-PORT-PROTOTYPE>
<SHORT-NAME>VtDBGO_ReMotTqReqL2_null</SHORT-NAME>
<REQUIRED-COM-SPECS>
<NONQUEUED-RECEIVER-COM-SPEC>
<DATA-ELEMENT-REF DEST="VARIABLE-DATA-PROTOTYPE">/FS_DBGO_pkg/FS_DBGO_if/VtDBGO_ReMotTqReqL2_null/VtDBGO_ReMotTqReqL2_null</DATA-ELEMENT-REF>
<USES-END-TO-END-PROTECTION>false</USES-END-TO-END-PROTECTION>
<ALIVE-TIMEOUT>0</ALIVE-TIMEOUT>
<ENABLE-UPDATE>false</ENABLE-UPDATE>
<FILTER>
<DATA-FILTER-TYPE>ALWAYS</DATA-FILTER-TYPE>
</FILTER>
<HANDLE-NEVER-RECEIVED>false</HANDLE-NEVER-RECEIVED>
<INIT-VALUE>
<NUMERICAL-VALUE-SPECIFICATION>
<VALUE>0</VALUE>
</NUMERICAL-VALUE-SPECIFICATION>
</INIT-VALUE>
</NONQUEUED-RECEIVER-COM-SPEC>
</REQUIRED-COM-SPECS>
<REQUIRED-INTERFACE-TREF DEST="SENDER-RECEIVER-INTERFACE">/FS_DBGO_pkg/FS_DBGO_if/VtDBGO_ReMotTqReqL2_null</REQUIRED-INTERFACE-TREF>
</R-PORT-PROTOTYPE>
How can I replace the SHORT-NAME of VtDBGO_FrMotTqReqL2_null with the attribute content, just like in the following string of text?
<R-PORT-PROTOTYPE UUID="E8000CF6-DAFD-49C7-B1D4-D7EC20F43654">
<SHORT-NAME>VtDBGO_FrMotTqReqL2_null</SHORT-NAME>
<REQUIRED-COM-SPECS>
<NONQUEUED-RECEIVER-COM-SPEC>
<DATA-FILTER-TYPE>ALWAYS</DATA-FILTER-TYPE>
</FILTER>
<HANDLE-NEVER-RECEIVED>false</HANDLE-NEVER-RECEIVED>
<INIT-VALUE>
<RECORD-VALUE-SPECIFICATION>
<FIELDS>
<NUMERICAL-VALUE-SPECIFICATION>
<SHORT-LABEL>ElementConstant</SHORT-LABEL>
<VALUE>0</VALUE>
</NUMERICAL-VALUE-SPECIFICATION>
<NUMERICAL-VALUE-SPECIFICATION>
<SHORT-LABEL>ElementConstant_1</SHORT-LABEL>
<VALUE>0</VALUE>
</NUMERICAL-VALUE-SPECIFICATION>
</FIELDS>
</RECORD-VALUE-SPECIFICATION>
</INIT-VALUE>
</NONQUEUED-RECEIVER-COM-SPEC>
</REQUIRED-COM-SPECS>
Here are my two solutions, but neither worked:
pattern_update=re.compile(r"""<R-PORT -PROTOTYPE>
\s+<SHORT-NAME>{0}</SHORT-NAME>
.*?<INIT-VALUE>((?:.(?!<REQUIRED-COM-SPECS>))*?)</INIT-VALUE>.*?
\s+</R-PORT-PROTOTYPE>""".format(port),re.DOTALL | re.MULTILINE)
_ar_xml_str = re.sub(pattern_update, replace_str, ar_xml_str)
ar_xml_str = _ar_xml_str
pattern_update=re.compile(r"""<SHORT-NAME>VtDBGO_ReMotTqReqL2_null</SHORT-NAME>
.*?<INIT-VALUE>(.*?)</INIT-VALUE>""",re.DOTALL | re.MULTILINE)
_ar_xml_str = re.sub(pattern_update, replace_str, ar_xml_str)
ar_xml_str = _ar_xml_str
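As an aside (not part of the original post): the first pattern can never match, because the opening tag is written as <R-PORT -PROTOTYPE> with a stray space, and regular expressions over XML are generally fragile; an XML parser is usually the safer tool. Below is a minimal sketch of the second idea, under the assumption that only the <INIT-VALUE>...</INIT-VALUE> block following the given SHORT-NAME should be swapped; replace_str here is a hypothetical stand-in for the new block, and ar_xml_str is the ARXML text from the question.
import re

port = "VtDBGO_ReMotTqReqL2_null"
replace_str = "<INIT-VALUE>...new content...</INIT-VALUE>"  # hypothetical replacement block

# Capture everything from the SHORT-NAME up to the INIT-VALUE element and keep it,
# replacing only the INIT-VALUE element itself. re.escape() guards the port name,
# and re.DOTALL lets .*? span line breaks.
pattern_update = re.compile(
    r"(<SHORT-NAME>{0}</SHORT-NAME>.*?)<INIT-VALUE>.*?</INIT-VALUE>".format(re.escape(port)),
    re.DOTALL,
)
ar_xml_str = pattern_update.sub(lambda m: m.group(1) + replace_str, ar_xml_str)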

Related

invert regex pattern in python

I'm trying to filter only the Arabic characters from a string, but the following function doesn't work for me:
import re
def remove_any_non_arabic_char(text):
    non_arabic_char = re.compile('^[\u0627-\u064a]')
    text = re.sub(non_arabic_char, "", text)
    print(text)
for example:
s = "Kühn xvii, 346] قال جالينوس: [1] قد اتفق جل من فسر هذا الكتا"
The desired output of remove_any_non_arabic_char(s) should be قال جالينوس قد اتفق جل من فسر هذا الكتا, but the string stays unchanged.
What should I do?
First, you need to fix your regex as suggested in the comments, then for a more efficient solution, you will need to expand your Unicode character selection to include all Arabic character mappings. Finally, you need to keep at least one space between Arabic words to keep the Arabic text legible:
import re
def remove_any_non_arabic_char(text):
    non_arabic_char = re.compile(r'[^\s\u0600-\u06FF]')
    text_with_no_spaces = re.sub(non_arabic_char, "", text)
    text_with_single_spaces = " ".join(re.split(r"\s+", text_with_no_spaces))
    return text_with_single_spaces
text_1 = "Kühn xvii, 346] قال جالينوس: [1] قد اتفق جل من فسر هذا الكتا"
text_2 = '''
تغيّر مفهوم كلمة (أدب) من العصر الجاهلي jahili (pre-Islamic) era إلى الآن عبر
مراحل periods التاريخ المتعددة. ففي الجاهلية، كانت كلمة أدب تعني (الدعوة إلى
الطعام). وبعدها، استخدم الرسول محمد (عليه السلام) الكلمة بمعنى "التهذيب والتربية"
education and mannerism. وفي العصر الأموي، اتصلت had to do كلمة أدب
بالتاريخ والفقه والقرآن والحديث. أما في العصرالعباسي، فأصبحت تعني تعلّم الشعر
والنثر prose واتسع الأدب ليشمل أنواع المعرفة وألوانها وخصوصاً علم البلاغة واللغة.
أما في الوقت الحالي، فأصبحت كلمة أدب ذات صلة pertinent بالكلام البليغ
الجميل المؤثر that impacts في أحاسيس القاريء أو السامع.
'''
# Isleem, N. M., & Abuhakema, G. M. (2020). Kalima wa Nagham: A Textbook for
# Teaching Arabic, Volume 2 (Vol. 3). University of Texas Press. (page 5)
print('text_1: \n', remove_any_non_arabic_char(text_1))
print('\ntext_2: \n\n', remove_any_non_arabic_char(text_2))
Running the code on the two texts above strips everything but the Arabic text. Notice that punctuation marks shared between Arabic and English (like periods and brackets) are also removed; to keep those, you would need to introduce more complex conditionals.
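As a rough illustration of such a conditional (my own sketch, not part of the original answer, with a hypothetical function name), the shared punctuation can simply be added to the negated character class so it survives the substitution:
import re

def remove_non_arabic_keep_punctuation(text):
    # Keep whitespace, the Arabic block, and a few punctuation marks shared
    # between Arabic and English (periods, commas, colons, brackets).
    pattern = re.compile(r'[^\s.,:\[\]()\u0600-\u06FF]')
    cleaned = pattern.sub("", text)
    return " ".join(cleaned.split())

print(remove_non_arabic_keep_punctuation(
    "Kühn xvii, 346] قال جالينوس: [1] قد اتفق جل من فسر هذا الكتا"))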

Python XML Parser Issue

I am new to python. Sorry for asking this stupid question.
I am trying to read an XML file into a Python object (preferably into pandas).
For now I am just trying to print the variables, to see if I can read them properly in a tabular form.
I have used xml.etree.ElementTree for this, but I might not be using it as intended.
Code:
import xml.etree.ElementTree as ET

tree = ET.parse("data.xml")
ODM = tree.getroot()
ns = {'xmlns': 'http://www.cdisc.org/ns/odm/v1.3',
      'mdsol': 'http://www.mdsol.com/ns/odm/metadata'}

for ClinicalData in ODM:
    LocationOID = None
    # print(ClinicalData.tag, ClinicalData.attrib)
    for SubjectData in ClinicalData:
        for SiteRef in SubjectData:
            LocationOID = SiteRef.attrib.get('LocationOID')
        for StudyEventData in SubjectData:
            for AuditRecord in StudyEventData:
                print(ClinicalData.attrib.get('MetaDataVersionOID'),
                      ClinicalData.attrib.get('AuditSubCategoryName'),  # null output due to namespace issue
                      SubjectData.attrib.get('SubjectKey'),
                      SubjectData.attrib.get('SubjectName'),  # null output due to namespace issue
                      LocationOID,  # not sure what is the issue
                      StudyEventData.attrib.get('StudyEventRepeatKey'),
                      AuditRecord.find('DateTimeStamp')  # not sure what is the issue
                      )
Input:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<ODM xmlns="http://www.cdisc.org/ns/odm/v1.3"
xmlns:mdsol="http://www.mdsol.com/ns/odm/metadata"
CreationDateTime="2019-08-23T12:59:09" FileOID="3b2b4161-fad8-4239-9c83-03d0e62624dd" FileType="Transactional" ODMVersion="1.3">
<ClinicalData MetaDataVersionOID="1772" StudyOID="0ACC SP3 MAPPING1(DEV)" mdsol:AuditSubCategoryName="Activated">
<SubjectData SubjectKey="7735fd9c-1792-457c-aa58-0ca26ecdc810" mdsol:SubjectKeyType="SubjectUUID" mdsol:SubjectName="ACC-SUBJ-3">
<SiteRef LocationOID="0ACCSP3MAPPING1SITE1"/>
<StudyEventData StudyEventOID="FV" StudyEventRepeatKey="VIST[1]/FV[1]" mdsol:InstanceId="2960580">
<AuditRecord>
<UserRef UserOID="systemuser"/>
<LocationRef LocationOID="0ACCSP3MAPPING1SITE1"/>
<DateTimeStamp>2019-07-10T07:56:54</DateTimeStamp>
<ReasonForChange>Update</ReasonForChange>
<SourceID>394263772</SourceID>
</AuditRecord>
</StudyEventData>
</SubjectData>
</ClinicalData>
</ODM>
I am expecting all the printed variables to have the proper values assigned from the XML file. Please let me know if there is a better way of doing this instead of nesting multiple inner loops.
Namespaces are a pain using ElementTree. See this discussion.
Short answer:
for ClinicalData in ODM:
    # print(ClinicalData.tag, ClinicalData.attrib)
    for SubjectData in ClinicalData:
        SiteRef = SubjectData.find('{http://www.cdisc.org/ns/odm/v1.3}SiteRef')
        LocationOID = SiteRef.attrib.get('LocationOID')
        for StudyEventData in SubjectData:
            for AuditRecord in StudyEventData:
                print(
                    ClinicalData.attrib.get('MetaDataVersionOID'),
                    ClinicalData.attrib.get('{http://www.mdsol.com/ns/odm/metadata}AuditSubCategoryName'),  # namespace-qualified
                    SubjectData.attrib.get('SubjectKey'),
                    SubjectData.attrib.get('{http://www.mdsol.com/ns/odm/metadata}SubjectName'),  # namespace-qualified
                    LocationOID,
                    StudyEventData.attrib.get('StudyEventRepeatKey'),
                    AuditRecord.find('{http://www.cdisc.org/ns/odm/v1.3}DateTimeStamp').text,  # namespace-qualified, take .text
                )
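A side note (not from the original answer): the ns mapping already defined in the question can be passed to find()/findall() so the full URIs do not have to be spelled out each time; attribute lookups, however, still need the expanded {uri}name form. A small sketch, using a hypothetical odm prefix, meant to slot into the same loops as above:
ns = {'odm': 'http://www.cdisc.org/ns/odm/v1.3',
      'mdsol': 'http://www.mdsol.com/ns/odm/metadata'}

SiteRef = SubjectData.find('odm:SiteRef', ns)
LocationOID = SiteRef.attrib.get('LocationOID')
DateTimeStamp = AuditRecord.find('odm:DateTimeStamp', ns).text

# attrib.get() does not accept a namespaces argument, so qualify the name manually:
SubjectName = SubjectData.attrib.get('{http://www.mdsol.com/ns/odm/metadata}SubjectName')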
I think you can use BeautifulSoup for parsing XML:
from bs4 import BeautifulSoup
temp ="""<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<ODM xmlns="http://www.cdisc.org/ns/odm/v1.3"
xmlns:mdsol="http://www.mdsol.com/ns/odm/metadata"
CreationDateTime="2019-08-23T12:59:09" FileOID="3b2b4161-fad8-4239-9c83-03d0e62624dd" FileType="Transactional" ODMVersion="1.3">
<ClinicalData MetaDataVersionOID="1772" StudyOID="0ACC SP3 MAPPING1(DEV)" mdsol:AuditSubCategoryName="Activated">
<SubjectData SubjectKey="7735fd9c-1792-457c-aa58-0ca26ecdc810" mdsol:SubjectKeyType="SubjectUUID" mdsol:SubjectName="ACC-SUBJ-3">
<SiteRef LocationOID="0ACCSP3MAPPING1SITE1"/>
<StudyEventData StudyEventOID="FV" StudyEventRepeatKey="VIST[1]/FV[1]" mdsol:InstanceId="2960580">
<AuditRecord>
<UserRef UserOID="systemuser"/>
<LocationRef LocationOID="0ACCSP3MAPPING1SITE1"/>
<DateTimeStamp>2019-07-10T07:56:54</DateTimeStamp>
<ReasonForChange>Update</ReasonForChange>
<SourceID>394263772</SourceID>
</AuditRecord>
</StudyEventData>
</SubjectData>
</ClinicalData>
</ODM>"""
temp = BeautifulSoup(temp, "lxml")
ClinicalData = temp.find('ClinicalData'.lower())
SubjectData = ClinicalData.find_all('SubjectData'.lower())
LocationOID = None
for i in SubjectData:
    SiteRef = i.find('SiteRef'.lower())
    LocationOID = SiteRef.attrs['locationoid']
    print('LocationOID', LocationOID)
output:
LocationOID 0ACCSP3MAPPING1SITE1
[Finished in 1.2s]
@Justin, I have applied your suggestions and it worked, until I broke it.
Input:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<ODM xmlns="http://www.cdisc.org/ns/odm/v1.3" xmlns:mdsol="http://www.mdsol.com/ns/odm/metadata" CreationDateTime="2019-08-23T12:59:09" FileOID="3b2b4161-fad8-4239-9c83-03d0e62624dd" FileType="Transactional" ODMVersion="1.3">
<ClinicalData MetaDataVersionOID="2965" StudyOID="0ACC SP3 MAPPING1(DEV)" mdsol:AuditSubCategoryName="Entered">
<SubjectData SubjectKey="481e4653-693c-4e15-8762-d8a66c0d2cf1" mdsol:SubjectKeyType="SubjectUUID" mdsol:SubjectName="ACC-SUBJ-1">
<SiteRef LocationOID="0ACCSP3MAPPING1SITE1"/>
<StudyEventData StudyEventOID="FV" StudyEventRepeatKey="VIST[1]/FV[1]" mdsol:InstanceId="2960564">
<FormData FormOID="VS" FormRepeatKey="1" mdsol:DataPageId="15331229">
<ItemGroupData ItemGroupOID="VS" mdsol:RecordId="17928808">
<ItemData ItemOID="VS.WT" TransactionType="Upsert" Value="45">
<AuditRecord>
<UserRef UserOID="alscrave2"/>
<LocationRef LocationOID="0ACCSP3MAPPING1SITE1"/>
<DateTimeStamp>2018-02-02T09:39:30</DateTimeStamp>
<ReasonForChange/>
<SourceID>122841525</SourceID>
</AuditRecord>
<MeasurementUnitRef MeasurementUnitOID="1761.Weight.1"/>
</ItemData>
</ItemGroupData>
</FormData>
</StudyEventData>
</SubjectData>
</ClinicalData>
<ClinicalData MetaDataVersionOID="2965" StudyOID="0ACC SP3 MAPPING1(DEV)" mdsol:AuditSubCategoryName="Entered">
<SubjectData SubjectKey="481e4653-693c-4e15-8762-d8a66c0d2cf1" mdsol:SubjectKeyType="SubjectUUID" mdsol:SubjectName="ACC-SUBJ-1">
<SiteRef LocationOID="0ACCSP3MAPPING1SITE1"/>
<StudyEventData StudyEventOID="FV" StudyEventRepeatKey="VIST[1]/FV[1]" mdsol:InstanceId="2960564">
<FormData FormOID="VS" FormRepeatKey="1" mdsol:DataPageId="15331229">
<ItemGroupData ItemGroupOID="VS" mdsol:RecordId="17928809">
<ItemData ItemOID="VS.WT" TransactionType="Upsert" Value="46">
<AuditRecord>
<UserRef UserOID="alscrave2"/>
<LocationRef LocationOID="0ACCSP3MAPPING1SITE1"/>
<DateTimeStamp>2018-02-02T09:39:30</DateTimeStamp>
<ReasonForChange/>
<SourceID>122841525</SourceID>
</AuditRecord>
<MeasurementUnitRef MeasurementUnitOID="1761.Weight.1"/>
</ItemData>
</ItemGroupData>
</FormData>
</StudyEventData>
</SubjectData>
</ClinicalData>
</ODM>
Code:
import xml.etree.ElementTree as ET
import pandas as pd

def getvalueofnode(node):
    """ return node text or None """
    return node.text if node is not None else None

tree = ET.parse("data.xml")
ODM = tree.getroot()
xmlns = "{http://www.cdisc.org/ns/odm/v1.3}"
mdsol = "{http://www.mdsol.com/ns/odm/metadata}"

def data_reader():
    dfcols = ['CreationDateTime', 'StudyOID', 'MetaDataVersionOID', 'SubjectName', 'SUBJECTUUID', 'LocationOID',
              'StudyEventOID', 'StudyEventRepeatKey', 'FormOID', 'FormRepeatKey', 'DataPageId', 'ItemgroupOID',
              'RecordId', 'var_name', 'Value', 'DateTimeStamp', 'ASC_Name', 'Measurement_Unit', 'SourceID',
              'UserOID', 'InstanceId']
    df_xml = pd.DataFrame(columns=dfcols)
    CreationDateTime = ODM.attrib.get('CreationDateTime')
    for ClinicalData in ODM:
        StudyOID = ClinicalData.attrib.get('StudyOID')
        MetaDataVersionOID = ClinicalData.attrib.get('MetaDataVersionOID')
        ASC_Name = ClinicalData.attrib.get('{0}AuditSubCategoryName'.format(mdsol))
        for SubjectData in ClinicalData:
            SubjectName = SubjectData.attrib.get('{0}SubjectName'.format(mdsol))
            SUBJECTUUID = SubjectData.attrib.get('SubjectKey')
            LocationOID = SubjectData.find('{0}SiteRef'.format(xmlns)).attrib.get('LocationOID')
            for StudyEventData in SubjectData:
                StudyEventOID = StudyEventData.attrib.get('StudyEventOID')
                StudyEventRepeatKey = StudyEventData.attrib.get('StudyEventRepeatKey')
                InstanceId = StudyEventData.attrib.get('{0}InstanceId'.format(mdsol))
                for FormData in StudyEventData:
                    FormOID = FormData.attrib.get('FormOID')
                    FormRepeatKey = FormData.attrib.get('FormRepeatKey')
                    DataPageId = FormData.attrib.get('{0}DataPageId'.format(mdsol))
                    for ItemGroupData in FormData:
                        ItemgroupOID = ItemGroupData.attrib.get('ItemgroupOID')
                        RecordId = ItemGroupData.attrib.get('{0}RecordId'.format(mdsol))
                        for ItemData in ItemGroupData:
                            var_name = ItemData.attrib.get('ItemOID')
                            Value = ItemData.attrib.get('Value')
                            Measurement_Unit = ItemData.find('MeasurementUnitRef'.format(xmlns)).attrib.get('MeasurementUnitOID')
                            for AuditRecord in ItemData:
                                DateTimeStamp = AuditRecord.find('{0}DateTimeStamp'.format(xmlns)).text
                                SourceID = AuditRecord.find('{0}SourceID'.format(xmlns)).text
                                UserOID = ItemData.find('{0}UserRef'.format(xmlns)).attrib.get('UserOID')
                                df_xml = df_xml.append(
                                    pd.Series([CreationDateTime, StudyOID, MetaDataVersionOID, SubjectName,
                                               SUBJECTUUID, LocationOID, StudyEventOID,
                                               StudyEventRepeatKey, FormOID, FormRepeatKey, DataPageId, ItemgroupOID,
                                               RecordId, var_name, Value, DateTimeStamp, ASC_Name, Measurement_Unit,
                                               SourceID, UserOID, InstanceId], index=dfcols),
                                    ignore_index=True)
    print(df_xml)

data_reader()
Issue: I am getting duplicate records, and the variables DateTimeStamp, SourceID, UserOID and Measurement_Unit throw runtime errors during assignment.
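Not part of the original thread, but a plausible reading of both symptoms: for AuditRecord in ItemData iterates over every child of ItemData (both AuditRecord and MeasurementUnitRef), so one row is appended per child, and the MeasurementUnitRef/UserRef lookups are missing their namespace prefix (UserRef also lives under AuditRecord, not ItemData, in the sample above). A sketch of the innermost block, reusing the names defined above:
# Hypothetical rewrite of the innermost block: look the elements up explicitly
# instead of looping over all children of ItemData.
mu_ref = ItemData.find('{0}MeasurementUnitRef'.format(xmlns))
Measurement_Unit = mu_ref.attrib.get('MeasurementUnitOID') if mu_ref is not None else None

AuditRecord = ItemData.find('{0}AuditRecord'.format(xmlns))
DateTimeStamp = getvalueofnode(AuditRecord.find('{0}DateTimeStamp'.format(xmlns)))
SourceID = getvalueofnode(AuditRecord.find('{0}SourceID'.format(xmlns)))
UserOID = AuditRecord.find('{0}UserRef'.format(xmlns)).attrib.get('UserOID')
# ...then append exactly one row per ItemData here...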

Create a dataframe from nested xml and generate a csv

I have an XML file like this:
<?xml version="1.0"?>
<PropertySet>
<PropertySet NumOutputObjects="1" >
<Message IntObjectName="Class Def" MessageType="Integration Object">
<ListOf_Class_Def>
<ImpExp Type="CLASS_DEF" Name="lp_pkg_cla" Object_Num="1001p">
<ListOfObject_Def>
<Object_Def Ancestor_Num="" Ancestor_Name="">
</Object_Def>
</ListOfObject_Def>
<ListOfObject_Arrt>
<Object_Arrt Orig_Id="6666p" Attr_Name="LP_Portable">
</Object_Arrt>
</ListOfObject_Arrt>
</ImpExp>
</ListOf_Class_Def>
</Message>
</PropertySet>
<PropertySet NumOutputObjects="1" >
<Message IntObjectName="Class Def" MessageType="Integration Object">
<ListOf_Class_Def>
<ImpExp Type="CLASS_DEF" Name="M_pkg_cla" Object_Num="1023i">
<ListOfObject_Def>
<Object_Def Ancestor_Num="" Ancestor_Name="">
</Object_Def>
</ListOfObject_Def>
<ListOfObject_Arrt>
<Object_Arrt Orig_Id="7010p" Attr_Name="O_Portable">
</Object_Arrt>
<Object_Arrt Orig_Id="7012j" Attr_Name="O_wireless">
</Object_Arrt>
</ListOfObject_Arrt>
</ImpExp>
</ListOf_Class_Def>
</Message>
</PropertySet>
<PropertySet NumOutputObjects="1" >
<Message IntObjectName="Prod Def" MessageType="Integration Object">
<ListOf_Prod_Def>
<ImpExp Type="PROD_DEF" Name="Laptop" Object_Num="2008a">
<ListOfObject_Def>
<Object_Def Ancestor_Num="1001p" Ancestor_Name="lp_pkg_cla">
</Object_Def>
</ListOfObject_Def>
<ListOfObject_Arrt>
</ListOfObject_Arrt>
</ImpExp>
</ListOf_Prod_Def>
</Message>
</PropertySet>
<PropertySet NumOutputObjects="1" >
<Message IntObjectName="Prod Def" MessageType="Integration Object">
<ListOf_Prod_Def>
<ImpExp Type="PROD_DEF" Name="Mouse" Object_Num="2987d">
<ListOfObject_Def>
<Object_Def Ancestor_Num="1023i" Ancestor_Name="M_pkg_cla">
</Object_Def>
</ListOfObject_Def>
<ListOfObject_Arrt>
</ListOfObject_Arrt>
</ImpExp>
</ListOf_Prod_Def>
</Message>
</PropertySet>
<PropertySet NumOutputObjects="1" >
<Message IntObjectName="Prod Def" MessageType="Integration Object">
<ListOf_Prod_Def>
<ImpExp Type="PROD_DEF" Name="Speaker" Object_Num="5463g">
<ListOfObject_Def>
<Object_Def Ancestor_Num="" Ancestor_Name="">
</Object_Def>
</ListOfObject_Def>
<ListOfObject_Arrt>
</ListOfObject_Arrt>
</ImpExp>
</ListOf_Prod_Def>
</Message>
</PropertySet>
</PropertySet>
I am hoping to extract the Name, Object_Num, Orig_Id and Attr_Name attributes from it using Python and convert them into .csv format.
The .csv format I'd like to see it in is simply:
ProductId Product AttributeId Attribute
2008a Laptop 6666p LP_Portable
2987d Mouse 7010p O_Portable
2987d Mouse 7012p O_Wireless
5463g Speaker "" ""
Actually, there is a relationship like this in the XML tags:
All products are in the ImpExp Type="PROD_DEF" tags.
All attributes are in the ImpExp Type="CLASS_DEF" tags.
If a product has attributes, then there is a tag
<Object_Def Ancestor_Num="1023i" ...>
whose Ancestor_Num is equal to the Object_Num in the Type="CLASS_DEF" tags.
I have tried this:
from lxml import etree
import pandas
import HTMLParser

inFile = "./newm.xml"
outFile = "./new.csv"

ctx1 = etree.iterparse(inFile, tag=("ImpExp", "ListOfObject_Def", "ListOfObject_Arrt",))
hp = HTMLParser.HTMLParser()

csvData = []
csvData1 = []
csvData2 = []
csvData3 = []
csvData4 = []
csvData5 = []

for event, elem in ctx1:
    value1 = elem.get("Type")
    value2 = elem.get("Name")
    value3 = elem.get("Object_Num")
    value4 = elem.get("Ancestor_Num")
    value5 = elem.get("Orig_Id")
    value6 = elem.get("Attr_Name")
    if value1 == "PROD_DEF":
        csvData.append(value2)
        csvData1.append(value3)

for event, elem in ctx1:
    if value4 is not None:
        csvData2.append(value4)
    elem.clear()

df = pandas.DataFrame({'Product': csvData, 'ProductId': csvData1, 'AncestorId': csvData2})

for event, elem in ctx1:
    if value1 == "Class Def":
        csvData3.append(value3)
        csvData4.append(value5)
        csvData5.append(value6)
    elem.clear()

df1 = pandas.DataFrame({'AncestorId': csvData3, 'AttribId': csvData4, 'AttribName': csvData5})
dff = pandas.merge(df, df1, on="AncestorId")
dff.to_csv(outFile, index=False)
Consider XSLT, the special-purpose language designed to transform XML files; it can convert XML directly to CSV (i.e., a text file) without the pandas DataFrame intermediary. Python's third-party module lxml (which you are already using) can run XSLT 1.0 scripts, and it does so without for loops or if logic. However, due to the complex alignment of products and attributes, some longer XPath searches are needed in the XSLT.
XSLT (save as .xsl file, a special .xml file)
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output indent="no" method="text"/>
<xsl:strip-space elements="*"/>
<xsl:param name="delimiter">,</xsl:param>
<xsl:template match="/PropertySet">
<xsl:text>ProductId,Product,AttributeId,Attribute
</xsl:text>
<xsl:apply-templates select="*"/>
</xsl:template>
<xsl:template match="PropertySet|Message|ListOf_Class_Def|ListOf_Prod_Def|ImpExp">
<xsl:apply-templates select="*"/>
</xsl:template>
<xsl:template match="ListOfObject_Arrt">
<xsl:apply-templates select="Object_Arrt"/>
<xsl:if test="name(*) != 'Object_Arrt' and preceding-sibling::ListOfObject_Def/Object_Def/@Ancestor_Name = ''">
<xsl:value-of select="concat(ancestor::ImpExp/@Name, $delimiter,
ancestor::ImpExp/@Object_Num, $delimiter,
'', $delimiter,
'')"/><xsl:text>
</xsl:text>
</xsl:if>
</xsl:template>
<xsl:template match="Object_Arrt">
<xsl:variable name="attrName" select="ancestor::ImpExp/@Name"/>
<xsl:value-of select="concat(/PropertySet/PropertySet/Message[@IntObjectName='Prod Def']/ListOf_Prod_Def/
ImpExp[ListOfObject_Def/Object_Def/@Ancestor_Name = $attrName]/@Name, $delimiter,
/PropertySet/PropertySet/Message[@IntObjectName='Prod Def']/ListOf_Prod_Def/
ImpExp[ListOfObject_Def/Object_Def/@Ancestor_Name = $attrName]/@Object_Num, $delimiter,
@Orig_Id, $delimiter,
@Attr_Name)"/><xsl:text>
</xsl:text>
</xsl:template>
</xsl:stylesheet>
Python
import lxml.etree as et
# LOAD XML AND XSL
xml = et.parse('Input.xml')
xsl = et.parse('XSLT_Script.xsl')
# RUN TRANSFORMATION
transform = et.XSLT(xsl)
result = transform(xml)
# OUTPUT TO FILE (serialize the XSLT result to text before writing)
with open('Output.csv', 'w') as f:
    f.write(str(result))
Output
ProductId,Product,AttributeId,Attribute
Laptop,2008a,6666p,LP_Portable
Mouse,2987d,7010p,O_Portable
Mouse,2987d,7012j,O_wireless
Speaker,5463g,,
You would need to preparse all of the CLASS_DEF entries into a dictionary. These can then be looked up when processing the PROD_DEF entries:
import csv
from lxml import etree

inFile = "./newm.xml"
outFile = "./new.csv"

tree = etree.parse(inFile)
class_defs = {}

# First extract all the CLASS_DEF entries into a dictionary
for impexp in tree.iter("ImpExp"):
    name = impexp.get('Name')
    if impexp.get('Type') == "CLASS_DEF":
        for list_of_object_arrt in impexp.findall('ListOfObject_Arrt'):
            class_defs[name] = [(obj.get('Orig_Id'), obj.get('Attr_Name')) for obj in list_of_object_arrt]

with open(outFile, 'wb') as f_output:
    csv_output = csv.writer(f_output)
    csv_output.writerow(['ProductId', 'Product', 'AttributeId', 'Attribute'])

    for impexp in tree.iter("ImpExp"):
        object_num = impexp.get('Object_Num')
        name = impexp.get('Name')
        if impexp.get('Type') == "PROD_DEF":
            for list_of_object_def in impexp.findall('ListOfObject_Def'):
                for obj in list_of_object_def:
                    ancestor_num = obj.get('Ancestor_Num')
                    ancestor_name = obj.get('Ancestor_Name')
                    csv_output.writerow([object_num, name] + list(class_defs.get(ancestor_name, [['', '']])[0]))
This would produce new.csv containing:
ProductId,Product,AttributeId,Attribute
2008a,Laptop,6666p,LP_Portable
2987d,Mouse,7010p,O_Portable
5463g,Speaker,,
If you are using Python 3.x, use:
with open(outFile, 'w', newline='') as f_output:
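One thing to note (my observation, not from the original answer): because only element [0] of the attribute list is used, Mouse appears once in the output above rather than twice as in the desired table. A small variation on the final writerow line would emit one row per attribute:
# Hypothetical variant: one CSV row per (Orig_Id, Attr_Name) pair of the ancestor class.
for attr_id, attr_name in class_defs.get(ancestor_name, [('', '')]):
    csv_output.writerow([object_num, name, attr_id, attr_name])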

lxml: Get all leaf nodes?

Given an XML file, is there a way, using lxml, to get all the leaf nodes with their names and attributes?
Here is the XML file of interest:
<?xml version="1.0" encoding="UTF-8"?>
<clinical_study>
<!-- This xml conforms to an XML Schema at:
http://clinicaltrials.gov/ct2/html/images/info/public.xsd
and an XML DTD at:
http://clinicaltrials.gov/ct2/html/images/info/public.dtd -->
<id_info>
<org_study_id>3370-2(-4)</org_study_id>
<nct_id>NCT00753818</nct_id>
<nct_alias>NCT00222157</nct_alias>
</id_info>
<brief_title>Developmental Effects of Infant Formula Supplemented With LCPUFA</brief_title>
<sponsors>
<lead_sponsor>
<agency>Mead Johnson Nutrition</agency>
<agency_class>Industry</agency_class>
</lead_sponsor>
</sponsors>
<source>Mead Johnson Nutrition</source>
<oversight_info>
<authority>United States: Institutional Review Board</authority>
</oversight_info>
<brief_summary>
<textblock>
The purpose of this study is to compare the effects on visual development, growth, cognitive
development, tolerance, and blood chemistry parameters in term infants fed one of four study
formulas containing various levels of DHA and ARA.
</textblock>
</brief_summary>
<overall_status>Completed</overall_status>
<phase>N/A</phase>
<study_type>Interventional</study_type>
<study_design>N/A</study_design>
<primary_outcome>
<measure>visual development</measure>
</primary_outcome>
<secondary_outcome>
<measure>Cognitive development</measure>
</secondary_outcome>
<number_of_arms>4</number_of_arms>
<condition>Cognitive Development</condition>
<condition>Growth</condition>
<arm_group>
<arm_group_label>1</arm_group_label>
<arm_group_type>Experimental</arm_group_type>
</arm_group>
<arm_group>
<arm_group_label>2</arm_group_label>
<arm_group_type>Experimental</arm_group_type>
</arm_group>
<arm_group>
<arm_group_label>3</arm_group_label>
<arm_group_type>Experimental</arm_group_type>
</arm_group>
<arm_group>
<arm_group_label>4</arm_group_label>
<arm_group_type>Other</arm_group_type>
<description>Control</description>
</arm_group>
<intervention>
<intervention_type>Other</intervention_type>
<intervention_name>DHA and ARA</intervention_name>
<description>various levels of DHA and ARA</description>
<arm_group_label>1</arm_group_label>
<arm_group_label>2</arm_group_label>
<arm_group_label>3</arm_group_label>
</intervention>
<intervention>
<intervention_type>Other</intervention_type>
<intervention_name>Control</intervention_name>
<arm_group_label>4</arm_group_label>
</intervention>
</clinical_study>
What I would like is a dictionary that looks like this:
{
'id_info_org_study_id': '3370-2(-4)',
'id_info_nct_id': 'NCT00753818',
'id_info_nct_alias': 'NCT00222157',
'brief_title': 'Developmental Effects...'
}
Is this possible with lxml - or indeed any other Python library?
UPDATE:
I ended up doing it this way:
response = requests.get(url)
tree = lxml.etree.fromstring(response.content)
mydict = self._recurse_over_nodes(tree, None, {})

def _recurse_over_nodes(self, tree, parent_key, data):
    for branch in tree:
        key = branch.tag
        if branch.getchildren():
            if parent_key:
                key = '%s_%s' % (parent_key, key)
            data = self._recurse_over_nodes(branch, key, data)
        else:
            if parent_key:
                key = '%s_%s' % (parent_key, key)
            if key in data:
                data[key] = data[key] + ', %s' % branch.text
            else:
                data[key] = branch.text
    return data
Use the iter method.
http://lxml.de/api/lxml.etree._Element-class.html#iter
Here is a functioning example.
#!/usr/bin/python
from lxml import etree
xml='''
<book>
<chapter id="113">
<sentence id="1" drums='Neil'>
<word id="128160" bass='Geddy'>
<POS Tag="V"/>
<grammar type="STEM"/>
<Aspect type="IMPV"/>
<Number type="S"/>
</word>
<word id="128161">
<POS Tag="V"/>
<grammar type="STEM"/>
<Aspect type="IMPF"/>
</word>
</sentence>
<sentence id="2">
<word id="128162">
<POS Tag="P"/>
<grammar type="PREFIX"/>
<Tag Tag="bi+"/>
</word>
</sentence>
</chapter>
</book>
'''
filename='/usr/share/sri/configurations/saved/test1.xml'
if __name__ == '__main__':
    root = etree.fromstring(xml)
    # iter will return every node in the document
    for node in root.iter('*'):
        # nodes of length zero are leaf nodes
        if 0 == len(node):
            print node
Here is the output:
$ ./verifyXmlWithDirs.py
<Element POS at 0x176dcf8>
<Element grammar at 0x176da70>
<Element Aspect at 0x176dc20>
<Element Number at 0x176dcf8>
<Element POS at 0x176dc20>
<Element grammar at 0x176dcf8>
<Element Aspect at 0x176da70>
<Element POS at 0x176da70>
<Element grammar at 0x176dc20>
<Element Tag at 0x176dcf8>
Supposing you have done getroot(), something simple like the code below can construct a dictionary like the one you expected:
import lxml.etree

tree = lxml.etree.parse('sample_ctgov.xml')
root = tree.getroot()
d = {}
for node in root:
    key = node.tag
    if node.getchildren():
        for child in node:
            key += '_' + child.tag
            d.update({key: child.text})
    else:
        d.update({key: node.text})
This should do the trick; it is not optimised and does not recursively hunt through all child nodes, but you get the idea of where to start.
Try this out:
from xml.etree import ElementTree

def crawl(root, prefix='', memo={}):
    new_prefix = root.tag
    if len(prefix) > 0:
        new_prefix = prefix + "_" + new_prefix
    for child in root.getchildren():
        crawl(child, new_prefix, memo)
    if len(root.getchildren()) == 0:
        memo[new_prefix] = root.text
    return memo

e = ElementTree.parse("data.xml")
nodes = crawl(e.getroot())
for k, v in nodes.items():
    print k, v
crawl initially takes in the root of an XML tree. It then walks all of its children recursively, keeping track of all the tags it passed through to get there (that's the whole prefix thing). When it finally finds an element with no children, it saves that data in memo.
Part of the output:
clinical_study_intervention_intervention_name Control
clinical_study_phase N/A
clinical_study_arm_group_arm_group_type Other
clinical_study_id_info_nct_id NCT00753818
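One caveat worth flagging (my note, not from the original answer): memo={} is a mutable default argument, so calling crawl() several times in the same process keeps accumulating results into the same dictionary. Passing a fresh dictionary explicitly avoids that:
nodes = crawl(e.getroot(), memo={})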

How to parse xml string having deep structures using python

A similar question was asked here (Python XML Parsing), but I could not get to the content I am interested in.
I need to extract all the information enclosed within the patent-classification tag whenever the classification-scheme tag's scheme value is CPC. There are multiple such elements, and they are enclosed inside the patent-classifications tag.
In the example given below, there are three such values: C 07 K 16 22 I , A 61 K 2039 505 A and C 07 K 2317 21 A
<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="/3.0/style/exchange.xsl"?>
<ops:world-patent-data xmlns="http://www.epo.org/exchange" xmlns:ops="http://ops.epo.org" xmlns:xlink="http://www.w3.org/1999/xlink">
<ops:meta name="elapsed-time" value="21"/>
<exchange-documents>
<exchange-document system="ops.epo.org" family-id="39103486" country="US" doc-number="2009234106" kind="A1">
<bibliographic-data>
<publication-reference>
<document-id document-id-type="docdb">
<country>US</country>
<doc-number>2009234106</doc-number>
<kind>A1</kind>
<date>20090917</date>
</document-id>
<document-id document-id-type="epodoc">
<doc-number>US2009234106</doc-number>
<date>20090917</date>
</document-id>
</publication-reference>
<classifications-ipcr>
<classification-ipcr sequence="1">
<text>C07K 16/ 44 A I </text>
</classification-ipcr>
</classifications-ipcr>
<patent-classifications>
<patent-classification sequence="1">
<classification-scheme office="" scheme="CPC"/>
<section>C</section>
<class>07</class>
<subclass>K</subclass>
<main-group>16</main-group>
<subgroup>22</subgroup>
<classification-value>I</classification-value>
</patent-classification>
<patent-classification sequence="2">
<classification-scheme office="" scheme="CPC"/>
<section>A</section>
<class>61</class>
<subclass>K</subclass>
<main-group>2039</main-group>
<subgroup>505</subgroup>
<classification-value>A</classification-value>
</patent-classification>
<patent-classification sequence="7">
<classification-scheme office="" scheme="CPC"/>
<section>C</section>
<class>07</class>
<subclass>K</subclass>
<main-group>2317</main-group>
<subgroup>92</subgroup>
<classification-value>A</classification-value>
</patent-classification>
<patent-classification sequence="1">
<classification-scheme office="US" scheme="UC"/>
<classification-symbol>530/387.9</classification-symbol>
</patent-classification>
</patent-classifications>
</bibliographic-data>
</exchange-document>
</exchange-documents>
</ops:world-patent-data>
Install BeautifulSoup if you don't have it:
$ easy_install BeautifulSoup4
Try this:
from bs4 import BeautifulSoup

xml = open('example.xml', 'rb').read()
bs = BeautifulSoup(xml)

# find patent-classification
patents = bs.findAll('patent-classification')

# filter the ones with CPC
for pa in patents:
    if pa.find('classification-scheme', {'scheme': 'CPC'}):
        print pa.getText()
You can use the Python standard library xml module:
import xml.etree.ElementTree as ET

root = ET.parse('a.xml').getroot()
for node in root.iterfind(".//{http://www.epo.org/exchange}classification-scheme[@scheme='CPC']/.."):
    data = []
    for d in node.getchildren():
        if d.text:
            data.append(d.text)
    print ' '.join(data)
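For reference, run against the sample document above this should print something along these lines (the empty classification-scheme elements contribute no text, so only the classification parts appear):
C 07 K 16 22 I
A 61 K 2039 505 A
C 07 K 2317 92 A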

Categories

Resources