I have data of type bytes from the request body, like the following:
b'0\x80\x06\t*\x86H\x86\xf7\r\x01\x07\x02\xa0\x800\x80\x02\x01\x011\x0b0\t\x06\x05+\x0e\x03\x02\x1a\x05\x000\x80\x06\t*\x86H\x86\xf7\r\x01\x07\x01\xa0\x80$\x80\x04\x82\x04H<?xml version="1.0" encoding="UTF-8"?>\n<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">\n<plist version="1.0">\n<dict>\n<key>PayloadContent</key>\n<dict>\n <key>URL</key>\n<string>***</string>\n <key>DeviceAttributes</key>\n<array>\n<string>UDID</string>\n<string>DEVICE_NAME</string>\n <string>VERSION</string>\n<string>PRODUCT</string>\n<string>MAC_ADDRESS_EN0</string>\n <string>IMEI</string>\n <string>ICCID</string>\n </array>\n</dict>\n<key>PayloadOrganization</key>\n<string>Flybuilds</string>\n<key>PayloadDisplayName</key>\n<string>Device Information (UDID)</string>\n<key>PayloadVersion</key>\n<integer>1</integer>\n<key>PayloadUUID</key>\n<string>*****</string>\n<key>PayloadIdentifier</key>\n<string>******</string>\n<key>PayloadDescription</key>\n<string>Knowing the UDID of my iOS device</string>\n<key>PayloadType</key>\n<string>Profile Service</string>\n</dict>\n</plist>\n\x00\x00\x00\x00\x00\x00\xa0\x82\n#0
Is it possible to extract the data between '<?xml version' and '</plist>' and write it to a file in Python?
(We need to extract the XML part from the bytes data.)
Of course it's possible to extract it, as long as you know the start and end signatures of the content you want.
# stream is the variable holding the raw data stream (bytes);
# not repeated here for brevity
start_signature = b'<?xml'
stop_signature = b'</plist>'
xml_start = stream.find(start_signature)
# search for the end marker after the start, and include its length in the slice
xml_stop = stream.find(stop_signature, xml_start) + len(stop_signature)
xml_data = stream[xml_start:xml_stop]
While I think this answers the implied question of 'how' to find the data given the start and end, the downside of this solution is that if the xml changes, the script may break. This concern may not be an issue if you know the data will be consistent each time.
If you can learn the meaning of the other bytes in the data (the prefix looks like a DER-encoded PKCS#7/CMS SignedData wrapper), you would likely be able to determine the start position and length of the XML without having to know the precise XML contents.
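Putting the pieces together, here is a minimal sketch that also writes the extracted slice to a file, with guards for the case where a signature is missing. The stream bytes below are a shortened stand-in for the real request body, and the output filename payload.plist is arbitrary:

```python
# Shortened stand-in bytes for the real request body.
stream = (b'0\x80\x06\t*\x86H\x86\xf7\r\x01\x07\x02'
          b'<?xml version="1.0" encoding="UTF-8"?>\n'
          b'<plist version="1.0"><dict/></plist>'
          b'\x00\x00\x00\x00')

start_signature = b'<?xml'
stop_signature = b'</plist>'

xml_start = stream.find(start_signature)
if xml_start == -1:
    raise ValueError('start signature not found in stream')

# Search for the end marker only after the start position.
xml_stop = stream.find(stop_signature, xml_start)
if xml_stop == -1:
    raise ValueError('stop signature not found in stream')

xml_data = stream[xml_start:xml_stop + len(stop_signature)]

# Binary mode: the slice is still bytes.
with open('payload.plist', 'wb') as f:
    f.write(xml_data)
```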
With text given in this link, need to extract data as follows
Each record starts with YYYY Mmm dd hh:mm:ss.ms, for example 2019 Aug 31 09:17:36.550
Each record has a header starting from line #1 above and ending with a blank line
The record data is contained in lines below Interpreted PDU:
The records of interest are the ones with record header first line having 0xB821 NR5G RRC OTA Packet -- RRC_RECONFIG
Is it possible to extract the selected record headers and the text below #3 above as an array of nested JSON in the format below? (Snipped for brevity; I really need the entire text data as JSON.)
data = [{"time": "2019 Aug 31 09:17:36.550", "PDU Number": "RRC_RECONFIG Message", "Physical Cell ID": 0, "rrc-TransactionIdentifier": 1, "criticalExtensions rrcReconfiguration": {"secondaryCellGroup": {"cellGroupId": 1, "rlc-BearerToAddModList": [{"logicalChannelIdentity": 1, "servedRadioBearer drb-Identity": 2, "rlc-Config am": {"ul-AM-RLC": {"sn-FieldLength": "size18", "t-PollRetransmit": "ms40", "pollPDU": "p32", "pollByte": "kB25", "maxRetxThreshold": "t32"}, "dl-AM-RLC": {"sn-FieldLength": "size18", "t-Reassembly": "ms40", "t-StatusProhibit": "ms20"}}}]}} }, next records data here]
Note that the input text is the parsed output of ASN.1 data per the specifications in 3GPP 38.331 section 6.3.2. I'm not sure normal Python text parsing is the right way to handle this, or should one use something like the asn1tools library? If so, an example usage on this data would be helpful.
Unfortunately, it is unlikely that somebody will come up with a straight answer to your question (which is very similar to How to extract data from asn1 data file and load it into a dataframe?).
The text of your link is obviously a log file where ASN.1 value notation was used to make the messages human readable. So trying to decode these messages from their textual form is unusual and you will probably not find tooling for that.
In theory, the generic method would be this one:
Gather the ASN.1 DEFINITIONS (schema) that were used to create the ASN.1 messages
Compile these DEFINITIONS with an ASN.1 tool (aka compiler) to generate an object model in your favorite language (Python). The tool would provide the specific code to encode and decode ... here you would use the ASN.1 value decoders.
Add your custom code (either to the object model or plugged in the ASN.1 compiler) to encode your JSON objects
As you see, it is a very long shot (I can expand if this explanation is too short or unclear).
Unless your task is repetitive and/or the number of messages is big, try the methods you already know (manual search, regex) to search the log file.
If you want to see what it takes to create ASN.1 tools, you can find a few (not that many, as ASN.1 is neither particularly young nor popular). Check out https://github.com/etingof/pyasn1 (Python).
I created my own for fun in Java and I am adding the ASN.1 value decoders to illustrate my answer: https://github.com/yafred/asn1-tool (branch text-asn-value-support)
Given that you have a textual representation of the input data, you might take a look at the parse library. This allows you to find a pattern in a string and assign contents to variables.
Here is an example for extracting the time, PDU Number and Physical Cell ID data fields:
import parse

with open('w9s2MJK4.txt', 'r') as f:
    log_text = f.read()  # renamed from 'input', which shadows the built-in

data = []
pattern = parse.compile('\n{year:d} {month:w} {day:d} {hour:d}:{min:d}:{sec:d}.{ms:d}{}Physical Cell ID = {pcid:d}{}PDU Number = {pdu:w} {pdutype:w}')
for s in pattern.findall(log_text):
    record = {}
    record['time'] = '{} {} {} {:02d}:{:02d}:{:02d}.{:03d}'.format(s.named['year'], s.named['month'], s.named['day'], s.named['hour'], s.named['min'], s.named['sec'], s.named['ms'])
    record['PDU Number'] = '{} {}'.format(s.named['pdu'], s.named['pdutype'])
    record['Physical Cell ID'] = s.named['pcid']
    data.append(record)
Since you have quite a complicated structure and a large number of data fields, this might become a bit cumbersome, but personally I would prefer this approach over regular expressions. Maybe there is also a smarter method to parse the date (which unfortunately seems not to have one of the standard formats supported by the library).
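For the date specifically, the timestamp format in the log does map directly onto strptime directives (%b handles the abbreviated month name), so the standard library can take over once the raw string is captured:

```python
from datetime import datetime

# '2019 Aug 31 09:17:36.550' matches '%Y %b %d %H:%M:%S.%f';
# %f accepts the 3-digit millisecond field and right-pads to microseconds.
ts = datetime.strptime('2019 Aug 31 09:17:36.550', '%Y %b %d %H:%M:%S.%f')
print(ts.isoformat(timespec='milliseconds'))  # 2019-08-31T09:17:36.550
```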
I need to make a PUT request with Python's requests library. The XML has to be a string, but I need to send the value as an int.
import requests
bodyXML = """<?xml version='1.0' encoding='utf-8'?><parameters><value>6</value></parameters>"""
bodyHeader = {'Content-Type': 'application/xml'}
p = requests.put('http://test.pt:1891/cash', data=bodyXML, headers=bodyHeader)
The response is:
<?xml version = "1.0" encoding = "UTF-8"?><reply-text>Value must be numeric.</reply-text>
That's an answer from the program, so I can reach the target and get the response.
If I change 6 to """ + 6 + """, I receive an error in Python
TypeError: must be str, not int
How can I camouflage the integer as a string?
When you send data as part of the HTTP request body, that data will always be sent as a string (technically, it's just bytes that happen to encode a string). Furthermore XML is a text based format so that also has to be a string.
If the server does not accept the data you send, then you should talk to the service provider to ask them how to properly format the data in order for the server to accept it.
It is likely that the XML structure requires a different tag for number formats or even requires a special attribute to specify the type.
Since that is specific to your service, we won't be able to answer this for you though.
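As a side note on the TypeError in the question: it comes from concatenating a str with an int. Converting the number with str() fixes the concatenation itself, whatever the server then does with the value:

```python
value = 6
# str(value) makes the int concatenable with the surrounding string pieces
bodyXML = ("<?xml version='1.0' encoding='utf-8'?>"
           "<parameters><value>" + str(value) + "</value></parameters>")
print(bodyXML)
```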
As stubborn as a programmer can be, I found a way to send the values as they are, using lxml to build the XML and prepending the string "post_xml=" to the XML before sending. Works just fine:
import requests
from lxml import etree

root = etree.Element("parameters")
child = etree.SubElement(root, "value")
child.text = str(6)  # element text must be a string
xml = 'post_xml=' + etree.tostring(root, encoding='unicode', method='xml')
p = requests.request("PUT", 'http://test.pt:1891/cash', data=xml)
print(p.text)
The value 6 is accepted as an integer by the server.
Easy, assuming you are putting the + 6 + in the place of the 6 in
bodyXML = """<?xml version='1.0' encoding='utf-8'?><parameters><value>6</value></parameters>"""
The issue is that the triple-quoted string literal terminates before the 6, and an int cannot be concatenated to a str. To fix it, convert the int to a string, either by putting it in quotation marks:
bodyXML = """<?xml version='1.0' encoding='utf-8'?><parameters><value>""" + "6" + """</value></parameters>"""
or by converting it with str(), as in str(6).
I've downloaded some web pages with requests and saved the content in a postgres database [in a text field] using Django's ORM. For some pseudocode of what's going on, here ya go:
art = Article()
page = requests.get("http://example.com")
art.raw_html = page.content
art.save()
I verified that page.content is a bytes object, and I guess I assumed that this object would automatically be decoded upon saving, but it doesn't seem to be... it has been converted to some weird string representation of a bytes object, ostensibly by Django. It looks like this in the interpreter when I call art.raw_html:
'b\'<!DOCTYPE html>\\n<html lang="en" class="pb-page"
And if I call it with print I get this:
b'<!DOCTYPE html>\n<html lang="en" class="pb-page"
And for the life of me I can't re-encode it to a bytes object, even if I trim off the leading b' and trailing '.
I feel like there's an easy solution to this and I feel like an idiot... but after lots of experiments and googling, I'm not figuring it out.
Incidentally, if I manually copy what's returned from the print statement (like with my cursor), I can convert the clipboard contents back to a bytes object just fine and then decode it into some readably-formatted html.
Clearly there is a better way. (And yes, going forward I'll stop saving the content like this in the first place.)
You can use eval or, preferably, ast.literal_eval (eval is unsafe on untrusted input) as below.
data = "b'gAAAAABc1arg48DmsOwQEbeiuh-FQoNSRnCOk9OvXXOE2cbBe2A46gmP6SPyymDft1yp5HsoHEzXe0KljbsdwTgPG5jCyhMmaA=='"
eval(data)
b'gAAAAABc1arg48DmsOwQEbeiuh-FQoNSRnCOk9OvXXOE2cbBe2A46gmP6SPyymDft1yp5HsoHEzXe0KljbsdwTgPG5jCyhMmaA=='
Using ast.literal_eval
import ast
ast.literal_eval(data)
Thanks to @juanpa.arrivillaga; I just added it as an answer.
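Applied to the field from the question, a sketch (the raw string below is a shortened stand-in for the stringified bytes stored in art.raw_html):

```python
import ast

# Shortened stand-in for the stored value: a str that *looks like* a bytes literal.
raw = 'b\'<!DOCTYPE html>\\n<html lang="en">\''

restored = ast.literal_eval(raw)   # back to a real bytes object
html = restored.decode('utf-8')    # then decode to text
print(html.splitlines()[0])        # <!DOCTYPE html>
```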
I'm having trouble parsing specific body parts of MIME messages.
I have an email client web interface. I want to allow the user to download the attachments of an email. In the past, each time I wanted to download an attachment I would make a call to the IMAP server with the argument RFC822 to obtain the whole message, that I could easily parse with Python.
However, this is not efficient and I need a way to obtain just the required attachment. I'm using the alternative of making a call to the IMAP server with the BODY[1], BODY[2], etc index of the specific bodypart.
When I make this IMAP call I obtain back the correct body part (when I make a call to BODYSTRUCTURE, the number of bytes in the part I'm looking for adds up, so I'm definitely obtaining the correct part).
However, I cannot parse this body part into something useable, or save it for that matter.
A specific example: I make a call to obtain the BODY[1] of an email and obtain back
('4 (UID 26776 BODY[2] {5318}', '/9j/4AAQSkZJRgABAQEAYABgAAD/2wBDAAoHBwkHBgoJCAkLCwoMDxkQDw4ODx4WFxIZJCAmJSMg\r\nIyIoLTkwKCo2KyIjMkQyNjs9QEBAJjBGS0U+Sjk/QD3/2wBDAQsLCw8NDx0QEB09KSMpPT09PT09\r\nPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT3/wAARCABRAQIDASIA\r\nAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQA\r\nAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3\r\nODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWm\r\np6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEA\r\nAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSEx\r\nBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElK\r\nU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3\r\nuLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD2aiii\r\ngAooooAKKKKACorm4jtLaS4mbbFEpdzjOAOtS1neIf8AkXdR/wCvaT/0E1UVeSRM3yxbRbtbuC9t\r\n1ntZUliboyHIqauA+F+f+JkMnH7vj/vqu/rSvS9lUcE9jLDVvbUlNq1wooorE3CiiigAooooAKKK\r\nKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKzvEP8AyLuo/wDX\r\ntJ/6Cabruu23h/T2urlJZAPuxxLuZv8A63vXlWo+M/EXjC9FnpMUsMRORBb8sR6u3p+QrehSlJ83\r\nRGFerGKcerOp+F/XUv8Atn/7NXfE4GT0FcN4fx4Ss5jqTwS6jPt3w2vRcZxuPQHnnH5VDe6vqOuS\r\n+RGG2t0hi7/X1/HiniZqpVckThKbpUVCW534IYAggg8gilqG0QxWcKMMMsaqR6YFTVznSFFFFABR\r\nRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFITgZPSuen8daJDqLW\r\na3Xmug3SPENyRjIHJ+p7ZqowlLSKuTKcYq8nY6Kio1uIngEyyIYmGQ4PBH1qCLVLSa5Nuky+bjIU\r\n8bh7etSUcn8TGaOz05kYqwlYgg4I4qbRiLXw7ZG3SOFrmLzJmjQKZGyeSRUHxO/48dP/AOurfyqb\r\nTf8AkXNK/wCvf+prtn/usPVnn0/98n6Ikh8KzXt9LPdP5UDOWAXlmH9K6Wy0+20+Ly7WJUHc9z9T\r\n3pXdotOZ0OGWLcPqBXF6V4m1S4k0xpLvzPtTESRvaeWijBPyv0Y8dBXPClKabXQ6qlaNNpPqd5RX\r\nKWuvX8ug6HdPIhlvLsRTHYMFSW6Dt0FM0HxBqF/faZHcSIyXEM7yAIBkq5A+nFU8PNJvt/wf8iVi\r\nYNpd7fjb/M66iuZvrzVZ/Ed3ZWV7HbRQWqzDdAH3E5461lr4w1E2clyRESumLcBNvHmGTZn1x7UR\r\nw8pK6/q4SxMIuzT/AOGO6orjL7XdW0QzR3FzDds1ibmNvJ2
bGBAxgHkc1saXJeJewx3+rxTySwea\r\nLcW4Q445yD0HSlKi4q9xxxCk+VJ/h/mbdFcodS1G6uNVnOqw2FnY3Bh+a3D8DHJJPqa09B1G4v7j\r\nVEuHVlt7too8Lj5QBSlSaVxxrxk7W/r+kbFFcW+v6t9in1ZbiEW0V79n+y+T1XcFzvznPNVF8T6q\r\nS8gvBuF55Iia0xGV345l6A4q1hpvqZvFwW6Z39FcRdeJr+P7VeJfWqrBeGBbEoNzoGC7s5znv0rf\r\n0HUbi/l1MXDBhb3jwx4XGFGMD3qZUJRjzMuGIhOXKv6/qxsUVzWva9d6XqsqQ7Gij057gIy9XDYH\r\nPpVOTXNV0lomu7iG8W4sZLlR5Pl+WyqGA4PI5ojQk0muopYmEW0+h2NFcjBrOq2U+nteXMN1HfWs\r\nk+wQ7PLKpvABB5Haq665rMFnpt5LdQzLqKviEQBfKO0kYOeenen9Xl3X9X/yF9aj2f8AVv8ANHbU\r\nVysXiG6e18PyCVHa7jd7gBR821CT9ORVTT/EmoPJpc0t/azi/Yq9skYDQcEjkHPbvR9Xn/Xz/wAh\r\n/Woaf12/zO1orkdI8R6tLpVpdXsFmYZiVEzT7Gc5PATHXjpntUdnrmriHSL+4uYZINSm8s24h2+X\r\nnOCGzk9O9H1eSuhLFQaTSev9fqdlRXB6Z4m1K4Gnyf2lbXM9xcCOSyWEB0XJy2Qc9Bn8as6T4gvp\r\n9Rgj1K+kt5JJin2Y2JCnk4USU5Yaav5Cji4Stbr6f5/8E7OikornOoWoLyWWC0llt4DcSopKxBgp\r\nc+mTwKnrP1LWrPTFInkzJ2jTlj/h+NAHkPiLxL4m8Rak2lyW9xbEnH2GFCGP+8erfyq7F4FvdA8P\r\n3ep6jIiSsixrbpztBdeWPTPHQV1g8Q3OoavDsVYIySMIPmIwTgt6e3SpPFF5JdeD71ZcEoY/m9cs\r\nK7qWIbnGEVZXRwV6CVOc5O7syj4SYnwlgkkLdsACeg2iofFhGn2cF8gkmldSBCi8gKfvZ9Km8I/8\r\nim3/AF+N/wCgiqXjpikGkMpKsElIIOCPmFDpqpinF92TGq6WDjNdkcoPEGu+IZ0SQrc28fAjkHyJ\r\n77uuffOa9JtvIXSbKG1mEqwQ7GI7H0NadnpFnqHh6xWeFQTAjb0G1gSAScj3qnZ+EDb3xka8byl+\r\n6EGGb2NY1qvN7iVkjpo0lH327tm5cyJFpEskgLIkBZgvUgLziuUtLPTrWSEhbyQWsgFtbTXWfnLB\r\nQQmOB82c+hrspII5rd4ZFDRupRl9QRjFVZNHs5m3Sxs5Awu6Rjs5B+Xn5eg6Y6VjGco6JmsqcZWc\r\nkc+ulWulXsDyWtz5UKyXSRG63xQ7cbiq+vzcVAunaUmlwXJkurcwRyGAQ3XzupO5hkDrk11v2C3I\r\nUOhfajRguxY7WxkEnrnA601dOt1tHtdrtA67SjyM3GMY5PAqvbT7k+wp/wApy0thpxa2nM2qCS5z\r\nbl1ucMwWQJ8x78t+VW9T0TSLEW8EkM4iuo1sSUfiNAdwJz/tAc+9bDaJYOWLQH5ju4dhtJYMSvPy\r\n/MAeMVLNplrcWwgnjMsYVlxIxY4YEHknPQmj20+4ewp/ynM3Fxp+rySzGxuJRFF9kIaUIGRpNoP4\r\nkA59DUxt7Tw7qMM5a6muBauQJ7oEKgK5Vc9TzwK310myQyFbdR5hUtgnnacj8jUzWsL3KzvGrSqp\r\nRWPYEg/zApe0la1x+yhfmtqcZfQ6ZdiS4eO+hh1Bt+xbkIkpDqhLA/d5INTQ2dvNfLNajUIzdXD7\r\n/KvNqF1zk8dRgf0rpRotgH3fZwfm3AFiQp3BuBnA+YA8VLHp9tE6tHCqlXaRcZ4Zup/Gn7WdrXF7\r\nCne9jlpLLTJLi2eOC8YXpS6S18/bCXbJyR+GahmsdMgeRZR
eNAswklhW8BQykb+F7jpz/hXULodg\r\ni4WAjGNpEjZTGcBTn5RyeBjrSLoGmLHsFnGRkHJyWzjHXr0o9tU7h7Cn/KZd1oWkDSWmltmIuZ0m\r\naTI8xWd1/i7AE9PTNU47SyfXrmO1lv4pPPMswW72ITk5IUDn7p49q6ePTreO0e2Cs0DrtKSOzjGM\r\nY5Jpkek2cRjMUbJsQINkjDKgk4bB+bknrnqaPaz7h7Gn2OfvZbHVLuKS7tLsSXlqIYxE4OYny2cd\r\nj8v61JfRaZdxFpYrkrZxC0G1wCVkAB/EVuPo9k6xgw48pFjjKsVKKvTBByOtMGh6eCuLcAKANoZs\r\nHGcEjOCRk8nml7SS2ZTpwd7o55b/AE+efS1FuzSQQbYB9oUpsZMEOR/FgdPerMOj2Wm6rAsUV1N9\r\nkj84JLcEx26kkfKO54NbJ0Wx3xusGx40CI0bshVRkAZBB7mppNPt5Z0mdW8xF2bg7DK9cHB5H1zR\r\n7Se1xeyhvY5nT4dPtbxmtrCYXF9GDCjSgqscgZjt/ufdOR9Kgt00m3h0xl05oHiKSQO0qKXDBgPM\r\nb8Dx7iupg0iyt3R44cNGRsJZm24BAAyeAAx46c0DSbIGEi3T9yAsecnaBnH8zR7WfcPYw7HJ2Mel\r\nB5LiGC6eCxDzfZ5LnKIwBJ2J0I54Oe9T/ZNK0u8siIdSfEp8i3eQlYWJAyEJ/wBr+ddO+mWryyO0\r\nZzKNsih2CuMbeVzg8cdKi/sLTyOYCzZyHZ2LA8YO4nPG0Y9MU/az7iVGmvsmEtppkQstNtrSdpYJ\r\nVmhLOFbdlyQzdcDaePpTLRLK4vrW7dtSnjSdNpnudyxyuMj5fbIGa6EaJYgDbCVYHO9XYPnLHO7O\r\nc/M3PfNPj0qziQJHAqqHWQAZ4ZQAp/AAUvaz7j9jDsW6KKKg0AjIIPQ1zV34NjluxJBcskbHLq/z\r\nEfQ/4101FAGbDpFpptjMttF85QgueWbj1rlNYu7ebSriwDl3mKZZOQu056967zqOawtX8LW17DM9\r\noqwXLL8pyQmfcD+lVB2kmiZpSi01c5HToZLO0aZLlLOyhY75pXwgOORj+I9OKx/EXijTddntrW2e\r\nRFtlZEmkTCyliM8clenGf0pn/CBeJ9W1U2t6iQQRHPmlv3IB7oB1P6+pr0Pw34G0rw4Fkij+0Xg6\r\n3EoBYf7o6L+HPvXYpQoy5+bmkccoSrx9ny2ibGkRtFo9lHIpV0gRWB7EKKuUUVxN3dztSsrBRRRS\r\nGFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQA\r\nneloooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKA\r\nCiiigD//2Q==\r\n').
This specific response corresponds to a JPEG image attachment.
I tried extracting the string representing the body part (so, I'm talking about the string starting in '/9j' and ending in '2Q==\r\n') and saving that to a file as a .jpg, but it's not a valid file.
I then thought that, as there are multiple instances of \r\n in that string, the string might be split with newline/carriage returns, so I split the string, stripped the \r\n, joined the substrings, and tried to save that to a file. Still not a valid JPEG file.
What can I do to try and parse this response?
Thank you.
You need to parse the BODYSTRUCTURE response to see what format the data is encoded in; see the IMAP RFC 3501, section 7.4.2. The 5th field is the content encoding:
['IMAGE', 'JPEG', ['NAME', 'image001.jpg'], '<image001.jpg#01CDE914.6E62F850>', None, 'BASE64', 5318, None, None, None]
The fields are, in order, the type and subtype (so image/jpeg in this case), body parameters (such as characterset, format-flowed, or the filename in this case), the attachment id, description, encoding, size, MD5 signature (if any), disposition and language.
In this case the data is base-64 encoded:
>>> imagedata = datastring.decode('base64')
>>> imagedata[:10]
'\xff\xd8\xff\xe0\x00\x10JFIF'
which looks like JPEG data to me.
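Note that .decode('base64') only exists on Python 2. On Python 3, the equivalent is the base64 module; conveniently, b64decode discards the \r\n line breaks that the IMAP response contains:

```python
import base64

# Shortened sample of the base64 payload, line break included;
# b64decode (with the default validate=False) discards the \r\n.
datastring = '/9j/4AAQSkZJRgAB\r\nAQEAYABgAAD/'
imagedata = base64.b64decode(datastring)
print(imagedata[:10])  # b'\xff\xd8\xff\xe0\x00\x10JFIF'
```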
I'm working on a data packet retrieval system which will take a packet and process its various parts, based on a system of tags [similar to HTML tags].
[Text-based files only, no binary files.]
Each part of the packet is contained between two identical tags, and here is a sample packet:
"<PACKET><HEAD><ID><ID><SEQ><SEQ><FILENAME><FILENAME><HEAD><DATA><DATA><PACKET>"
The entire packet is contained within the <PACKET><PACKET> tags.
All meta-data is contained within the <HEAD><HEAD> tags, and the filename of the file the packet is part of is contained within the, you guessed it, <FILENAME><FILENAME> tags.
Lets say, for example, a single packet is received and stored in a temporary string variable called sTemp.
How do you efficiently retrieve, for example, only the contents of a single pair of tags, for example the contents of the <FILENAME><FILENAME> tags?
I was hoping for such functionality as saying getTagFILENAME( packetX ), which would return the textual string contents of the <FILENAME><FILENAME> tags of the packet.
Is this possible using Python?
Any suggestions or comments appreciated.
If the packet format effectively uses XML-looking syntax (i.e., if the "closing tags" actually include a slash), the xml.etree.ElementTree module could be used.
This library is part of the Python standard library, starting in Py2.5. I find it a very convenient one for dealing with this kind of data. It provides many ways to read and to modify this kind of tree structure. Thanks to the generic nature of XML languages and to the XML awareness built into the ElementTree library, the packet syntax could evolve easily, for example to support repeating elements or element attributes.
Example:
>>> import xml.etree.ElementTree
>>> myPacket = '<PACKET><HEAD><ID>123</ID><SEQ>1</SEQ><FILENAME>Test99.txt</FILENAME></HEAD><DATA>spam and cheese</DATA></PACKET>'
>>> xt = xml.etree.ElementTree.fromstring(myPacket)
>>> wrk_ele = xt.find('HEAD/FILENAME')
>>> wrk_ele.text
'Test99.txt'
>>>
Something like this?
import re

def getPacketContent(code, packetName):
    match = re.search('<' + packetName + '>(.*?)<' + packetName + '>', code)
    return match.group(1) if match else ''
# usage
code = "<PACKET><HEAD><ID><ID><SEQ><SEQ><FILENAME><FILENAME><HEAD><DATA><DATA><PACKET>"
print( getPacketContent( code, 'HEAD' ) )
print( getPacketContent( code, 'SEQ' ) )
As mjv points out, there's not the least sense in inventing an XML-like format if you can just use XML.
But: If you're going to use XML for your packet format, you need to really use XML for it. You should use an XML library to create your packets, not just to parse them. Otherwise you will come to grief the first time one of your field values contains an XML markup character.
You can, of course, write your own code to do the necessary escaping, filter out illegal characters, guarantee well-formedness, etc. For a format this simple, that may be all you need to do. But going down that path is a way to learn things about XML that you perhaps would rather not have to learn.
If using an XML library to create your packets is a problem, you're probably better off defining a custom format (and I'd define one that didn't look anything like XML, to keep people from getting ideas) and building a parser for it using pyparsing.
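For completeness, a pyparsing sketch for the tag format exactly as given (identical opening and closing tags, no slash). The packet below is hypothetical, with values filled in, since the sample in the question has empty tags:

```python
import pyparsing as pp

def tag_expr(name):
    # Grammar for <NAME>content<NAME>, where the closing tag equals the opening tag.
    marker = pp.Suppress(pp.Literal('<' + name + '>'))
    return marker + pp.SkipTo('<' + name + '>')('content') + marker

# Hypothetical packet with values filled in between the tag pairs.
packet = '<PACKET><HEAD><ID>42<ID><SEQ>7<SEQ><FILENAME>Test99.txt<FILENAME><HEAD><DATA>spam<DATA><PACKET>'

match = tag_expr('FILENAME').searchString(packet)[0]
print(match['content'])  # Test99.txt
```

SkipTo stops at the next occurrence of the tag, so this breaks down if a field value can itself contain a tag-like marker, which is exactly the escaping problem discussed above.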