I have a list of dictionaries that contain other dictionaries inside them.
Dictionary:
[[{'id': 1, 'networkId': 'L_1111', 'name': 'VLAN1', 'applianceIp': '1.1.1.1', 'subnet': '1.1.1.0/24', 'fixedIpAssignments': {}, 'reservedIpRanges': [], 'dnsNameservers': 'upstream_dns', 'dhcpHandling': 'Run a DHCP server', 'dhcpLeaseTime': '1 day', 'dhcpBootOptionsEnabled': False, 'dhcpOptions': [], 'interfaceId': '1', 'networkName': 'NETWORK1'}, {'id': 2, 'networkId': 'L_2222', 'name': 'VLAN2', 'applianceIp': '2.2.2.2', 'subnet': '2.2.2.0/24', 'fixedIpAssignments': {}, 'reservedIpRanges': [], 'dnsNameservers': 'upstream_dns', 'dhcpHandling': 'Do not respond to DHCP requests', 'interfaceId': '2', 'networkName': 'NETWORK2'}]]
JSON version:
[
[
{
"id": 1,
"networkId": "L_1111",
"name": "VLAN1",
"applianceIp": "1.1.1.1",
"subnet": "1.1.1.0/24",
"fixedIpAssignments": {},
"reservedIpRanges": [],
"dnsNameservers": "upstream_dns",
"dhcpHandling": "Run a DHCP server",
"dhcpLeaseTime": "1 day",
"dhcpBootOptionsEnabled": false,
"dhcpOptions": [],
"interfaceId": "1",
"networkName": "NETWORK1"
},
{
"id": 2,
"networkId": "L_2222",
"name": "VLAN2",
"applianceIp": "2.2.2.2",
"subnet": "2.2.2.0/24",
"fixedIpAssignments": {},
"reservedIpRanges": [],
"dnsNameservers": "upstream_dns",
"dhcpHandling": "Do not respond to DHCP requests",
"interfaceId": "2",
"networkName": "NETWORK2"
}
]
]
I am trying to export this to a CSV file, but I haven't figured out how to do it. I tried the pandas library, but it isn't giving me the output I'm looking for.
The CSV would look something like this:
id,networkId,name,applianceIp,subnet,fixedIpAssignments,reservedIpRanges,dnsNameservers,dhcpHandling,interfaceId,networkName
1,L_1111,VLAN1,1.1.1.1,1.1.1.0/24,{},[],upstream_dns,Run a DHCP server,1,NETWORK1
2,L_2222,VLAN2,2.2.2.2,2.2.2.0/24,{},[],upstream_dns,Do not respond to DHCP requests,2,NETWORK2
Expected Output:
id networkId name applianceIP subnet
1 L_1111 VLAN1 1.1.1.1 1.1.1.0/24
2 L_2222 VLAN2 2.2.2.2 2.2.2.0/24
I'd look at using pandas to convert the list to a DataFrame; then you'll be able to export that to a CSV file.
import pandas as pd
data = [[{'id': 1, 'networkId': 'L_1111', 'name': '1', 'applianceIp': '1.1.1.1', 'subnet': '1.1.1.0/24', 'fixedIpAssignments': {}, 'reservedIpRanges': [], 'dnsNameservers': 'upstream_dns', 'dhcpHandling': 'Run a DHCP server', 'dhcpLeaseTime': '1 day', 'dhcpBootOptionsEnabled': False, 'dhcpOptions': [], 'interfaceId': '1', 'networkName': '1'}, {'id': 2, 'networkId': 'L_2222', 'name': '2', 'applianceIp': '2.2.2.2', 'subnet': '2.2.2.0/24', 'fixedIpAssignments': {}, 'reservedIpRanges': [], 'dnsNameservers': 'upstream_dns', 'dhcpHandling': 'Do not respond to DHCP requests', 'interfaceId': '2', 'networkName': '2'}]]
df = pd.DataFrame(data[0])  # data[0] unwraps the outer list
df.to_csv("output.csv")
I used the csv module.
import json
import csv
import os

PATH = os.path.dirname(__file__)  # directory of this script

with open(os.path.join(PATH, "input.json"), "r") as file:  # access the data
    json_data = json.load(file)
json_data = json_data[0]  # unwrap the outer list

with open(os.path.join(PATH, "output.csv"), "w+", newline='') as file:
    writer = csv.writer(file)
    headers = [list(data.keys()) for data in json_data]  # split the data into
    rows = [list(data.values()) for data in json_data]   # headers and rows
    for i in range(len(json_data)):
        writer.writerow(headers[i])  # write a header row before each record,
        writer.writerow(rows[i])     # since the records have different keys
If you don't want headers, just remove the line writer.writerow(headers[i]).
Here is the data I get as output:
id,networkId,name,applianceIp,subnet,fixedIpAssignments,reservedIpRanges,dnsNameservers,dhcpHandling,dhcpLeaseTime,dhcpBootOptionsEnabled,dhcpOptions,interfaceId,networkName
1,L_1111,VLAN1,1.1.1.1,1.1.1.0/24,{},[],upstream_dns,Run a DHCP server,1 day,False,[],1,NETWORK1
id,networkId,name,applianceIp,subnet,fixedIpAssignments,reservedIpRanges,dnsNameservers,dhcpHandling,interfaceId,networkName
2,L_2222,VLAN2,2.2.2.2,2.2.2.0/24,{},[],upstream_dns,Do not respond to DHCP requests,2,NETWORK2
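If you would rather have a single header row even though the records don't all share the same keys, csv.DictWriter is one option; a sketch, assuming the same json_data and PATH as above:

fieldnames = []
for record in json_data:
    for key in record:
        if key not in fieldnames:
            fieldnames.append(key)  # union of all keys, in first-seen order

with open(os.path.join(PATH, "output.csv"), "w+", newline='') as file:
    writer = csv.DictWriter(file, fieldnames=fieldnames, restval="")
    writer.writeheader()
    writer.writerows(json_data)  # missing keys are filled with restval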
If you use a pandas DataFrame, you can easily write a CSV file; each DataFrame column is saved as a separate column in the CSV file.
df.to_csv(r'myData.csv', sep=';', encoding="utf-8")
After writing a Python script to request some data from a server, I get the response in the following structure:
{
'E_AXIS_DATA': {
'item': [
{
'AXIS': '000',
'SET': {
'item': [
{
'TUPLE_ORDINAL': '000000',
'CHANM': '0002',
'CAPTION': 'ECF',
'CHAVL': '0002',
'CHAVL_EXT': None,
'TLEVEL': '00',
'DRILLSTATE': None,
'ATTRIBUTES': None
},
{...
Apparently it's not JSON.
After running the following command:
results = client.service.RRW3_GET_QUERY_VIEW_DATA("/server")
df = pd.read_json(results)
I get the following output, meaning that the format is not being accepted as JSON:
ValueError: Invalid file path or buffer object type: <class 'zeep.objects.RRW3_GET_QUERY_VIEW_DATAResponse'>
Any help is welcome.
Thanks
Pandas has a read_json() function that can do the trick:
import pandas as pd
json_string = '{"content": "a string containing some JSON...." ... etc... }'
df = pd.read_json(json_string)
# Now you can do whatever you like with your dataframe
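Note that in the question's case, results is a zeep response object rather than a JSON string, so it would first have to be converted to plain Python data; zeep ships a helper for that. A sketch, assuming the same results object as in the question (json_normalize requires a reasonably recent pandas):

from zeep.helpers import serialize_object

plain = serialize_object(results)  # zeep object -> nested OrderedDicts
df = pd.json_normalize(plain)      # flatten the nested dicts into columns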
my_dict has 1000 values; here is a sample:
{0: {'Id': 'd1', 'email': '122as#gmail.com', 'name': 'elpato'},
1: {'Id': 'd2', 'email': 'sss#gmail.com', 'name': 'petoka'},
2: {'Id': 'd3', 'email': 'abcd#gmail.com', 'name': 'hukke'},
3: {'Id': 'd4', 'email': 'bbsss#gmail.com', 'name': 'aetptoka'}}
The code below takes each name in my_dict and creates JSON data and a JSON file from it; the random data is generated using the faker library.
Running 1.py creates 4 JSON files, i.e. elpato.json, petoka.json, hukke.json, aetptoka.json.
Here is 1.py :
import subprocess
import json
from faker import Faker

fake = Faker('en_US')
for ind in [g['name'] for g in my_dict.values()]:
    sms = {
        "user_id": ind,
        "name": fake.name(),
        "email": fake.email(),
        "gender": "MALE",
        "mother_name": fake.name(),
        "father_name": fake.name()
    }
    f_name = '{}.json'.format(ind)
    print(f_name)
    with open(f_name, 'w') as fp:
        json.dump(sms, fp, indent=4)
For grabbing the emails:
for name in [v['email'] for v in my_dict.values()]:
    print(name)
I need to use the name and email loops with subprocess.
Output I need: the 4 JSON files created above should be loaded via f_name.
subprocess.call(["....","f_name(json file)","email"])
I need to loop the subprocess call so that it runs with both f_name and the email each time. Here it should loop 4 times, since 4 JSON files were created and there are 4 emails in the dict.
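A minimal sketch of that loop, assuming the my_dict shown above; "some_command" is a placeholder for whatever belongs in the "...." slot:

import subprocess

for entry in my_dict.values():
    f_name = '{}.json'.format(entry['name'])  # the file created by 1.py
    email = entry['email']
    subprocess.call(["some_command", f_name, email])  # placeholder command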
I read a record from a file and convert it into a dictionary. Later I convert that dictionary to JSON format so that I can try to convert it to an Avro schema.
Here is my code snippet so far:
import json
from avro import schema, datafile, io

def json_to_avro():
    fo = open("avro_record.txt", "r")
    data = fo.readlines()
    final_header = []
    final_rec = []
    for header in data[0:1]:
        header = header.strip("\n")
        header = header.split(",")
        final_header = header
    for rec in data[1:]:
        rec = rec.strip("\n")
        rec = rec.split(" ")
        rec = ' '.join(rec).split()
        final_rec = rec
    final_dict = dict(zip(final_header, final_rec))
    #print final_dict
    json_dumps = json.dumps(final_dict, ensure_ascii=False)
    #print json_dumps
    SCHEMA = schema.parse(json_dumps)

json_to_avro()
When I print final_dict, the output is:
{'TransportProtocol': 'udp', 'MSISDN': '+62696174735', 'ResponseCode': 'E6%B8%B8%E5%AE%89%E5%8D%93&pfid=139&ver=10.1.2.571&title=Air%20fighter_pakage.apk', 'GGSN IP': '202.89.193.185', 'MSTimeZone': '+0008', 'Numbers of time period': '1', 'Mime Type': 'audio/aac', 'EndTime': '1462251588', 'OutBound': '709', 'Inbound': '35', 'Method': 'GET', 'RAT': 'ph', 'Referer': 'ghijk', 'TAC': '35893783', 'UserAgent': '961', 'MNC': '02', 'OutPayload': '0', 'CI': '34301', 'StartTime': '1462251588', 'DestinationIP': 'ef50:5fcd:498e:c265:a37b:10ec:7984:c6a3', 'URL': 'http:///group1/M00/6F/B2/poYBAFYtlqiALni4AG51LNrVFEQ342.apk?pn=com.airfly.fightergame.en1949&caller=9game&m=kxV5msjNq6PPBXxz_cPqzg&t=1451175690&sid=1df9ab75-48c6-41a6-9b86-b0d98976378b&gid=628195&fz=7238956&pid=2&site=%E4%B9%9D%', 'SGSN IP': '202.89.204.5', 'InPayload': '100', 'Protocol': 'http', 'WebDomain': '3', 'Source IP': 'e5df:602a:5a83:eaf1:8049:23c4:0fb7:f78e', 'MCC': '515', 'LAC': '36202', 'FlushFlag': '0', 'APN': '.internet.globe.com.', 'DestinationPort': '80', 'SourcePort': '82', 'LineFormat': 'http7', 'IMSI': '515-02-040687823335'}
When I print json_dumps, the output is:
{"TransportProtocol": "udp", "MSISDN": "+62696174735", "ResponseCode":"E6%B%B8%E5%AE%89%E5%8D%93&pfid=139&ver=10.1.2.571title=Air%20fighter_pakage.apk", "GGSN IP": "202.89.193.185", "MSTimeZone": "+0008", "Numbers of time period": "1", "Mime Type": "audio/aac", "EndTime": "1462251588", "OutBound": "709", "Inbound": "35", "Method": "GET", "RAT": "ph", "Referer": "ghijk", "TAC": "35893783", "UserAgent": "961", "MNC": "02", "OutPayload": "0", "CI": "34301", "StartTime": "1462251588", "DestinationIP": "ef50:5fcd:498e:c265:a37b:10ec:7984:c6a3", "URL": "http:///group1/M00/6F/B2/poYBAFYtlqiALni4AG51LNrVFEQ342.apk?pn=com.airfly.fightergame.en1949&caller=9game&m=kxV5msjNq6PPBXxz_cPqzg&t=1451175690&sid=1df9ab75-48c6-41a6-9b86-b0d98976378b&gid=628195&fz=7238956&pid=2&site=%E4%B9%9D%", "SGSN IP": "202.89.204.5", "InPayload": "100", "Protocol": "http", "WebDomain": "3", "Source IP": "e5df:602a:5a83:eaf1:8049:23c4:0fb7:f78e", "MCC": "515", "LAC": "36202", "FlushFlag": "0", "APN": ".internet.globe.com.", "DestinationPort": "80", "SourcePort": "82", "LineFormat": "http7", "IMSI": "515-02-040687823335"}
This, I guess, is the JSON format which I further want to convert to an Avro schema. But
SCHEMA = schema.parse(json_dumps)
throws an exception:
Traceback (most recent call last):
File "convertToAvro.py", line 23, in <module>
json_to_avro()
File "convertToAvro.py", line 20, in json_to_avro
SCHEMA = schema.parse(json_dumps)
File "/usr/lib/python2.7/site-packages/avro/schema.py", line 785, in parse
return make_avsc_object(json_data, names)
File "/usr/lib/python2.7/site-packages/avro/schema.py", line 756, in make_avsc_object
raise SchemaParseException('No "type" property: %s' % json_data)
avro.schema.SchemaParseException: No "type" property: {u'TransportProtocol': u'udp', u'MSISDN': u'+62696174735', u'ResponseCode': u'E6%B8%B8%E5%AE%89%E5%8D%93&pfid=139&ver=10.1.2.571&title=Air%20fighter_pakage.apk', u'GGSN IP': u'202.89.193.185', u'EndTime': u'1462251588', u'Method': u'GET', u'Mime Type': u'audio/aac', u'OutBound': u'709', u'Inbound': u'35', u'Numbers of time period': u'1', u'RAT': u'ph', u'Referer': u'ghijk', u'TAC': u'35893783', u'UserAgent': u'961', u'MNC': u'02', u'OutPayload': u'0', u'CI': u'34301', u'DestinationPort': u'80', u'DestinationIP': u'ef50:5fcd:498e:c265:a37b:10ec:7984:c6a3', u'URL': u'http:///group1/M00/6F/B2/poYBAFYtlqiALni4AG51LNrVFEQ342.apk?pn=com.airfly.fightergame.en1949&caller=9game&m=kxV5msjNq6PPBXxz_cPqzg&t=1451175690&sid=1df9ab75-48c6-41a6-9b86-b0d98976378b&gid=628195&fz=7238956&pid=2&site=%E4%B9%9D%', u'SGSN IP': u'202.89.204.5', u'InPayload': u'100', u'Protocol': u'http', u'WebDomain': u'3', u'Source IP': u'e5df:602a:5a83:eaf1:8049:23c4:0fb7:f78e', u'MCC': u'515', u'MSTimeZone': u'+0008', u'FlushFlag': u'0', u'APN': u'.internet.globe.com.', u'StartTime': u'1462251588', u'SourcePort': u'82', u'LineFormat': u'http7', u'LAC': u'36202', u'IMSI': u'515-02-040687823335'}
Just in case, here is my input record:
Protocol,LineFormat,StartTime,EndTime,MSTimeZone,IMSI,MSISDN,TAC,MCC,MNC,LAC,CI,SGSNIP,GGSNIP,APN,RAT,WebDomain,SourceIP,DestinationIP,SourcePort,DestinationPort,TransportProtocol,FlushFlag,Numbers of time period,OutBound,Inbound,Method,URL,ResponseCode,UserAgent,MimeType,Referer,OutPayload,InPayload
http http7 1462251588 1462251588 +0008 515-02-040687823335 +62696174735 35893783 515 02 36202 34301 202.89.204.5 202.89.193.185 .internet.globe.com. ph 3 e5df:602a:5a83:eaf1:8049:23c4:0fb7:f78e ef50:5fcd:498e:c265:a37b:10ec:7984:c6a3 82 80 udp 0 1 709 35 GET http:///group1/M00/6F/B2/poYBAFYtlqiALni4AG51LNrVFEQ342.apk?pn=com.airfly.fightergame.en1949&caller=9game&m=kxV5msjNq6PPBXxz_cPqzg&t=1451175690&sid=1df9ab75-48c6-41a6-9b86-b0d98976378b&gid=628195&fz=7238956&pid=2&site=%E4%B9%9D% E6%B8%B8%E5%AE%89%E5%8D%93&pfid=139&ver=10.1.2.571&title=Air%20fighter_pakage.apk 961 audio/aac ghijk 0 100
This happens because the argument to the schema.parse() function has to be an Avro schema (not a record itself), as shown here (https://avro.apache.org/docs/1.8.0/gettingstartedpython.html):
schema = avro.schema.parse(open("user.avsc", "rb").read())
Since you pass a JSON record instead, it breaks.
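If the goal is to derive a schema from the record, one option is to build a record-type schema from the dict's keys and parse that instead. A minimal sketch, assuming every field is a string; "MyRecord" is a made-up name, and keys containing spaces (e.g. 'GGSN IP') must be renamed, since Avro field names may not contain spaces:

import json
from avro import schema

def schema_from_record(record):
    # One {"name": ..., "type": "string"} entry per key in the record.
    fields = [{"name": k.replace(" ", "_"), "type": "string"} for k in record]
    avsc = {"type": "record", "name": "MyRecord", "fields": fields}
    return schema.parse(json.dumps(avsc))

SCHEMA = schema_from_record(final_dict)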
I have a custom data file formatted like this:
{
data = {
friends = {
max = 0 0,
min = 0 0,
},
family = {
cars = {
van = "honda",
car = "ford",
bike = "trek",
},
presets = {
location = "italy",
size = 10,
travelers = False,
},
version = 1,
},
},
}
I want to collect the blocks of data, meaning the strings between each set of {}, while maintaining the hierarchy. This data is not typical JSON, so that is not a possible solution.
My idea was to create a class like so:
class Block:
    def __init__(self, header, children):
        self.header = header
        self.children = children
where I would then loop through the data line by line, 'somehow' collecting the necessary data, so my resulting output would look something like this:
Block("data = {}", [
Block("friends = {max = 0 0,\n min = 0 0,}", []),
Block("family = {version = 1}", [...])
])
In short, I'm looking for help on ways to parse this into useful data I can then easily manipulate; my approach is to break it into objects using the {} as dividers.
If anyone has suggestions on ways to better approach this, I'm open to ideas. Thank you.
So far I've only implemented these basic snippets of code:
class Block:
    def __init__(self, content, children):
        self.content = content
        self.children = children

def GetBlock(strArr=[]):
    print len(strArr)
    # blocks = []
    blockStart = "{"
    blockEnd = "}"

with open(filepath, 'r') as file:
    data = file.readlines()
    blocks = GetBlock(strArr=data)
You can create a to_block function that takes the lines from your file as an iterator and recursively creates a nested dictionary from those. (Of course you could also use a custom Block class, but I don't really see the benefit in doing so.)
def to_block(lines):
    block = {}
    for line in lines:
        if line.strip().endswith(("}", "},")):
            break
        key, value = map(str.strip, line.split(" = "))
        if value.endswith("{"):
            value = to_block(lines)
        block[key] = value
    return block
When calling it, you have to skip the first line, though. Also, evaluating the "leaves" to e.g. numbers or strings is left as an exercise to the reader (see the sketch further below).
>>> to_block(iter(data.splitlines()[1:]))
{'data': {'family': {'version': '1,',
'cars': {'bike': '"trek",', 'car': '"ford",', 'van': '"honda",'},
'presets': {'travelers': 'False,', 'size': '10,', 'location': '"italy",'}},
'friends': {'max': '0 0,', 'min': '0 0,'}}}
Or when reading from a file:
with open("data.txt") as f:
next(f) # skip first line
res = to_block(f)
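One way to handle those leaf values (a sketch: it strips the trailing comma, then falls back to the raw string for values like "0 0" that literal_eval cannot parse):

import ast

def parse_value(raw):
    raw = raw.rstrip(",")
    try:
        return ast.literal_eval(raw)  # numbers, booleans, quoted strings
    except (ValueError, SyntaxError):
        return raw  # e.g. the unquoted "0 0" values

In to_block, this would be applied in the non-recursive branch, i.e. block[key] = parse_value(value) whenever value is not a nested block.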
Alternatively, you can do some preprocessing to transform that string into a JSON(-ish) string and then use json.loads. However, I would not go all the way here but instead just wrap the values in "" (and replace the original " with ' before that); otherwise there is too much risk of accidentally turning a string with spaces into a list or similar. You can sort those out once you've created the JSON data.
>>> data = data.replace('"', "'")
>>> data = re.sub(r'= (.+),$', r'= "\1",', data, flags=re.M)
>>> data = re.sub(r'^\s*(\w+) = ', r'"\1": ', data, flags=re.M)
>>> data = re.sub(r',$\s*}', r'}', data, flags=re.M)
>>> json.loads(data)
{'data': {'family': {'version': '1',
'presets': {'size': '10', 'travelers': 'False', 'location': "'italy'"},
'cars': {'bike': "'trek'", 'van': "'honda'", 'car': "'ford'"}},
'friends': {'max': '0 0', 'min': '0 0'}}}
You can also do this with ast or json, with the help of regex substitutions.
import re
a = """{
data = {
friends = {
max = 0 0,
min = 0 0,
},
family = {
cars = {
van = "honda",
car = "ford",
bike = "trek",
},
presets = {
location = "italy",
size = 10,
travelers = False,
},
version = 1,
},
},
}"""
#with ast
a = re.sub("(\w+)\s*=\s*", '"\\1":', a)
a = re.sub(":\s*((?:\d+)(?: \d+)+)", lambda x:':[' + x.group(1).replace(" ", ",") + "]", a)
import ast
print ast.literal_eval(a)
#{'data': {'friends': {'max': [0, 0], 'min': [0, 0]}, 'family': {'cars': {'car': 'ford', 'bike': 'trek', 'van': 'honda'}, 'presets': {'travelers': False, 'location': 'italy', 'size': 10}, 'version': 1}}}
#with json
import json
a = re.sub(",(\s*\})", "\\1", a)
a = a.replace(":True", ":true").replace(":False", ":false").replace(":None", ":null")
print json.loads(a)
#{u'data': {u'friends': {u'max': [0, 0], u'min': [0, 0]}, u'family': {u'cars': {u'car': u'ford', u'bike': u'trek', u'van': u'honda'}, u'presets': {u'travelers': False, u'location': u'italy', u'size': 10}, u'version': 1}}}
I have around 10 EBS volumes attached to a single instance. Below is an example of lsblk output for some of them. Here we can't simply mount xvdf or xvdp to some location; the actual mount points are xvdf1, xvdf2, xvdp1, which are the partitions to be mounted. I want a script that would allow me to iterate through all the partitions under xvdf, xvdp, etc. using Python. I'm a newbie to Python.
[root@ip-172-31-1-65 ec2-user]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdf 202:80 0 35G 0 disk
├─xvdf1 202:81 0 350M 0 part
└─xvdf2 202:82 0 34.7G 0 part
xvdp 202:0 0 8G 0 disk
└─xvdp1 202:1 0 8G 0 part
If you have a relatively new lsblk, you can easily import its JSON output into a Python dictionary, which then opens up all possibilities for iteration.
# lsblk --version
lsblk from util-linux 2.28.2
For example, you could run the following command to gather all block devices and their children with their name and mount point. Use --help to get a list of all supported columns.
# lsblk --json -o NAME,MOUNTPOINT
{
"blockdevices": [
{"name": "vda", "mountpoint": null,
"children": [
{"name": "vda1", "mountpoint": null,
"children": [
{"name": "pv-root", "mountpoint": "/"},
{"name": "pv-var", "mountpoint": "/var"},
{"name": "pv-swap", "mountpoint": "[SWAP]"},
]
},
]
}
]
}
So you just have to pipe that output into a file and use Python's json parser, or run the command straight within your script, as the example below shows:
#!/usr/bin/python3.7
import json
import subprocess
process = subprocess.run("/usr/bin/lsblk --json -o NAME,MOUNTPOINT".split(),
                         capture_output=True, text=True)
# blockdevices is a dictionary with all the info from lsblk.
# Manipulate it as you wish.
blockdevices = json.loads(process.stdout)
print(json.dumps(blockdevices, indent=4))
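From there, iterating over all the partitions under each device, which is what the question asks for, is just a nested loop; a short sketch using the blockdevices dict from above:

# Walk the tree: each device may carry a "children" list of partitions.
for device in blockdevices["blockdevices"]:
    for child in device.get("children", []):
        print(device["name"], child["name"], child["mountpoint"])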
#!/usr/bin/python
# -*- coding: utf-8 -*-
import sys

def parse(file_name):
    result = []
    with open(file_name) as input_file:
        for line in input_file:
            temp_arr = line.split(' ')
            for item in temp_arr:
                if '└─' in item or '├─' in item:
                    result.append(item.replace('└─', '').replace('├─', ''))
    return result

def main(argv):
    if len(argv) != 1:  # expect exactly one argument: the input file
        print 'Usage: ./parse.py input_file'
        return
    result = parse(argv[0])
    print result

if __name__ == "__main__":
    main(sys.argv[1:])
The above is what you need. You can modify it to parse the output of lsblk better.
Usage:
1. Save the output of lsblk to a file.
E.g. run this command: lsblk > output.txt
2. python parse.py output.txt
I remixed minhhn2910's answer for my own purposes to work with encrypted partitions and labels, and to build the output as a tree-like dict object. I'll probably keep a more up-to-date version on GitHub as I hit edge cases, but here is the basic code:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import sys
import re
import pprint

def parse_blk(blk_filename):
    with open(blk_filename) as blk_file:
        disks = []
        for line in blk_file:
            if line.startswith('NAME'):  # skip header line
                continue
            blk_list = re.split('\s+', line)
            node_type = blk_list[5]
            node_size = blk_list[3]
            if node_type in set(['disk', 'loop']):
                # new disk
                disk = {'name': blk_list[0], 'type': node_type, 'size': node_size}
                if node_type == 'disk':
                    disk['partitions'] = []
                disks.append(disk)
                # get size info if relevant
                continue
            if node_type in set(['part', 'dm']):
                # new partition (or whatever dm is)
                node_name = blk_list[0].split('\x80')[1]
                partition = {'name': node_name, 'type': node_type, 'size': node_size}
                disk['partitions'].append(partition)
                continue
            if len(blk_list) > 8:  # if node_type == 'crypt':
                # crypt belonging to a partition
                node_name = blk_list[1].split('\x80')[1]
                partition['crypt'] = node_name
    return disks

def main(argv):
    if len(argv) != 1:  # expect exactly one argument: the lsblk dump file
        print 'Usage: ./parse.py blk_filename'
        return
    result = parse_blk(argv[0])
    pprint.PrettyPrinter(indent=4).pprint(result)

if __name__ == "__main__":
    main(sys.argv[1:])
It works for your output as well:
$ python check_partitions.py blkout2.txt
[ { 'name': 'xvdf',
'partitions': [ { 'name': 'xvdf1', 'size': '350M', 'type': 'part'},
{ 'name': 'xvdf2', 'size': '34.7G', 'type': 'part'}],
'size': '35G',
'type': 'disk'},
{ 'name': 'xvdp',
'partitions': [{ 'name': 'xvdp1', 'size': '8G', 'type': 'part'}],
'size': '8G',
'type': 'disk'}]
This is how it works in a slightly more complicated scenario with Docker loopback devices and encrypted partitions.
$ python check_partitions.py blkout.txt
[ { 'name': 'sda',
'partitions': [ { 'crypt': 'cloudfleet-swap',
'name': 'sda1',
'size': '2G',
'type': 'part'},
{ 'crypt': 'cloudfleet-storage',
'name': 'sda2',
'size': '27.7G',
'type': 'part'}],
'size': '29.7G',
'type': 'disk'},
{ 'name': 'loop0', 'size': '100G', 'type': 'loop'},
{ 'name': 'loop1', 'size': '2G', 'type': 'loop'},
{ 'name': 'mmcblk0',
'partitions': [{ 'name': 'mmcblk0p1',
'size': '7.4G',
'type': 'part'}],
'size': '7.4G',
'type': 'disk'}]