How to compare a JSON file with an expected result in Python 3?

I need to prepare a test that will compare the content of a .json file with an expected result (we want to check whether the values in the .json file are correctly generated by our dev tool).
For the test I will use Robot Framework or unittest, but I don't know yet how to parse the JSON file correctly.
JSON example:
{
    "Customer": [{
        "Information": [{
            "Country": "",
            "Form": ""
        }],
        "Id": "110",
        "Res": "",
        "Role": "Test",
        "Limit": ["100"]
    }]
}
So after I execute this:
import json

with open('test_json.json') as f:
    hd = json.load(f)
I get a dict hd whose keys are:
dict_keys(['Customer'])
and values:
dict_values([[{'Information': [{'Form': '', 'Country': ''}], 'Role': 'Test', 'Id': '110', 'Res': '', 'Limit': ['100']}]])
My problem is that I don't know how to get to a single value from the dict (e.g. Role: Test); I can only extract the whole value. I could prepare a long string to compare against, but that is not the best solution for tests.
Any ideas how I can get to a single field from the .json file?

Your JSON has a single key 'Customer', whose value is of list type. So when you access hd['Customer'] you get the list value:
>>> hd['Customer']
[{'Id': '110', 'Role': 'Test', 'Res': '', 'Information': [{'Form': '', 'Country': ''}], 'Limit': ['100']}]
First element in the list:
>>> hd['Customer'][0]
{'Id': '110', 'Role': 'Test', 'Res': '', 'Information': [{'Form': '', 'Country': ''}], 'Limit': ['100']}
Now access values inside the dict structure using:
>>> hd['Customer'][0]['Role']
'Test'

You can compare the dict that you loaded (say hd) to the expected result dict (say expected_dict) directly:
hd == expected_dict
(Comparing hd.items() == expected_dict.items() also works in Python 3, but plain dict equality is simpler and already handles nesting.)
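Putting both ideas together, here is a minimal unittest sketch; the inline SAMPLE string stands in for the dev tool's real output file (in the real test you would open('test_json.json') instead):

```python
import json
import unittest

# Inline stand-in for the generated file (an assumption for this sketch)
SAMPLE = '''
{
    "Customer": [{
        "Information": [{"Country": "", "Form": ""}],
        "Id": "110",
        "Res": "",
        "Role": "Test",
        "Limit": ["100"]
    }]
}
'''

class TestGeneratedJson(unittest.TestCase):
    def setUp(self):
        # In the real test: with open('test_json.json') as f: self.hd = json.load(f)
        self.hd = json.loads(SAMPLE)

    def test_single_field(self):
        # Drill into the nested structure to check just one value
        self.assertEqual(self.hd['Customer'][0]['Role'], 'Test')

    def test_whole_document(self):
        expected = {
            'Customer': [{
                'Information': [{'Country': '', 'Form': ''}],
                'Id': '110',
                'Res': '',
                'Role': 'Test',
                'Limit': ['100'],
            }]
        }
        # Dict equality is recursive and ignores key order
        self.assertEqual(self.hd, expected)
```

Checking single fields this way gives much clearer failure messages than comparing one long string.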

Related

How do I replace single quotes with double quotes in a Python array without replacing the single quotes in the JSON values?

I have an array called managable:
import requests

r = requests.get("https://discord.com/api/v8/users/@me/guilds", headers={
    "Authorization": f"Bearer {access_token}"
})
guilds = r.json()
managable = []
for guild in guilds:
    if int(guild["permissions"]) & 32 != 0:
        managable.append(guild)
where I replace some boolean values in it:
strmanagable = str(managable).replace("True", '"true"').replace("False", '"false"').replace("None", '"none"')
and it returns an array like this:
[{'id': '0', 'name': '\'something\''}, {'id': '1', 'name': '\'two\''}]
I would like to replace the single quotes with double quotes in the array above, without replacing the single quotes in the json values.
I tried using the replace function (strmanagable.replace("'", "\"")), but it replaces single quotes in the json values too, which I don't want.
@snakecharmerb solved my question; I just had to convert managable to JSON.
Are you looking for the function json.dumps()? It converts the list into a JSON string, and True becomes true automatically.
import json
lis1 = [{'id': '0', 'name': True}, {'id': '1', 'name': False}, {'id': '2', 'name': 'two'}]
lis1 = json.dumps(lis1)
Output
'[{"id": "0", "name": true}, {"id": "1", "name": false}, {"id": "2", "name": "two"}]'
Then if you need to convert it back, do this
lis2 = json.loads(lis1)
print(lis2)
[{'id': '0', 'name': True},
{'id': '1', 'name': False},
{'id': '2', 'name': 'two'}]
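On the original worry about quotes: json.dumps only changes the delimiters and keywords, never quote characters inside values, so nothing needs protecting. A quick sketch:

```python
import json

# A value that itself contains single quotes, plus a boolean
managable = [{'id': '0', 'name': "'something'"}, {'id': '1', 'name': True}]

s = json.dumps(managable)
# Delimiters become double quotes and True becomes true,
# but the single quotes inside the 'name' value are untouched:
# [{"id": "0", "name": "'something'"}, {"id": "1", "name": true}]
```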

Save dict as netCDF / xarray

I have a problem: I want to save a dict as netCDF, but I get the error TypeError: expected bytes, list found. How can I save my my_dict? I have looked at https://docs.xarray.dev/en/stable/user-guide/io.html , "Saving Python dictionary to netCDF4 file", and some other links and blogs.
from netCDF4 import Dataset

my_dict = {
    '_key': '1',
    'group': 'test',
    'data': {},
    'type': '',
    'code': '007',
    'conType': '1',
    'flag': None,
    'createdAt': '2021',
    'currency': 'EUR',
    'detail': {
        'selector': {
            'number': '12312',
            'isTrue': True,
            'requirements': [{
                'type': 'customer',
                'requirement': '1'
            }]
        }
    },
    'identCode': [],
}
ds = Dataset(my_dict)
[OUT] TypeError: expected bytes, list found
ds.to_netcdf("saved_on_disk.nc")
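netCDF4.Dataset expects a file path as its first argument, not a dict, hence the error. One workaround sketch (an assumption about the goal, not part of the question): netCDF attributes must be strings or numbers, so a nested dict can be stored as a JSON string attribute and decoded on read. The JSON round trip looks like:

```python
import json

# Nested values like None, lists, and sub-dicts are not valid netCDF
# attribute values directly, but their JSON encoding is a plain string.
my_dict = {'_key': '1', 'flag': None, 'identCode': [],
           'detail': {'selector': {'isTrue': True}}}

encoded = json.dumps(my_dict)   # str -> usable as an attribute value
decoded = json.loads(encoded)   # recovers the original dict

# With xarray installed (assumption), the string could then be attached:
#   import xarray as xr
#   xr.Dataset(attrs={'meta': encoded}).to_netcdf('saved_on_disk.nc')
```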

Python - How to flatten JSON message in pandas/JSON

I have a JSON message as below
JSON_MSG = {
    'Header': {
        'Timestamp': '2020-10-25T02:49:25.489Z',
        'ID': '0422',
        'msgName': 'Order',
        'Source': 'OrderSys'
    },
    'CustomerOrderLine': [
        {
            'Parameter': [
                {'ParameterID': 'ACTIVATION_DATE', 'ParameterValue': '2020-10-25'},
                {'ParameterID': 'CYCLES', 'ParameterValue': '1'},
                {'ParameterID': 'EXPIRY_PERIOD', 'ParameterValue': '30'},
                {'ParameterID': 'MAX_NUMBER', 'ParameterValue': '1'}
            ],
            'Subscription': {
                'Sub': '3020611',
                'LoanAcc': '',
                'CustomerAcc': '2020002',
                'SubscriptionCreatedDate': '2020-06-23T14:42:30Z',
                'BillingAcc': '40010101',
                'SubscriptionContractTerm': '12',
                'ServiceAcc': '11111',
                'SubscriptionStatus': 'Active'
            },
            'PaymentOpt': 'Upfront',
            'OneTimeAmt': '8.0',
            'RecurringAmt': '0.0',
            'BeneficiaryID': '',
            'CustomerOrderID': '111',
            'OrderLineCreatedDate': '2020-10-25T02:47:18Z',
            'ProductOfferingPriceId': 'PP_6GB_Booster',
            'ParentCustomerOrderLineID': '',
            'OrderLineRequestedDate': '2020-10-25T00:00:00.000Z',
            'ProductCategoryId': 'PRODUCT_OFFER',
            'OrderLinePurposeName': 'ADD',
            'OrderQuantity': '1.0',
            'CustomerOrderLineID': '11111',
            'OrderLineDeliveryAddress': {
                'OrderLineDeliveryPostCode': '',
                'OrderLineDeliveryTown': '',
                'OrderLineDeliveryCounty': '',
                'OrderLineDeliveryCountryName': ''
            },
            'ProductInstanceID': '95',
            'ProductOfferingId': 'OFF_6GBBOOST_MONTHLY'
        }
    ]
}
I need to flatten the JSON message, convert it into rows, and capture the row/record count, or alternatively find out how many elements are present under the nested array Parameter, since that would give the same result as the flattened JSON (Parameter is the innermost array).
So far I have tried the code below:
list1 = JSON_MSG['CustomerOrderLine']
rec_count = len(list1)
But this gives only the outer list's length, i.e. 1, because CustomerOrderLine contains a single struct. I need the record/row count to be 4 (the Parameter array has 4 structs).
Not the prettiest, but you could try something like:
list1 = JSON_MSG['CustomerOrderLine'][0]['Parameter']
rec_count = len(list1)
To get the 'Parameter' sizes of all elements, you can use a list comprehension (json.loads is only needed if JSON_MSG is a string rather than a dict):
sizes = [len(order.get('Parameter', [])) for order in JSON_MSG.get('CustomerOrderLine', [])]
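Since the question asks for a single record count, the per-line sizes can be summed in one expression; a sketch with a trimmed-down version of the message:

```python
# Trimmed-down stand-in for the full JSON_MSG from the question
JSON_MSG = {
    'CustomerOrderLine': [{
        'Parameter': [
            {'ParameterID': 'ACTIVATION_DATE', 'ParameterValue': '2020-10-25'},
            {'ParameterID': 'CYCLES', 'ParameterValue': '1'},
            {'ParameterID': 'EXPIRY_PERIOD', 'ParameterValue': '30'},
            {'ParameterID': 'MAX_NUMBER', 'ParameterValue': '1'},
        ],
    }],
}

# Total Parameter rows across all order lines; .get guards missing keys
rec_count = sum(len(line.get('Parameter', []))
                for line in JSON_MSG.get('CustomerOrderLine', []))
# rec_count == 4
```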

How can I retrieve specific data from a JSON object and add it to a new array

I want to make a new array and add some specific keys and values from the JSON object into it. My code adds only the value, not the key. Can someone help me?
CourseGroupCategoriesGroups = [{'GroupId': 11799, 'Name': 'Group 1', 'Description': {'Text': '', 'Html': ''}, 'Enrollments': [264, 265, 266, 50795, 50798]}, {'GroupId': 11928, 'Name': 'Group2', 'Description': {'Text': '', 'Html': ''}, 'Enrollments': [49039, 49040, 49063, 49076, 50720, 50765, 50791]}]
GroupMembership = []
for record in CourseGroupCategoriesGroups:
    GroupMembership.append(record['Name'])
print(GroupMembership)
Just add the key you want before the value:
GroupMembership = []
for record in CourseGroupCategoriesGroups:
    GroupMembership.append({"Name": record['Name']})
Here is a solution:
GroupMembership = []
for record in CourseGroupCategoriesGroups:
    GroupMembership.append({"GroupId": record["GroupId"], "Name": record["Name"]})
You could also use a list comprehension:
GroupMembership = [{"GroupId": record["GroupId"], "Name": record["Name"]} for record in CourseGroupCategoriesGroups]
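A generic variant of the same idea (sketch; the wanted tuple is an assumption, pick your own keys) keeps any subset of keys with a nested dict comprehension:

```python
# Trimmed-down sample data from the question
CourseGroupCategoriesGroups = [
    {'GroupId': 11799, 'Name': 'Group 1', 'Enrollments': [264, 265]},
    {'GroupId': 11928, 'Name': 'Group2', 'Enrollments': [49039]},
]

wanted = ('GroupId', 'Name')  # keys to keep
GroupMembership = [{k: record[k] for k in wanted}
                   for record in CourseGroupCategoriesGroups]
# [{'GroupId': 11799, 'Name': 'Group 1'}, {'GroupId': 11928, 'Name': 'Group2'}]
```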

Query nested JSON document in MongoDB collection using Python

I have a MongoDB collection containing multiple documents. A document looks like this:
{
    'name': 'sys',
    'type': 'system',
    'path': 'sys',
    'children': [{
        'name': 'folder1',
        'type': 'folder',
        'path': 'sys/folder1',
        'children': [{
            'name': 'folder2',
            'type': 'folder',
            'path': 'sys/folder1/folder2',
            'children': [{
                'name': 'textf1.txt',
                'type': 'file',
                'path': 'sys/folder1/folder2/textf1.txt',
                'children': ['abc', 'def']
            }, {
                'name': 'textf2.txt',
                'type': 'file',
                'path': 'sys/folder1/folder2/textf2.txt',
                'children': ['a', 'b', 'c']
            }]
        }, {
            'name': 'text1.txt',
            'type': 'file',
            'path': 'sys/folder1/text1.txt',
            'children': ['aaa', 'bbb', 'ccc']
        }]
    }],
    '_id': ObjectId('5d1211ead866fc19ccdf0c77')
}
There are other documents containing similar structure. How can I query this collection to find part of one document among multiple documents where path matches sys/folder1/text1.txt?
My desired output would be:
{
    'name': 'text1.txt',
    'type': 'file',
    'path': 'sys/folder1/text1.txt',
    'children': ['aaa', 'bbb', 'ccc']
}
EDIT:
What I have come up with so far is this. My Flask endpoint:
class ExecuteQuery(Resource):
    def get(self, collection_name):
        result_list = []  # List to store query results
        query_list = []   # List to store the incoming queries
        for k, v in request.json.items():
            query_list.append({k: v})  # Store query items in list
        cursor = mongo.db[collection_name].find(*query_list)  # Execute query
        for document in cursor:
            encoded_data = JSONEncoder().encode(document)  # Encode the document to a string
            result_list.append(json.loads(encoded_data))  # Decode back into a plain dict
        return result_list  # Return query result to client
My client side:
request = {"name": "sys"}
response = requests.get(url, json=request, headers=headers)
print(response.text)
This gives me the entire document but I cannot extract a specific part of the document by matching the path.
I don't think MongoDB supports recursive or deep queries within a document (nor a recursive $unwind). What it does provide, however, are recursive queries across documents that reference one another, i.e. aggregating elements from a graph ($graphLookup).
This answer explains pretty well what you need to do to query a tree.
Although it does not directly address your problem, you may want to re-evaluate your data structure. It certainly is intuitive, but updates can be painful, as can queries for nested elements, as you just noticed.
Since $graphLookup allows you to create a view equal to your current document, I cannot think of any advantage the explicitly nested structure has over one document per path. There will be a slight performance loss for reading and writing the entire tree, but with proper indexing it should be fine.
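Until the data is restructured, a client-side fallback is to fetch the document and walk the nested children lists recursively; a sketch (find_by_path is a hypothetical helper, and doc is a trimmed-down version of the document from the question):

```python
def find_by_path(node, target):
    """Depth-first search for the nested node whose 'path' equals target."""
    if not isinstance(node, dict):
        return None  # leaf 'children' entries are plain strings
    if node.get('path') == target:
        return node
    for child in node.get('children', []):
        found = find_by_path(child, target)
        if found is not None:
            return found
    return None

doc = {
    'name': 'sys', 'type': 'system', 'path': 'sys',
    'children': [{
        'name': 'folder1', 'type': 'folder', 'path': 'sys/folder1',
        'children': [{
            'name': 'text1.txt', 'type': 'file',
            'path': 'sys/folder1/text1.txt',
            'children': ['aaa', 'bbb', 'ccc'],
        }],
    }],
}

# find_by_path(doc, 'sys/folder1/text1.txt') returns the desired sub-dict
```

This trades server-side filtering for a simple in-memory walk, which is fine for small trees but transfers the whole document over the wire.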
