Parse JSON using Python to fetch a value based on a condition

I am new to Python and am trying to parse a JSON file and fetch a required field based on a condition,
e.g. if status is "true", then
print the name.
JSON file:
[
{
"id": "12345",
"name": "London",
"active": true,
"status": "true",
"version": "1.0",
"tags": [
]
},
{
"id": "12457",
"name": "Newyork",
"active": true,
"status": "false",
"version": "1.1",
"tags": [
]
},
]
Expected output:
name : London
Please help me with this. Thank you in advance.

>>> import json
>>> obj = json.loads('[ { "id": "12345", "name": "London", "active": true, "status": "true", "version": "1.0", "tags": [ ] }, { "id": "12457", "name": "Newyork", "active": true, "status": "false", "version": "1.1", "tags": [ ] } ]')
>>> print "Names:", ",".join(x["name"] for x in obj if x["status"] == "true")
Names: London
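A sketch of the same filter reading from a file instead of an inline string (the filename is hypothetical):

import json

# Hypothetical filename holding the JSON shown in the question.
with open("cities.json") as f:
    obj = json.load(f)

for entry in obj:
    # Note: "status" is stored as the string "true"/"false", not a boolean.
    if entry["status"] == "true":
        print("name :", entry["name"])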
Your JSON is invalid. Remove the comma as below:
[
{
"id": "12345",
"name": "London",
"active": true,
"status": "true",
"version": "1.0",
"tags": [
]
},
{
"id": "12457",
"name": "Newyork",
"active": true,
"status": "false",
"version": "1.1",
"tags": [
]
},
^__________Remove this comma!
]

You can find everything about JSON parsing in the json module documentation:
http://docs.python.org/3.3/library/json.html

Related

Loading variables into a JSON string using Python for MS Teams

The 3rd party system I am using (a vendor product) still uses Python 2.7 and doesn't support Python 3+, so bear with me; I'm fully aware Python 3 is out, and this is a limitation of the system I have to use rather than a choice.
I am trying to do an integration between this third party product and MS Teams: the third party system provides data, I read it into my Python script and output a message to Teams using a webhook. It mostly works, but I'm struggling to load some of the variables from the system's data.
For example, in my code, I use the following:
messageID='"{}"'.format(item["messageId"])
recipient='"{}"'.format(item["recipient"]["email"])
subject='"{}"'.format(item["subject"])
sender='"{}"'.format(item["sender"]["email"])
which has output like this:
messageId="34239482030783472@test.net"
recipient="testuser@domain.com"
subject="Email subject here"
sender="sender@domain2.com"
This is all fine, the trouble comes when I need to format my string to post to the Teams webhook.
It currently looks like:
teams_card='{"#type": "MessageCard","#context": "http://schema.org/extensions","themeColor": "0076D7","summary": “PTR”,”sections": [{"activityTitle": "PTR Incident Created","activitySubtitle": “End “User Exposed to Phishing Threat,”facts": [{"name": “Message” ID,”value": %s}, {"name": "Subject”,”value": %s},{“name": "End User","value": %s},{“name": “sender”,”value": %s}],”markdown": true}],"potentialAction": [{"#type": "OpenUri","name": "View Related Emails","targets": [{"os": "default","uri": "https://maskedurlhere.com”}]}]}’ % (messageId,subject,recipient,sender)
which throws an error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: not enough arguments for format string
I tried the .format option as well, but it fails with a different error:
teams_card='{"#type": "MessageCard","#context": "http://schema.org/extensions","themeColor": "0076D7","summary": “PTR”,”sections": [{"activityTitle": "PTR Incident Created","activitySubtitle": “End “User Exposed to Phishing Threat,”facts": [{"name": “Message” ID,”value": %s}, {"name": "Subject”,”value": %s},{“name": "End User","value": %s},{“name": “sender”,”value": %s}],”markdown": true}],"potentialAction": [{"#type": "OpenUri","name": "View Related Emails","targets": [{"os": "default","uri": "https://maskedurlhere.com”}]}]}’.format(messageId,subject,recipient,sender)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
KeyError: '"#type"'
The teams card variable is fine and posts to Teams successfully when it's just text, but trying to load in these variables doesn't seem to work at all.
Any ideas?
To pass dynamic values into JSON you need to use the templating syntax ${value}.
Please follow the example JSON format below.
Template JSON
{
"type": "AdaptiveCard",
"body": [
{
"type": "Container",
"style": "emphasis",
"items": [
{
"type": "ColumnSet",
"columns": [
{
"type": "Column",
"items": [
{
"type": "TextBlock",
"size": "Large",
"weight": "Bolder",
"text": "**EXPENSE APPROVAL**",
"wrap": true
}
],
"width": "stretch"
},
{
"type": "Column",
"items": [
{
"type": "Image",
"url": "${status_url}",
"altText": "${status}",
"height": "30px"
}
],
"width": "auto"
}
]
}
],
"bleed": true
},
{
"type": "Container",
"items": [
{
"type": "ColumnSet",
"columns": [
{
"type": "Column",
"items": [
{
"type": "TextBlock",
"size": "ExtraLarge",
"text": "${purpose}",
"wrap": true
}
],
"width": "stretch"
},
{
"type": "Column",
"items": [
{
"type": "ActionSet",
"actions": [
{
"type": "Action.OpenUrl",
"title": "EXPORT AS PDF",
"url": "https://adaptivecards.io"
}
]
}
],
"width": "auto"
}
]
},
{
"type": "TextBlock",
"spacing": "Small",
"size": "Small",
"weight": "Bolder",
"color": "Accent",
"text": "[${code}](https://adaptivecards.io)",
"wrap": true
},
{
"type": "FactSet",
"spacing": "Large",
"facts": [
{
"title": "Submitted By",
"value": "**${created_by_name}** ${creater_email}"
},
{
"title": "Duration",
"value": "${formatTicks(min(select(expenses, x, int(x.created_by))), 'yyyy-MM-dd')} - ${formatTicks(max(select(expenses, x, int(x.created_by))), 'yyyy-MM-dd')}"
},
{
"title": "Submitted On",
"value": "${formatDateTime(submitted_date, 'yyyy-MM-dd')}"
},
{
"title": "Reimbursable Amount",
"value": "$${formatNumber(sum(select(expenses, x, if(x.is_reimbursable, x.total, 0))), 2)}"
},
{
"title": "Awaiting approval from",
"value": "**${approver}** ${approver_email}"
},
{
"title": "Submitted to",
"value": "**${other_submitter}** ${other_submitter_email}"
}
]
}
]
},
{
"type": "Container",
"spacing": "Large",
"style": "emphasis",
"items": [
{
"type": "ColumnSet",
"columns": [
{
"type": "Column",
"items": [
{
"type": "TextBlock",
"weight": "Bolder",
"text": "DATE",
"wrap": true
}
],
"width": "auto"
},
{
"type": "Column",
"spacing": "Large",
"items": [
{
"type": "TextBlock",
"weight": "Bolder",
"text": "CATEGORY",
"wrap": true
}
],
"width": "stretch"
},
{
"type": "Column",
"items": [
{
"type": "TextBlock",
"weight": "Bolder",
"text": "AMOUNT",
"wrap": true
}
],
"width": "auto"
}
]
}
],
"bleed": true
},
{
"$data": "${expenses}",
"type": "Container",
"items": [
{
"type": "ColumnSet",
"columns": [
{
"type": "Column",
"items": [
{
"type": "TextBlock",
"text": "${formatDateTime(created_time, 'MM-dd')}",
"wrap": true
}
],
"width": "auto"
},
{
"type": "Column",
"spacing": "Medium",
"items": [
{
"type": "TextBlock",
"text": "${description}",
"wrap": true
}
],
"width": "stretch"
},
{
"type": "Column",
"items": [
{
"type": "TextBlock",
"text": "$${formatNumber(total, 2)}",
"wrap": true
}
],
"width": "auto"
},
{
"type": "Column",
"spacing": "Small",
"selectAction": {
"type": "Action.ToggleVisibility",
"targetElements": [
"cardContent${$index}",
"chevronDown${$index}",
"chevronUp${$index}"
]
},
"verticalContentAlignment": "Center",
"items": [
{
"type": "Image",
"id": "chevronDown${$index}",
"url": "https://adaptivecards.io/content/down.png",
"width": "20px",
"altText": "${description} $${total} collapsed"
},
{
"type": "Image",
"id": "chevronUp${$index}",
"url": "https://adaptivecards.io/content/up.png",
"width": "20px",
"altText": "${description} $${total} expanded",
"isVisible": false
}
],
"width": "auto"
}
]
},
{
"type": "Container",
"id": "cardContent${$index}",
"isVisible": false,
"items": [
{
"type": "Container",
"items": [
{
"$data": "${custom_fields}",
"type": "TextBlock",
"text": "* ${value}",
"isSubtle": true,
"wrap": true
},
{
"type": "Container",
"items": [
{
"type": "Input.Text",
"id": "comment${$index}",
"placeholder": "Add your comment here."
}
]
}
]
},
{
"type": "Container",
"items": [
{
"type": "ColumnSet",
"columns": [
{
"type": "Column",
"items": [
{
"type": "ActionSet",
"actions": [
{
"type": "Action.Submit",
"title": "Send",
"data": {
"id": "_qkQW8dJlUeLVi7ZMEzYVw",
"action": "comment",
"lineItem": 1
}
}
]
}
],
"width": "auto"
}
]
}
]
}
]
}
]
},
{
"type": "ColumnSet",
"spacing": "Large",
"separator": true,
"columns": [
{
"type": "Column",
"items": [
{
"type": "TextBlock",
"horizontalAlignment": "Right",
"text": "Total Expense Amount \t",
"wrap": true
},
{
"type": "TextBlock",
"horizontalAlignment": "Right",
"text": "Non-reimbursable Amount",
"wrap": true
},
{
"type": "TextBlock",
"horizontalAlignment": "Right",
"text": "Advance Amount",
"wrap": true
}
],
"width": "stretch"
},
{
"type": "Column",
"items": [
{
"type": "TextBlock",
"text": "$${formatNumber(sum(select(expenses, x, x.total)), 2)}",
"wrap": true
},
{
"type": "TextBlock",
"text": "(-) $${formatNumber(sum(select(expenses, x, if(x.is_reimbursable, 0, x.total))), 2)} \t",
"wrap": true
},
{
"type": "TextBlock",
"text": "(-) 0.00 \t",
"wrap": true
}
],
"width": "auto"
},
{
"type": "Column",
"width": "auto"
}
]
},
{
"type": "Container",
"style": "emphasis",
"items": [
{
"type": "ColumnSet",
"columns": [
{
"type": "Column",
"items": [
{
"type": "TextBlock",
"horizontalAlignment": "Right",
"text": "Amount to be Reimbursed",
"wrap": true
}
],
"width": "stretch"
},
{
"type": "Column",
"items": [
{
"type": "TextBlock",
"weight": "Bolder",
"text": "$${formatNumber(sum(select(expenses, x, if(x.is_reimbursable, x.total, 0))), 2)}",
"wrap": true
}
],
"width": "auto"
},
{
"type": "Column",
"width": "auto"
}
]
}
],
"bleed": true
},
{
"type": "ColumnSet",
"columns": [
{
"type": "Column",
"selectAction": {
"type": "Action.ToggleVisibility",
"targetElements": [
"cardContent4",
"showHistory",
"hideHistory"
]
},
"verticalContentAlignment": "Center",
"items": [
{
"type": "TextBlock",
"id": "showHistory",
"horizontalAlignment": "Right",
"color": "Accent",
"text": "Show history",
"wrap": true
},
{
"type": "TextBlock",
"id": "hideHistory",
"horizontalAlignment": "Right",
"color": "Accent",
"text": "Hide history",
"wrap": true,
"isVisible": false
}
],
"width": 1
}
]
},
{
"type": "Container",
"id": "cardContent4",
"isVisible": false,
"items": [
{
"type": "Container",
"items": [
{
"type": "TextBlock",
"text": "* Expense submitted by **${created_by_name}** on {{DATE(${formatDateTime(created_date, 'yyyy-MM-ddTHH:mm:ssZ')}, SHORT)}}",
"isSubtle": true,
"wrap": true
},
{
"type": "TextBlock",
"text": "* Expense ${expenses[0].status} by **${expenses[0].approver}** on {{DATE(${formatDateTime(approval_date, 'yyyy-MM-ddTHH:mm:ssZ')}, SHORT)}}",
"isSubtle": true,
"wrap": true
}
]
}
]
},
{
"type": "Container",
"items": [
{
"type": "ActionSet",
"actions": [
{
"type": "Action.Submit",
"title": "Approve",
"style": "positive",
"data": {
"id": "_qkQW8dJlUeLVi7ZMEzYVw",
"action": "approve"
}
},
{
"type": "Action.ShowCard",
"title": "Reject",
"style": "destructive",
"card": {
"type": "AdaptiveCard",
"body": [
{
"type": "Input.Text",
"id": "RejectCommentID",
"placeholder": "Please specify an appropriate reason for rejection.",
"isMultiline": true
}
],
"actions": [
{
"type": "Action.Submit",
"title": "Send",
"data": {
"id": "_qkQW8dJlUeLVi7ZMEzYVw",
"action": "reject"
}
}
],
"$schema": "http://adaptivecards.io/schemas/adaptive-card.json"
}
}
]
}
]
}
],
"$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
"version": "1.2",
"fallbackText": "This card requires Adaptive Cards v1.2 support to be rendered properly."
}
Data JSON
{
"code": "ER-13052",
"message": "success",
"created_by_name" : "Matt Hidinger",
"created_date" : "2019-07-15T18:33:12+0800",
"submitted_date": "2019-04-14T18:33:12+0800",
"creater_email" : "matt#contoso.com",
"status" : "Pending",
"status_url" : "https://adaptivecards.io/content/pending.png",
"approver": "Thomas",
"purpose" : "Trip to UAE",
"approval_date" : "2019-07-15T22:33:12+0800",
"approver" : "Thomas",
"approver_email" : "thomas#contoso.com",
"other_submitter" : "David",
"other_submitter_email" : "david#contoso.com",
"expenses": [
{
"expense_id": "16367000000083065",
"approver" : "Thomas",
"date": "2017-02-21",
"description": "Air Travel Expense",
"created_by": "636965431200000000",
"created_by_name": "PATRICIA",
"employee_number": "E001",
"currency_id": "16367000000000097",
"currency_code": "USD",
"paid_through_account_id": "16367000000036003",
"paid_through_account_name": "Employee Reimbursements",
"bcy_total": 13900.79,
"bcy_subtotal": 13900.79,
"total": 300,
"total_without_tax": 300,
"is_billable": true,
"is_reimbursable": true,
"reference_number": "DD145",
"due_days": "Due in 15 days",
"merchant_id": "16367000000074027",
"merchant_name": "ABS Solutions",
"status": "approved",
"created_time": "2019-06-19T18:33:12+0800",
"last_modified_time": "2017-02-21T18:42:46+0530",
"receipt_name": "receipt1.jpg",
"report_id": "16367000000083075",
"mileage_type": "non_mileage",
"report_name": "Purchase",
"is_receipt_only": false,
"distance": 0,
"per_diem_rate": 0,
"per_diem_days": 0,
"per_diem_id": "",
"per_diem_name": "",
"expense_type": "non_mileage",
"location": "Washington",
"receipt_type": "jpg",
"policy_violated": false,
"comments_count": 0,
"report_status": "submitted",
"price_precision": 2,
"mileage_rate": 0,
"mileage_unit": "km",
"receipt_status": "processed",
"is_uncategorized": false,
"is_expired": false,
"gl_code": "LG001",
"exchange_rate": 66.943366,
"start_reading": "",
"end_reading": "",
"payment_mode": "Check",
"customer_id": "27927000000075081",
"customer_name": "ACME Corp.",
"custom_fields": [
{
"customfield_id": "16367000000277001",
"label": "Other Name",
"value": "Leg 1 on Tue, Jun 19th, 2019 at 6:00 AM."
},
{
"customfield_id": "16367000000277001",
"label": "Other Name",
"value": "Leg 2 on Tue, Jun 19th, 2019 at 7:15 PM."
}
],
"project_id": "27927000001243001",
"project_name": "Coffee Research",
"transaction_description": "",
"tax_id": "16367000000086001",
"tax_name": "Sales Tax",
"tax_percentage": 2,
"amount": 207.65,
"is_inclusive_tax": false,
"vehicle_type": "Bike",
"vehicle_id": "17456000000078029",
"fuel_type": "lpg",
"engine_capacity_range": "between_1401cc_and_1600cc",
"is_personal": false,
"policy_id": "16367000000092011",
"policy_name": "LIC",
"documents": [
{
"file_name": "receipt1.jpg",
"file_size_formatted": "71.8 KB",
"attachment_order": 1,
"document_id": "16367000000083071"
}
],
"reimbursement_reference": "",
"reimbursement_date": "",
"reimbursement_paid_through_account_id": "",
"reimbursement_paid_through_account_name": "",
"reimbursement_currency_id": "",
"reimbursement_currency_code": ""
},
{
"expense_id": "16367000000083065",
"date": "2019-06-19",
"description": "Auto Mobile Expense",
"created_by": "636965431200000000",
"created_by_name": "PATRICIA",
"employee_number": "E001",
"currency_id": "16367000000000097",
"currency_code": "USD",
"paid_through_account_id": "16367000000036003",
"paid_through_account_name": "Employee Reimbursements",
"bcy_total": 13900.79,
"bcy_subtotal": 13900.79,
"total": 100,
"total_without_tax": 100,
"is_billable": true,
"is_reimbursable": true,
"reference_number": "DD145",
"due_days": "Due in 15 days",
"merchant_id": "16367000000074027",
"merchant_name": "ABS Solutions",
"status": "submitted",
"created_time": "2019-06-19T18:33:12+0800",
"last_modified_time": "2017-02-21T18:42:46+0530",
"receipt_name": "receipt1.jpg",
"report_id": "16367000000083075",
"mileage_type": "non_mileage",
"report_name": "Purchase",
"is_receipt_only": false,
"distance": 0,
"per_diem_rate": 0,
"per_diem_days": 0,
"per_diem_id": "",
"per_diem_name": "",
"expense_type": "non_mileage",
"location": "Washington",
"receipt_type": "jpg",
"policy_violated": false,
"comments_count": 0,
"report_status": "submitted",
"price_precision": 2,
"mileage_rate": 0,
"mileage_unit": "km",
"receipt_status": "processed",
"is_uncategorized": false,
"is_expired": false,
"gl_code": "LG001",
"exchange_rate": 66.943366,
"start_reading": "",
"end_reading": "",
"payment_mode": "Check",
"customer_id": "27927000000075081",
"customer_name": "ACME Corp.",
"custom_fields": [
{
"customfield_id": "16367000000277001",
"label": "Other Name",
"value": " Contoso Car Rentrals, Tues 6/19 at 7:00 AM"
}
],
"project_id": "27927000001243001",
"project_name": "Coffee Research",
"transaction_description": "",
"tax_id": "16367000000086001",
"tax_name": "Sales Tax",
"tax_percentage": 2,
"amount": 207.65,
"is_inclusive_tax": false,
"vehicle_type": "Bike",
"vehicle_id": "17456000000078029",
"fuel_type": "lpg",
"engine_capacity_range": "between_1401cc_and_1600cc",
"is_personal": false,
"policy_id": "16367000000092011",
"policy_name": "LIC",
"documents": [
{
"file_name": "receipt1.jpg",
"file_size_formatted": "71.8 KB",
"attachment_order": 1,
"document_id": "16367000000083071"
}
],
"reimbursement_reference": "",
"reimbursement_date": "",
"reimbursement_paid_through_account_id": "",
"reimbursement_paid_through_account_name": "",
"reimbursement_currency_id": "",
"reimbursement_currency_code": ""
},
{
"expense_id": "16367000000083065",
"date": "2019-06-21",
"description": "Excess Baggage Cost",
"created_by": "636967159200000000",
"created_by_name": "PATRICIA",
"employee_number": "E001",
"currency_id": "16367000000000097",
"currency_code": "USD",
"paid_through_account_id": "16367000000036003",
"paid_through_account_name": "Employee Reimbursements",
"bcy_total": 13900.79,
"bcy_subtotal": 13900.79,
"total": 50.38,
"total_without_tax": 4.3,
"is_billable": true,
"is_reimbursable": false,
"reference_number": "DD145",
"due_days": "Due in 15 days",
"merchant_id": "16367000000074027",
"merchant_name": "ABS Solutions",
"status": "submitted",
"created_time": "2019-06-21T18:33:12+0800",
"last_modified_time": "2017-02-21T18:42:46+0530",
"receipt_name": "receipt1.jpg",
"report_id": "16367000000083075",
"mileage_type": "non_mileage",
"report_name": "Purchase",
"is_receipt_only": false,
"distance": 0,
"per_diem_rate": 0,
"per_diem_days": 0,
"per_diem_id": "",
"per_diem_name": "",
"expense_type": "non_mileage",
"location": "Washington",
"receipt_type": "jpg",
"policy_violated": false,
"comments_count": 0,
"report_status": "submitted",
"price_precision": 2,
"mileage_rate": 0,
"mileage_unit": "km",
"receipt_status": "processed",
"is_uncategorized": false,
"is_expired": false,
"gl_code": "LG001",
"exchange_rate": 66.943366,
"start_reading": "",
"end_reading": "",
"payment_mode": "Check",
"customer_id": "27927000000075081",
"customer_name": "ACME Corp.",
"custom_fields": [
],
"project_id": "27927000001243001",
"project_name": "Coffee Research",
"transaction_description": "",
"tax_id": "16367000000086001",
"tax_name": "Sales Tax",
"tax_percentage": 2,
"amount": 207.65,
"is_inclusive_tax": false,
"vehicle_type": "Bike",
"vehicle_id": "17456000000078029",
"fuel_type": "lpg",
"engine_capacity_range": "between_1401cc_and_1600cc",
"is_personal": false,
"policy_id": "16367000000092011",
"policy_name": "LIC",
"documents": [
{
"file_name": "receipt1.jpg",
"file_size_formatted": "71.8 KB",
"attachment_order": 1,
"document_id": "16367000000083071"
}
],
"reimbursement_reference": "",
"reimbursement_date": "",
"reimbursement_paid_through_account_id": "",
"reimbursement_paid_through_account_name": "",
"reimbursement_currency_id": "",
"reimbursement_currency_code": ""
}
]
}
Please go through the Adaptive Cards templating documentation for more info.
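As a side note on the original error: rather than splicing variables into a hand-written JSON string, where every { and } collides with % and .format placeholders, it is usually safer to build the payload as a Python dict and serialize it with json.dumps. A minimal Python 2.7-compatible sketch; the webhook URL and field values are hypothetical stand-ins for the real ones:

import json
import requests

# Hypothetical values; in the real script these come from `item`.
message_id = "34239482030783472@test.net"
subject = "Email subject here"
recipient = "testuser@domain.com"
sender = "sender@domain2.com"

teams_card = {
    "@type": "MessageCard",
    "@context": "http://schema.org/extensions",
    "themeColor": "0076D7",
    "summary": "PTR",
    "sections": [{
        "activityTitle": "PTR Incident Created",
        "activitySubtitle": "End User Exposed to Phishing Threat",
        "facts": [
            {"name": "Message ID", "value": message_id},
            {"name": "Subject", "value": subject},
            {"name": "End User", "value": recipient},
            {"name": "sender", "value": sender},
        ],
        "markdown": True,
    }],
}

# Hypothetical webhook URL; json.dumps handles all quoting and escaping.
requests.post(
    "https://example.webhook.office.com/webhookb2/masked",
    headers={"Content-Type": "application/json"},
    data=json.dumps(teams_card),
)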

Traversing through a tree (multi nest)

There is a parent/children/grandchildren relationship like the JSON given below (ids are unique inside each list), and the nesting can go up to 10 levels, with each level having a different length. What is the best way to find a uni_code and insert another object (child) into its children list?
{
"id": "1",
"status": "active",
"created": "",
"children": [{
"id": "1",
"status": "active",
"created": "",
"children": [{
"id": "1",
"status": "active",
"created": "",
"children": [{
"id": "1",
"status": "active",
"created": "",
"children": [
],
"uni_code": "EGCFJ1"
},
{
"id": "1",
"status": "active",
"created": "",
"children": [
],
"uni_code": "D356RY"
},
],
"uni_code": "EGCFJ1"
},
],
"uni_code": "Y7TUP8"
},
{
"id": "4",
"status": "active",
"created": "",
"children": [
],
"uni_code": "WA1JNS"
},
],
"uni_code": "I429TD"
}
Based on your follow-on question in a comment, I believe I now understand what you were asking, and I have updated my answer accordingly.
Often you can traverse a recursive data-structure like a tree by using a function that calls itself recursively.
What I mean:
def insert_into_tree(tree, new_child, target):
    if tree['uni_code'] == target:
        tree['children'].append(new_child)
        return True
    for child in tree['children']:
        if insert_into_tree(child, new_child, target):
            return True
    return False  # `target` not found.
data = {
"id": "1",
"status": "active",
"created": "",
"children": [
{
"id": "1",
"status": "active",
"created": "",
"children": [
{
"id": "1",
"status": "active",
"created": "",
"children": [
{
"id": "1",
"status": "active",
"created": "",
"children": [],
"uni_code": "EGCFJ1"
},
{
"id": "1",
"status": "active",
"created": "",
"children": [],
"uni_code": "D356RY"
}
],
"uni_code": "EGCFJ1"
}
],
"uni_code": "Y7TUP8"
},
{
"id": "4",
"status": "active",
"created": "",
"children": [],
"uni_code": "WA1JNS"
}
],
"uni_code": "I429TD"
}
new_object = {
"id": "1",
"status": "active",
"created": "",
"children": [],
"uni_code": "THX1138"
}
target = "EGCFJ1"
if insert_into_tree(data, new_object, target):
    print(f'new child added to {target!r}')
else:
    print(f'{target!r} not found, new child not added')
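For very deep or very wide trees, the same search can be done iteratively with an explicit stack, which avoids Python's recursion limit; a minimal sketch:

def insert_into_tree_iter(tree, new_child, target):
    # Depth-first search with an explicit stack instead of recursion.
    stack = [tree]
    while stack:
        node = stack.pop()
        if node['uni_code'] == target:
            node['children'].append(new_child)
            return True
        stack.extend(node['children'])
    return False  # `target` not found.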

Best way to build a denormalized dataframe with pandas from the Spotify API

I just downloaded some JSON from Spotify and took a look at pd.json_normalize().
But if I normalize the data I still have dictionaries within my DataFrame. Setting the level doesn't help either.
DATA I want to have in my dataframe:
{
"collaborative": false,
"description": "",
"external_urls": {
"spotify": "https://open.spotify.com/playlist/5"
},
"followers": {
"href": null,
"total": 0
},
"href": "https://api.spotify.com/v1/playlists/5?additional_types=track",
"id": "5",
"images": [
{
"height": 640,
"url": "https://i.scdn.co/image/a",
"width": 640
}
],
"name": "Another",
"owner": {
"display_name": "user",
"external_urls": {
"spotify": "https://open.spotify.com/user/user"
},
"href": "https://api.spotify.com/v1/users/user",
"id": "user",
"type": "user",
"uri": "spotify:user:user"
},
"primary_color": null,
"public": true,
"snapshot_id": "M2QxNTcyYTkMDc2",
"tracks": {
"href": "https://api.spotify.com/v1/playlists/100&additional_types=track",
"items": [
{
"added_at": "2020-12-13T18:34:09Z",
"added_by": {
"external_urls": {
"spotify": "https://open.spotify.com/user/user"
},
"href": "https://api.spotify.com/v1/users/user",
"id": "user",
"type": "user",
"uri": "spotify:user:user"
},
"is_local": false,
"primary_color": null,
"track": {
"album": {
"album_type": "album",
"artists": [
{
"external_urls": {
"spotify": "https://open.spotify.com/artist/1dfeR4Had"
},
"href": "https://api.spotify.com/v1/artists/1dfDbWqFHLkxsg1d",
"id": "1dfeR4HaWDbWqFHLkxsg1d",
"name": "Q",
"type": "artist",
"uri": "spotify:artist:1dfeRqFHLkxsg1d"
}
],
"available_markets": [
"CA",
"US"
],
"external_urls": {
"spotify": "https://open.spotify.com/album/6wPXmlLzZ5cCa"
},
"href": "https://api.spotify.com/v1/albums/6wPXUJ9LzZ5cCa",
"id": "6wPXUmYJ9zZ5cCa",
"images": [
{
"height": 640,
"url": "https://i.scdn.co/image/ab676620a47",
"width": 640
},
{
"height": 300,
"url": "https://i.scdn.co/image/ab67616d0620a47",
"width": 300
},
{
"height": 64,
"url": "https://i.scdn.co/image/ab603e6620a47",
"width": 64
}
],
"name": "The (Deluxe ",
"release_date": "1920-07-17",
"release_date_precision": "day",
"total_tracks": 15,
"type": "album",
"uri": "spotify:album:6m5cCa"
},
"artists": [
{
"external_urls": {
"spotify": "https://open.spotify.com/artist/1dg1d"
},
"href": "https://api.spotify.com/v1/artists/1dsg1d",
"id": "1dfeR4HaWDbWqFHLkxsg1d",
"name": "Q",
"type": "artist",
"uri": "spotify:artist:1dxsg1d"
}
],
"available_markets": [
"CA",
"US"
],
"disc_number": 1,
"duration_ms": 21453,
"episode": false,
"explicit": false,
"external_ids": {
"isrc": "GBU6015"
},
"external_urls": {
"spotify": "https://open.spotify.com/track/5716J"
},
"href": "https://api.spotify.com/v1/tracks/5716J",
"id": "5716J",
"is_local": false,
"name": "Another",
"popularity": 73,
"preview_url": null,
"track": true,
"track_number": 3,
"type": "track",
"uri": "spotify:track:516J"
},
"video_thumbnail": {
"url": null
}
}
],
"limit": 100,
"next": null,
"offset": 0,
"previous": null,
"total": 1
},
"type": "playlist",
"uri": "spotify:playlist:fek"
}
So what are best practices for reading nested data like this into one DataFrame in pandas?
I'm glad for any advice.
EDIT:
Basically, I want all keys as columns in my DataFrame. But normalizing stops at "tracks.items", and if I normalize that again I have the recursion problem again.
It depends on the information you are looking for. Take a look at pandas.read_json() to see if that can work. You can also select data directly, for example:
import pandas as pd

json_output = {"collaborative": 'false', "description": "", "external_urls": {"spotify": "https://open.spotify.com/playlist/5"}}
df = pd.DataFrame(index=[0])
df['collaborative'] = json_output['collaborative']  # set the df column to the value from the returned JSON
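If the goal is one row per track with flattened columns, pd.json_normalize() can walk the nested list directly; a sketch assuming the playlist JSON from the question has been loaded into a dict called playlist (the variable name and file handling are hypothetical):

import json
import pandas as pd

# Hypothetical: the playlist JSON from the question, saved to playlist.json.
with open("playlist.json") as f:
    playlist = json.load(f)

# One row per entry of tracks.items; nested dicts become dotted columns
# (e.g. track.album.name); playlist-level keys are carried in via meta.
df = pd.json_normalize(
    playlist,
    record_path=["tracks", "items"],
    meta=["id", "name"],
    meta_prefix="playlist.",
    sep=".",
)
print(df.columns.tolist())

Lists inside the records (such as track.album.images) stay as Python lists in their cells; those need a second normalization pass, or a df.explode(), if they should become rows or columns of their own.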

Fast way of adding fields to a nested dict

I need help with improving my code.
I've got a nested dict with many levels:
{
"11": {
"FacLC": {
"immty": [
"in_mm",
"in_mm"
],
"moood": [
"in_oo",
"in_oo"
]
}
},
"22": {
"FacLC": {
"immty": [
"in_mm",
"in_mm",
"in_mm"
]
}
}
}
And I want to add additional fields on every level, so my output looks like this:
[
{
"id": "",
"name": "11",
"general": [
{
"id": "",
"name": "FacLC",
"specifics": [
{
"id": "",
"name": "immty",
"characteristics": [
{
"id": "",
"name": "in_mm"
},
{
"id": "",
"name": "in_mm"
}
]
},
{
"id": "",
"name": "moood",
"characteristics": [
{
"id": "",
"name": "in_oo"
},
{
"id": "",
"name": "in_oo"
}
]
}
]
}
]
},
{
"id": "",
"name": "22",
"general": [
{
"id": "",
"name": "FacLC",
"specifics": [
{
"id": "",
"name": "immty",
"characteristics": [
{
"id": "",
"name": "in_mm"
},
{
"id": "",
"name": "in_mm"
},
{
"id": "",
"name": "in_mm"
}
]
}
]
}
]
}
]
I managed to write a four-times-nested for loop, which I find inefficient and inelegant:
my_new_dict = []  # the desired output is a list of dicts
for main_name, general in my_dict.items():
    generals = []
    for general_name, specific in general.items():
        specifics = []
        for specific_name, characteristics in specific.items():
            characteristics_dicts = []
            for characteristic in characteristics:
                characteristics_dicts.append({
                    "id": "",
                    "name": characteristic,
                })
            specifics.append({
                "id": "",
                "name": specific_name,
                "characteristics": characteristics_dicts,
            })
        generals.append({
            "id": "",
            "name": general_name,
            "specifics": specifics,
        })
    my_new_dict.append({
        "id": "",
        "name": main_name,
        "general": generals,
    })
I am wondering if there is a more compact and efficient solution.
In the past I created a function to do this. Basically, you call this function every time you need to add new fields to a nested dict, regardless of how many levels the nested dict has. You only have to provide the 'full path', which I call the 'key_map',
like ['node1', 'node1a', 'node1apart3']:
def insert_value_using_map(_nodes_list_to_be_appended, _keys_map, _value_to_be_inserted):
    for _key in _keys_map[:-1]:
        _nodes_list_to_be_appended = _nodes_list_to_be_appended.setdefault(_key, {})
    _nodes_list_to_be_appended[_keys_map[-1]] = _value_to_be_inserted
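A minimal usage sketch for this helper (the path and value are hypothetical):

d = {}
insert_value_using_map(d, ['node1', 'node1a', 'node1apart3'], 'value')
print(d)  # {'node1': {'node1a': {'node1apart3': 'value'}}}

As for the compactness question itself, the original four-level loop can also be written as one nested comprehension; this is equivalent to the loop above, just more compact:

my_new_dict = [
    {
        "id": "",
        "name": main_name,
        "general": [
            {
                "id": "",
                "name": general_name,
                "specifics": [
                    {
                        "id": "",
                        "name": specific_name,
                        "characteristics": [
                            {"id": "", "name": c} for c in characteristics
                        ],
                    }
                    for specific_name, characteristics in specific.items()
                ],
            }
            for general_name, specific in general.items()
        ],
    }
    for main_name, general in my_dict.items()
]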

Cannot import a Grafana dashboard via the Grafana API

I am trying to import a Grafana dashboard using the HTTP API, following the Grafana documentation.
Grafana version: 5.1.3
OS: Windows 10
This is what I tried:
curl --user admin:admin "http://localhost:3000/api/dashboards/db" -X POST -H "Content-Type:application/json;charset=UTF-8" --data-binary @c:/Users/Mahadev/Desktop/Dashboard.json
and here is my Python code:
import requests

headers = {
    'Content-Type': 'application/json;charset=UTF-8',
}
data = open('C:/Users/Mahadev/Desktop/Dashboard.json', 'rb').read()
response = requests.post('http://admin:admin@localhost:3000/api/dashboards/db', headers=headers, data=data)
print(response.text)
The output of both is:
[{"fieldNames":["Dashboard"],"classification":"RequiredError","message":"Required"}]
It is asking for a root property called dashboard in my JSON payload. Can anybody suggest how to use that property and what data I should provide?
If anyone wants to dig more, here are some links:
https://github.com/grafana/grafana/issues/8193
https://github.com/grafana/grafana/issues/2816
https://community.grafana.com/t/how-can-i-import-a-dashboard-from-a-json-file/669
https://github.com/grafana/grafana/issues/273
https://github.com/grafana/grafana/issues/5811
https://stackoverflow.com/questions/39968111/unable-to-post-to-grafana-using-python3-module-requests
https://stackoverflow.com/questions/39954475/post-request-works-in-postman-but-not-in-python/39954514#39954514
https://www.bountysource.com/issues/44431991-use-api-to-import-json-file-error
https://github.com/grafana/grafana/issues/7029
Maybe you should try to download your dashboard from the API, so you have a "proper" JSON model to push afterwards.
You can download it with the following command:
curl -H "Authorization: Bearer $TOKEN" https://grafana.domain.tld/api/dashboards/uid/$DASHBOARD_UID
Another way to do it: download a dashboard JSON from the Grafana website (grafana.com/dashboards) and try to upload it with your current code. ;)
The dashboard field contains everything that will be displayed: alerts, graphs, etc.
Here is an example of dashboard.json:
{
"meta": {
"type": "db",
"canSave": true,
"canEdit": true,
"canAdmin": false,
"canStar": true,
"slug": "status-app",
"url": "/d/lOy3lIImz/status-app",
"expires": "0001-01-01T00:00:00Z",
"created": "2018-06-04T11:40:20+02:00",
"updated": "2018-06-14T17:51:23+02:00",
"updatedBy": "jean",
"createdBy": "jean",
"version": 89,
"hasAcl": false,
"isFolder": false,
"folderId": 0,
"folderTitle": "General",
"folderUrl": "",
"provisioned": false
},
"dashboard": {
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"gnetId": null,
"graphTooltip": 0,
"id": 182,
"links": [],
"panels": [
{
"alert": {
"conditions": [
{
"evaluator": {
"params": [
1
],
"type": "lt"
},
"operator": {
"type": "and"
},
"query": {
"params": [
"A",
"5m",
"now"
]
},
"reducer": {
"params": [],
"type": "avg"
},
"type": "query"
}
],
"executionErrorState": "alerting",
"frequency": "60s",
"handler": 1,
"name": "Status of alert",
"noDataState": "alerting",
"notifications": [
{
"id": 7
}
]
},
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "Collectd",
"fill": 1,
"gridPos": {
"h": 7,
"w": 8,
"x": 0,
"y": 0
},
"id": 4,
"legend": {
"alignAsTable": true,
"avg": true,
"current": true,
"max": false,
"min": false,
"rightSide": false,
"show": true,
"total": false,
"values": true
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "connected",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"alias": "Status",
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"null"
],
"type": "fill"
}
],
"measurement": "processes_processes",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT mean(value) FROM \"processes_processes\" WHERE (\"instance\" = '' AND \"host\" = 'Webp01') AND $timeFilter GROUP BY time($interval) fill(null)",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"value"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": [
{
"key": "instance",
"operator": "=",
"value": ""
},
{
"condition": "AND",
"key": "host",
"operator": "=",
"value": "Webp01"
}
]
}
],
"thresholds": [
{
"colorMode": "critical",
"fill": true,
"line": true,
"op": "lt",
"value": 1
}
],
"timeFrom": null,
"timeShift": null,
"title": "Status of ",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
}
],
"refresh": "5m",
"schemaVersion": 16,
"style": "dark",
"tags": [
"web",
"nodejs"
],
"templating": {
"list": []
},
"time": {
"from": "now/d",
"to": "now"
},
"timepicker": {
"hidden": false,
"refresh_intervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"time_options": [
"5m",
"15m",
"1h",
"6h",
"12h",
"24h",
"2d",
"7d",
"30d"
]
},
"timezone": "",
"title": "Status APP",
"uid": "lOy3lIImz",
"version": 89
}
}
Edit:
Here is a JSON snippet for templating your dashboard:
"templating": {
"list": [
{
"allValue": null,
"current": {
"text": "PRD_Web01",
"value": "PRD_Web01"
},
"datasource": "Collectd",
"hide": 0,
"includeAll": false,
"label": null,
"multi": false,
"name": "host",
"options": [],
"query": "SHOW TAG VALUES WITH KEY=host",
"refresh": 1,
"regex": "",
"sort": 0,
"tagValuesQuery": "",
"tags": [],
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"allValue": null,
"current": {
"text": "sda",
"value": "sda"
},
"datasource": "Collectd",
"hide": 0,
"includeAll": false,
"label": null,
"multi": false,
"name": "device",
"options": [],
"query": "SHOW TAG VALUES FROM \"disk_read\" WITH KEY = \"instance\"",
"refresh": 1,
"regex": "",
"sort": 0,
"tagValuesQuery": "",
"tags": [],
"tagsQuery": "",
"type": "query",
"useTags": false
}
]
},
As I read your answer, I guess you will be OK with this ;). I will try to keep a closer eye on this thread.
Can you show what your dashboard JSON looks like? The JSON MUST contain a key dashboard in it, with all the details inside its value, like the following:
{
"dashboard": {
"id": null,
"uid": null,
"title": "Production Overview",
"tags": [ "templated" ],
"timezone": "browser",
"schemaVersion": 16,
"version": 0
},
"folderId": 0,
"overwrite": false
}
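A sketch of that fix in Python, wrapping the exported dashboard under the required top-level dashboard key before posting; the file path and credentials are the hypothetical ones from the question:

import json
import requests

with open('C:/Users/Mahadev/Desktop/Dashboard.json') as f:
    exported = json.load(f)

# An API export wraps the model as {"meta": ..., "dashboard": ...};
# reuse the inner "dashboard" value if present, else use the object as-is.
dashboard = exported.get('dashboard', exported)
dashboard['id'] = None  # let Grafana assign a new id on import

payload = {'dashboard': dashboard, 'overwrite': False}
response = requests.post(
    'http://localhost:3000/api/dashboards/db',
    auth=('admin', 'admin'),
    headers={'Content-Type': 'application/json'},
    data=json.dumps(payload),
)
print(response.status_code, response.text)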
