I would like to populate a JSON object by iterating through a list of lists with Python.
Currently the list of lists looks like this:
bookmark_apps = [['Google App','https://google.com'], ['Yahoo App','https://yahoo.com'], ['Espn App','https://espn.com']]
I would like to populate the JSON that looks like this:
{
"name": **Populate App Name Here**,
"label": **Populate App Name Here**,
"signOnMode": "BOOKMARK",
"settings": {
"app": {
"requestIntegration": false,
"url": **Populate App URL Here**
}
}
}
I'm confident that there is a better way to approach this than the way I'm trying. The way I tried was breaking it down into two lists and iterating through them with zip, like this:
app_name_label = []
for sub_list in bookmark_apps:
    app_name_label.append(sub_list[0])

bookmark_url = []
for sub_list2 in bookmark_apps:
    bookmark_url.append(sub_list2[1])

for i, j in zip(app_name_label, bookmark_url):
Any suggestions or approaches for populating the JSON would be greatly appreciated. Thanks.
Use a list comprehension that creates dictionaries from the nested lists.
result = [{
"name": name,
"label": name,
"signOnMode": "BOOKMARK",
"settings": {
"app": {
"requestIntegration": False,
"url": url
}
}
} for name, url in bookmark_apps]
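If you then need the JSON text itself (for a file or an API call) rather than a list of Python dicts, the standard json module does the rest:

import json

# serialize the list of dicts produced by the comprehension above
json_string = json.dumps(result, indent=2)
print(json_string)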
Here I store a JSON object under a key in Redis. Later I want to search the JSON stored in Redis. My search key will always be a JSON fragment like in the example below, and I want to match it against the stored JSON.
Currently I am doing this by iterating and comparing, but I want Redis to do the search instead. How can I do that?
import json
import os

import redis

rd = redis.StrictRedis(host="localhost", port=6379, db=0)

if not rd.get("mykey"):
    with open(os.path.join(BASE_DIR, "my_file.json")) as fl:
        data = json.load(fl)
    rd.set("mykey", json.dumps(data))
else:
    key_values = json.loads(rd.get("mykey"))

search_json_key = {
    "key": "value",
    "key2": {
        "key": "val"
    }
}

# here I am searching by iterating and comparing; instead I want to do it with Redis
for i in key_values['all_data']:
    if json.dumps(i) == json.dumps(search_json_key):
        pass  # return the match here
# mykey format looks like this:
{
"all_data": [
{
"key":"value",
"key2": {
"key": "val"
}
},
{
"key":"value",
"key2": {
"key": "val"
}
},
{
"key":"value",
"key2": {
"key": "val"
}
}
]
}
To search JSON with Redis you have two options: you can use the FT.CREATE command to create an index that you can then query with FT.SEARCH (the Redis docs show the CLI syntax, but you can call rd.ft().create_index() / rd.ft().search() from your Python script),
or you can check out the Python OM client (redis-om), which takes care of some of that for you.
Either way you'll have to rework things a bit to take full advantage of Redis' search capabilities.
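A minimal sketch of the first option, assuming Redis Stack (RediSearch + RedisJSON modules loaded) and redis-py 4.x, and reusing the data dict loaded above; the index name, key prefix, and field aliases are made up for illustration:

import redis
from redis.commands.search.field import TextField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from redis.commands.search.query import Query

rd = redis.StrictRedis(host="localhost", port=6379, db=0)

# Index each element of all_data as its own JSON document under the item: prefix.
# "idx:items", "item:" and the field aliases are illustrative names, not a fixed API.
rd.ft("idx:items").create_index(
    (
        TextField("$.key", as_name="key"),
        TextField("$.key2.key", as_name="key2_key"),
    ),
    definition=IndexDefinition(prefix=["item:"], index_type=IndexType.JSON),
)

for i, item in enumerate(data["all_data"]):
    rd.json().set(f"item:{i}", "$", item)

# Let Redis do the matching instead of looping in Python.
results = rd.ft("idx:items").search(Query('@key:value @key2_key:val'))
for doc in results.docs:
    print(doc.id)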
I am using the Python requests library to execute a GraphQL mutation. I need to pass the requests library a query parameter containing a string constructed from a Python list of dictionaries.
Python list of dictionaries looks like:
my_list_of_dicts = [{"custom_module_id": "23", "answer": "some text 2", "user_id": "111"},
{"custom_module_id": "24", "answer": "a", "user_id": "111"}]
Now I need to convert this list of dictionaries into a string that looks like this:
my_list_of_dicts = [{custom_module_id: "23", answer: "some text 2", user_id: "111"},
{custom_module_id: "24", answer: "a", user_id: "111"}]
Basically I need to get a string that looks like a Python list of dictionaries, except that the dictionary keys do not have quotation marks around them. I did this and it works:
my_query_string = json.dumps(my_list_of_dicts).replace("\"custom_module_id\"", "custom_module_id")
my_query_string = my_query_string.replace("\"answer\"", "answer")
my_query_string = my_query_string.replace("\"user_id\"", "user_id")
But I was wondering, is there a better way to achieve this? By "better" I mean some function call that turns the json/dictionary into a ready-to-use GraphQL string.
I think this may help you find your final answer: rather than building that string yourself, declare variables in the mutation and pass the Python data as the variables payload, as in the following example.
gq = """
mutation ReorderProducts($id: ID!, $moves: [MoveInput!]!) {
collectionReorderProducts(id: $id, moves: $moves) {
job {
id
}
userErrors {
field
message
}
}
}
"""
resp = self.sy_graphql_client.execute(
    query=gq,
    variables={
        "id": before_collection_meta.coll_meta.id,
        "moves": list(map(lambda mtc: {
            "id": mtc.id, "newPosition": mtc.new_position
        }, move_to_commands))
    }
)
reorder_job_id = resp["data"]["collectionReorderProducts"]["job"]["id"]
self.sy_graphql_client.wait_for_job(reorder_job_id)
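If you want to stay with plain requests instead of a dedicated client, here is a hedged sketch of the same idea; the endpoint URL, mutation name, and input type are hypothetical and have to match your actual schema:

import requests

ENDPOINT = "https://example.com/graphql"  # hypothetical endpoint

# CreateAnswers / AnswerInput are placeholders for whatever your schema defines
mutation = """
mutation CreateAnswers($answers: [AnswerInput!]!) {
  createAnswers(input: $answers) {
    id
  }
}
"""

my_list_of_dicts = [
    {"custom_module_id": "23", "answer": "some text 2", "user_id": "111"},
    {"custom_module_id": "24", "answer": "a", "user_id": "111"},
]

# requests serializes the variables for you; no manual stripping of key quotes needed
resp = requests.post(ENDPOINT, json={"query": mutation, "variables": {"answers": my_list_of_dicts}})
resp.raise_for_status()
print(resp.json())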
I have a .json file with many entries looking like this:
{
"name": "abc",
"time": "20220607T190731.442",
"id": "123",
"relatedIds": [
{
"id": "456",
"source": "sourceA"
},
{
"id": "789",
"source": "sourceB"
}
]
}
I am saving each entry in a Python object; however, I only need the related ID from source A. The problem is that the related ID from source A is not always in first place in that nested list.
So data['relatedIds'][0]['id'] is not reliable to yield the right Id.
Currently I am solving the issue like this:
import json

with open("filepath", 'r') as file:
    data = json.load(file)

for value in data['relatedIds']:
    if value['source'] == 'sourceA':
        id_from_a = value['id']
entry = Entry(data['name'], data['time'], data['id'], id_from_a)
I don't think this approach is optimal though, especially as the relatedIds list gets longer and more entries are appended to the JSON file.
Is there a more sophisticated way of singling out the 'id' value for a given source without looping through all entries in that nested list?
For a cleaner solution, you could try using python's filter() function with a simple lambda:
import json

with open("filepath", 'r') as file:
    data = json.load(file)

filtered_data = filter(lambda a: a["source"] == "sourceA", data["relatedIds"])
id_from_a = next(filtered_data)['id']
entry = Entry(data['name'], data['time'], data['id'], id_from_a)
Correct me if I misunderstand how your json file looks, but it seems to work for me.
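One caveat: next() raises StopIteration if there is no sourceA entry at all. If that can happen, give it a default, e.g. replace the next() line with:

id_from_a = next(filtered_data, {}).get('id')  # None when sourceA is missing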
One step at a time, in order to get to all entries:
>>> data["relatedIds"]
[{'id': '789', 'source': 'sourceB'}, {'id': '456', 'source': 'sourceA'}]
Next, in order to get only those entries with source=sourceA:
>>> [e for e in data["relatedIds"] if e["source"] == "sourceA"]
[{'id': '456', 'source': 'sourceA'}]
Now, since you don't want the whole entry, but just the ID, we can go a little further:
>>> [e["id"] for e in data["relatedIds"] if e["source"] == "sourceA"]
['456']
From there, just grab the first ID:
>>> [e["id"] for e in data["relatedIds"] if e["source"] == "sourceA"][0]
'456'
Can you get whatever generates your .json file to produce the relatedIds as an object rather than a list?
{
"name": "abc",
"time": "20220607T190731.442",
"id": "123",
"relatedIds": {
"sourceA": "456",
"sourceB": "789"
}
}
If not, I'd say you're stuck looping through the list until you find what you're looking for.
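If the producer can't be changed, you can still get close to that shape yourself: build the source-to-id mapping once (one pass over the list, assuming each source appears at most once) and do constant-time lookups afterwards:

related = {r["source"]: r["id"] for r in data["relatedIds"]}
id_from_a = related.get("sourceA")  # None if sourceA is absent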
I am trying to use Python to extract pricePerUnit from JSON. There are many entries, and these are just 2 of them -
{
"terms": {
"OnDemand": {
"7Y9ZZ3FXWPC86CZY": {
"7Y9ZZ3FXWPC86CZY.JRTCKXETXF": {
"offerTermCode": "JRTCKXETXF",
"sku": "7Y9ZZ3FXWPC86CZY",
"effectiveDate": "2020-11-01T00:00:00Z",
"priceDimensions": {
"7Y9ZZ3FXWPC86CZY.JRTCKXETXF.6YS6EN2CT7": {
"rateCode": "7Y9ZZ3FXWPC86CZY.JRTCKXETXF.6YS6EN2CT7",
"description": "Processed translation request in AWS GovCloud (US)",
"beginRange": "0",
"endRange": "Inf",
"unit": "Character",
"pricePerUnit": {
"USD": "0.0000150000"
},
"appliesTo": []
}
},
"termAttributes": {}
}
},
"CQNY8UFVUNQQYYV4": {
"CQNY8UFVUNQQYYV4.JRTCKXETXF": {
"offerTermCode": "JRTCKXETXF",
"sku": "CQNY8UFVUNQQYYV4",
"effectiveDate": "2020-11-01T00:00:00Z",
"priceDimensions": {
"CQNY8UFVUNQQYYV4.JRTCKXETXF.6YS6EN2CT7": {
"rateCode": "CQNY8UFVUNQQYYV4.JRTCKXETXF.6YS6EN2CT7",
"description": "$0.000015 per Character for TextTranslationJob:TextTranslationJob in EU (London)",
"beginRange": "0",
"endRange": "Inf",
"unit": "Character",
"pricePerUnit": {
"USD": "0.0000150000"
},
"appliesTo": []
}
},
"termAttributes": {}
}
}
}
}
}
The issue I run into is that the keys, which in this sample are 7Y9ZZ3FXWPC86CZY, CQNY8UFVUNQQYYV4.JRTCKXETXF, and CQNY8UFVUNQQYYV4.JRTCKXETXF.6YS6EN2CT7, are changing strings that I cannot just type out while parsing the dictionary.
I have Python code that works for the first level of these random keys -
import json

with open('index.json') as json_file:
    data = json.load(json_file)

json_keys = list(data['terms']['OnDemand'].keys())

# Get the region
for i in json_keys:
    print(data['terms']['OnDemand'][i])
However, this is tedious, as I would need to run the same code three times to get the other keys like 7Y9ZZ3FXWPC86CZY.JRTCKXETXF and 7Y9ZZ3FXWPC86CZY.JRTCKXETXF.6YS6EN2CT7, since the string changes with each JSON entry.
Is there a way that I can just tell python to automatically enter the next level of the JSON object, without having to parse all keys, save them, and then iterate through them? Using JQ in bash I can do this quite easily with jq -r '.terms[][][]'.
If you are really sure that there is exactly one key-value pair on each level, you can try the following:
def descend(x, depth):
    for i in range(depth):
        x = next(iter(x.values()))
    return x
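For example, assuming (as in the sample above) each SKU has exactly one offer term and one price dimension, you could iterate the SKUs and let descend() handle the generated keys:

for sku, offers in data['terms']['OnDemand'].items():
    term = descend(offers, 1)                  # the single offer term under this SKU
    dim = descend(term['priceDimensions'], 1)  # the single price dimension
    print(sku, dim['pricePerUnit']['USD'])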
You can use dict.values() to iterate over the values of a dict. You can also use next(iter(some_dict.values())) to get the first (and only) element of a dict.
for demand in data['terms']['OnDemand'].values():
    next_level = next(iter(demand.values()))
    print(next_level)
If you expect a number of children other than 1 at the second level, you can just nest the for loops:
for demand in data['terms']['OnDemand'].values():
    for sub_demand in demand.values():
        print(sub_demand)
If you are interested in the keys too, you can use the dict.items() method to iterate over keys and values at the same time:
for demand_key, demand in data['terms']['OnDemand'].items():
    for sub_demand_key, sub_demand in demand.items():
        print(demand_key, sub_demand_key, sub_demand)
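Putting the nested loops together for the structure in the question, a short sketch that reaches every pricePerUnit without typing any of the generated keys:

for sku_offers in data['terms']['OnDemand'].values():
    for offer in sku_offers.values():
        for dim in offer['priceDimensions'].values():
            print(dim['pricePerUnit']['USD'])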
I'm pretty new to Python, so just working my way through understanding the data sets.
I'm having a little trouble producing the JSON output that is required for the API I am working with.
I am using
import json
json.load(data_file)
working with a Python dictionary, and then doing
json.dump(dict, json_data)
My data needs to look like the following when it is output.
{
"event":{
"id":10006,
"event_name":"My Event Name",
},
"sub event":[
],
"attendees":[
{
"id":11201,
"first_name":"Jeff",
"last_name":"Smith",
},
{
"id":10002,
"first_name":"Victoria",
"last_name":"Baker",
},
]
}
I have been able to create the arrays in Python and dump them to JSON, but I am having difficulty creating the event "object" in the dictionary. I am using the code below:
attendees = ['attendees']
attendeesdict = {}
attendeesdict['first_name'] = "Jeff"
attendees.append(attendeesdict.copy())
Can anyone help me add the "event" object properly?
In general, going from JSON to dictionary is almost no work because the two are very similar, if not identical:
attendees = [
{
"first_name": "Jeff"
# Add other fields which you need here
},
{
"first_name": "Victoria"
}
]
In this instance, attendees is a list of dictionaries. For the event:
event = {
"id": 10006,
"event_name": "My Event Name"
}
Putting it all together:
data = {
"event": event,
"sub event": [],
"attendees": attendees
}
Now, you can convert it to a JSON object, ready to send to your API:
json_object = json.dumps(data)
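If the API expects a file rather than a string (matching the json.dump(dict, json_data) call in the question), the same data can be written out directly; the filename here is just an example:

import json

with open("event_payload.json", "w") as fh:
    json.dump(data, fh, indent=2)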
Assuming you have built all the values elsewhere and now you're just putting them together:
result = {'event':event_dict, 'sub event':subevent_list, 'attendees':attendees_list}
If you just want to statically create a nested dict, you can use a single literal: paste the JSON above into Python code and you get a valid dict literal.
Construct your dicts and add them like below:
{
"event": "add your dict",
"sub event": ["add your dict"],
"attendees": ["add your dict"]
}