Need help converting a Ruby function to a Python function

I'm trying to write a Python function that converts lists (arrays of objects, in ELK terms) into dictionaries. I found a sample Ruby function that does this and I'm trying to convert it to Python for my use, but I'm having a hard time getting the right output. The output will be inserted back into Elasticsearch.
Ruby function (found on the internet):
def arrays_to_hash(h)
  h.each do |k, v|
    # If v is nil, an array is being iterated and the value is k.
    # If v is not nil, a hash is being iterated and the value is v.
    value = v || k
    if value.is_a?(Array)
      # "value" is replaced with "value_hash" later.
      value_hash = {}
      value.each_with_index do |v, i|
        value_hash[i.to_s] = v
      end
      h[k] = value_hash
    end
    if value.is_a?(Hash) || value.is_a?(Array)
      arrays_to_hash(value)
    end
  end
end
Python function I'm trying - looking at the output, I can see that the first list inside the dictionary gets converted, but the nested list inside it is still present:
def array_path(my_dict):
    for k, v in my_dict.items():
        if isinstance(v, list):
            print(len(v))
            for i, item in enumerate(v):
                my_dict2[str(i)] = item
            my_dict[k] = my_dict2
        elif isinstance(v, dict):
            array_path(v)
        else:
            my_dict[k] = v
Input
{
    "foo": "bar",
    "test": {
        "steps": [
            {
                "response_time": "100"
            },
            {
                "response_time": "101",
                "more_nested": [
                    {
                        "hello": "world"
                    },
                    {
                        "hello2": "world2"
                    }
                ]
            }
        ]
    }
}
Expected Output
{
    "foo": "bar",
    "test": {
        "steps": {
            "0": {
                "response_time": "100"
            },
            "1": {
                "response_time": "101",
                "more_nested": {
                    "0": {
                        "hello": "world"
                    },
                    "1": {
                        "hello2": "world2"
                    }
                }
            }
        }
    }
}
Current Output
{'0': {'response_time': '100'},
'1': {'more_nested': [{'hello': 'world'}, {'hello2': 'world2'}],
'response_time': '101'}}

The original script stopped its checks at the list itself, with no handling for a list of dicts. The recursive version below looks OK now:
def array_path(my_dict):
    if type(my_dict) is dict:
        for k, v in my_dict.items():
            my_dict[k] = array_path(v)
    elif type(my_dict) is list:
        return {str(i): array_path(item) for i, item in enumerate(my_dict)}
    return my_dict
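As a quick sanity check (assuming the question's input is stored in a variable data), the fixed function produces the expected output:
data = {
    "foo": "bar",
    "test": {
        "steps": [
            {"response_time": "100"},
            {"response_time": "101",
             "more_nested": [{"hello": "world"}, {"hello2": "world2"}]},
        ]
    }
}

print(array_path(data))
# {'foo': 'bar', 'test': {'steps': {'0': {'response_time': '100'},
#  '1': {'response_time': '101', 'more_nested': {'0': {'hello': 'world'},
#  '1': {'hello2': 'world2'}}}}}}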

Related

Having an issue parsing through this json in python

I have created a var that is equal to t.json. The JSON file is as follows:
{
    "groups": {
        "customerduy": {
            "nonprod": {
                "name": "customerduynonprod",
                "id": "529646781943",
                "owner": "cloudops#coerce.com",
                "manager_email": ""
            },
            "prod": {
                "name": "phishing_duyaccountprod",
                "id": "241683454720",
                "owner": "cloudops#coerce.com",
                "manager_email": ""
            }
        },
        "customerduyprod": {
            "nonprod": {
                "name": "phishing_duyaccountnonprod",
                "id": "638968214142",
                "owner": "cloudops#coerce.com",
                "manager_email": ""
            }
        },
        "ciasuppliergenius": {
            "prod": {
                "name": "ciasuppliergeniusprod",
                "id": "220753788760",
                "owner": "cia_developers#coerce.com",
                "manager_email": "jarks#coerce.com"
            }
        }
    }
}
My goal was to parse this JSON file, get the value of "owner", and output it to a new var. Example below:
t.json = group_map
group_id_aws = group(
    group.upper(),
    "accounts",
    template,
    owner = group_map['groups']['prod'],
    manager_description = "Groups for teams to access their product accounts.",
The error I keep getting is: KeyError: 'prod'
'owner' occurs 4 times, so here is how to get all of them:
import json

# read the json
with open("C:\\test\\test.json") as f:
    data = json.load(f)

# get all 4 occurrences
owner_1 = data['groups']['customerduy']['nonprod']['owner']
owner_2 = data['groups']['customerduy']['prod']['owner']
owner_3 = data['groups']['customerduyprod']['nonprod']['owner']
owner_4 = data['groups']['ciasuppliergenius']['prod']['owner']

# print results
print(owner_1)
print(owner_2)
print(owner_3)
print(owner_4)
the result:
cloudops#coerce.com
cloudops#coerce.com
cloudops#coerce.com
cia_developers#coerce.com
You get a KeyError since the key 'prod' is not directly under 'groups'.
What you have is
group_map['groups']['customerduy']['prod']
group_map['groups']['ciasuppliergenius']['prod']
So you will have to extract the 'owner' from each element in the tree:
def s(d, t):
    for k, v in d.items():
        if t == k:
            yield v
        try:
            for i in s(v, t):
                yield i
        except:
            pass
print(','.join(s(j,'owner')))
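Here j is assumed to be the already-loaded JSON, e.g. (the file path is illustrative):
import json

with open("test.json") as f:
    j = json.load(f)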
If your JSON is loaded into the variable data, you can use a recursive function that deals with the two container types (dict and list) that can occur in a JSON file:
def find_all_values_for_key(d, key, result):
    if isinstance(d, dict):
        if key in d:
            result.append(d[key])
            return
        for k, v in d.items():
            find_all_values_for_key(v, key, result)
    elif isinstance(d, list):
        for elem in d:
            find_all_values_for_key(elem, key, result)

owners = []
find_all_values_for_key(data, 'owner', owners)
print(f'{owners=}')
which gives:
owners=['cloudops#coerce.com', 'cloudops#coerce.com', 'cloudops#coerce.com', 'cia_developers#coerce.com']
This way you don't have to bother with the names of intermediate keys, or in general the structure of your JSON file.
You don't have any lists in your example, but it is trivial to recurse through
them to any dict with an owner key that might "lurk" somewhere nested
under a list element, so it is better to deal with potential future changes
to the JSON.

unable to update JSON using python

I am trying to update the transaction ID in the following JSON:
{
    "locationId": "5115",
    "transactions": [
        {
            "transactionId": "1603804404-5650",
            "source": "WEB"
        }
    ]
}
I have written the following code for this, but it does not update the transaction ID; instead it inserts a new transactionId at the end of the object:
try:
    session = requests.Session()
    with open("sales.json", "r") as read_file:
        payload = json.load(read_file)
    payload["transactionId"] = random.randint(0, 5)
    with open("sales.json", "w") as read_file:
        json.dump(payload, read_file)
Output:
{
    "locationId": "5115",
    "transactions": [
        {
            "transactionId": "1603804404-5650",
            "source": "WEB"
        }
    ],
    "transactionId": 1
}
Expected output:
{
    "locationId": "5115",
    "transactions": [
        {
            "transactionId": "1",
            "source": "WEB"
        }
    ]
}
This would do it, but only in your specific case:
payload["transactions"][0]["transactionId"] = xxx
There should be error handling for cases where the "transactions" key is not in the dict, there are no records, or there is more than one record. Also, you will need to assign str(your_random_number) rather than the int if you want the value to be a string, as the desired output suggests.
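A minimal sketch of that guarded update (variable names follow the question; the exact error handling is up to you):
import random

transactions = payload.get("transactions") or []
if len(transactions) == 1:
    # str(...) so the stored value stays a string, matching the desired output
    transactions[0]["transactionId"] = str(random.randint(0, 5))
else:
    raise ValueError(f"expected exactly one transaction, found {len(transactions)}")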
If you just want to find the transactionId key and you don't know exactly where it may be, you can do:
from collections.abc import Mapping

def update_key(key, new_value, jsondict):
    new_dict = {}
    for k, v in jsondict.items():
        if isinstance(v, Mapping):
            # Recursively traverse if value is a dict
            new_dict[k] = update_key(key, new_value, v)
        elif isinstance(v, list):
            # Traverse through all values of the list;
            # recursively traverse if an element is a dict
            new_dict[k] = [update_key(key, new_value, innerv) if isinstance(innerv, Mapping) else innerv
                           for innerv in v]
        elif k == key:
            # This is the key to replace with the new value
            new_dict[k] = new_value
        else:
            # Just a regular value, assign to the new dict
            new_dict[k] = v
    return new_dict
Given a dict:
{
    "locationId": "5115",
    "transactions": [
        {
            "transactionId": "1603804404-5650",
            "source": "WEB"
        }
    ]
}
You can do:
>>> update_key('transactionId', 5, d)
{'locationId': '5115', 'transactions': [{'transactionId': 5, 'source': 'WEB'}]}
Yes, because transactionId is inside the transactions node. So your code should be:
payload["transactions"][0]["transactionId"] = random.randint(0, 5)

Accessing a nested node in a json dictionary with an attribute [duplicate]

I have a dictionary which contains the following json elements.
myjsonDictionary = \
{
    "Teams": {
        "TeamA": {
            "#oid": "123.0.0.1",
            "dataRequestList": {
                "state": {
                    "#default": "0",
                    "#oid": "2"
                }
            },
            "TeamSub": {
                "#oid": "3",
                "dataRequestList": {
                    "state": {
                        "#default": "0",
                        "#oid": "2"
                    }
                }
            }
        },
        # ....many nested layers
    }
}
I have the following issue and am currently very confused about how to solve it.
I want to be able to parse this dictionary and, when I request a key such as "TeamA" or "TeamSub", get the concatenation of the "#oid" values on the way to that key and the "#oid" under its dataRequestList state.
I have a function with the signature gettheiDLevelConcatoid(myjsonDictionary, key), and I can call it like this:
gettheiDLevelConcatoid(myjsonDictionary, key) where key is "TeamA"
The expected output should be "123.0.0.1.2". Note the 2 appended to the 123.0.0.1.
gettheiDLevelConcatoid(myjsonDictionary, key) where key is "TeamSub"
Output is "123.0.0.1.3.2". Note the "3.2" added to the "123.0.0.1".
My current implementation:
def gettheiDLevelConcatoid(myjsonDictionary, key):
    for item in myjsonDictionary:
        if item == key:
            # not sure what to do
I am so lost on how to implement a generic method or approach for this.
With recursive traversal for specific keys:
def get_team_idlvel_oid_pair(d, search_key):
    for k, v in d.items():
        if k.startswith('Team'):
            if k == search_key:
                return '{}{}.{}'.format(d['#oid'] + '.' if '#oid' in d else '',
                                        v['#oid'], v['dataRequestList']['state']['#oid'])
            elif any(k_.startswith('Team') for k_ in v):
                return get_team_idlvel_oid_pair(v, search_key)

print(get_team_idlvel_oid_pair(myjsonDictionary['Teams'], 'TeamA'))
print(get_team_idlvel_oid_pair(myjsonDictionary['Teams'], 'TeamSub'))
Sample output:
123.0.0.1.2
123.0.0.1.3.2

How to sort all lists in a deeply nested dictionary in python?

I want to sort all lists within a deeply nested dictionary. It is basically a JSON object with deep nesting of dictionaries within lists and lists within dictionaries. All I want to do is traverse all dictionary keys down to the leaf nodes and sort every list I encounter on the way. Any list, whether directly available or deep down within the given dictionary object, should get sorted, and the same dictionary with all its lists sorted should be returned.
I tried recursing over the dict object, passing any dict I encountered back into the recursive method and sorting lists as I found them, but this fails when there is a dict inside a list and then another list inside that dict object.
Sample JSON below:
my_json = {
    a: {
        b: {
            c: [
                {
                    d: [
                        { f: 'some_string' }
                    ]
                },
                {
                    e: {
                        g: [
                            h: 'another string'
                        ]
                    }
                }
            ]
        }
    }
    z: [
        b: {
            c: [
                {
                    d: [
                        { f: 'some_string1' }
                    ]
                },
                {
                    e: {
                        g: [
                            h: 'another string1'
                        ]
                    }
                }
            ]
        },
        x: {
            c: [
                {
                    d: [
                        { f: 'some_string2' }
                    ]
                },
                {
                    e: {
                        g: [
                            h: 'another string2'
                        ]
                    }
                }
            ]
        }
    ]
}
def gen_dict_extract(input_dict):
    result_obj = input_dict;
    if hasattr(var, 'iteritems'):
        for k, v in var.iteritems():
            if isinstance(v, dict):
                for result in gen_dict_extract(v):
                    yield result
            elif isinstance(v, list):
                v.sort();
                for d in v:
                    for result in gen_dict_extract(d):
                        yield result
The output expectation is just to have all lists sorted irrespective of where they lie. I am even okay with sorting every item in the dictionary but list sorting is what I require.
Taking a smaller example here to explain the output:
old_json = {
    'x': [
        {
            'z': {
                'y': ['agsd', 'xef', 'sdsd', 'erer']
            }
        },
        {
            's': {
                'f': 'ererer',
                'd': [5, 6, 2, 3, 1]
            }
        }
    ]
}
new_json = {
    'x': [
        {
            's': {
                'f': 'ererer',
                'd': [1, 2, 3, 5, 6]
            }
        },
        {
            'z': {
                'y': ['agsd', 'erer', 'sdsd', 'xef']
            }
        }
    ]
}
Something like above.
If you want the output to be a different dictionary (i.e. not sorting the original), the function should be written like this:
def sortedDeep(d):
    if isinstance(d, list):
        return sorted(sortedDeep(v) for v in d)
    if isinstance(d, dict):
        return {k: sortedDeep(d[k]) for k in sorted(d)}
    return d
This way you can use sortedDeep() the same way you would use the built-in sorted() function:
new_json = sortedDeep(old_json)
[EDIT] Improved version that will also sort lists of dictionaries (or list of lists) based on the smallest key/value of the embedded object:
def sortedDeep(d):
    def makeTuple(v): return (*v,) if isinstance(v, (list, dict)) else (v,)
    if isinstance(d, list):
        return sorted(map(sortedDeep, d), key=makeTuple)
    if isinstance(d, dict):
        return {k: sortedDeep(d[k]) for k in sorted(d)}
    return d
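For instance, applied to the old_json example from the question, this improved version produces (up to dict key ordering) the expected new_json:
new_json = sortedDeep(old_json)
print(new_json)
# {'x': [{'s': {'d': [1, 2, 3, 5, 6], 'f': 'ererer'}},
#        {'z': {'y': ['agsd', 'erer', 'sdsd', 'xef']}}]}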
I believe the code snippet here will do the job for sorting nested dictionaries.
def nested_sort(d: dict):
    for v in d.values():
        if isinstance(v, dict):
            nested_sort(v)
        elif isinstance(v, list):
            v.sort()
However, I cannot test the code because the example you gave is not legal JSON or a legal Python dictionary.

Why isn't this nested dict comprehension working in Python?

nested_dict = { b: { a: some_other_source_dict[b][a] or {} for a in a_list } for b in b_list }
If some_other_source_dict[b][a] exists, the correct output should be:
nested_dict = { b_key_1: { a_key_1: a_val_1, a_key_2: a_val_2 },
b_key_2: { a_key_1: a_val_3, a_key_2: a_val_4 } }
If it doesn't exist, the output should be:
nested_dict = { b_key_1: { a_key_1: {}, a_key_2: {} },
b_key_2: { a_key_1: {}, a_key_2: {} } }
some_other_source_dict[b][a] doesn't return a falsy value if it doesn't exist; it just raises an error. You want something like { a: some_other_source_dict[b][a] for a in a_list } if "some_other_source_dict" in globals() else {}. Preferably, you should have some way of determining whether or not it's defined without needing to check globals().
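A minimal demonstration of that first point, with made-up data: a missing key raises KeyError before the or {} fallback ever gets a chance to apply.
some_other_source_dict = {'b_key_1': {'a_key_1': 'a_val_1'}}

try:
    value = some_other_source_dict['b_key_1']['a_key_2'] or {}
except KeyError as e:
    print('KeyError:', e)   # KeyError: 'a_key_2'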
