If I have JSON with duplicate keys, each holding a different value, how can I extract both in Python?
For example:
{
    "posting": {
        "content": "stuff",
        "timestamp": "123456789"
    },
    "posting": {
        "content": "weird stuff",
        "timestamp": "93828492"
    }
}
If I wanted to grab both timestamps, how would I do so?
I tried a = json.loads(json_str) followed by a['posting']['timestamp'], but that only returns one of the values.
You can't have duplicate keys. You can change the object to an array instead:
[
    {
        "content": "stuff",
        "timestamp": "123456789"
    },
    {
        "content": "weird stuff",
        "timestamp": "93828492"
    }
]
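With the data restructured as an array, both timestamps can be read with a plain list comprehension. A minimal sketch (the JSON string is just the example above):

```python
import json

s = '''[
    {"content": "stuff", "timestamp": "123456789"},
    {"content": "weird stuff", "timestamp": "93828492"}
]'''

postings = json.loads(s)
timestamps = [p["timestamp"] for p in postings]
print(timestamps)  # ['123456789', '93828492']
```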
Duplicate keys actually overwrite the previous entry. Instead, maintain an array under that key. An example of such JSON is below:
{
    "posting": [
        {
            "content": "stuff",
            "timestamp": "123456789"
        },
        {
            "content": "weird stuff",
            "timestamp": "93828492"
        }
    ]
}
After loading it with data = json.loads(...), you can access the different elements under the posting key like this:
data['posting'][0], data['posting'][1]
As has already been covered: it is against the standard, and the outcome across systems is undefined, so avoid duplicate keys.
Yet, if a third-party software component forces this upon you, note the section about this topic in the standard library docs: https://docs.python.org/3/library/json.html#repeated-names-within-an-object
By default, this module does not raise an exception; instead, it ignores all but the last name-value pair for a given name [...] The object_pairs_hook parameter can be used to alter this behavior.
So let's do it!
import itertools, json

def duplicate_object_pairs_hook(pairs):
    # Extract the key from a (key, value) pair.
    def _key(pair):
        (k, v) = pair
        return k

    # Group adjacent pairs with the same key; yield a single value for
    # unique keys and a list of values for duplicated keys.
    def gpairs():
        for (k, group) in itertools.groupby(pairs, _key):
            ll = [v for (_, v) in group]
            (v, *extra) = ll
            yield (k, ll if extra else v)

    return dict(gpairs())

badj = """{
    "posting": {"content": "stuff", "timestamp": "123456789"},
    "posting": {"content": "weird stuff", "timestamp": "93828492"}
}"""
data = json.loads(badj, object_pairs_hook=duplicate_object_pairs_hook)
Now data evaluates to:
{
'posting': [
{'content': 'stuff', 'timestamp': '123456789'},
{'content': 'weird stuff', 'timestamp': '93828492'},
],
}
Remember that this hook is called for every JSON object parsed, receiving the list of key-value tuples for that object. The default behavior is equivalent to passing that list straight to the dict constructor.
Also, I assumed duplicate keys are adjacent, as that's my use case, but you might have to sort the pairs before grouping them.
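If the duplicates are not adjacent, one possible sketch (my own variant, not something the quoted docs prescribe) is to sort the pairs before grouping; sorted() is stable, so values for the same key keep their original relative order:

```python
import itertools
import json

def sorted_duplicate_pairs_hook(pairs):
    # Sort by key so groupby sees all duplicates together; the sort is
    # stable, so values for equal keys keep their original order.
    pairs = sorted(pairs, key=lambda kv: kv[0])
    result = {}
    for key, group in itertools.groupby(pairs, key=lambda kv: kv[0]):
        values = [v for _, v in group]
        result[key] = values if len(values) > 1 else values[0]
    return result

nonadjacent = '{"b": 1, "a": 2, "b": 3}'
data = json.loads(nonadjacent, object_pairs_hook=sorted_duplicate_pairs_hook)
print(data)  # {'a': 2, 'b': [1, 3]}
```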
I have two JSON files. One contains a fully defined object with multiple levels of nesting; the other contains a stripped-back version of the same object that lists just the elements that need to be changed.
File 1 example
{
"toplevel": {
"value": {
"settings": [
{
"name": "A Default Value",
"region": "US",
"inner": {
"name": "Another Default",
"setting": "help"
}
}
]
}
}
}
File 2 example
{
"toplevel": {
"value": {
"settings": [
{
"name": "A Real Value",
"inner": {
"name": "Another Real Value",
}
}
]
}
}
}
I want to merge the updates from file 2 into file 1.
my output should look like
{
"toplevel": {
"value": {
"settings": [
{
"name": "A Real Value",
"region": "US",
"inner": {
"name": "Another Real Value",
"setting": "help"
}
}
]
}
}
}
So far I've tried:
f1 = json.load(file1)
f2 = json.load(file2)
f1['toplevel']['value']['settings'][0].update(f2['toplevel']['value']['settings'][0])
It works perfectly for the top-level items, but obviously it overwrites the whole of the "inner" object, removing the "setting" key inside it.
Is there a way to traverse the whole tree and replace only the non-dictionary values? I don't have access to external libraries other than json and collections (for OrderedDict).
It depends slightly on what you want
Solution 1
If you simply want to replace all values by the new dictionary, you can use the following options:
result = {**file_1, **file_2}
from pprint import pprint
pprint(result)
This will result in:
{'toplevel': {'value': {'settings': [{'inner': {'name': 'Another Real Value'},
'name': 'A Real Value'}]}}}
Alternatively you can use
file_1.update(file_2)
pprint(file_1)
Which will lead to the same outcome, but will update file_1 in place.
Solution 2
If you only want to update the specific key in the nesting, and leave all other values intact, you can do this using recursion. In your example you are using dict, list and str values. So I will build the recursion using the same types.
def update_dict(original, update):
    for key, value in update.items():
        # Add new key values
        if key not in original:
            original[key] = update[key]
            continue
        # Update the old key values with the new key values
        if isinstance(value, dict):
            update_dict(original[key], update[key])
        if isinstance(value, list):
            update_list(original[key], update[key])
        if isinstance(value, (str, int, float)):
            original[key] = update[key]
    return original


def update_list(original, update):
    # Make sure the order is equal, otherwise it is hard to compare the items.
    assert len(original) == len(update), "Can only handle equal length lists."
    for idx, (val_original, val_update) in enumerate(zip(original, update)):
        if not isinstance(val_original, type(val_update)):
            raise ValueError(f"Different types! {type(val_original)}, {type(val_update)}")
        if isinstance(val_original, dict):
            original[idx] = update_dict(original[idx], update[idx])
        if isinstance(val_original, (tuple, list)):
            original[idx] = update_list(original[idx], update[idx])
        if isinstance(val_original, (str, int, float)):
            original[idx] = val_update
    return original
The above might be a bit harder to understand, but I will try to explain it.
There are two methods, one which will merge two dictionaries and one that tries to merge two lists.
Merging dictionaries
In order to merge the two dictionaries I go over all the keys and values of the update dictionary, because this will probably be the smaller of the two.
The first block puts new keys in the original dictionary, adding values that weren't in the original dictionary at the start.
The second block is updating the nested values. There I distinguish three cases:
If the value is another dict, run the dictionary merge again, but one level deeper.
If the value is a list (or tuple), run the list merge function.
If the value is a str (or int, float), replace the original value with the updated value.
Merging lists
This is a bit trickier than dictionaries, because lists have no keys to match elements on. Therefore I have to make a heavy assumption: the list update always contains the same number of elements; see Limitations for how to skip elements you don't want to change.
Since the lists are of the same length, I can assume that matching indices correspond to matching elements. In order to merge the values we do the following:
Make sure that the value types are the same; otherwise we throw an error, since I am not sure how to handle that case.
If the values are dictionaries, use the merging of dictionaries.
If the values are a list (or tuple), use the list merging.
If the values are a str (or int, float), overwrite the original in place.
Result
using:
from pprint import pprint
pprint(update_dict(file_1, file_2))
The final result will be:
{'toplevel': {'value': {'settings': [{'inner': {'name': 'Another Real Value',
'setting': 'help'},
'name': 'A Real Value',
'region': 'US'}]}}}
Note that in contrast with the first solution, the values 'setting': 'help' and 'region': 'US' are still present in the merged dictionary.
Limitations
Due to the same-length constraint, if you do not want to update an element in the list you have to pass the same element type, but empty.
Example of how to ignore a list update:
... {'settings': [
        {},                      # do not update the first element
        {'name': 'A new name'}   # update the second element
    ]
}
I want to fetch data from somewhere such as a news website, but the data always changes over time. This took me a lot of time and I asked here before without getting a clear answer, so here is the problem in full.
My task is to create a JSON file that saves all of the data, both old and new: the old data will appear in old.html and the new data in news.html. For that I need a dictionary, but when a request comes in with a key that already exists, Python overrides the old key. Instead, I want to keep the existing key and add the new values into its object. For instance:
d = {
    "2020-12-16": {
        "name": "Joe"
    }
}
The above example is a simple dict, but if I reload the page and the new request contains another object with the key "2020-12-16", it overrides the first one. I want to keep the existing key and only add values that are actually new.
Also, the values of the dict should not be repeated. How can I do that?
Sorry for the long question, and thanks in advance.
Is this what you mean?
d = {
"2020-12-16": {
"name": "Joe"
}
}
d2 = {
"2020-12-16": {
"name2": "Smith"
}
}
for key, val in d2.items():
    new_dict = d.get(key, {})
    new_dict.update(val)
    d[key] = new_dict
print(d)
Will output:
{'2020-12-16': {'name': 'Joe', 'name2': 'Smith'}}
EDIT:
If you would like each dict to remain independent you need your entries in d to be a list of dicts:
d = {
"2020-12-16": [{
"name": "Joe"
}]
}
d2 = {
"2020-12-16": {
"name": "Smith"
}
}
for key, val in d2.items():
    entry = d.get(key, [])
    entry.append(val)
    d[key] = entry  # store back, needed when the key wasn't in d yet
print(d)
Output:
{'2020-12-16': [{'name': 'Joe'}, {'name': 'Smith'}]}
Aim
For an assignment I need to put some building information (geometries and their properties) into a GeoJSON data structure.
Approach
My general approach is as follows:
create an empty dictionary with the necessary GeoJSON data structure and append the data to this structure (see below):
output_buildings = {
'type': "FeatureCollection",
'features': [
{
'type': "Feature",
'geometry': {
'type': ,
'coordinates':
},
'properties': {
'building_id': ,
'building_year': ,
'building_status': ,
'building_occurance':
}
}
]
}
Issue
I know how to create a simple empty dictionary, but my problem is that I don't know how to create a key structure without values (as I want to append those values later on). I receive a syntax error at type (the second one, within features), at the trailing ,.
Previous efforts
I found this topic on StackOverflow:
Method for Creating a Nested Dictionary from a List of Keys;
but it doesn't fit my purpose.
I searched in the python docs with terms like "create empty nested dictionary", "create dictionary with empty values", but didn't find what I was looking for.
I tried to use placeholders (%s / %d), but they are not suitable for this purpose
Finally I tried to use pass (didn't work).
Question
I haven't found a solution yet. Can you please provide me with some suggestions?
Thanks in advance!
Your current dictionary structure is invalid Python, since every key must have a value.
You can get around this by filling in None placeholders for keys that don't have a designated value yet. The key coordinates can be set to an empty list, however, since it can hold multiple coordinates.
Valid GeoJSON structure:
output_buildings = {
'type': "FeatureCollection",
'features': [
{
'type': "Feature",
'geometry': {
'type': None,
'coordinates': []
},
'properties': {
'building_id': None,
'building_year': None,
'building_status': None,
'building_occurance': None
}
}
]
}
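The placeholders can then be filled in later. A minimal sketch, where the Polygon type, the coordinate ring, and the property value are invented purely for illustration:

```python
# The valid placeholder structure from above.
output_buildings = {
    'type': "FeatureCollection",
    'features': [
        {
            'type': "Feature",
            'geometry': {'type': None, 'coordinates': []},
            'properties': {
                'building_id': None,
                'building_year': None,
                'building_status': None,
                'building_occurance': None
            }
        }
    ]
}

# Fill in the placeholders once the real data is available.
feature = output_buildings['features'][0]
feature['geometry']['type'] = 'Polygon'                    # hypothetical geometry type
feature['geometry']['coordinates'].append(
    [[4.3, 52.0], [4.4, 52.0], [4.4, 52.1], [4.3, 52.0]])  # invented ring
feature['properties']['building_id'] = 42                  # invented value
```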
I am a newbie to programming and need some input/direction to write clean code.
I have 10 EC2 instances, and each instance has a Tag which contains a dictionary of 3 key/value pairs. Some instances have the same keys and a few may have different keys; I want to find out which instances have different keys within the Tag.
Comparing every key with the other 9 instances' keys is not the best way to go, I think.
Please let me know how to approach this issue, and do I need to use the json module to parse the data?
Here is the example of a single instance, I have 10 of these.
"tags": [
{
"depid": 18,
"key": "sales",
"value": "31"
},
{
"depid": 239,
"key": "eng",
"value": "steve"
},
Is it what you were looking for?
data = {'tags': [{'key': 'key1', 'value': 'value1'},
                 {'key': 'key2', 'value': 'value2'}]}
keys = {tag['key'] for tag in data['tags']}
required_keys = {'key1', 'key2'}
print(keys == required_keys)  # check whether it matches exactly
print(keys >= required_keys)  # check whether it contains all required keys
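To avoid comparing every instance against the other nine, one possible sketch is to group the instances by the frozenset of their tag keys, so any outliers fall into their own group. The instance data below is invented for illustration:

```python
from collections import defaultdict

# Hypothetical instances, shaped like the "tags" example above.
instances = {
    "i-001": {"tags": [{"key": "sales", "value": "31"}, {"key": "eng", "value": "steve"}]},
    "i-002": {"tags": [{"key": "sales", "value": "12"}, {"key": "eng", "value": "ann"}]},
    "i-003": {"tags": [{"key": "sales", "value": "7"}, {"key": "hr", "value": "bob"}]},
}

# Group instance ids by their set of tag keys; each distinct key set
# becomes one bucket, so odd instances stand out immediately.
by_keys = defaultdict(list)
for instance_id, info in instances.items():
    key_set = frozenset(tag["key"] for tag in info["tags"])
    by_keys[key_set].append(instance_id)

for key_set, ids in by_keys.items():
    print(sorted(key_set), ids)
```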
I have my JSON code below being stored in jso variable.
jso = {
"GlossDiv": {
"title": "S",
"GlossList": {
"GlossEntry": {
"Abbrev": "ISO 8879:1986",
"GlossDef": {
"GlossSeeAlso": ["GML", "XML"]
},
"GlossSee": "markup"
}
}
}
}
Whenever I try to fetch the data or iterate over the JSON object, it prints the data in the reverse order, i.e. the nested object first and then the other parameters.
For example, I execute:
>>> for k,v in jso.iteritems():
... print v
...
The output I got:
{'GlossList': {'GlossEntry': {'Abbrev': 'ISO 8879:1986', 'GlossDef': {'GlossSeeAlso': ['GML', 'XML']}, 'GlossSee': 'markup'}}, 'title': 'S'}
It can be seen that although 'title': 'S' was written before the 'GlossList' object, the data still prints in the reverse order. It should have been:
Expected output:
{ 'title': 'S', 'GlossList': {'GlossEntry': {'Abbrev': 'ISO 8879:1986', 'GlossDef': {'GlossSeeAlso': ['GML', 'XML']}, 'GlossSee': 'markup'}}}
Dictionaries in Python 2 (and in Python before 3.7) are unordered collections:
It is best to think of a dictionary as an unordered set of key: value
pairs, with the requirement that the keys are unique (within one
dictionary).
But if you've loaded the JSON from a string, you can load it directly into an OrderedDict, see:
Can I get JSON to load into an OrderedDict in Python?
(Note that from Python 3.7 onward, regular dicts also preserve insertion order.)
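A minimal sketch of that approach (the JSON string is a shortened version of the example above):

```python
import json
from collections import OrderedDict

s = '{"title": "S", "GlossList": {"GlossEntry": {"GlossSee": "markup"}}}'
# object_pairs_hook receives the key-value pairs in source order,
# so OrderedDict preserves the order they appear in the JSON text.
data = json.loads(s, object_pairs_hook=OrderedDict)
print(list(data.keys()))  # ['title', 'GlossList'] -- source order kept
```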