Due to some poor planning, I have a script that expects a Python dict with certain keys; however, the other script that creates this dict uses a different naming convention.
Unfortunately, due to translations that have already taken place, it looks like I'll need to convert the dict keys.
Basically go from
{'oldKey':'data'}
to
{'newKey':'data'}
I was thinking of creating a dict:
{'oldKey':'newKey'}
and iterating through the dict to convert from oldKey to newKey. However, is this the most efficient/Pythonic way to do it?
I can think of a couple of ways to do this which use dictionaries, but one of them might be more efficient depending on how many of the keys are actually used.
a) With a dictionary comprehension:
old_dict = {'oldkey1': 'val1', 'oldkey2': 'val2',
            'oldkey3': 'val3', 'oldkey4': 'val4',
            'oldkey5': 'val5'}

key_map = {'oldkey1': 'newkey1', 'oldkey2': 'newkey2',
           'oldkey3': 'newkey3', 'oldkey4': 'newkey4',
           'oldkey5': 'newkey5'}

new_dict = {newkey: old_dict[oldkey] for (oldkey, newkey) in key_map.items()}
print(new_dict['newkey1'])
b) With a simple class that does the mapping. (Note that I have switched the order of key/value in key_map in this example.) This might be more efficient because it will use lazy evaluation - no need to iterate through all the keys - which may save time if not all the keys are used.
class DictMap(object):
    def __init__(self, key_map, old_dict):
        self.key_map = key_map
        self.old_dict = old_dict

    def __getitem__(self, key):
        return self.old_dict[self.key_map[key]]

key_map = {'newkey1': 'oldkey1',
           'newkey2': 'oldkey2',
           'newkey3': 'oldkey3',
           'newkey4': 'oldkey4',
           'newkey5': 'oldkey5'}

new_dict2 = DictMap(key_map, old_dict)
print(new_dict2['newkey1'])
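Note that DictMap as written only supports item access. If you need the lazy wrapper to behave like a full read-only dict (iteration, len(), .keys() and so on), one option, shown here as a minimal sketch rather than as part of the original answer, is to subclass collections.abc.Mapping, which derives the rest of the mapping API from three required methods:

from collections.abc import Mapping

class LazyKeyMap(Mapping):
    """Hypothetical read-only view of old_dict under renamed keys."""
    def __init__(self, key_map, old_dict):
        self.key_map = key_map      # maps new key -> old key
        self.old_dict = old_dict

    def __getitem__(self, key):
        return self.old_dict[self.key_map[key]]

    def __iter__(self):
        # Iterate over the new key names.
        return iter(self.key_map)

    def __len__(self):
        return len(self.key_map)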
This will solve your problem:
new_dict = {key_map[oldkey]: vals for (oldkey, vals) in old_dict.items()}
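One caveat: if old_dict can contain keys that are missing from key_map, the comprehension above raises a KeyError. A minimal variant, assuming unmapped keys should simply pass through unchanged (an assumption, not part of the original answer), uses .get with the old key as the default:

# Keep any key that has no entry in key_map instead of raising KeyError.
new_dict = {key_map.get(oldkey, oldkey): vals
            for (oldkey, vals) in old_dict.items()}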
Sorry if this is trivial, I'm still learning, but I have a list of dictionaries that looks as follows:
[{'1102': ['00576', '00577', '00578', '00579', '00580', '00581']},
{'1102': ['00582', '00583', '00584', '00585', '00586', '00587']},
{'1102': ['00588', '00589', '00590', '00591', '00592', '00593']},
{'1102': ['00594', '00595', '00596', '00597', '00598', '00599']},
{'1102': ['00600', '00601', '00602', '00603', '00604', '00605']}
...]
It contains ~89000 dictionaries, and I have a list containing 4473208 paths. Example:
['/****/**/******_1102/00575***...**0CT.csv',
'/****/**/******_1102/00575***...**1CT.csv',
'/****/**/******_1102/00575***...**2CT.csv',
'/****/**/******_1102/00575***...**3CT.csv',
'/****/**/******_1102/00575***...**4CT.csv',
'/****/**/******_1102/00578***...**1CT.csv',
'/****/**/******_1102/00578***...**2CT.csv',
'/****/**/******_1102/00578***...**3CT.csv',
...]
What I want to do is, for each dict, group together the paths whose folder contains the key and whose filename starts with one of the grouped values.
I tried using for loops like this:
grpd_cts = []
for elem in tqdm(dict_list):
    temp1 = []
    for file in ct_paths:
        for key, val in elem.items():
            if (file[16:20] == key) and (any(x in file[21:26] for x in val)):
                temp1.append(file)
    grpd_cts.append(temp1)
But this takes around 30 hours. Is there a way to make it more efficient? Any itertools function or something?
Thanks a lot!
ct_paths is iterated repeatedly in your inner loop, and you're only interested in a small slice of each path for the test; pull that slice out and use it to index the rest of your data, as a dictionary.
What makes your problem complicated is that you want to end up with the original list of filenames, so you need to construct a two-level dictionary where the values are lists of all originals grouped under those two keys.
ct_path_index = {}
for f in ct_paths:
    ct_path_index.setdefault(f[16:20], {}).setdefault(f[21:26], []).append(f)
grpd_cts = []
for elem in tqdm(dict_list):
    temp1 = []
    for key, val in elem.items():
        d2 = ct_path_index.get(key)
        if d2:
            for v in val:
                v2 = d2.get(v)
                if v2:
                    temp1 += v2
    grpd_cts.append(temp1)
ct_path_index looks like this, using your data:
{'1102': {'00575': ['/****/**/******_1102/00575***...**0CT.csv',
'/****/**/******_1102/00575***...**1CT.csv',
'/****/**/******_1102/00575***...**2CT.csv',
'/****/**/******_1102/00575***...**3CT.csv',
'/****/**/******_1102/00575***...**4CT.csv'],
'00578': ['/****/**/******_1102/00578***...**1CT.csv',
'/****/**/******_1102/00578***...**2CT.csv',
'/****/**/******_1102/00578***...**3CT.csv']}}
The use of setdefault (which can be a little hard to understand the first time you see it) is important when building up collections of collections, and is very common in these kinds of cases: it makes sure that the sub-collections are created on demand and then re-used for a given key.
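An alternative with the same effect, sketched here under the assumption that you are free to change how the index is built (it is not from the original answer), is collections.defaultdict, which creates the sub-collections implicitly:

from collections import defaultdict

# The same two-level index built with defaultdict instead of setdefault;
# the lambda supplies a fresh inner defaultdict(list) on first access.
ct_path_index = defaultdict(lambda: defaultdict(list))
for f in ct_paths:
    ct_path_index[f[16:20]][f[21:26]].append(f)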
Now, you've only got two nested loops; the inner checks are done using dictionary lookups, which are close to O(1).
Other optimizations would include turning the lists in dict_list into sets, which would be worthwhile if you made more than one pass through dict_list.
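A minimal sketch of that set conversion (only worthwhile if the values are then used for membership tests across more than one pass):

# Convert each dict's value lists to sets for O(1) membership checks.
dict_list = [{k: set(v) for k, v in elem.items()} for elem in dict_list]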
What is the best way to replace the first character of all keys in a dictionary?
old_cols= {"~desc1":"adjustment1","~desc23":"adjustment3"}
I am trying to get
new_cols= {"desc1":"adjustment1","desc23":"adjustment3"}
I have tried:
for k, v in old_cols.items():
    new_cols[k] = old_cols.pop(k[1:])
old_cols = {"~desc1": "adjustment1", "~desc23": "adjustment3"}
new_cols = {}

for k, v in old_cols.items():
    new_key = k[1:]
    new_cols[new_key] = v
Here it is with a dictionary comprehension:
old_cols = {"~desc1": "adjustment1", "~desc23": "adjustment3"}
# The count argument of 1 ensures only the first '~' is replaced,
# in case a key contains another one later on.
new_cols = {k.replace('~', '', 1): old_cols[k] for k in old_cols}
print(new_cols)
# {'desc1': 'adjustment1', 'desc23': 'adjustment3'}
There are many ways to do this with dict comprehensions or for-loops. What is important to understand is that dictionaries are mutable. This basically means that you can either modify the existing dictionary or create a new one.
If you want to create a new one (and I would recommend it! - see 'A word of warning ...' below), both solutions provided in the answers by @Ethem_Turgut and by @pakpe do the trick. I would probably write:
old_dict = {"~desc1": "adjustment1", "~desc23": "adjustment3"}

# Get the id of this dict, just for comparison later.
old_id = id(old_dict)

new_dict = {k[1:]: v for k, v in old_dict.items()}

print(f'Is the result still the same dictionary? Answer: {old_id == id(new_dict)}')
Now, if you want to modify the dictionary in place you might loop over the keys and adapt the key/value to your liking:
old_dict = {"~desc1": "adjustment1", "~desc23": "adjustment3"}

# Get the id of this dict, just for comparison later.
old_id = id(old_dict)

for k in [*old_dict.keys()]:  # note the unpacking into a list here
    old_dict[k[1:]] = old_dict.pop(k)

print(f'Is the result still the same dictionary? Answer: {old_id == id(old_dict)}')
A word of warning for the latter approach:
You should be aware that you are modifying the keys while looping over them. This is problematic in most cases and can even lead to a RuntimeError if you loop directly over old_dict. I avoided this by explicitly unpacking the keys into a list with [*old_dict.keys()] and then looping over that list.
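For what it's worth, list(old_dict) is an equivalent and arguably more common way to take that snapshot of the keys:

# Equivalent snapshot of the keys before mutating the dict.
for k in list(old_dict):
    old_dict[k[1:]] = old_dict.pop(k)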
Why can modifying keys while looping over them be problematic? Imagine for example that you have the keys '~key1' and 'key1' in your old_dict. Now when your loop handles '~key1' it will modify it to 'key1' which already exists in old_dict and thus it will overwrite the value of 'key1' with the value from '~key1'.
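A quick demonstration of that collision, with hypothetical data just to make the warning concrete:

# Both '~key1' and 'key1' exist; the in-place rename silently drops a value.
old_dict = {'~key1': 'from tilde', 'key1': 'original'}

for k in list(old_dict):
    if k.startswith('~'):   # guard added so untouched keys are left alone
        old_dict[k[1:]] = old_dict.pop(k)

print(old_dict)  # {'key1': 'from tilde'}; 'original' was overwritten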
So, only use the latter approach if you are certain to not produce issues like the example mentioned here before. If you are uncertain, simply create a new dictionary!
Just a general question....
I need to initialise at least 50 different dictionaries, each of which then goes as one of the arguments to a function (make_somefunction) I made that returns a dictionary with customised keys and lists as values.
Is there a way to initialise a dictionary and pass it directly to the function?
Something like this?
from collections import defaultdict

def initialise(dict):
    dict = defaultdict(list)
    return dict
initialise(dict).make_somefunction(dict, custom_keys, custom_listofvalues)
Instead of
dict1 = defaultdict(list)
dict2 = defaultdict(list)
dict3 = defaultdict(list)
...
dict49 = defaultdict(list)
dict50 = defaultdict(list)
each of which would individually go as an argument, creating different customised dictionaries:
make_somefunction(dict1, animals, foods)
make_somefunction(dict2, patients, drugs)
...
make_somefunction(dict50, fridge, fruits)
You can pass a list of the defaultdicts by using a list comprehension.
make_somefunction([defaultdict(list) for _ in range(50)])
By the way, read the Python docs to get a better understanding of Python.
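Since each dictionary in the question is paired with its own keys and values, another sketch (assuming make_somefunction's signature from the question, and using the question's own variables such as animals and foods) is to drive the calls from a list of argument pairs, creating each defaultdict inline:

from collections import defaultdict

# Hypothetical specs matching the question's examples; each pair drives one call.
specs = [(animals, foods), (patients, drugs), (fridge, fruits)]

results = [make_somefunction(defaultdict(list), keys, values)
           for keys, values in specs]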
I need to parse a JSON file which, unfortunately for me, does not follow the prototype. I have two issues with the data, but I've already found a workaround for the second, so I'll just mention it at the end; maybe someone can help there as well.
So I need to parse entries like this:
"Test":{
"entry":{
"Type":"Something"
},
"entry":{
"Type":"Something_Else"
}
}, ...
The default JSON parser updates the dictionary and therefore keeps only the last entry. I HAVE to somehow store the other one as well, and I have no idea how to do this. I also HAVE to store the keys in the several dictionaries in the same order they appear in the file; that's why I am using an OrderedDict to do so. It works fine, so if there is any way to expand this to handle the duplicate entries I'd be grateful.
My second issue is that this very same JSON file contains entries like this:
"Test":{
{
"Type":"Something"
}
}
The json.load() function raises an exception when it reaches that line in the JSON file. The only way I worked around this was to manually remove the inner brackets myself.
Thanks in advance
You can use JSONDecoder.object_pairs_hook to customize how JSONDecoder decodes objects. This hook function will be passed a list of (key, value) pairs that you usually do some processing on, and then turn into a dict.
However, since Python dictionaries don't allow for duplicate keys (and you simply can't change that), you can return the pairs unchanged in the hook and get a nested list of (key, value) pairs when you decode your JSON:
from json import JSONDecoder

def parse_object_pairs(pairs):
    return pairs

data = """
{"foo": {"baz": 42}, "foo": 7}
"""

decoder = JSONDecoder(object_pairs_hook=parse_object_pairs)
obj = decoder.decode(data)
print(obj)
Output:
[('foo', [('baz', 42)]), ('foo', 7)]
How you use this data structure is up to you. As stated above, Python dictionaries won't allow for duplicate keys, and there's no way around that. How would you even do a lookup based on a key? dct[key] would be ambiguous.
So you can either implement your own logic to handle a lookup the way you expect it to work, or implement some sort of collision avoidance to make keys unique if they're not, and then create a dictionary from your nested list.
Edit: Since you said you would like to modify the duplicate key to make it unique, here's how you'd do that:
from collections import OrderedDict
from json import JSONDecoder

def make_unique(key, dct):
    counter = 0
    unique_key = key
    while unique_key in dct:
        counter += 1
        unique_key = '{}_{}'.format(key, counter)
    return unique_key

def parse_object_pairs(pairs):
    dct = OrderedDict()
    for key, value in pairs:
        if key in dct:
            key = make_unique(key, dct)
        dct[key] = value
    return dct
data = """
{"foo": {"baz": 42, "baz": 77}, "foo": 7, "foo": 23}
"""
decoder = JSONDecoder(object_pairs_hook=parse_object_pairs)
obj = decoder.decode(data)
print obj
Output:
OrderedDict([('foo', OrderedDict([('baz', 42), ('baz_1', 77)])), ('foo_1', 7), ('foo_2', 23)])
The make_unique function is responsible for returning a collision-free key. In this example it just suffixes the key with _n where n is an incremental counter - just adapt it to your needs.
Because the object_pairs_hook receives the pairs exactly in the order they appear in the JSON document, it's also possible to preserve that order by using an OrderedDict, I included that as well.
Thanks a lot @Lukas Graf, I got it working as well by implementing my own version of the hook function:
import collections

def dict_raise_on_duplicates(ordered_pairs):
    count = 0
    d = collections.OrderedDict()
    for k, v in ordered_pairs:
        if k in d:
            d[k + '_dupl_' + str(count)] = v
            count += 1
        else:
            d[k] = v
    return d
The only thing remaining is to automatically get rid of the double brackets and I am done :D Thanks again!
If you would prefer to merge the duplicated keys into an array, instead of keeping separate copies, this could do the job:
def dict_raise_on_duplicates(ordered_pairs):
    """Convert duplicate keys to a JSON array."""
    d = {}
    for k, v in ordered_pairs:
        if k in d:
            if isinstance(d[k], list):
                d[k].append(v)
            else:
                d[k] = [d[k], v]
        else:
            d[k] = v
    return d
And then you just use:
result = json.loads(yourString, object_pairs_hook=dict_raise_on_duplicates)  # avoid shadowing the built-in dict
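For instance, a quick check with a made-up input shows duplicate keys collapsing into lists while unique keys stay scalar:

import json

# Hypothetical document with a repeated "foo" key.
doc = '{"foo": 1, "foo": 2, "bar": 3}'
print(json.loads(doc, object_pairs_hook=dict_raise_on_duplicates))
# {'foo': [1, 2], 'bar': 3}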
I am currently trying to find a more pythonic way of filtering a dictionary using another dictionary. Currently I have the following code:
def filter_respondents(data_dict, tolerance):
    NaN_dict = diagnostic_tools.get_NaN_ratio(data_dict)
    final_dict = {}
    for respondent in data_dict:
        if NaN_dict[respondent] <= tolerance:
            final_dict[respondent] = data_dict[respondent]
    return final_dict
The code does what I want it to do, but I'm looking for a better way of doing it. Basically I have two dictionaries: data_dict has the key-value pairs id:response, and NaN_dict has the key-value pairs id:value. If the value is at or below tolerance, I want the key-value pair with the same id in data_dict to be included in final_dict.
I came up with something like:
final_dict = {k:v for k,v in data_dict if NaN_dict[k]<=tolerance}
Which I know is wrong, but I'm not sure how to proceed. Thanks!
I think you are almost right. It seems the only thing missing is calling .items() to get the key-value pairs:
{k: v for k, v in data_dict.items() if NaN_dict[k] <= tolerance}
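A quick end-to-end check with hypothetical data, just to show the filter in action:

data_dict = {'id1': 'resp_a', 'id2': 'resp_b', 'id3': 'resp_c'}
NaN_dict = {'id1': 0.05, 'id2': 0.40, 'id3': 0.10}
tolerance = 0.20

final_dict = {k: v for k, v in data_dict.items() if NaN_dict[k] <= tolerance}
print(final_dict)  # {'id1': 'resp_a', 'id3': 'resp_c'}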