I have the following list of frozensets:
f_set = [frozenset({8, 14, 15, 18}), frozenset({1, 2, 3, 7, 8}), frozenset({0, 4, 5})]
I need to convert f_set into a dictionary as follows:
For the first set, I need the dictionary to have a value of 0.
For the second set, I need the dictionary to have a value of 1.
For the third set, I need the dictionary to have a value of 2.
Now, in case some keys exist in multiple sets, assign a new value to them. In this case, 8 exists in both set 1 and set 2, so assign it a value of 3.
dict1 = {8:3, 14:0, 15:0, 18:0, 1:1, 2:1, 3:1, 7:1, 0:2, 4:2, 5:2}
Note: my actual f_set contains more than three sets, so I'd like to avoid doing that manually.
You can use dict comprehension with enumerate:
f_set = [frozenset({8, 14, 15, 18}), frozenset({1, 2, 3, 7, 8}), frozenset({0, 4, 5})]
dict1 = {x: i for i, s in enumerate(f_set) for x in s}
print(dict1)
# {8: 1, 18: 0, 14: 0, 15: 0, 1: 1, 2: 1, 3: 1, 7: 1, 0: 2, 4: 2, 5: 2}
Note that if the sets are not mutually disjoint, earlier assignments are overwritten (the last set containing a key wins), since a dict cannot have duplicate keys.
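If you actually need the extra value for keys shared by several sets (the 8 -> 3 behaviour in the question), a small explicit loop works. Here is a minimal sketch, assuming each key that turns up in more than one set should get the next unused index:
f_set = [frozenset({8, 14, 15, 18}), frozenset({1, 2, 3, 7, 8}), frozenset({0, 4, 5})]

dict1 = {}
shared = set()            # keys already bumped to an extra value
next_value = len(f_set)   # extra values start after the per-set indices
for i, s in enumerate(f_set):
    for x in s:
        if x not in dict1:
            dict1[x] = i
        elif x not in shared:
            dict1[x] = next_value
            next_value += 1
            shared.add(x)

print(dict1)
# key order may vary; values: 8 -> 3, {14, 15, 18} -> 0, {1, 2, 3, 7} -> 1, {0, 4, 5} -> 2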
You can simply loop over the frozensets to set each of them in an output dictionary:
output = dict()
for i in range(len(f_set)):
    for s in f_set[i]:
        output[s] = i
Note that although the order may differ from what you have, order shouldn't matter in a dictionary.
I need to replace dictionary values based on their index
Ex:
mydict = {'abc':10, 'iji':9, 'sls': 8, 'eie': 2, 'wlw': 6, 'pwp': 5}
Here I need to replace the value of the key at the 4th index with the new value 10.
Expected output:
{'abc':10, 'iji':9, 'sls': 8, 'eie': 2, 'wlw': 10, 'pwp': 5}
You can use list(mydict.keys()) to get the keys of the dictionary as a list, then find the key at the 4th index of that list and set its value in the dictionary:
mydict = {'abc':10, 'iji':9, 'sls': 8, 'eie': 2, 'wlw': 6, 'pwp': 5}
keys = list(mydict.keys())
mydict[keys[4]] = 10
print(mydict)
Output:
{'abc': 10, 'iji': 9, 'sls': 8, 'eie': 2, 'wlw': 10, 'pwp': 5}
Note
Dictionaries preserve insertion order only from Python 3.7 onward (and in CPython 3.6 as an implementation detail). In earlier versions, mydict.keys() is not guaranteed to return the keys in insertion order, so indexing into them this way is unreliable; for Python < 3.7 it is safer to use collections.OrderedDict.
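For example, on an older Python you could write the same thing with an OrderedDict (just a sketch using the data above):
from collections import OrderedDict

mydict = OrderedDict([('abc', 10), ('iji', 9), ('sls', 8), ('eie', 2), ('wlw', 6), ('pwp', 5)])
keys = list(mydict.keys())
mydict[keys[4]] = 10  # 'wlw' is at index 4
print(mydict)
# OrderedDict([('abc', 10), ('iji', 9), ('sls', 8), ('eie', 2), ('wlw', 10), ('pwp', 5)])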
I wanted to create a mapping between two arrays. But in Python, doing this results in a mapping where only the last value for each key is kept.
array_1 = [0,0,0,1,2,3]
array_2 = [4,4,5,6,8,7]
mapping = dict(zip(array_1, array_2))
print(mapping)
The mapping resulted in {0: 5, 1: 6, 2: 8, 3: 7}
How do I pick the most frequently occurring element instead, in this case 4 for key 0?
You can create a dictionary mapping each key to a list of its values. Then you can go over the lists in this dictionary and replace each one with its most frequent item using Counter.most_common:
from collections import defaultdict, Counter
array_1 = [0,0,0,1,2,3]
array_2 = [4,4,5,6,8,7]
mapping = defaultdict(list)
#Create the mapping with a list of values
for key, value in zip(array_1, array_2):
    mapping[key].append(value)
print(mapping)
#defaultdict(<class 'list'>, {0: [4, 4, 5], 1: [6], 2: [8], 3: [7]})
res = defaultdict(int)
# Iterate over mapping and choose the most frequent element in each list as the value
for key, value in mapping.items():
    # The most frequent element is the first element returned by Counter.most_common
    res[key] = Counter(value).most_common(1)[0][0]
print(dict(res))
The output will be
{0: 4, 1: 6, 2: 8, 3: 7}
You can count the frequencies of all (key, value) pairs using Counter and then sort those pairs by frequency, so that for each key the most frequent value is inserted into the dict last and therefore wins:
from collections import Counter
array_1 = [0,0,0,1,2,3]
array_2 = [4,4,5,6,8,7]
c = Counter(zip(array_1, array_2))
dict(pair for pair, _ in sorted(c.items(), key=lambda x: x[1]))
# {0: 4, 1: 6, 2: 8, 3: 7}
Here is a list a = [1,1,1,2,4,2,4,32,1,4,35,23,24,23], and I want the index of the first occurrence of each unique element. I do this in Python:
unique_number=list(set(a))
ans=map(lambda x:a.index(x),unique_number)
output:
<map at 0x2b98b307828>
I want to know what's wrong with my code and find a more efficient way to achieve this.
This code would work as you expected in Python 2. In Python 3, map returns an iterator. You could, e.g., convert it to a list:
>>> ans=map(lambda x:a.index(x),unique_number)
>>> list(ans)
[7, 0, 3, 10, 4, 11, 12]
You can avoid the repeated re-indexing and building a set first: simply build a dict by iterating over a backwards, since a dict only keeps the last value written for each key (which here is the earliest index), e.g.:
a=[1,1,1,2,4,2,4,32,1,4,35,23,24,23]
first_index = {v:len(a) - k for k,v in enumerate(reversed(a), 1)}
# {1: 0, 2: 3, 4: 4, 23: 11, 24: 12, 32: 7, 35: 10}
This way you're only scanning the sequence once.
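An equivalent forward pass (just a sketch, not part of the answer above) uses dict.setdefault, so that only the first occurrence of each value records its index:
a = [1, 1, 1, 2, 4, 2, 4, 32, 1, 4, 35, 23, 24, 23]
first_index = {}
for idx, v in enumerate(a):
    first_index.setdefault(v, idx)  # later occurrences of v are ignored
print(first_index)
# {1: 0, 2: 3, 4: 4, 32: 7, 35: 10, 23: 11, 24: 12}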
Try this:
for value in map(lambda x: a.index(x), unique_number):
    print(value)
or simply iterate over ans:
for var in ans:
    print(var)
I have this function:
import numpy as np
from collections import Counter

def find_nearest(array, value):
    idx = (np.abs(array - value)).argmin()
    return array[idx]

def df_to_count_dict(df):
    count_dict = Counter(df.values)
    holder = []
    for i in range(1, max(count_dict.keys())):
        if i in count_dict:
            continue
        holder.append(i)
    for i in holder:
        j = find_nearest(np.array(list(count_dict.keys())), i)
        count_dict.update({i: count_dict[j]})
    return count_dict
What it does is take a data series and use the Counter class from collections to return a dictionary of counts. It also fills in keys which are missing from the dictionary with the count of the closest existing key.
Now, I want to amend this function to return the same object, count_dict, but fill the missing keys with the average of the counts of the two keys the missing value lies between.
This is best explained by an example:
Take
test = pd.Series([1,2,3,3,7,7,7,8])
Without the function above we get:
Counter(test.values)
Out[459]: Counter({1: 1, 2: 1, 3: 2, 7: 3, 8: 1})
Using the function we get
df_to_count_dict(test)
Out[458]: Counter({1: 1, 2: 1, 3: 2, 4: 2, 5: 2, 6: 2, 7: 3, 8: 1})
As you can see, it has added keys 4, 5 and 6 with value 2, since 2 is the count of the closest key (which is 3).
What I want it to return is the AVERAGE of the value of the lower closest key and the value of the upper closest key: the lower closest key is 3, which has value 2, and the upper closest key is 7, which has value 3, so I want the final product to look something like:
df_to_count_dict(test)
Out[458]: Counter({1: 1, 2: 1, 3: 2, 4: 2.5, 5: 2.5, 6: 2.5, 7: 3, 8: 1})
I hope someone can help
This looks a lot like schoolwork, so you should figure it out yourself. But here is a hint: the query you are being asked to develop finds the mean of the predecessor's count and the successor's count. The predecessor is the largest key smaller than or equal to the input, and the successor is the smallest key larger than the input.
If you need O(log n) complexity, you might look at binary search trees; bintrees is a good package: https://pypi.python.org/pypi/bintrees/2.0.4
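For what it's worth, here is a minimal sketch of that idea using the bisect module; the function name df_to_avg_dict is my own, and it assumes (as in the example) that every missing key lies strictly between two existing keys:
import bisect
from collections import Counter

import pandas as pd

def df_to_avg_dict(df):
    # Count the values, casting to plain ints so the keys print cleanly
    count_dict = Counter(int(v) for v in df.values)
    keys = sorted(count_dict)
    for i in range(1, max(keys)):
        if i in count_dict:
            continue
        # Predecessor = largest existing key below i, successor = smallest above i
        pos = bisect.bisect_right(keys, i)
        lower, upper = keys[pos - 1], keys[pos]
        count_dict[i] = (count_dict[lower] + count_dict[upper]) / 2
    return count_dict

test = pd.Series([1, 2, 3, 3, 7, 7, 7, 8])
print(dict(sorted(df_to_avg_dict(test).items())))
# {1: 1, 2: 1, 3: 2, 4: 2.5, 5: 2.5, 6: 2.5, 7: 3, 8: 1}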
I need to unite two lists in Python 3, where duplicates can exist, and for each element the resulting list should contain as many copies as the maximum count in either list. An example might clarify it:
[1,2,2,5] (some operator) [2,5,5,5,9] = [1,2,2,5,5,5,9]
Ideas?
You can use the collections.Counter class:
>>> from collections import Counter
>>> combined = Counter([1,2,2,5]) | Counter([2,5,5,5,9])
>>> list(combined.elements())
[1, 2, 2, 5, 5, 5, 9]
It functions as a multiset (an unordered collection where each element can appear multiple times). The | operator gives you the union of the multisets, where each element appears max(appearances_in_counter1, appearances_in_counter2) times.
This class was added in Python 2.7 and 3.1.
Why use lists in the first place? That data looks like a dict to me:
[1,2,2,5] -> {1: 1, 2: 2, 5: 1}
[2,5,5,5,9] -> {2: 1, 5: 3, 9: 1}
Then it's simple:
a = {1: 1, 2: 2, 5: 1}
b = {2: 1, 5: 3, 9: 1}
keys = set(a) | set(b)
vals = [max(a.get(n, 0), b.get(n, 0)) for n in keys]
d = dict(zip(keys, vals))
print(d)
Result:
{1: 1, 2: 2, 5: 3, 9: 1}
Convert the arrays to dictionaries with a[key] = count.
Create a new dictionary with the rule c[key] = a.get(key, 0) > b.get(key, 0) and a[key] or b[key] (i.e. keep the larger count); you need to iterate over the keys of both a and b.
Expand the dictionary back into a list: result += [key] * value.
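Putting those three steps together, a minimal runnable sketch (the function name multiset_union is my own, not from the answer):
def multiset_union(list1, list2):
    # Step 1: convert each list to a dict of element -> count
    a, b = {}, {}
    for x in list1:
        a[x] = a.get(x, 0) + 1
    for x in list2:
        b[x] = b.get(x, 0) + 1
    # Step 2: for every key in either dict, keep the larger count
    c = {key: max(a.get(key, 0), b.get(key, 0)) for key in set(a) | set(b)}
    # Step 3: expand the dict back into a list
    result = []
    for key, value in c.items():
        result += [key] * value
    return result

print(sorted(multiset_union([1, 2, 2, 5], [2, 5, 5, 5, 9])))
# [1, 2, 2, 5, 5, 5, 9]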