How to increment d['a']['b']['c'][1][2][3] if d is a defaultdict of defaultdicts, without code duplication?
from collections import defaultdict
nested_dict_type = lambda: defaultdict(nested_dict_type)
nested_dict = nested_dict_type()
# incrementation
if type(nested_dict['a']['b']['c']['d'][1][2][3][4][5][6]) != int:
    nested_dict['a']['b']['c']['d'][1][2][3][4][5][6] = 0
nested_dict['a']['b']['c']['d'][1][2][3][4][5][6] += 1  # ok, now it contains 1
Here we can see that we duplicated (in the code) a chain of keys 3 times.
Question: Is it possible to write a function inc that will take nested_dict['a']['b']...[6] and do the same job as above? So:
def inc(x):
    if type(x) != int:
        x = 0
    x += 1
inc(nested_dict['a']['b']['c']['d'][1][2][3][4][5][6]) # ok, now it contains 1
Update (20 Aug 2018):
There is still no answer to the question. It's clear that there are options for "how to do what I want", but the question is straightforward: there is a "value", we pass it to a function, and the function modifies it. It looks like that's not possible.
Just a value, without any "additional keys", etc.
If it is so, can we make an answer more generic?
Notes:
What is a defaultdict of defaultdicts - SO.
This question is not about "storing of integers in a defaultdict", so I'm not looking for a hierarchy of defaultdicts with an int type at the leaves.
Assume that the type (int in the examples) is known in advance / can even be parametrized (including the ability to perform the += operator) - the question is how to dereference the object, pass it for modification and store it back in the context of a defaultdict of defaultdicts.
Is the answer to this question related to mutability? See the example below:
Example:
def inc(x):
    x += 1
d = {'a': int(0)}
inc(d['a'])
# d['a'] == 0, immutable
d = {'a': Int(0)}
inc(d['a'])
# d['a'] == 1, mutated
Where Int is:
class Int:
    def __init__(self, value):
        self.value = value
    def __add__(self, v):
        self.value += v
        return self
    def __repr__(self):
        return str(self.value)
It's not exactly about mutability; it's more about how assignment performs name binding.
When you do x = 0 in your inc function you bind a new object to the name x, and any connection between that name and the previous object bound to that name is lost. That doesn't depend on whether or not x is mutable.
But since x is an item in a mutable object we can achieve what you want by passing the parent mutable object to inc along with the key needed to access the desired item.
from collections import defaultdict
nested_dict_type = lambda: defaultdict(nested_dict_type)
nested_dict = nested_dict_type()
# incrementation
def inc(ref, key):
    if not isinstance(ref[key], int):
        ref[key] = 0
    ref[key] += 1
d = nested_dict['a']['b']['c']['d'][1][2][3][4][5]
inc(d, 6)
print(d)
output
defaultdict(<function <lambda> at 0xb730553c>, {6: 1})
Now we aren't binding a new object, we're merely mutating an existing one, so the original d object gets updated correctly.
BTW, that deeply nested dict is a bit painful to work with. Maybe there's a better way to organize your data... But anyway, one thing that can be handy when working with deep nesting is to use lists or tuples of keys. E.g.,
q = nested_dict
keys = 'a', 'b', 'c', 'd', 1, 2, 3, 4, 5
for k in keys:
    q = q[k]
q now refers to nested_dict['a']['b']['c']['d'][1][2][3][4][5]
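For convenience, that loop can be packaged as a small helper and combined with the inc function above. A minimal sketch (the name follow_keys is my own, for illustration only):

from collections import defaultdict

nested_dict_type = lambda: defaultdict(nested_dict_type)
nested_dict = nested_dict_type()

def follow_keys(d, keys):
    # descend one level per key; defaultdict creates missing levels on the fly
    q = d
    for k in keys:
        q = q[k]
    return q

def inc(ref, key):
    # same inc as above: mutate the parent dict rather than rebinding a name
    if not isinstance(ref[key], int):
        ref[key] = 0
    ref[key] += 1

path = ('a', 'b', 'c', 'd', 1, 2, 3, 4, 5, 6)
inc(follow_keys(nested_dict, path[:-1]), path[-1])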
You can't have multiple default types with defaultdict. You have the following options:
Nested defaultdict of defaultdict objects indefinitely;
defaultdict of int objects, which likely won't suit your needs;
defaultdict of defaultdict down to a specific level with int defined for the last level, e.g. d = defaultdict(lambda: defaultdict(int)) for a single nesting;
Similar to (3), but for counting you can use collections.Counter instead, i.e. d = defaultdict(Counter).
I recommend the 3rd or 4th options if you are always going to go down to a set level. In other words, a scalar value will only be supplied at the nth level, where n is constant.
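As a rough illustration of options 3 and 4, assuming two levels of nesting with integer counts at the leaves:

from collections import defaultdict, Counter

# option 3: fixed depth, int at the leaves
d3 = defaultdict(lambda: defaultdict(int))
d3['a']['b'] += 1              # missing leaves start at 0, no type-checking needed

# option 4: same shape, with Counter at the leaves
d4 = defaultdict(Counter)
d4['a']['b'] += 1
print(d4['a'].most_common(1))  # Counter adds handy counting methods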
Otherwise, one manual option is to have a function perform the type-testing. In this case, try / except may be a good alternative. Here we also use functools.reduce to traverse a list of keys, rather than chaining manual __getitem__ calls.
from collections import defaultdict
from functools import reduce
from operator import getitem
nested_dict_type = lambda: defaultdict(nested_dict_type)
d = nested_dict_type()
d[1][2] = 10
def inc(d_in, L):
    try:
        reduce(getitem, L[:-1], d_in)[L[-1]] += 1
    except TypeError:
        reduce(getitem, L[:-1], d_in)[L[-1]] = 1
inc(d, [1, 2])
inc(d, [1, 3])
print(d)
defaultdict({1: defaultdict({2: 11, 3: 1})})
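To address the question's note about parametrizing the leaf type, the same idea extends to an arbitrary default value and update function. This generalized version is my own sketch, not part of the original answer:

from functools import reduce
from operator import getitem

def update_leaf(d_in, L, default=0, fn=lambda v: v + 1):
    # descend to the parent dict, then read-modify-write the leaf
    parent = reduce(getitem, L[:-1], d_in)
    leaf = parent[L[-1]]
    if not isinstance(leaf, type(default)):
        leaf = default                    # replace the auto-created defaultdict
    parent[L[-1]] = fn(leaf)

update_leaf(d, [1, 2])                                      # 11 -> 12
update_leaf(d, [1, 4], default=0.0, fn=lambda v: v + 0.5)   # new float leaf: 0.5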
Related
I found something strange that I couldn't understand.
This is the case:
from collections import defaultdict
a = defaultdict(lambda: len(a))
This is just part of the code, and the code never defines 'a' before this line.
The questions are:
Is it possible to use defaultdict like this, without defining the variable beforehand?
If possible, what is the meaning of that code?
Maybe it is best explained in an example:
>>> a = defaultdict(lambda: len(b))
>>> b = 'abcd'
>>> a[0]
4
As you can see, it is possible to use b in the lambda even though b does not yet exist at that point. What is important is that b exists at the time when the lambda is executed. At that point, Python will look for a variable named b and use it.
Note also that the original code does not necessarily use length of the defaultdict itself. It simply evaluates whatever a is at that point. See this example:
>>> a = defaultdict(lambda: len(a))
>>> a['a']
0
>>> a['b']
1
So far, so good. But then rename some things:
>>> x = a
>>> a = []
>>> x['c']
0
>>> x['d']
0
Now the defaultdict is named x, but it does not use len(x). It still uses len(a). This caveat may become important if you send the defaultdict to a function where a does not mean anything.
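One way to avoid the caveat is to close over a local name instead of a global one. This factory function is my own sketch, not from the original answer:

from collections import defaultdict

def make_self_len_dict():
    # the lambda closes over the local name d, so rebinding names at the
    # call site cannot change which dict it measures
    d = defaultdict(lambda: len(d))
    return d

x = make_self_len_dict()
print(x['c'])   # 0
print(x['d'])   # 1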
You are telling the defaultdict: when I try to do something with a key and it doesn't exist, call this lambda to produce the initial value for that key. Since your lambda uses a (i.e. the dict itself) and takes its length, whenever you perform an operation with a key that's not in the dict, the dict will call the lambda and use the current length of the dict as the value.
from collections import defaultdict
a = defaultdict(lambda: len(a))
a['one'] += 5   # here dict length is 0, so value is 0 + 5 = 5
a['two'] += 2   # here dict length is 1, so value is 1 + 2 = 3
a['three'] += 1 # here dict length is 2, so value is 2 + 1 = 3
print(a.items())
print(a['newval']) # 'newval' doesn't exist, so the default is used: the dict length, i.e. 3
OUTPUT
dict_items([('one', 5), ('two', 3), ('three', 3)])
3
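A classic application of this pattern, as an aside of my own (not part of the original answer), is interning: handing out consecutive integer IDs to values the first time they are seen:

from collections import defaultdict

ids = defaultdict(lambda: len(ids))
print(ids['apple'])   # 0 - each new key gets the current size as its ID
print(ids['pear'])    # 1
print(ids['apple'])   # 0 - existing keys keep their ID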
Here's how defaultdict works. Say you have a dict of lists and you're setting values for keys that might not exist. In that case you'd do something like this:
d = dict()
if some_key not in d:
    d[some_key] = list()
d[some_key].append(some_value)
defaultdict does this automatically for you when you pass it a callable, e.g., int, list, or set, which will call int() (default value 0), list() (default value an empty list), and set() (default value an empty set) respectively. Your lambda is also a callable, which returns integers, so you'll have a dict with int values. But the value you get from the expression will depend on the size of the dict.
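For comparison, the defaultdict equivalent of the manual snippet above:

from collections import defaultdict

d = defaultdict(list)                # missing keys default to list() == []
d['some_key'].append('some_value')   # no membership test needed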
Can you do a = defaultdict(lambda: len(a))?
Yes, you can. The lambda will not be executed until it is called, and that is when it looks up the name a. Compare these two cases:
f = lambda: len(a)
a = defaultdict(f)
a[0] # this is when the lambda is called for the first time
But,
g = lambda: len(b)
g() # this will raise a NameError
b = defaultdict(g)
I have to convert a bunch of strings into numbers, process the numbers and convert back.
I thought of a map where I would add 2 keys when given a string:
Key1: (string, number);
Key2: (number, string).
But this is not optimal in terms of memory.
What I need to achieve, by example:
my_cool_class.get('string') # outputs 1
my_cool_class.get(1) # outputs 'string'
Is there better way to do this in python?
Thanks in advance!
You can implement your own two-way dict like:
class TwoWayDict(dict):
    def __len__(self):
        return dict.__len__(self) // 2  # each logical entry is stored twice
    def __setitem__(self, key, value):
        dict.__setitem__(self, key, value)
        dict.__setitem__(self, value, key)

my_cool_class = TwoWayDict()
my_cool_class[1] = 'string'
print(my_cool_class[1])        # 'string'
print(my_cool_class['string']) # 1
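One caveat worth noting (my addition, not part of the original answer): deleting an entry removes only one direction unless __delitem__ is also overridden. A minimal sketch:

class TwoWayDict(dict):
    def __len__(self):
        return dict.__len__(self) // 2
    def __setitem__(self, key, value):
        dict.__setitem__(self, key, value)
        dict.__setitem__(self, value, key)
    def __delitem__(self, key):
        # remove both directions so the mapping stays symmetric
        value = dict.__getitem__(self, key)
        dict.__delitem__(self, key)
        if value != key:
            dict.__delitem__(self, value)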
Instead of allocating memory for a second dict, you can look the key up from the value; just consider that this costs linear run time.
mydict = {'george':16,'amber':19}
print (mydict.keys()[mydict.values().index(16)])
'george'
EDIT:
Notice that in Python 3, dict.values() (along with dict.keys() and dict.items()) returns a view, rather than a list. You therefore need to wrap your call to dict.values() in a call to list like so:
mydict = {'george':16,'amber':19}
print (list(mydict.keys())[list(mydict.values()).index(16)])
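Alternatively, a generator expression does the same reverse lookup in a single pass, without building the intermediate lists (this variant is my suggestion, not part of the original answer):

mydict = {'george': 16, 'amber': 19}
# the second argument is the default returned when no value matches
print(next((k for k, v in mydict.items() if v == 16), None))  # george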
If optimal memory usage is an issue, you may not want to use Python in the first place. To solve your immediate problem, just add both the string and the number as keys to the dictionary. Remember that only a reference to the original objects will be stored. Additional copies will not be made:
d = {}
s = '123'
n = int(s)
d[s] = n
d[n] = s
Now you can access the value by the opposite key just like you wanted. This method has the advantage of O(1) lookup time.
You can create a dictionary of tuples; this way you just need to check the type of the variable to decide which element of the tuple to return.
Example:
class your_cool_class(object):
    def __init__(self):
        # example of dictionary
        self.your_dictionary = {'3': ('3', 3), '4': ('4', 4)}
    def get(self, number):
        is_string = isinstance(number, str)
        number = str(number)
        n = self.your_dictionary.get(number)
        if n is not None:
            # a string maps to its number, a number to its string
            return n[1] if is_string else n[0]
>>> my_cool_class = your_cool_class()
>>> my_cool_class.get(3)
'3'
>>> my_cool_class.get('3')
3
I need to compare hundreds of objects stored in a unique list to find duplicates:
object_list = {Object_01, Object_02, Object_03, Object_04, Object_05, ...}
I've written a custom function which returns True if the objects are equal and False if not:
object_01.compare(object_02)
>>> True
The compare method works well, but each execution takes a lot of time. I'm currently using itertools.combinations(x, 2) to iterate through all combinations. I thought it would be a good idea to use a dict for storing already compared objects and to create new sets dynamically like:
dct = {'Compared': {}}
dct['Compared'] = set()

import itertools
for a, b in itertools.combinations(x, 2):
    if b.name not in dct['Compared']:
        if compare(a, b) == True:
            #print (a,b)
            key = a.name
            value = b.name
            if key not in dct:
                dct[key] = set()
                dct[key].add(value)
            else:
                dct[key].add(value)
            dct[key].add(key)
            dct['Compared'].add(b.name)  # store the name, matching the test above
Current Output:
Compared: {'Object_02', 'Object_01', 'Object_03', 'Object_04', 'Object_05'}
Object_01: {'Object_02', 'Object_03', 'Object_01'}
Object_04: {'Object_05', 'Object_04'}
Object_05: {'Object_04'}
...
I would like to know: Is there a faster way to iterate through all combinations, and how can I break/prevent the iteration over an object that is already assigned to a list of duplicates?
Desired Output:
Compared: {'Object_02', 'Object_01', 'Object_03', 'Object_04', 'Object_05'}
Object_01: {'Object_02', 'Object_03', 'Object_01'}
Object_04: {'Object_05', 'Object_04'}
...
Note: the compare method is a C wrapper. The requirement is to find an algorithm around it.
You don't need to calculate all combinations, you just need to check if a given item is a duplicate:
for i, a in enumerate(x):
    if any(a.compare(b) for b in x[:i]):
        pass  # a is a duplicate of an already seen item, so do something
This is still technically O(n^2), but you've cut out at least half the checks required, and should be a bit faster.
In short, x[:i] returns all items in the list before index i. If the item x[i] appears in that list, you know it's a duplicate. If not, there may be a duplicate after it in the list, but you worry about that when you get there.
Using any is also important here: if it finds any true item, it will immediately stop, without checking the rest of the iterable.
You could also improve the number of checks by removing known duplicates from the list you're checking against:
x_copy = x[:]
removed = 0
for i, a in enumerate(x):
    if any(a.compare(b) for b in x_copy[:i-removed]):
        del x_copy[i-removed]
        removed += 1
        # a is a duplicate of an already seen item, so do something
Note that we use a copy, to avoid changing the sequence we're iterating over, and we need to take account for the number of items we've removed when using indexes.
Next, we just need to figure out how to build the dictionary.
This might be a little more complex. The first step is to figure out exactly which element is a duplicate. This can be done by realising that any is just a wrapper around a for loop:
def any(iterable):
    for item in iterable:
        if item: return True
    return False
We can then make a minor change, and pass in a function:
def first(iterable, fn):
    for item in iterable:
        if fn(item): return item
    return None
Now, we change our duplicate finder as follows:
d = collections.defaultdict(list)

x_copy = x[:]
removed = 0
for i, a in enumerate(x):
    b = first(x_copy[:i-removed], a.compare)
    if b is not None:
        # b is the first occurring duplicate of a
        del x_copy[i-removed]
        removed += 1
        d[b.name].append(a)
    else:
        # we've not seen a yet, but might see it later
        d[a.name].append(a)
This will put every element in the list into a dict(-like). If you only want the duplicates, it's then just a case of getting all the entries with a length greater than 1.
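For that final filtering step, a dict comprehension does the job (a small sketch of my own):

# keep only the names that collected more than one object
dups = {name: objs for name, objs in d.items() if len(objs) > 1}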
Group the objects by name if you want to find the dups; to group by attributes instead:
class Foo:
    def __init__(self, i, j):
        self.i = i
        self.j = j

object_list = {Foo(1,2), Foo(3,4), Foo(1,2), Foo(3,4), Foo(5,6)}

from collections import defaultdict

d = defaultdict(list)
for obj in object_list:
    d[(obj.i, obj.j)].append(obj)

print(d)
defaultdict(<type 'list'>, {(1, 2): [<__main__.Foo instance at 0x7fa44ee7d098>, <__main__.Foo instance at 0x7fa44ee7d128>],
(5, 6): [<__main__.Foo instance at 0x7fa44ee7d1b8>],
(3, 4): [<__main__.Foo instance at 0x7fa44ee7d0e0>, <__main__.Foo instance at 0x7fa44ee7d170>]})
If not by name, then use a tuple to store all the attributes you use to check for comparison.
Or sort the list by the attributes that matter and use groupby to group:
class Foo:
    def __init__(self, i, j):
        self.i = i
        self.j = j

object_list = {Foo(1,2), Foo(3,4), Foo(1,2), Foo(3,4), Foo(5,6)}

from itertools import groupby
from operator import attrgetter

groups = [list(v) for k, v in groupby(sorted(object_list, key=attrgetter("i", "j")), key=attrgetter("i", "j"))]
print(groups)
[[<__main__.Foo instance at 0x7f794a944d40>, <__main__.Foo instance at 0x7f794a944dd0>], [<__main__.Foo instance at 0x7f794a944d88>, <__main__.Foo instance at 0x7f794a944e18>], [<__main__.Foo instance at 0x7f794a944e60>]]
You could also implement __lt__, __eq__ and __hash__ to make your objects sortable and hashable:
class Foo(object):
    def __init__(self, i, j):
        self.i = i
        self.j = j
    def __lt__(self, other):
        return (self.i, self.j) < (other.i, other.j)
    def __hash__(self):
        return hash((self.i, self.j))
    def __eq__(self, other):
        return (self.i, self.j) == (other.i, other.j)

# a list (not a set) is needed here so the duplicates survive until we sort
object_list = [Foo(1,2), Foo(3,4), Foo(1,2), Foo(3,4), Foo(5,6)]

print(set(object_list))  # dedups to three objects now that __hash__/__eq__ exist
object_list.sort()
print(list(map(lambda x: (getattr(x, "i"), getattr(x, "j")), object_list)))
set([<__main__.Foo object at 0x7fdff2fc08d0>, <__main__.Foo object at 0x7fdff2fc09d0>, <__main__.Foo object at 0x7fdff2fc0810>])
[(1, 2), (1, 2), (3, 4), (3, 4), (5, 6)]
Obviously the attributes need to be hashable; if you had lists you could change them to tuples, etc.
The Dictionary __getitem__ method does not seem to work the same way as it does for List, and it is causing me headaches. Here is what I mean:
If I subclass list, I can overload __getitem__ as:
class myList(list):
    def __getitem__(self, index):
        if isinstance(index, int):
            pass  # do one thing
        if isinstance(index, slice):
            pass  # do another thing
If I subclass dict, however, __getitem__ does not expose an index, but a key instead, as in:
class myDict(dict):
    def __getitem__(self, key):
        # Here I want to inspect the INDEX, but only have access to key!
        pass
So, my question is how can I intercept the index of a dict, instead of just the key?
Example use case:
a = myDict()
a['scalar'] = 1 # Create dictionary entry called 'scalar', and assign 1
a['vector_1'] = [1,2,3,4,5] # I want all subsequent vectors to be 5 long
a['vector_2'][[0,1,2]] = [1,2,3] # I want to intercept this and force vector_2 to be 5 long
print(a['vector_2'])
[1,2,3,0,0]
a['test'] # This should throw a KeyError
a['test'][[0,2,3]] # So should this
Dictionaries are not sequences; there is no positional index to pass in, only keys. This is why Python can use the same syntax ([..]) and the same magic method (__getitem__) for both lists and dictionaries.
When you index a dictionary on an integer like 0, the dictionary treats that like any other key:
>>> d = {'foo': 'bar', 0: 42}
>>> d.keys()
[0, 'foo']
>>> d[0]
42
>>> d['foo']
'bar'
Chained indexing applies to return values; the expression:
a['vector_2'][0, 1, 2]
is executed as:
_result = a['vector_2'] # via a.__getitem__('vector_2')
_result[0, 1, 2] # via _result.__getitem__((0, 1, 2))
so if you want values in your dictionary to behave in a certain way, you must return objects that support those operations.
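As a rough sketch of that idea (the FixedVector class and the length-5 rule are my illustration, not from the original answer), the dictionary can store a list subclass that enforces the invariant and supports fancy-index assignment:

class FixedVector(list):
    LENGTH = 5
    def __init__(self, values=()):
        super().__init__(values)
        self.extend([0] * (self.LENGTH - len(self)))  # pad with zeros up to LENGTH
    def __setitem__(self, index, value):
        if isinstance(index, list):   # fancy indexing: v[[0, 1, 2]] = [1, 2, 3]
            for i, x in zip(index, value):
                super().__setitem__(i, x)
        else:
            super().__setitem__(index, value)

class myDict(dict):
    def __setitem__(self, key, value):
        # wrap list values so later fancy-index writes keep the length invariant
        if isinstance(value, list):
            value = FixedVector(value)
        super().__setitem__(key, value)

a = myDict()
a['scalar'] = 1
a['vector_1'] = [1, 2, 3, 4, 5]
a['vector_2'] = []                      # vectors must be created before indexing
a['vector_2'][[0, 1, 2]] = [1, 2, 3]
print(a['vector_2'])                    # [1, 2, 3, 0, 0]
a['test']                               # still raises KeyError as desired

Note that vector_2 has to be created explicitly here; auto-creating it on first access (e.g. via __missing__) would sacrifice the KeyError behaviour the question asks for.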
I want to define a list containing an integer and a defaultdict in Python.
I am creating a parent dictionary which should return the above list.
I am unable to define the list type.
def index_struct():return defaultdict(list_struct)
def list_struct(): return list(int,post_struct)
def post_struct(): return defaultdict(list)
Currently I'm getting an error, as list can't take two args.
Thanks for the help in advance
You're right that list() only takes one argument. Use the square-bracket notation instead. Also note that [int, post_struct] won't work, because nothing calls the two constructors. You need to call the constructors manually by adding parentheses:
from collections import defaultdict
def index_struct():return defaultdict(list_struct)
def list_struct(): return [int(), post_struct()]
def post_struct(): return defaultdict(list)
>>> d = index_struct()
>>> d['somekey'][0] = 5
>>> d['somekey'][1]['anotherkey'] = 6
>>> d
defaultdict(<function list_struct at 0x10252ff50>, {'somekey': [5, defaultdict(<type 'list'>, {'anotherkey': 6})]})