This question already has answers here:
What is the best way to implement nested dictionaries?
(22 answers)
Closed 9 years ago.
I'm currently using the method below to define a multidimensional dictionary in Python. My question is: is this the preferred way of defining multidimensional dicts?
from collections import defaultdict

def site_struct():
    return defaultdict(board_struct)

def board_struct():
    return defaultdict(user_struct)

def user_struct():
    return dict(pageviews=0, username='', comments=0)

userdict = defaultdict(site_struct)
to get the following structure:
userdict['site1']['board1']['tommy']['username'] = 'tommy'

I'm also using this to increment counters on the fly for a user without having to check whether the key already exists or has been initialized to 0, e.g.:

userdict['site1']['board1']['tommy']['pageviews'] += 1
Tuples are hashable. Perhaps I'm missing the point, but why not use a standard dictionary with the convention that the keys are triples? For example:
userdict = {}
userdict[('site1', 'board1', 'username')] = 'tommy'
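If you also want the on-the-fly counters from the question, a Counter (or a defaultdict(int)) keyed by tuples covers that case too; here is a minimal sketch, where the (site, board, user, field) key layout is just my illustration:

from collections import Counter

counters = Counter()    # missing keys default to 0
counters[('site1', 'board1', 'tommy', 'pageviews')] += 1
counters[('site1', 'board1', 'tommy', 'comments')] += 1

print(counters[('site1', 'board1', 'tommy', 'pageviews')])   # 1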
You can create a multidimensional dictionary of any type like this:
from collections import defaultdict
from collections import Counter
def multi_dimensions(n, type):
    """ Creates an n-dimension dictionary where the n-th dimension is of type 'type'
    """
    if n <= 1:
        return type()
    return defaultdict(lambda: multi_dimensions(n-1, type))
>>> m = multi_dimensions(5, Counter)
>>> m['d1']['d2']['d3']['d4']
Counter()
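With Counter as the innermost type, the increment-on-the-fly pattern from the question falls out for free; a quick sketch using a 4-dimension version to mirror the site/board/user layout (the name site_counts is just for illustration):

>>> site_counts = multi_dimensions(4, Counter)
>>> site_counts['site1']['board1']['tommy']['pageviews'] += 1
>>> site_counts['site1']['board1']['tommy']['pageviews']
1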
This is a pretty subjective question from my perspective. For me, the real question would be at what point do you promote this nested data structure to objects with methods to insulate you from changes. However, I've been known to create large prototyping namespaces with the following:
from collections import defaultdict
def nesteddict():
    return defaultdict(nesteddict)
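For what it's worth, that one-liner gives you arbitrary-depth autovivification; a quick usage sketch (my illustration) in terms of the question's structure:

userdict = nesteddict()
userdict['site1']['board1']['tommy']['pageviews'] = 0
userdict['site1']['board1']['tommy']['pageviews'] += 1   # intermediate levels are created on demand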
This is highly subjective - but if your dict is getting to be more than two or three levels deep, I would switch to creating a class and defining methods/properties to do what you want.
I think your end goal would be to create a "multi-column" data structure so you can store any number of attributes for a particular object (in this case, a site). Creating a site class would accomplish that and then you can stick instances wherever you would any other object (variable) - with some noted exceptions.
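As a rough sketch of what that class-based version might look like (the class and method names here are made up for illustration, not from the original post):

from collections import defaultdict

class User:
    def __init__(self, username):
        self.username = username
        self.pageviews = 0
        self.comments = 0

class Board:
    def __init__(self):
        self.users = {}   # username -> User

    def user(self, username):
        # create the User the first time this username is seen
        if username not in self.users:
            self.users[username] = User(username)
        return self.users[username]

class Site:
    def __init__(self):
        self.boards = defaultdict(Board)

site = Site()
site.boards['board1'].user('tommy').pageviews += 1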
This question already has answers here:
Accessing class variables from a list comprehension in the class definition
(8 answers)
Closed 2 years ago.
Basically, why do these work:
class MyClass:
    Dict = {['A','B','C'][i]: {['a','b','c'][j]: [1,2,3][j] for j in range(3)} for i in range(3)}
and
class MyClass:
    Table = ['A','B','C']
    Dict = {Table[0]: 'a', Table[1]: 'b', Table[2]: 'c'}
but not this one?
class MyClass:
    Table = ['A','B','C']
    Dict = {Table[i]: ['a','b','c'][i] for i in range(3)}
I'm trying to consolidate a bunch of arrays, interpolation functions, and solvers by putting them all in classes. Each array contains a number of different properties, so I figured the best way to sort this data was through nested dictionaries (many of the tables aren't complete, so I can't use NumPy arrays or even lists very effectively, because the indices change depending on the line of the data). I've worked out all the other kinks, but for some reason moving it into a class gives me an error:
NameError: name 'Table' is not defined
I'm an engineering student and basically only learned how to use scipy solvers and integrators; everything else is self-taught. Don't be afraid to tell me I'm doing everything wrong :)
I think you are trying to do a dictionary comprehension, but oddly enough, the error message you are getting does not make much sense to me.
Anyway, with this implementation it worked just fine for me:
class MyClass:
    Table = ['A','B','C']
    Dict = {i: j for i, j in zip(Table, ['a', 'b', 'c'])}
This is a class-scope + comprehension-scope issue. A comprehension has its own implicit function scope, and names defined in the enclosing class body (like Table) are not visible inside it, except in the iterable of the outermost for clause, which is evaluated in the enclosing scope. That is why your second example works (it uses Table but no comprehension), why the first works (it uses comprehensions but no class variable inside them), and why the zip version above works (Table only appears in the outermost iterable).
You may want to use __init__ here.
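A minimal sketch of that __init__-based approach (purely illustrative):

class MyClass:
    def __init__(self):
        table = ['A', 'B', 'C']
        # inside a method, the comprehension sees the local name normally
        self.Dict = {table[i]: ['a', 'b', 'c'][i] for i in range(3)}

print(MyClass().Dict)   # {'A': 'a', 'B': 'b', 'C': 'c'}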
This question already has answers here:
Type annotations for Enum attribute
(5 answers)
Closed 2 years ago.
I was curious how I'd type-hint an enumeration of strings, for example:
["keyword1", "keyword2"]
I'd want some variable, v, to be equal to any of these string literals. I could accomplish this with a union of literals - Union[Literal["keyword1"], Literal["keyword2"]] - but it'd make maintainability difficult if one of these keywords gets changed in the future.
Ideally, I'd want to define things like this:
class Keywords(Enum):
    keywordOne = "keyword1"
    keywordTwo = "keyword2"

v: valueOf[Keywords] = Keywords.keywordOne.value   # v = "keyword1"
But I'm not sure how to accomplish something like this in MyPy
You're nearly there. It seems like what you're looking for is a custom enum object that is itself typed and then type annotations that dictate the use of that enum. Something like this:
from enum import Enum
from typing import Literal

class CustomKeyword(Enum):
    keywordOne: Literal["keyword1"] = "keyword1"
    keywordTwo: Literal["keyword2"] = "keyword2"

v: CustomKeyword = CustomKeyword.keywordOne
Does this not give you the expected outcome?
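As a small illustrative follow-up (not part of the original answer): annotating a parameter with the enum lets a type checker such as mypy reject anything that isn't one of its members:

def handle(keyword: CustomKeyword) -> str:
    # .value recovers the underlying string, e.g. "keyword1"
    return keyword.value

handle(CustomKeyword.keywordOne)   # OK
# handle("keyword1")               # flagged by mypy: expected CustomKeyword, got str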
I am looking for a method or library that provides a dict-like structure with methods such as element_dict_in({'polo': 789}) and element_dict_out(), where element_dict_out() returns the first item that was put into the dictionary. The two methods I mentioned are not implemented; they are just to clarify my idea. For example:

d = {}
element_dict_in({'polo1': 789})
element_dict_in({'polo2': 123})
element_dict_in({'polo3': 4556})   # {'polo1': 789, 'polo2': 123, 'polo3': 4556}
element_dict_out()                 # returns {'polo1': 789}

I found this link, pythonic way for FIFO order in Dictionary, but it is not clear enough for me. Does something like that exist?
Python actually already has this in the standard library - collections.OrderedDict.
from collections import OrderedDict
my_dict = OrderedDict()
my_dict['polo1'] = 789
my_dict['polo2'] = 123
my_dict['polo3'] = 4556
print(my_dict.popitem(last=False))
# ('polo1', 789)
Notably, the built-in dict type supports LIFO popping (via popitem()) but not FIFO popping directly, so if LIFO is acceptable to you, a plain dict is generally faster than OrderedDict for most things.
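For contrast, a quick sketch of the LIFO behaviour on a plain dict:

plain = {'polo1': 789, 'polo2': 123, 'polo3': 4556}
print(plain.popitem())   # ('polo3', 4556) -- last item in, first out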
You can do a FIFO pop with the built-in dict type as follows:
my_dict.pop(list(my_dict)[0])
with list(my_dict) you get all the keys
with list(my_dict)[0] you get the first key
with my_dict.pop(list(my_dict)[0]) you remove that first key and get its value back
This relies on dicts preserving insertion order, which is guaranteed in Python 3.7+.
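Putting it together on the question's data; using next(iter(...)) to get the first key without building a full list is an equivalent variant (my addition, not part of the original answer):

my_dict = {'polo1': 789, 'polo2': 123, 'polo3': 4556}

first_key = next(iter(my_dict))        # 'polo1'
first_value = my_dict.pop(first_key)   # removes 'polo1' and returns 789
print(first_key, first_value)          # polo1 789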
This question already has answers here:
Is there a way to store a function in a list or dictionary so that when the index (or key) is called it fires off the stored function?
(3 answers)
Closed 8 months ago.
I need to store a large number of function rules in Python (around 100,000), to be used later:

def rule1(x, y): ...
def rule2(x, y): ...

What is the best way to store and manage those rule functions in a Python structure? What about using a NumPy array with dtype=np.object? (Lists seem to become unwieldy when they get too large...)

The main goal is the fastest access and the smallest memory footprint when they are stored in memory.

Thanks
Functions are first class objects in Python - you can store them just like you'd store any other variable or value:
def a():
    pass

def b():
    pass

funcs = [a, b]
funcs[0]()   # calls `a()`.
When you use those rules, you're going to have to reference them somehow. If your example is a hint of the naming convention, then go with a list. Calling them in sequence would be easy via map or in a loop.
rules = [rule1, rule2, ...]
for fn in rules:
    fn(arg1, arg2)   # this calls rule1 and rule2 with args (as an example)
If you may also reference them by name, then use a dict, like:
rules = {'rule1': rule1, 'rule2': rule2, ...}
something = rules['rule5'](arg1, arg2)
# or
for rule in rules:   # iterates over the dict's keys
    rules[rule](arg1, arg2)
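With on the order of 100,000 rules, writing the list or dict out by hand is impractical; a common pattern (my sketch, not from the original answer) is a small decorator that registers each rule in a dict as it is defined:

rules = {}

def rule(fn):
    # register the function under its own name, then return it unchanged
    rules[fn.__name__] = fn
    return fn

@rule
def rule1(x, y):
    return x + y

@rule
def rule2(x, y):
    return x * y

print(rules['rule2'](3, 4))   # 12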
This question already has answers here:
Dictionary vs Object - which is more efficient and why?
(8 answers)
Closed 9 years ago.
Refer to the following code as an example:
import numpy as np

N = 200
some_prop = np.random.randint(0, 100, [N, N, N])

# option 1
class ObjectThing():
    def __init__(self, some_prop):
        self.some_prop = some_prop

object_thing = ObjectThing(some_prop)

# option 2
pseudo_thing = {'some_prop': some_prop}
I like the structure that option 1 provides; it makes the operation of an application more rigid and whatnot. However, I'm wondering whether there are other, more tangible benefits that I'm not aware of.
The obvious advantage of using objects is that you can extend their functionality beyond simply storing data. You could, for instance, have two attributes and define an __eq__ method that uses both of them in some way other than simply comparing each one and returning False unless both match.
Also, once you've got a class defined, you can easily create new instances that share the structure of the original. With a dictionary, you'd either have to redefine that structure or copy the original and then change each element to the values you want the new pseudo-object to have.
The primary advantages of dictionaries are that they come with a variety of pre-defined methods (such as .items()), can easily be iterated over using in, can be conveniently created using a dict comprehension, and allow for easy access of data "members" using a string variable (though really, the getattr function achieves the same thing with objects).
If you're using an implementation of Python that includes a JIT compiler (e.g. PyPy), using actual objects can improve the compiler's ability to optimize your code (because it's easier for the compiler to reason about how members of an object are utilized, unlike a plain dictionary).
Using objects also allows for subclassing, which can save some redundant implementation.
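To make the first two points concrete, a small illustrative sketch (not from the original post): the class version gets behaviour and cheap new instances, while the dict version only stores data:

import numpy as np

class ObjectThing:
    def __init__(self, some_prop, name=''):
        self.some_prop = some_prop
        self.name = name

    def __eq__(self, other):
        # "equal" here means the stored arrays have the same contents,
        # regardless of the name attribute
        return np.array_equal(self.some_prop, other.some_prop)

a = ObjectThing(np.zeros((2, 2)), name='a')
b = ObjectThing(np.zeros((2, 2)), name='b')
print(a == b)   # True, even though the names differ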