I need a memory-efficient data structure for storing about a million key-value pairs, where keys are strings of about 80 bytes and values are strings of about 200 bytes, for a total key and value size of about 280MB. I also need efficient lookup of a value by key, preferably via a hash map. The memory overhead should be as small as possible; e.g. for 280MB of useful data, the data structure shouldn't use more than 300MB of virtual memory (including malloc() overhead and everything else). The usage pattern is the following: we start with an empty data structure and populate it gradually, never changing keys and never changing the length of values. As a plus, the data structure may support changing the length of values, at the expense of a 100% value overhead (meaning that for x value bytes, x bytes might be wasted temporarily in unused buffer space).
I need a pure Python module, or a built-in Python module, or a C implementation preferably with (C)Python bindings. I'd prefer if it was possible to serialize the whole data structure to disk, and to read it back very quickly.
Just to prove that such a small overhead is possible, I've created a simple design with open addressing: a hash table of 1.25 million elements containing 4-byte pointers into 1MB data blocks, with the blocks storing the key and value lengths as base-128 varints. This design has an important limitation: it doesn't allow removing or changing pairs without wasting their memory area. According to my calculations, with 1 million key-value pairs of 280 bytes each, the overhead is less than 3.6% (10,080,000 bytes). The limits above are more generous: they allow 20,000,000 bytes of overhead.
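For illustration, here is a rough Python sketch of that design (all names are mine and purely illustrative; a real implementation would keep the offset table in C, or at least as an array('I'), to actually get 4 bytes per slot, and would shard the data into 1MB blocks rather than one growing bytearray):

from array import array

def write_varint(buf, n):
    # append n to buf as a base-128 varint
    while True:
        b = n & 0x7F
        n >>= 7
        buf.append(b | (0x80 if n else 0))
        if not n:
            return

def read_varint(buf, pos):
    # read a base-128 varint starting at pos; return (value, new_pos)
    shift = result = 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        if not (b & 0x80):
            return result, pos
        shift += 7

class PackedHashMap(object):
    """Open addressing over a packed bytearray; pairs can be added but never removed."""
    def __init__(self, slots=1250 * 1000):
        self.slots = slots
        self.table = array('I', [0]) * slots   # 4-byte offsets; 0 means "empty slot"
        self.data = bytearray()                # varint(klen) varint(vlen) key value ...

    def _find(self, key):
        i = hash(key) % self.slots
        while True:
            off = self.table[i]
            if off == 0:
                return i, None                 # empty slot: key not present
            klen, pos = read_varint(self.data, off - 1)
            vlen, pos = read_varint(self.data, pos)
            if self.data[pos:pos + klen] == key:
                return i, (pos + klen, vlen)   # start offset and length of the value
            i = (i + 1) % self.slots           # linear probing (the table is never full)

    def put(self, key, value):
        i, found = self._find(key)
        assert found is None                   # keys are never changed in this design
        self.table[i] = len(self.data) + 1     # store offset + 1 so 0 can mean "empty"
        write_varint(self.data, len(key))
        write_varint(self.data, len(value))
        self.data += key
        self.data += value

    def get(self, key):
        _, found = self._find(key)
        if found is None:
            raise KeyError(key)
        pos, vlen = found
        return bytes(self.data[pos:pos + vlen])

Usage would be along the lines of m = PackedHashMap(); m.put(b"some key", b"some value"); m.get(b"some key").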
I've just found http://www.pytables.org/ , which provides fast access and memory-efficient packing of data. I have to examine it more closely to check if it suits my needs.
Ok, the dirt-simple approach.
Use a Python dictionary for the data structure. I filled a Python dictionary with 1 million random key-value pairs where the key was 80 characters and the value 200 characters. It took 360,844 KB on my computer, which is outside your specification of no more than 300MB, but I offer it up as a solution anyway because it's still pretty memory efficient.
This also fails your requirement of having a C API. I'm not sure why you need C, but as the question is tagged Python and lacks a C tag, I'll offer the pure Python to see if it just might fit the bill.
Regarding persistence: use the cPickle module. It's very fast and, again, dirt-simple. To save your dictionary:
with open("myfile.pkl", "wb") as f:
    cPickle.dump(mydict, f)
To reload your dictionary:
with open("myfile.pkl", "rb") as f:
    mydict = cPickle.load(f)
A second dirt-simple idea is to use the shelve module, which is basically a disk-based Python dictionary. Memory overhead is very low (it's all on disk), but it's also much slower.
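A minimal sketch (the file name is just an example):

import shelve

db = shelve.open("mydata.db")        # a disk-backed dictionary
db["some-80-byte-key"] = "some-200-byte-value"
db.sync()                            # flush pending writes to disk
print(db["some-80-byte-key"])
db.close()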
Martijn mentioned this in a comment (not sure why people comment with answers), but I agree: use SQLite. You should give it a try and see if it will meet your needs.
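If you want to try that route, here is a minimal key-value sketch with the built-in sqlite3 module (file and table names are just examples):

import sqlite3

conn = sqlite3.connect("kv.db")
conn.execute("CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", ("some-key", "some-value"))
conn.commit()
print(conn.execute("SELECT value FROM kv WHERE key = ?", ("some-key",)).fetchone()[0])
conn.close()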
If you don't plan to have a large number of deletes, then this isn't that hard. Deletes lead to fragmentation.
You also need to commit to a fixed-length key. You mentioned 80 bytes. Are your keys allowed to duplicate? If not, it's even easier.
So, here is what you do.
You create an array of:
struct key {
    char value[80];   /* fixed-length key bytes */
    char *data;       /* pointer to the value */
};
And you keep this array sorted.
If your keys can duplicate, then you need:
struct link {
    char *data;          /* one value */
    struct link *next;   /* next value for the same key */
};

struct key {
    char value[80];      /* fixed-length key bytes */
    struct link *data;   /* head of a linked list of values */
};
(My C is rusty, but this is the gist of it) The latter has each key pointing to a linked list of values.
Then a lookup is a simple binary search. The "pain" is in maintaining this array and inserting/deleting keys. It's not as painful as it sounds, but it saves a LOT of memory, especially on 64-bit systems.
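Translated to Python, the same layout is roughly two parallel sorted lists searched with bisect. A minimal sketch (class and method names are invented; it keeps the lookup logic but not the pointer savings, since Python strings remain separate objects):

import bisect

class SortedKVStore(object):
    def __init__(self):
        self.keys = []    # kept sorted at all times
        self.values = []  # values[i] belongs to keys[i]

    def insert(self, key, value):
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            self.values[i] = value        # no duplicate keys: replace
        else:
            self.keys.insert(i, key)      # O(n) shift, just like the C version
            self.values.insert(i, value)

    def lookup(self, key):
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            return self.values[i]
        raise KeyError(key)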
What you want to reduce is the number of pointers. Pointers are expensive when you have lots of structures filled with pointers. On a 64-bit system a pointer is 8 bytes, so one pointer per entry already costs 8MB of your memory budget across a million entries.
So, the expense is in building the array and in copying and compacting memory (if you "know" you will have a million rows and can commit to that, then malloc(1000000 * sizeof(struct key)) right away; it'll save you some copying during expansion).
But don't be afraid: once it's up and running, performance is quite good. Modern CPUs are actually pretty good at copying 100MB blocks of memory around.
Just as an aside, I just did something much like this in Java. On a 64-bit JVM, a Map with 25M entries takes 2GB of RAM; my solution (using similar techniques to this) comes in at around 600MB. Java uses more pointers than C, but the premise is the same.
Have you tried using a straightforward dict? Most of your data is in strings, so the overhead might fit within your requirements.
You can use the sha1 of the key instead of the key itself. If the keys are unique, then their sha1 hashes very likely are, too. That saves some memory and may help you squeak in under your limit.
from random import choice
from string import letters
from hashlib import sha1
def keygen(length):
    return "".join(choice(letters) for _ in xrange(length))

def gentestdata(n=1000*1000):
    # return dict((sha1(keygen(80)).digest(), keygen(200)) for _ in xrange(n))
    d = {}
    for _ in xrange(n):
        key = sha1(keygen(80)).digest()
        assert key not in d
        value = keygen(200)
        d[key] = value
    return d

if __name__ == '__main__':
    d = gentestdata()
On my ubuntu box, this tops out at 304 MB of memory:
2010-10-26 14:26:02 hbrown@hbrown-ubuntu-wks:~$ ps aux | grep python
[...]
hbrown 12082 78.2 7.5 307420 303128 pts/1 S+ 14:20 4:47 python
Close enough? It's python, not C.
Later: also, if your data is somewhat redundant, you can gzip the values. It's a time versus space trade-off.
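For example, something along these lines (zlib here; the gzip module works too):

import zlib

value = "a fairly redundant 200-byte value " * 6
packed = zlib.compress(value.encode("utf-8"))      # store the compressed bytes as the dict value
restored = zlib.decompress(packed).decode("utf-8")
assert restored == value
print(len(value), len(packed))                     # savings depend entirely on how redundant the data is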
Using SQLite is a good idea. A quick implementation can tell you, with little effort, whether it is fast enough.
If you determine you have to roll your own, I'd recommend the following:
How well can you predict the number of pairs, or an upper limit for that?
How well can you predict the total data size, or an upper limit for that?
Arena allocator for strings and nodes. (Usually, you'd work on a list of arenas, so you don't have to predict the total size).
Alignment depends on your algorithms; in principle you could pack it byte-tight, and the only overhead is your over-allocation, which only minimally affects your working set.
However, if you have to run any cmp/copy etc. operations on these strings, remember that with the following guarantees, you can squeeze a little or a lot from these string operations:
all elements are CPU word aligned
all pad bytes are (e.g.) 0
you can safely read "beyond" a string end as long as you don't cross a CPU border
Hash table for the index. A dictionary would work, too, but that makes sense only if potential degradation / rehashing would be a serious issue. I don't know any "stock" hashtable implementation for C, but there should be one, right? right? Just replace allocations with calls to the arena allocator.
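In Python terms, the arena-plus-index idea looks roughly like this (a minimal sketch with invented names; a real implementation would do this in C, with a list of fixed-size arenas rather than one growing buffer):

class StringArena(object):
    def __init__(self):
        self.buf = bytearray()            # all stored bytes live in this one buffer

    def store(self, s):
        off = len(self.buf)
        self.buf += s
        return off, len(s)                # (offset, length) is all the index needs

    def fetch(self, off, length):
        return bytes(self.buf[off:off + length])

arena = StringArena()
index = {}                                # key -> (offset, length) of the value in the arena

off, length = arena.store(b"some 200-byte value")
index[b"some 80-byte key"] = (off, length)
print(arena.fetch(*index[b"some 80-byte key"]))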
Memory Locality
If you can guarantee that lookup will never request a string that is not in the map, you should store the keys in a separate arena, as they are needed only on hash collisions. That can improve memory locality significantly. (In that case, if you ever have a "final" table, you could even copy the colliding keys to a new arena, and throw away all the others. The benefits of that are probably marginal, though.)
Separation can help or hurt, depending on your access patterns. If you typically use the value once after each lookup, having them pair-wise in the same arena is great. If you e.g. look up a few keys, then use their values repeatedly, separate arenas make sense.
If you have to support "funny characters" / Unicode, normalize your strings before storing them.
You could use the struct module to pack binary data and unpack it when needed.
You can implement a memory efficient storage using this approach. I guess access would be a pain.
http://docs.python.org/library/struct.html
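A rough sketch of what a packed record could look like (the record layout of a 2-byte key length, a 4-byte value length, then the raw bytes is made up for illustration):

import struct

key, value = b"k" * 80, b"v" * 200
record = struct.pack("<HI", len(key), len(value)) + key + value   # 6-byte header + raw bytes

klen, vlen = struct.unpack_from("<HI", record, 0)
header = struct.calcsize("<HI")
unpacked_key = record[header:header + klen]
unpacked_value = record[header + klen:header + klen + vlen]
assert (unpacked_key, unpacked_value) == (key, value)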
Apache Portable Runtime (aka APR) has a C-based hash table. You can see documentation at http://apr.apache.org/docs/apr/0.9/group_apr_hash.html
With apr_hash_t, all you store is a void*, so it gives you full control over values. If you want, you can store a pointer to a fixed 100-byte block instead of tracking the actual length of the string.
Judy should be memory-efficient: http://judy.sourceforge.net/
(Benchmarks: http://www.nothings.org/computer/judy/, see "Data Structure Size").
See also: http://www.dalkescientific.com/Python/PyJudy.html
Also,
For keys of a fixed size there is http://panthema.net/2007/stx-btree/ in C++ (I'm sure that with a custom C wrapper it could be used from CPython).
If the dataset allows it, you can store the variable-length keys in the value and use a hash or a prefix of the variable-length key as the fixed-length key.
The same logic applies to http://google-opensource.blogspot.ru/2013/01/c-containers-that-save-memory-and-time.html and http://code.google.com/p/sparsehash/ - instead of using a heavy std::string as a key, use a 32-bit or 64-bit integer key, derived somehow from the real variable-length key.
Since I couldn't find any existing solutions which will pack the memory tightly, I've decided to implement it in C for myself. See my design with open addressing in the question.
Related
I have a large structure of primitive types within nested dict/list. The structure is quite complicated and doesn't really matter.
If I represent it in python's built-in types (dict/list/float/int/str) it takes 1.1 GB, but if I store it in protobuf and load it to memory it is significantly smaller. ~250 MB total.
I'm wondering how can this be. Are the built-in types in python inefficient in comparison to some external library?
Edit: The structure is loaded from a JSON file, so there are no internal references between objects.
"Simple" python objects, such as int or float, need much more memory than their C-counterparts used by protobuf.
Let's take a list of Python integers as an example, compared to an array of integers such as array.array('i', ...).
The analysis for array.array is simple: discarding some overhead from the array.array object itself, only 4 bytes (the size of a C integer) are needed per element.
The situation is completely different for a list of integers:
the list holds not the integer objects themselves but pointers to the objects (8 additional bytes for a 64-bit executable)
even a small non-zero integer needs at least 28 bytes (see import sys; sys.getsizeof(1) returns 28): 8 bytes are needed for reference counting, 8 bytes to hold a pointer to the integer-function table, 8 bytes are needed for the size of the integer value (Python's integers can be much bigger than 2^32), and at least 4 bytes to hold the integer value itself.
there is also an overhead for memory management of 4.5 bytes.
This means there is a whopping cost of 40.5 bytes per Python integer compared to the possible 4 bytes (or 8 bytes if we use long long int, i.e. 64bit integers).
The situation is similar for a list of Python floats compared to an array of doubles (i.e. array.array('d', ...)), which needs only about 8 bytes per element. But for a list we have:
the list holds not the float objects themselves but pointers to the objects (8 additional bytes for a 64-bit executable)
a float object needs 24 bytes (see import sys; sys.getsizeof(1.0) returns 24): 8 bytes are needed for reference counting, 8 bytes to hold a pointer to the float-function table, and 8 bytes to hold the double-value itself.
because 24 is a multiple of 8, the overhead for memory management is "only" about 0.5 bytes.
Which means 32.5 bytes for a Python float object vs. 8 bytes for a C double.
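A quick way to see that gap on your own machine (exact numbers vary by Python version and build; this measurement is mine, not part of the analysis above):

import sys
from array import array

n = 100 * 1000
as_list = [float(i) for i in range(n)]
as_array = array('d', (float(i) for i in range(n)))

# list: one pointer per element plus a separate 24-byte float object for each element
list_bytes = sys.getsizeof(as_list) + sum(sys.getsizeof(x) for x in as_list)
# array.array: one contiguous buffer of raw 8-byte C doubles
array_bytes = sys.getsizeof(as_array)

print(list_bytes / float(n), array_bytes / float(n))   # roughly ~32 vs. ~8-9 bytes per element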
Internally, protobuf uses the same kind of raw representation for the data as array.array and thus needs much less memory (about 4-5 times less, as you observe). numpy.array is another example of a data type which holds raw C values and thus needs much less memory than lists.
If one doesn't need to search by key, then saving the key-value pairs in a list needs less memory than a dictionary, because one doesn't have to maintain a structure for searching (which imposes some memory cost). This is another thing that leads to the smaller memory footprint of protobuf data.
To answer your other question: there is no built-in module that is to Python's dict what array.array is to Python's list, so I'll use this opportunity to shamelessly plug a library of mine: cykhash.
Sets and maps from cykhash need less than 25% of the memory of Python's dict/set but are about as fast.
This is normal and it's all about space vs. time tradeoff. Memory layout depends on the way how a particular data structure is implemented, which in turn depends on how it is going to be used.
A general-purpose dictionary is typically implemented with a hashtable. It has a fixed-size list of buckets that store key-value pairs. The number of items in a dictionary can be smaller than, equal to, or bigger than the number of buckets. If smaller, space is wasted. If bigger, dictionary operations take a long time. A hashtable implementation usually starts with a small initial bucket list and then grows it as new items are added, to keep performance decent. However, resizing also requires rehashing, which is computationally very expensive, so whenever you do it, you want to leave some room for growth. General-purpose dictionaries are a trade-off between space and time because they don't "know" how many elements they are supposed to contain and because there is no perfect hash function. But in a good-enough case, a general-purpose hashtable will give you near-O(1) performance.
When data is serialized it's a different story. Data in transit does not change, you are not doing lookups with it, it is not subjected to garbage collection, boundary alignment and so on. This means you can simply pack keys and values one after another for space efficiency. You need virtually no metadata and no control structures as long as the values can be reconstructed. On the downside, manipulating packed data is very slow because all operations take O(n) time.
For this reason, you will almost always want to:
convert data from time-efficient into space-efficient format before sending it
convert data from space-efficient into time-efficient format after receiving it.
If you are using nested dictionaries (or lists, which are in many ways similar), the differences will add up and become even more pronounced. When you know the number of items in advance and the data does not change much, you can probably get some improvement by preallocating the memory for it, such as dict.fromkeys(range(count)).
I'd like to create a random permutation of the numbers [1,2,...,N] where N is a big number. So I don't want to store all elements of the permutation in memory, but rather iterate over the elements of my particular permutation without holding former values in memory.
Any idea how to do that in Python?
One possibility is to use encryption. Since encryption is reversible, i.e. one-to-one, for a given key you will get back the same numbers you encrypt, but in a different order.
You need a block cipher with a block size large enough to include your maximum N. Use DES in ECB mode for N = 2^64 - 1. Use AES in ECB mode for N = 2^128 - 1. For other sizes, either use the Hasty Pudding cipher, which has a variable block size, or write your own simple Feistel cipher. I assume that you just need a shuffle, not a cryptographically secure shuffle.
If the output is greater than N, then just re-encrypt until it is less than N; the 1-to-1 property ensures that the chain of out-of-range numbers is also unique.
There is no need to store the entire array in memory, each number can be encrypted as needed. Just the key and the cipher algorithm are needed. One slight complication is that block ciphers work on [0 ... N-1]; you might need some extra code to deal with the extremes.
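A minimal sketch of that idea in Python (a toy Feistel network plus cycle walking; the round function and parameters are made up for illustration, and it produces a permutation of [0, N) rather than [1, N]):

import hashlib
import struct

def permuted_index(i, key, n, rounds=4):
    # Map i in [0, n) to a unique j in [0, n).
    half_bits = 1
    while (1 << (2 * half_bits)) < n:          # smallest even bit width covering n
        half_bits += 1
    mask = (1 << half_bits) - 1

    def round_fn(r, right):                    # keyed pseudo-random round function
        digest = hashlib.sha256(("%s:%d:%d" % (key, r, right)).encode()).digest()
        return struct.unpack(">I", digest[:4])[0] & mask

    def encrypt(x):                            # a balanced Feistel network is a bijection
        left, right = x >> half_bits, x & mask
        for r in range(rounds):
            left, right = right, left ^ round_fn(r, right)
        return (left << half_bits) | right

    j = encrypt(i)
    while j >= n:                              # cycle walking: re-encrypt until we land inside [0, n)
        j = encrypt(j)
    return j

For example, sorted(permuted_index(i, "secret", 1000) for i in range(1000)) == list(range(1000)) evaluates to True, so it really is a permutation.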
This is a generic issue rather than a Python-specific one. In most languages, even when iterators are used to traverse structures, the whole structure is kept in memory; iterators are mainly "functional" tools, not memory-optimization tools.
In Python, a lot of people end up using a lot of memory because of really big structures (dictionaries etc.), but all of the program's live objects are stored in memory either way. The only way around that is to serialize the data (to the filesystem, a database, etc.).
So, in your case, you could write a customized function that generates the permutation, but instead of appending each element to an in-memory list, it would save the element to a file (or to a database with an appropriate structure). Then you could retrieve the elements one by one from the file (or the database) without bringing the whole list into memory.
However, as mentioned before, you always have to know where in the permutation you currently are. To avoid retrieving all the previously generated elements from the database (which would create the same bottleneck), you could keep an index for each position holding the symbol used at that point of the previously generated permutation (and build new permutations by combining those symbols with a predefined sequence).
I am building a very large dictionary and performing many checks to see whether a key is in the structure, then adding it if it is unique or incrementing a counter if it already exists.
Python uses a hash data structure to store dictionaries (not to be confused with a cryptographic hash function). Lookups are O(1), but if the hash table is full it has to be rehashed, which is very expensive.
My question is: would I be better off using an AVL binary search tree, or is a hash table good enough?
The only way to be sure would be to implement both and check, but my informed guess is that the dictionary will be faster, because a binary search tree has cost O(log(n)) for lookup and insertion, and I think that except under the most pessimal of situations (such as massive hash collisions) the hash table's O(1) lookup will outweigh the occasional resize.
If you take a look at the Python dictionary implementation, you'll see that:
a dictionary starts out with 8 entries (PyDict_MINSIZE);
a dictionary with 50,000 or fewer entries quadruples in size when it grows;
a dictionary with more than 50,000 entries doubles in size when it grows;
key hashes are cached in the dictionary, so they are not recomputed when the dictionary is resized.
(The "NOTES ON OPTIMIZING DICTIONARIES" are worth reading too.)
So if your dictionary has 1,000,000 entries, I believe that it will be resized eleven times (8 → 32 → 128 → 512 → 2048 → 8192 → 32768 → 131072 → 262144 → 524288 → 1048576 → 2097152) at a cost of 2,009,768 extra insertions during the resizes. This seems likely to be much less than the cost of all the rebalancing involved in 1,000,000 insertions into an AVL tree.
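A quick sanity check of that count, using a simplified model of the growth schedule just described (it ignores deleted-slot bookkeeping and the exact rounding CPython uses, so treat it as an approximation):

size, resizes = 8, []
for entries in range(1, 1000 * 1000 + 1):
    if entries * 3 >= size * 2:                # table would exceed 2/3 full
        size *= 4 if entries <= 50000 else 2   # quadruple when small, double when large
        resizes.append(size)

print(len(resizes))   # 11
print(resizes)        # [32, 128, 512, ..., 1048576, 2097152]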
What's the ratio of items vs unique items?
What's the expected number of unique items?
If a hash bucket fills, then extending should just be a matter of some memory reallocation, not rehashing.
Testing a counting dict should be very quick and easy to do.
Note also the Counter class, available since Python 2.7:
http://docs.python.org/library/collections.html#counter-objects
http://svn.python.org/view?view=rev&revision=68559
Python dictionaries are highly optimized. The CPython dictionary implementation contains various special-case optimizations that the Python developers cater for.
In CPython, all PyDictObjects are optimized for dictionaries containing only string keys.
Python's dictionary makes an effort to never be more than 2/3rds full.
The book "Beautiful Code" discusses this all.
The eighteenth chapter is "Python's Dictionary Implementation: Being All Things to All People" by Andrew Kuchling.
It is much better to use the built-in dict than to attempt a hand-crafted custom implementation, which would have to replicate all these optimizations to get anywhere near the performance of CPython's dictionary lookups.
You would have to implement your own data structures in C to stand a reasonable chance of beating the built-in structures.
Also, you can avoid some of the overhead by using get, which avoids looking up existing elements twice.
Or collections.Counter if you are using python 2.7+.
def increment(map, key):
    map[key] = map.get(key, 0) + 1
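And the Counter-based version, for comparison (available since Python 2.7):

from collections import Counter

counts = Counter()
for word in ["a", "b", "a", "c", "a"]:
    counts[word] += 1          # missing keys default to 0, so no get() is needed
print(counts.most_common(1))   # [('a', 3)]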
Using a dict is O(1). As the dict grows, reallocation is sometimes required, but that is amortized O(1).
If your other algorithm is O(log n), the simple dict will always beat it as the dataset grows larger.
If you use any type of tree, I would expect a O(log n) component in there somewhere.
Not only is a hash table good enough, it is better.
Say there is a dict variable that grows very large during runtime - up into millions of key:value pairs.
Does this variable get stored in RAM, effectively using up all the available memory and slowing down the rest of the system?
Asking the interpreter to display the entire dict is a bad idea, but would it be okay as long as one key is accessed at a time?
Yes, the dict will be stored in the process memory. So if it gets large enough that there's not enough room in the system RAM, then you can expect to see massive slowdown as the system starts swapping memory to and from disk.
Others have said that a few million items shouldn't pose a problem; I'm not so sure. The dict overhead itself (before counting the memory taken by the keys and values) is significant. For Python 2.6 or later, sys.getsizeof gives some useful information about how much RAM various Python structures take up. Some quick results, from Python 2.6 on a 64-bit OS X machine:
>>> from sys import getsizeof
>>> getsizeof(dict((n, 0) for n in range(5462)))/5462.
144.03368729403149
>>> getsizeof(dict((n, 0) for n in range(5461)))/5461.
36.053470060428495
So the dict overhead varies between 36 bytes per item and 144 bytes per item on this machine (the exact value depending on how full the dictionary's internal hash table is; here 5461 = 2**14//3 is one of the thresholds where the internal hash table is enlarged). And that's before adding the overhead for the dict items themselves; if they're all short strings (6 characters or less, say) then that still adds another >= 80 bytes per item (possibly less if many different keys share the same value).
So it wouldn't take that many million dict items to exhaust RAM on a typical machine.
The main concern with the millions of items is not the dictionary itself so much as how much space each of these items takes up. Still, unless you're doing something weird, they should probably fit.
If you've got a dict with millions of keys, though, you're probably doing something wrong. You should do one or both of:
Figure out what data structure you should actually be using, because a single dict is probably not the right answer. Exactly what this would be depends on what you're doing.
Use a database. Your Python should come with a sqlite3 module, so that's a start.
Yes, a Python dict is stored in RAM. A few million keys isn't an issue for modern computers, however. If you need more and more data and RAM is running out, consider using a real database. Options include a relational DB like SQLite (built-in in Python, by the way) or a key-value store like Redis.
It makes little sense displaying millions of items in the interpreter, but accessing a single element should be still very efficient.
For all I know Python uses the best hashing algorithms so you are probably going to get the best possible memory efficiency and performance. Now, whether the whole thing is kept in RAM or committed to a swap file is up to your OS and depends on the amount of RAM you have.
What I'd say is best is to just try it:
from random import randint

a = {}
for i in xrange(10*10**6):
    a[i] = i
How does this look when you run it? It takes about 350MB on my system, which should be manageable to say the least.
I'm writing an application in Python (2.6) that requires me to use a dictionary as a data store.
I am curious as to whether or not it is more memory efficient to have one large dictionary, or to break that down into many (much) smaller dictionaries, then have an "index" dictionary that contains a reference to all the smaller dictionaries.
I know there is a lot of overhead in general with lists and dictionaries. I read somewhere that Python internally allocates enough space to hold the dictionary/list's number of items to the power of 2.
I'm new enough to Python that I'm not sure whether there are other unexpected internal complexities/surprises like that, not apparent to the average user, that I should take into consideration.
One of the difficulties is knowing how the power-of-2 system counts "items". Is each key:value pair counted as 1 item? That seems important to know, because if you have a 100-item monolithic dictionary then space for 100^2 items would be allocated, whereas if you have 100 single-item dictionaries (1 key:value pair each) then each dictionary would only allocate 1^2 items (i.e. no extra allocation)?
Any clearly laid out information would be very helpful!
Three suggestions:
Use one dictionary.
It's easier, it's more straightforward, and someone else has already optimized this problem for you. Until you've actually measured your code and traced a performance problem to this part of it, you have no reason not to do the simple, straightforward thing.
Optimize later.
If you are really worried about performance, then abstract the problem: make a class to wrap whatever lookup mechanism you end up using, and write your code to use this class. You can change the implementation later if you find you need some other data structure for greater performance.
Read up on hash tables.
Dictionaries are hash tables, and if you are worried about their time or space overhead, you should read up on how they're implemented. This is basic computer science. The short of it is that hash tables are:
average case O(1) lookup time
O(n) space (Expect about 2n, depending on various parameters)
I do not know where you read that they were O(n^2) space, but if they were, then they would not be in widespread, practical use as they are in most languages today. There are two advantages to these nice properties of hash tables:
O(1) lookup time implies that you will not pay a cost in lookup time for having a larger dictionary, as lookup time doesn't depend on size.
O(n) space implies that you don't gain much of anything from breaking your dictionary up into smaller pieces. Space scales linearly with number of elements, so lots of small dictionaries will not take up significantly less space than one large one or vice versa. This would not be true if they were O(n^2) space, but lucky for you, they're not.
Here are some more resources that might help:
The Wikipedia article on Hash Tables gives a great listing of the various lookup and allocation schemes used in hashtables.
The GNU Scheme documentation has a nice discussion of how much space you can expect hashtables to take up, including a formal discussion of why "the amount of space used by the hash table is proportional to the number of associations in the table". This might interest you.
Here are some things you might consider if you find you actually need to optimize your dictionary implementation:
Here is the C source code for Python's dictionaries, in case you want ALL the details. There's copious documentation in here:
dictobject.h
dictobject.c
Here is a python implementation of that, in case you don't like reading C.
(Thanks to Ben Peterson)
The Java Hashtable class docs talk a bit about how load factors work, and how they affect the space your hash takes up. Note there's a tradeoff between your load factor and how frequently you need to rehash. Rehashes can be costly.
If you're using Python, you really shouldn't be worrying about this sort of thing in the first place. Just build your data structure the way it best suits your needs, not the computer's.
This smacks of premature optimization, not performance improvement. Profile your code if something is actually bottlenecking, but until then, just let Python do what it does and focus on the actual programming task, and not the underlying mechanics.
"Simple" is generally better than "clever", especially if you have no tested reason to go beyond "simple". And anyway "Memory efficient" is an ambiguous term, and there are tradeoffs, when you consider persisting, serializing, cacheing, swapping, and a whole bunch of other stuff that someone else has already thought through so that in most cases you don't need to.
Think "Simplest way to handle it properly" optimize much later.
Premature optimization bla bla, don't do it bla bla.
I think you're mistaken about what the power-of-two extra allocation does. I think it's just a multiplier of two: x*2, not x^2.
I've seen this question a few times on various python mailing lists.
With regards to memory, here's a paraphrased version of one such discussion (the post in question wanted to store hundreds of millions integers):
A set() is more space efficient than a dict(), if you just want to test for membership
gmpy has a bitvector type class for storing dense sets of integers
Dicts are kept between 30% and 50% empty, and an entry is about ~12 bytes (though the true amount will vary a bit by platform).
So, the fewer objects you have, the less memory you're going to be using, and the fewer lookups you're going to do (since you'd have to do a lookup in the index and then a second lookup for the actual value).
Like others said, profile to see your bottlenecks. Keeping a membership set() and a value dict() might be faster, but you'll be using more memory.
I'd also suggest reposting this to a python specific list, such as comp.lang.python, which is full of much more knowledgeable people than myself who would give you all sorts of useful information.
If your dictionary is so big that it does not fit into memory, you might want to have a look at ZODB, a very mature object database for Python.
The 'root' of the db has the same interface as a dictionary, and you don't need to load the whole data structure into memory at once; e.g. you can iterate over only a portion of the structure by providing start and end keys.
It also provides transactions and versioning.
Honestly, you won't be able to tell the difference either way, in terms of either performance or memory usage. Unless you're dealing with tens of millions of items or more, the performance or memory impact is just noise.
From the way you worded your second sentence, it sounds like the one big dictionary is your first inclination, and matches more closely with the problem you're trying to solve. If that's true, go with that. What you'll find about Python is that the solutions that everyone considers 'right' nearly always turn out to be those that are as clear and simple as possible.
Oftentimes, dictionaries of dictionaries are useful for reasons other than performance; i.e., they allow you to store context information about the data without having extra fields on the objects themselves, and they make querying subsets of the data faster.
In terms of memory usage, it stands to reason that one large dictionary will use less RAM than multiple smaller ones. Remember, if you're nesting dictionaries, each additional layer of nesting will roughly double the number of dictionaries you need to allocate.
In terms of query speed, multiple dicts will take longer due to the increased number of lookups required.
So I think the only way to answer this question is for you to profile your own code. However, my suggestion is to use the method that makes your code the cleanest and easiest to maintain. Of all the features of Python, dictionaries are probably the most heavily tweaked for optimal performance.