How to retrieve an element from a set without removing it? [python]

Suppose the following:
>>> s = set([1, 2, 3])
How do I get a value (any value) out of s without doing s.pop()? I want to leave the item in the set until I am sure I can remove it - something I can only be sure of after an asynchronous call to another host.
Quick and dirty:
>>> elem = s.pop()
>>> s.add(elem)
But do you know of a better way? Ideally in constant time.

Two options that don't require copying the whole set:
for e in s:
    break
# e is now an element from s
Or...
e = next(iter(s))
But in general, sets don't support indexing or slicing.
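If the set might be empty, note that next(iter(s)) raises StopIteration (and the for-loop simply leaves e unbound). next() takes a default argument for exactly this case:
e = next(iter(s), None)  # None, or any sentinel you prefer, when s is empty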

Least code would be:
>>> s = set([1, 2, 3])
>>> list(s)[0]
1
Obviously this would create a new list which contains each member of the set, so not great if your set is very large.

I wondered how the functions will perform for different sets, so I did a benchmark:
from random import sample

def ForLoop(s):
    for e in s:
        break
    return e

def IterNext(s):
    return next(iter(s))

def ListIndex(s):
    return list(s)[0]

def PopAdd(s):
    e = s.pop()
    s.add(e)
    return e

def RandomSample(s):
    return sample(s, 1)

def SetUnpacking(s):
    e, *_ = s
    return e
from simple_benchmark import benchmark
from iteration_utilities import first  # introduced below; it is included in the benchmark

b = benchmark([ForLoop, IterNext, ListIndex, PopAdd, RandomSample, SetUnpacking, first],
              {2**i: set(range(2**i)) for i in range(1, 20)},
              argument_name='set size',
              function_aliases={first: 'First'})
b.plot()
This plot clearly shows that some approaches (RandomSample, SetUnpacking and ListIndex) depend on the size of the set and should be avoided in the general case (at least if performance might be important). As the other answers already show, the fastest way is ForLoop.
However, as long as one of the constant-time approaches is used, the performance difference will be negligible.
iteration_utilities (Disclaimer: I'm the author) contains a convenience function for this use-case: first:
>>> from iteration_utilities import first
>>> first({1,2,3,4})
1
I also included it in the benchmark above. It can compete with the other two "fast" solutions but the difference isn't much either way.
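(If you don't have simple_benchmark installed, a rough stand-in using only the standard library might look like the following; the sizes and output format here are ours, not the plot above:)
from timeit import timeit

for exp in (5, 10, 15, 20):
    s = set(range(2 ** exp))
    t_loop = timeit(lambda: ForLoop(s), number=10000)
    t_list = timeit(lambda: ListIndex(s), number=10000)
    print('2**%d: ForLoop %.4fs, ListIndex %.4fs' % (exp, t_loop, t_list))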

tl;dr
for first_item in muh_set: break remains the optimal approach in Python 3.x. Curse you, Guido.
y u do this
Welcome to yet another set of Python 3.x timings, extrapolated from wr.'s excellent Python 2.x-specific response. Unlike AChampion's equally helpful Python 3.x-specific response, the timings below also time outlier solutions suggested above – including:
list(s)[0], John's novel sequence-based solution.
random.sample(s, 1), dF.'s eclectic RNG-based solution.
Code Snippets for Great Joy
Turn on, tune in, time it:
from timeit import Timer

stats = [
    "for i in range(1000): \n\tfor x in s: \n\t\tbreak",
    "for i in range(1000): next(iter(s))",
    "for i in range(1000): s.add(s.pop())",
    "for i in range(1000): list(s)[0]",
    "for i in range(1000): random.sample(s, 1)",
]

for stat in stats:
    t = Timer(stat, setup="import random\ns=set(range(100))")
    try:
        print("Time for %s:\t %f" % (stat, t.timeit(number=1000)))
    except:
        t.print_exc()
Quickly Obsoleted Timeless Timings
Behold! Ordered by fastest to slowest snippets:
$ ./test_get.py
Time for for i in range(1000):
for x in s:
break: 0.249871
Time for for i in range(1000): next(iter(s)): 0.526266
Time for for i in range(1000): s.add(s.pop()): 0.658832
Time for for i in range(1000): list(s)[0]: 4.117106
Time for for i in range(1000): random.sample(s, 1): 21.851104
Faceplants for the Whole Family
Unsurprisingly, manual iteration remains at least twice as fast as the next fastest solution. Although the gap has decreased from the Bad Old Python 2.x days (in which manual iteration was at least four times as fast), it disappoints the PEP 20 zealot in me that the most verbose solution is the best. At least converting a set into a list just to extract the first element of the set is as horrible as expected. Thank Guido, may his light continue to guide us.
Surprisingly, the RNG-based solution is absolutely horrible. List conversion is bad, but random really takes the awful-sauce cake. So much for the Random Number God.
I just wish the amorphous They would PEP up a set.get_first() method for us already. If you're reading this, They: "Please. Do something."
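(Until They do, a three-line substitute gives you the method-ish spelling; the name get_first is ours, not a real set method:)
def get_first(s):
    # return an arbitrary element of s without removing it; KeyError if empty
    for item in s:
        return item
    raise KeyError('get_first(): set is empty')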

To provide some timing figures behind the different approaches, consider the following code.
The get() is my custom addition to Python's setobject.c, being just a pop() without removing the element.
from timeit import *

stats = ["for i in xrange(1000): iter(s).next() ",
         "for i in xrange(1000): \n\tfor x in s: \n\t\tbreak",
         "for i in xrange(1000): s.add(s.pop()) ",
         "for i in xrange(1000): s.get() "]

for stat in stats:
    t = Timer(stat, setup="s=set(range(100))")
    try:
        print "Time for %s:\t %f" % (stat, t.timeit(number=1000))
    except:
        t.print_exc()
The output is:
$ ./test_get.py
Time for for i in xrange(1000): iter(s).next() : 0.433080
Time for for i in xrange(1000):
for x in s:
break: 0.148695
Time for for i in xrange(1000): s.add(s.pop()) : 0.317418
Time for for i in xrange(1000): s.get() : 0.146673
This means that the for/break solution is the fastest (sometimes faster than the custom get() solution).
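For those who would rather not patch setobject.c, a pure-Python approximation of that get() (a sketch, not the author's C code) could be:
class PeekSet(set):
    def get(self):
        # pop() semantics without the removal
        for item in self:
            return item
        raise KeyError('get from an empty set')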

Since you want a random element, this will also work:
>>> import random
>>> s = set([1,2,3])
>>> random.sample(s, 1)
[2]
The documentation doesn't seem to mention the performance of random.sample. From a really quick empirical test with a huge list and a huge set, it seems to be constant time for a list but not for the set. (Note for modern readers: random.sample stopped accepting sets directly in Python 3.11; convert to a list first.) Also, iteration over a set isn't random; the order is undefined but predictable:
>>> list(set(range(10))) == range(10)
True
If randomness is important and you need a bunch of elements in constant time (large sets), I'd use random.sample and convert to a list first:
>>> lst = list(s) # once, O(len(s))?
...
>>> e = random.sample(lst, 1)[0] # constant time
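If you need many independent random picks, the usual idiom is random.choice on the cached list, since each pick is then O(1):
import random
lst = list(s)                                   # O(len(s)), once
picks = [random.choice(lst) for _ in range(5)]  # each choice is O(1)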

Yet another way in Python 3:
next(iter(s))
or
s.__iter__().__next__()

Seemingly the most compact (6 symbols) though very slow way to get a set element (made possible by PEP 3132):
e,*_=s
With Python 3.5+ you can also use this 7-symbol expression (thanks to PEP 448):
[*s][0]
Both options are roughly 1000 times slower on my machine than the for-loop method, since both must consume the entire set (e, *_ = s packs all remaining elements into a list, and [*s] copies every element).

I use a utility function I wrote. Its name is somewhat misleading because it kind of implies it might be a random item or something like that.
def anyitem(iterable):
    try:
        return next(iter(iterable))  # Python 3 spelling; the Python 2 original used iter(iterable).next()
    except StopIteration:
        return None

Following wr.'s post, I get similar results (with Python 3.5):
from timeit import *

stats = ["for i in range(1000): next(iter(s))",
         "for i in range(1000): \n\tfor x in s: \n\t\tbreak",
         "for i in range(1000): s.add(s.pop())"]

for stat in stats:
    t = Timer(stat, setup="s=set(range(100000))")
    try:
        print("Time for %s:\t %f" % (stat, t.timeit(number=1000)))
    except:
        t.print_exc()
Output:
Time for for i in range(1000): next(iter(s)): 0.205888
Time for for i in range(1000):
for x in s:
break: 0.083397
Time for for i in range(1000): s.add(s.pop()): 0.226570
However, when changing the underlying set (e.g. calling remove()), things go badly for the iteration-based examples (for, iter). A likely explanation (a CPython implementation detail): next(iter(s)) and the for-loop always scan the hash table from the beginning, so as the set empties each call walks an ever-longer run of vacated slots, while pop() keeps an internal search finger and resumes where it left off:
from timeit import *

stats = ["while s:\n\ta = next(iter(s))\n\ts.remove(a)",
         "while s:\n\tfor x in s: break\n\ts.remove(x)",
         "while s:\n\tx=s.pop()\n\ts.add(x)\n\ts.remove(x)"]

for stat in stats:
    t = Timer(stat, setup="s=set(range(100000))")
    try:
        print("Time for %s:\t %f" % (stat, t.timeit(number=1000)))
    except:
        t.print_exc()
Results in:
Time for while s:
a = next(iter(s))
s.remove(a): 2.938494
Time for while s:
for x in s: break
s.remove(x): 2.728367
Time for while s:
x=s.pop()
s.add(x)
s.remove(x): 0.030272
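So when you need to peek at every element while draining a set, the pop-then-re-add pattern from the question scales well; a sketch:
s = set(range(100000))
while s:
    x = s.pop()    # stays fast (CPython keeps a search finger for pop())
    s.add(x)       # peek: put it back while we decide
    # ... check whether x can really be removed ...
    s.remove(x)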

What I usually do for small collections is to create a kind of parser/converter method like this:
def convertSetToList(setName):
    return list(setName)
Then I can use the new list and access items by index number:
userFields = convertSetToList(user)
name = request.json[userFields[0]]
As a list, it also has all the other methods that you may need to work with.

You can unpack the values to access the elements:
s = set([1, 2, 3])
v1, v2, v3 = s
print(v1, v2, v3)
# 1 2 3
(Note that this requires knowing the number of elements in advance, and that the unpacking order is arbitrary.)

If you want just the first element, try this:
b = (a - set()).pop()
Subtracting the empty set builds a copy of a, so popping from the copy leaves a untouched.

How about s.copy().pop()? I haven't timed it, but it should work and it's simple. However, it works best for small sets, as it copies the whole set.

Another option is to use a dictionary with values you don't care about. E.g.,
poor_man_set = {}
poor_man_set[1] = None
poor_man_set[2] = None
poor_man_set[3] = None
...
You can treat the keys as a set, except that (in Python 2) they're just a list:
keys = poor_man_set.keys()
print "Some key = %s" % keys[0]
A side effect of this choice is that your code will be backwards compatible with older, pre-set versions of Python. It's maybe not the best answer but it's another option.
Edit: You can even do something like this to hide the fact that you used a dict instead of an array or set:
poor_man_set = {}
poor_man_set[1] = None
poor_man_set[2] = None
poor_man_set[3] = None
poor_man_set = poor_man_set.keys()
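On modern Python the same trick is a one-liner, and dict keys preserve insertion order (guaranteed since Python 3.7):
poor_man_set = dict.fromkeys([1, 2, 3])   # {1: None, 2: None, 3: None}
first_key = next(iter(poor_man_set))      # 1, since keys keep insertion order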

Related

How is str.find so fast?

I had an earlier problem where I was looking for a substring while iterating the string and using slicing. It turns out that's a really bad idea performance-wise; str.find is much faster. But I don't understand why.
import random
import string
import timeit

# Generate 1 MB of random string data
haystack = "".join(random.choices(string.ascii_lowercase, k=1_000_000))

def f():
    return [i for i in range(len(haystack)) if haystack[i : i + len(needle)] == needle]

def g():
    return [i for i in range(len(haystack)) if haystack.startswith(needle, i)]

def h():
    def find(start=0):
        while True:
            position = haystack.find(needle, start)
            if position < 0:
                return
            start = position + 1
            yield position
    return list(find())

number = 100
needle = "abcd"
expectation = f()
for func in "fgh":
    assert eval(func + "()") == expectation
    t = timeit.timeit(func + "()", globals=globals(), number=number)
    print(func, t)
Results:
f 26.46937609199813
g 16.11952730899793
h 0.07721933699940564
f and g are slow since they check whether needle can be found at every possible location of haystack, resulting in O(n·m) complexity. f is slower because of the slicing operation, which creates a new string object (as pointed out by Barmar in the comments).
h is fast because it can skip many locations. For example, if the needle string is not found at all, only one find call is performed. The built-in find function is highly optimized in C and thus faster than interpreted pure-Python code. Additionally, find uses an efficient algorithm, Crochemore and Perrin's Two-Way algorithm, which is much faster than searching for needle at every possible location of haystack when the string is relatively big. The related CPython code is available here.
If the number of occurrences is relatively small, your implementation should already be good. Otherwise, it may be better to use a custom variant based on the CPTW algorithm, or possibly the KMP algorithm, but doing that in pure Python will be very inefficient. You could do it in C or with Cython. That being said, this is not trivial to do and not great to maintain.
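A similar speed can often be had from the re module, which also runs in C; note that re.finditer reports non-overlapping matches, so a zero-width lookahead is needed to mimic the overlapping search above (a sketch, not benchmarked here):
import re

def r():
    # (?=...) matches at each position where needle starts, without consuming it
    return [m.start() for m in re.finditer('(?=' + re.escape(needle) + ')', haystack)]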
The built-in Python functions are implemented in C, which allows them to be much faster. It's generally not possible to match their performance in pure Python.

Slow processing time of a list

Why is my code so sluggish (inefficient)? I need to make two methods to record the time it takes to process a list of a given size. I have a search_fast and a search_slow method. Even though there is a difference between those two search times, search_fast is still pretty slow. I'd like to optimise the processing time so that instead of taking 8.99038815498 seconds with search_fast and 65.0739619732 with search_slow, it would only take a fraction of a second. What can I do? I'd be eternally grateful for some tips, as coding is still pretty new to me. :)
from timeit import Timer

def fillList(l, n):
    l.extend(range(1, n + 1))

l = []
fillList(l, 100)

def search_fast(l):
    for item in l:
        if item == 10:
            return True
    return False

def search_slow(l):
    return_value = False
    for item in l:
        if item == 10:
            return_value = True
    return return_value

t = Timer(lambda: search_fast(l))
print t.timeit()
t = Timer(lambda: search_slow(l))
print t.timeit()
The fastest way is to use the in operator, which tests membership of a value in a sequence.
if value in some_container:
    …
Reference: https://docs.python.org/3/reference/expressions.html#membership-test-operations
Update: also, if you frequently need to test the membership, consider using sets instead of lists.
Some pros and cons can be found here: https://docs.python.org/3/library/stdtypes.html#set-types-set-frozenset
Adding the following code to the above:
t = Timer(lambda: 10 in l)
print(t.timeit())
produces the following on my system:
0.6166538814701169
3.884095008084452
0.29087270299795875
Hope this helps. The basic idea is to tap into underlying C code and not make your own Python code.
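To act on the sets suggestion above, converting the list once and testing membership against the set makes each lookup cheap (a sketch in the same spirit as the timing above; numbers will vary):
s = set(l)                   # convert once, O(n)
t = Timer(lambda: 10 in s)
print(t.timeit())            # hash lookup, average O(1), vs O(n) scanning the list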
I managed to find out what made the code sluggish. It was a simple mistake of adding to the list by extend instead of append.
def fillList(l, n):
    l.append(range(1, n + 1))

l = []
fillList(l, 100)
Now search_slow clocks in at 3.91826605797 instead of 65.0739619732. But I have no idea why it changes the performance so much.
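The reason, for what it's worth: append adds its argument as a single element, so the list ends up holding one nested list rather than n numbers, and the loop only ever examines that one item. The search is faster but no longer correct, since it can never find 10. A quick demonstration (Python 2, matching the question):
l1, l2 = [], []
l1.extend(range(1, 101))   # l1 == [1, 2, ..., 100]
l2.append(range(1, 101))   # l2 == [[1, 2, ..., 100]] -- one element, a nested list
print len(l1), len(l2)     # 100 1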

Should one use xrange(iterations) or a list of len(iterations) for better performance in a "for loop" evoked 1000000 times?

I have a script that evokes a "for loop" with a set number of iterations (1000) over a million times. I've read the range() vs xrange() threads, and I am aware that I do not want to do something like this:
for o in xrange(1000000):
    for i in range(1000):  # Definitely do not want
        pass
Instead, I wish to create an object that contains 1000 elements and then use that to constantly iterate.
Method 1:
iterate = range(1000)
for o in xrange(1000000):
    for i in iterate:  # <---
        pass
Method 2:
for o in xrange(1000000):
    for i in xrange(1000):  # <----
        pass
I was wondering which method would give better performance inside the "for loop." Thank you.
Edit: Sorry. I believe I was unclear. My question is whether the inner loop should iterate over an already-created list or use xrange() directly for better performance.
Some local test results:
>>> timeit.timeit('for i in repeat(None, 1000): pass', setup='from itertools import repeat', number=100000)
1.94118924332561
>>> timeit.timeit('for i in xrange(1000): pass', number=100000)
2.5231991775491025
>>> timeit.timeit('for i in range(1000): pass', number=100000)
3.9302601308266816
>>> timeit.timeit('for i in r: pass', setup='r = [None] * 1000', number=100000)
2.0900103923822684
>>> timeit.timeit('for i in r: pass', setup='r = range(1000)', number=100000)
2.2248894063351656
>>> timeit.timeit('for i in r: pass', setup='r = xrange(1000)', number=100000)
2.9105822108675823
You can't use caching for the itertools.repeat one, since that iterator behaves like a generator (you can only "read" the values once, and then they're gone).
Repeating 1 or something similar might be infinitesimally faster because the name None no longer has to be looked up, but any such performance benefit is lost in the noise of random variation in test results.
xrange in Python 2.x and range in 3.x do not build a list in memory and are therefore optimal if you just want to iterate over the values. Building a temporary list with a list expression defeats the purpose though. Instead, you want simply:
for i in xrange(1000):  # range in Python 3.x
    pass
For small numbers like 1000, this is unlikely to significantly impact performance (the 2.x documentation says the advantage of xrange over range is minimal). Instead, benchmark your program and find out which part is slow.
xrange() consumes less memory than range(). It is also faster:
In [1]: %timeit for i in range(1000): pass
10000 loops, best of 3: 28.8 us per loop
In [2]: %timeit for i in xrange(1000): pass
100000 loops, best of 3: 18.3 us per loop
(64-bit Python 2.7.2 on Linux.)
Note that the above uses xrange() directly. Your code snippet with iterate negates the benefits of using xrange() and should be avoided.
xrange will give you better performance, since it does not have to allocate the whole resulting list before iterating over it. This leads to better data cache coherency, among other things.
Edit: looking at your example, you are creating the list both times. In that case I would say that range would be more performant, since it is dedicated to creating range lists, rather than building the list via a list comprehension over xrange.
It would perform worse. The benefit of xrange is that it returns an iterable, which means it doesn't create a list of N elements and return it (like range):
def xrange(n):
    # simplified model; the real xrange is implemented in C
    i = 0
    while i < n:
        yield i
        i += 1
xrange looks similar to this. So your list comprehension iterate = [x for x in xrange(1000)] is equivalent to iterate = range(1000).
The correct code would be:
for i in xrange(1000):
    pass

Fastest way to update a dictionary in python

I have a dictionary A, and a possible entry foo. I know that A[foo] should be equal to x, but I don't know if A[foo] has been already defined. In any case if A[foo] has been defined it means that it already has the correct value.
Is it faster to execute:
if foo not in A.keys():
    A[foo] = x
or simply update:
A[foo] = x
My reasoning: by the time the computer has found the foo entry, it may as well update it, whereas the check-first version has to look up the hash table twice?
Thanks.
Just add items to the dictionary without checking for their existence. I added 100,000 items to a dictionary using 3 different methods and timed it with the timeit module.
if k not in d: d[k] = v
d.setdefault(k, v)
d[k] = v
Option 3 was the fastest, but not by much.
[ Actually, I also tried if k not in d.keys(): d[k] = v, but that was slower by a factor of 300 (each iteration built a list of keys and performed a linear search). It made my tests so slow that I left it out here. ]
Here's my code:
import timeit

setup = """
import random
random.seed(0)
item_count = 100000
# divide key range by 5 to ensure lots of duplicates
items = [(random.randint(0, item_count/5), 0) for i in xrange(item_count)]
"""
in_dict = """
d = {}
for k, v in items:
    if k not in d:
        d[k] = v
"""
set_default = """
d = {}
for k, v in items:
    d.setdefault(k, v)
"""
straight_add = """
d = {}
for k, v in items:
    d[k] = v
"""

print 'in_dict      ', timeit.Timer(in_dict, setup).timeit(1000)
print 'set_default  ', timeit.Timer(set_default, setup).timeit(1000)
print 'straight_add ', timeit.Timer(straight_add, setup).timeit(1000)
And the results:
in_dict 13.090878085
set_default 21.1309413091
straight_add 11.4781760635
Note: This is all pretty pointless. We get many questions daily about what's the fastest way to do x or y in Python. In most cases, it is clear that the question was being asked before any performance issues were encountered. My advice? Focus on writing the clearest program you can write, and if it's too slow, profile it and optimize where needed. In my experience, I almost never get to the profile-and-optimize step. From the description of the problem, it seems as if dictionary storage will not be the major bottleneck in your program.
Using the built-in update() function is even faster. I tweaked Steven Rumbalski's example above a bit and it shows how update() is the fastest. There are at least two ways to use it (with a list of tuples or with another dictionary). The former (shown below as update_method1) is the fastest. Note that I also changed a couple of other things about Steven Rumbalski's example. My dictionaries will each have exactly 100,000 keys but the new values have a 10% chance of not needing to be updated. This chance of redundancy will depend on the nature of the data that you're updating your dictionary with. In all cases on my machine, my update_method1 was the fastest.
import timeit

setup = """
import random
random.seed(0)
item_count = 100000
existing_dict = dict([(str(i), random.randint(1, 10)) for i in xrange(item_count)])
items = [(str(i), random.randint(1, 10)) for i in xrange(item_count)]
items_dict = dict(items)
"""
in_dict = """
for k, v in items:
    if k not in existing_dict:
        existing_dict[k] = v
"""
set_default = """
for k, v in items:
    existing_dict.setdefault(k, v)
"""
straight_add = """
for k, v in items:
    existing_dict[k] = v
"""
update_method1 = """
existing_dict.update(items)
"""
update_method2 = """
existing_dict.update(items_dict)
"""

print 'in_dict        ', timeit.Timer(in_dict, setup).timeit(1000)
print 'set_default    ', timeit.Timer(set_default, setup).timeit(1000)
print 'straight_add   ', timeit.Timer(straight_add, setup).timeit(1000)
print 'update_method1 ', timeit.Timer(update_method1, setup).timeit(1000)
print 'update_method2 ', timeit.Timer(update_method2, setup).timeit(1000)
This code produced the following results:
in_dict 10.6597309113
set_default 19.3389420509
straight_add 11.5891621113
update_method1 7.52693581581
update_method2 9.10132408142
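(Aside, for modern readers: on Python 3.9+, PEP 584's in-place merge operator is equivalent to update(); it is not part of the benchmark above:)
existing_dict |= items_dict   # same effect as existing_dict.update(items_dict)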
if foo not in A.keys():
    A[foo] = x
is very slow, because A.keys() creates a list, which then has to be searched in O(n).
if foo not in A:
    A[foo] = x
is faster, because it takes O(1) to check whether foo exists in A.
A[foo] = x
is even better, because you already have the object x and you just add (if it does not already exist) a pointer to it in A.
There are certainly faster ways than your first example. But I suspect the straight update will be faster than any test.
foo not in A.keys()
will, in Python 2, create a new list with the keys and then perform linear search on it. This is guaranteed to be slower (although I mainly object to it because there are alternatives that are faster and more elegant/idiomatic).
A[foo] = x
and
if foo not in A:
    A[foo] = x
are different if A[foo] already exists but is not x. But since you "know" A[foo] will be x, it doesn't matter semantically. Anyway, both will be fine performance-wise (hard to tell without benchmarking, although intuitively I'd say the if takes much more time than copying a pointer).
So the answer is clear anyway: Choose the one that is much shorter code-wise and just as clear (the first one).
If you "know" that A[foo] "should be" equal to x, then I would just do:
assert(A[foo]==x)
which will tell you if your assumption is wrong!
A.setdefault(foo, x), but I'm not sure it is faster than if not A.has_key(foo): A[foo] = x. It should be tested.
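For reference, setdefault performs the membership test and the insertion in a single call:
A = {}
A.setdefault('foo', 1)   # key missing: inserts 1 and returns it
A.setdefault('foo', 2)   # key present: leaves 1 in place and returns 1
print A['foo']           # 1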

Python if vs try-except

I was wondering why the try-except is slower than the if in the program below.
def tryway():
    try:
        while True:
            alist.pop()
    except IndexError:
        pass

def ifway():
    while True:
        if alist == []:
            break
        else:
            alist.pop()

if __name__ == '__main__':
    from timeit import Timer
    alist = range(1000)
    print "Testing Try"
    tr = Timer("tryway()", "from __main__ import tryway")
    print tr.timeit()
    print "Testing If"
    ir = Timer("ifway()", "from __main__ import ifway")
    print ir.timeit()
The results I get are interesting.
Testing Try
2.91111302376
Testing If
0.30621099472
Can anyone shed some light on why the try is so much slower?
You're setting alist only once. The first call to "tryway" clears it, then every successive call does nothing.
def tryway():
    alist = range(1000)
    try:
        while True:
            alist.pop()
    except IndexError:
        pass

def ifway():
    alist = range(1000)
    while True:
        if alist == []:
            break
        else:
            alist.pop()

if __name__ == '__main__':
    from timeit import Timer
    print "Testing Try"
    tr = Timer("tryway()", "from __main__ import tryway")
    print tr.timeit(10000)
    print "Testing If"
    ir = Timer("ifway()", "from __main__ import ifway")
    print ir.timeit(10000)
>>> Testing Try
>>> 2.09539294243
>>> Testing If
>>> 2.84440898895
Exception handling is generally slow in most languages. Most compilers, interpreters and VMs (that support exception handling) treat exceptions (the language idiom) as exceptions (uncommon). Performance optimization involves trade-offs and making exceptions fast would typically mean other areas of the language would suffer (either in performance or simplicity of design).
At a more technical level, exceptions generally mean that the VM/interpreter (or the runtime execution library) has to save a bunch of state and begin pulling state off the function call stack (called unwinding) until it reaches the point where a valid catch (except) is found.
Or looking at it from a different viewpoint, the program stops running when an exception occurs and a "debugger" takes over. This debugger searches back through the stack (calling function data) for a catch that matches the exception. If it finds one, it cleans things up and returns control to the program at that point. If it doesn't find one then it returns control to the user (perhaps in the form of an interactive debugger or python REPL).
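A quick way to see that asymmetry yourself (Python 3 shown; numbers vary by machine): entering a try block costs almost nothing when no exception is raised, while raising and unwinding is where the time goes:
from timeit import timeit

d = {'key': 1}
# exception never raised: the try is nearly free
print(timeit("try:\n    d['key']\nexcept KeyError:\n    pass", globals=globals()))
# exception raised on every iteration: pays for raise + unwind
print(timeit("try:\n    d['missing']\nexcept KeyError:\n    pass", globals=globals()))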
If you are really interested in speed, both of your contestants could do with losing some weight.
while True: is slower than while 1: -- True is a global "variable" that is loaded and tested; 1 is a constant, so the compiler does the test once and emits an unconditional jump. (This applies to Python 2; in Python 3 True is a keyword and the difference disappears.)
while True: is redundant in ifway. Fold the while/if/break together: while alist != []:
while alist != []: is a slow way of writing while alist:
Try this:
def tryway2():
    alist = range(1000)
    try:
        while 1:
            alist.pop()
    except IndexError:
        pass

def ifway2():
    alist = range(1000)
    while alist:
        alist.pop()
There is a still faster way: iterating with for. Though if we want the list to physically shrink so we know how many items are left, alist should be a parameter to the generator. (John is also right about while alist:.) I turned each function into a generator and used list(tryway()) etc., so the values are actually consumed outside the function (even if not otherwise used):
def tryway():
    alist = range(1000)
    try:
        while True:
            yield alist.pop()
    except IndexError:
        pass

def whileway():
    alist = range(1000)
    while alist:
        yield alist.pop()

def forway():
    alist = range(1000)
    for item in alist:
        yield item

if __name__ == '__main__':
    from timeit import Timer
    print "Testing Try"
    tr = Timer("list(tryway())", "from __main__ import tryway")
    print tr.timeit(10000)
    print "Testing while"
    ir = Timer("list(whileway())", "from __main__ import whileway")
    print ir.timeit(10000)
    print "Testing for"
    ir = Timer("list(forway())", "from __main__ import forway")
    print ir.timeit(10000)
J:\test>speedtest4.py
Testing Try
6.52174983133
Testing while
5.08004508953
Testing for
2.14167694497
Not sure, but I think it's something like this: while True follows the normal instruction stream, which means the processor can pipeline and do all sorts of nice things. An exception jumps straight out of that, so the VM needs to handle it specially, and that takes time.
Defensive programming requires that one test for conditions which are rare and/or abnormal, some of which may not occur for a year or many years; in these circumstances, try-except may well be justified.
Just thought to toss this into the mix:
I tried the following script, which seems to suggest that handling an exception is slower than handling an else statement:
import time

n = 10000000
l = range(0, n)

t0 = time.time()
for i in l:
    try:
        i[0]
    except:
        pass
t1 = time.time()
for i in l:
    if isinstance(i, list):  # original had type(i) == list(), comparing a type to an empty list
        print(i)
    else:
        pass
t2 = time.time()
print(t1 - t0)
print(t2 - t1)
gives:
5.5908801555633545
3.512694835662842
So (even though I know someone will likely comment on the use of time rather than timeit), there appears to be a ~60% slowdown using try/except in this loop. So it is perhaps better to go with if/else when iterating over billions of items.
