What is the time complexity of iterating through a deque in Python?

What is the time complexity of iterating, or more precisely of each iteration, through a deque from the collections library in Python?
An example is this:
from collections import deque

elements = deque([1, 2, 3, 4])
for element in elements:
    print(element)
Is each iteration a constant O(1) operation, or does it do a linear O(n) operation to get to the element in each iteration?
There are many resources online for the time complexity of the other deque methods, like appendleft, append, popleft, and pop, but there doesn't seem to be any time complexity information about iterating over a deque.
Thanks!

If your construction is something like:
elements = deque([1, 2, 3, 4])
for i in range(len(elements)):
    print(elements[i])
You are not iterating over the deque; you are iterating over the range object and then indexing into the deque. This makes the loop as a whole quadratic, O(n^2), since each indexing operation elements[i] is O(n). However, actually iterating over the deque is linear time:
for x in elements:
    print(x)
Here's a quick, empirical test:
import timeit
import pandas as pd
from collections import deque

def build_deque(n):
    return deque(range(n))

def iter_index(d):
    for i in range(len(d)):
        d[i]

def iter_it(d):
    for x in d:
        x

r = range(100, 10001, 100)
setup = 'from __main__ import build_deque, iter_index, iter_it; d = build_deque({})'
index_runs = [timeit.timeit('iter_index(d)', setup.format(n), number=1000) for n in r]
it_runs = [timeit.timeit('iter_it(d)', setup.format(n), number=1000) for n in r]
df = pd.DataFrame({'index': index_runs, 'iter': it_runs}, index=r)
df.plot()
The resulting plot (not reproduced here) shows iter_index growing quadratically while iter_it stays linear.
Now, we can actually see how the iterator protocol is implemented for deque objects in CPython source code:
First, the deque object itself:
typedef struct BLOCK {
    struct BLOCK *leftlink;
    PyObject *data[BLOCKLEN];
    struct BLOCK *rightlink;
} block;

typedef struct {
    PyObject_VAR_HEAD
    block *leftblock;
    block *rightblock;
    Py_ssize_t leftindex;       /* 0 <= leftindex < BLOCKLEN */
    Py_ssize_t rightindex;      /* 0 <= rightindex < BLOCKLEN */
    size_t state;               /* incremented whenever the indices move */
    Py_ssize_t maxlen;
    PyObject *weakreflist;
} dequeobject;
So, as stated in the comments, a deque is a doubly-linked list of "block" nodes, where a block is essentially an array of Python object pointers. Now for the iterator protocol:
typedef struct {
    PyObject_HEAD
    block *b;
    Py_ssize_t index;
    dequeobject *deque;
    size_t state;          /* state when the iterator is created */
    Py_ssize_t counter;    /* number of items remaining for iteration */
} dequeiterobject;

static PyTypeObject dequeiter_type;

static PyObject *
deque_iter(dequeobject *deque)
{
    dequeiterobject *it;

    it = PyObject_GC_New(dequeiterobject, &dequeiter_type);
    if (it == NULL)
        return NULL;
    it->b = deque->leftblock;
    it->index = deque->leftindex;
    Py_INCREF(deque);
    it->deque = deque;
    it->state = deque->state;
    it->counter = Py_SIZE(deque);
    PyObject_GC_Track(it);
    return (PyObject *)it;
}
// ...

static PyObject *
dequeiter_next(dequeiterobject *it)
{
    PyObject *item;

    if (it->deque->state != it->state) {
        it->counter = 0;
        PyErr_SetString(PyExc_RuntimeError,
                        "deque mutated during iteration");
        return NULL;
    }
    if (it->counter == 0)
        return NULL;
    assert (!(it->b == it->deque->rightblock &&
              it->index > it->deque->rightindex));

    item = it->b->data[it->index];
    it->index++;
    it->counter--;
    if (it->index == BLOCKLEN && it->counter > 0) {
        CHECK_NOT_END(it->b->rightlink);
        it->b = it->b->rightlink;
        it->index = 0;
    }
    Py_INCREF(item);
    return item;
}
As you can see, the iterator essentially keeps track of a block pointer, an index into that block, and a counter of items remaining in the deque. It stops iterating when the counter reaches zero; otherwise it grabs the element at the current index, increments the index, decrements the counter, and takes care of checking whether to move on to the next block. In other words, a deque could be represented as a list-of-lists in Python, e.g. d = [[1,2,3],[4,5,6]], and iteration works like:
for block in d:
    for x in block:
        ...
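To make the mechanics concrete, here is a rough pure-Python model of dequeiter_next (an illustrative sketch, not CPython's actual code; the block size and names are made up for the example):
BLOCKLEN = 3  # CPython uses a larger block size; kept tiny here for illustration

def iter_deque_model(blocks, leftindex, size):
    """Walk a list-of-lists the way dequeiter_next does: one block
    pointer, one index into the block, one countdown of remaining items."""
    b, index, counter = 0, leftindex, size
    while counter > 0:
        item = blocks[b][index]
        index += 1
        counter -= 1
        if index == BLOCKLEN and counter > 0:
            b += 1       # follow rightlink to the next block
            index = 0
        yield item

# Five items stored starting at offset 1 of the first block:
print(list(iter_deque_model([[None, 1, 2], [3, 4, 5]], 1, 5)))  # [1, 2, 3, 4, 5]
Each step is O(1) work, so the whole iteration is O(n).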

Related

Python C binding - get array from python to C++

As the title says: I would like to make a Python binding in C++ that does some algebraic operations on an array. For this, I have to parse the Python "array object" into C++ as a vector of double or integer or whatever the case may be.
I tried to do this, but I'm facing some issues. I've created a new Python type, a class named Typer, with a method that tries to get the elements of a Python tuple and then compute their sum (as a first step).
static PyObject *Typer_vectorsum(Typer *self, PyObject *args)
{
    PyObject *retval;
    PyObject *list;
    if (!PyArg_ParseTuple(args, "O", &list))
        return NULL;

    double *arr;
    arr = (double *)malloc(sizeof(double) * PyTuple_Size(list));
    int length;
    length = PyTuple_Size(list);

    PyObject *item = NULL;
    for (int i = 0; i < length; ++i)
    {
        item = PyTuple_GetItem(list, i);
        if (!PyFloat_Check(item))
        {
            exit(1);
        }
        arr[i] = PyFloat_AsDouble(item);
    }
    double result = 0.0;
    for (int i = 0; i < length; ++i)
    {
        result += arr[i];
    }
    retval = PyFloat_FromDouble(result);
    free(arr);
    return retval;
}
In this method I parse the Python tuple into a C array (allocating the memory with malloc), copy every element from the object into the C array, and compute the sum in the last for-loop.
If I build the project and then run a Python test file, nothing happens (the module builds without any issues, but the script doesn't print anything).
y = example.Typer()  # Typer is the init
tuple = (1, 2, 3)
print(y.vectorsum(tuple))
Am I missing something? And also, is there a nice and easy way of getting a Python array object into C++, but as a std::vector instead of a classic C array?
Thank you in advance!
The tuple contains ints, not floats, so your PyFloat_Check fails. And no, there is no direct way from a Python tuple to a C array or a C++ std::vector. The reason is that the tuple is an array of Python objects, not an array of C values such as doubles.
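At the Python level, PyFloat_Check is essentially isinstance(x, float), while PyFloat_AsDouble will happily coerce an int; a quick sketch of the distinction (illustrative only):
t = (1, 2, 3)
print([isinstance(x, float) for x in t])  # [False, False, False] -- so PyFloat_Check fails
print([float(x) for x in t])              # [1.0, 2.0, 3.0] -- the coercion PyFloat_AsDouble performs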
Here's your example with improved error checking, after which it should work:
PyObject *retval;
PyObject *list;
if (!PyArg_ParseTuple(args, "O!", &PyTuple_Type, &list))
    return NULL;

double *arr = (double *)malloc(sizeof(double) * PyTuple_GET_SIZE(list));
int length;
length = PyTuple_GET_SIZE(list);

PyObject *item = NULL;
for (int i = 0; i < length; ++i)
{
    item = PyTuple_GET_ITEM(list, i);
    arr[i] = PyFloat_AsDouble(item);
    if (arr[i] == -1. && PyErr_Occurred())
    {
        /* propagate the Python error instead of killing the process */
        free(arr);
        return NULL;
    }
}
double result = 0.0;
for (int i = 0; i < length; ++i)
{
    result += arr[i];
}
retval = PyFloat_FromDouble(result);
free(arr);
return retval;
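With that change, the original test script should print the sum (a quick usage sketch; module and class names are taken from the question and assume the extension builds as example):
import example

y = example.Typer()
print(y.vectorsum((1, 2, 3)))   # 6.0 -- ints are now coerced by PyFloat_AsDouble
print(y.vectorsum((1.5, 2.5)))  # 4.0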

Why is collections.Counter much slower than ''.count?

I have a simple task: to count how many times every letter occurs in a string. I've used a Counter() for it, but on one forum I saw information that using dict() / Counter() is much slower than using string.count() for every letter. I thought that Counter would iterate through the string only once, while the string.count() solution would have to iterate through it four times (in this case). Why is Counter() so slow?
>>> timeit.timeit('x.count("A");x.count("G");x.count("C");x.count("T")', setup="x='GAAAAAGTCGTAGGGTTCCTTCACTCGAGGAATGCTGCGACAGTAAAGGAGGCCACGTGGTTGAGAGTTCCTAAGCATTCGTATGTACACCCGGACTCGATGCACTCAAACGTGCTTAAGGGTAAAGAAGGTCGAGAGGTATACTGGGGCACTCCCCTTAGAATTATATCTTGGTCAACTACAATATGGATGGAAATTCTAAGCCGAAAACGACCCGCTAGCGGATTGTGTATGTATCACAACGGTTTCGGTTCATACGCAAAATCATCCCATTTCAAGGCCACTCAAGGACATGACGCCGTGCAACTCCGAGGACATCCCTCAGCGATTGATGCAACCTGGTCATCTAATAATCCTTAGAACGGATGTGCCCTCTACTGGGAGAGCCGGCTAGACTGGCATCTCGCGTTGTTCGTACGAGCTCCGGGCGCCCGGGCGGTGTACGTTGATGTACAGCCTAAGAGCTTTCCACCTATGCTACGAACTAATTTCCCGTCCATCGTTCCTCGGACTGAGGTCAAAGTAACCCGGAAGTACATGGATCAGATACACTCACAGTCCCCTTTAATGACTGAGCTGGACGCTATTGATTGCTTTATAAGTGTTATGGTGAACTCGAAGACTTAGCTAGGAATTTCGCTATACCCGGGTAATGAGCTTAATACCTCACAGCATGTACGCTCTGAATATATGTAGCGATGCTAGCGGAACGTAAGCGTGAGCGTTATGCAGGGCTCCGCACCTCGTGGCCACTCGCCCAATGCCCGAGTTTTTGAGCAATGCCATGCCCTCCAGGTGAAGCGTGCTGAATATGTTCCGCCTCCGCACACCTACCCTACGGGCCTTACGCCATAGCTGAGGATACGCGAGTTGGTTAGCGATTACGTCATTCCAGGTGGTCGTTC'", number=10000)
0.07911698750407936
>>> timeit.timeit('Counter(x)', setup="from collections import Counter;x='GAAAAAGTCGTAGGGTTCCTTCACTCGAGGAATGCTGCGACAGTAAAGGAGGCCACGTGGTTGAGAGTTCCTAAGCATTCGTATGTACACCCGGACTCGATGCACTCAAACGTGCTTAAGGGTAAAGAAGGTCGAGAGGTATACTGGGGCACTCCCCTTAGAATTATATCTTGGTCAACTACAATATGGATGGAAATTCTAAGCCGAAAACGACCCGCTAGCGGATTGTGTATGTATCACAACGGTTTCGGTTCATACGCAAAATCATCCCATTTCAAGGCCACTCAAGGACATGACGCCGTGCAACTCCGAGGACATCCCTCAGCGATTGATGCAACCTGGTCATCTAATAATCCTTAGAACGGATGTGCCCTCTACTGGGAGAGCCGGCTAGACTGGCATCTCGCGTTGTTCGTACGAGCTCCGGGCGCCCGGGCGGTGTACGTTGATGTACAGCCTAAGAGCTTTCCACCTATGCTACGAACTAATTTCCCGTCCATCGTTCCTCGGACTGAGGTCAAAGTAACCCGGAAGTACATGGATCAGATACACTCACAGTCCCCTTTAATGACTGAGCTGGACGCTATTGATTGCTTTATAAGTGTTATGGTGAACTCGAAGACTTAGCTAGGAATTTCGCTATACCCGGGTAATGAGCTTAATACCTCACAGCATGTACGCTCTGAATATATGTAGCGATGCTAGCGGAACGTAAGCGTGAGCGTTATGCAGGGCTCCGCACCTCGTGGCCACTCGCCCAATGCCCGAGTTTTTGAGCAATGCCATGCCCTCCAGGTGAAGCGTGCTGAATATGTTCCGCCTCCGCACACCTACCCTACGGGCCTTACGCCATAGCTGAGGATACGCGAGTTGGTTAGCGATTACGTCATTCCAGGTGGTCGTTC'", number=10000)
2.1727447831030844
>>> 2.1727447831030844 / 0.07911698750407936
27.462430656767047
>>>
Counter() allows you to count any hashable objects, not just substrings. Both solutions are O(n) time. Your measurements show that the per-character overhead of iterating and hashing in Counter() is greater than the cost of running s.count() four times.
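If all you need is a handful of fixed keys, a dict comprehension over str.count keeps the counting in C. A minimal benchmark sketch (the string s and the four-letter alphabet are assumptions matching the question's data):
from collections import Counter
import timeit

s = "ACGT" * 250000  # a DNA-like string

def with_count(s):
    # one fast C-level scan per letter: four passes in total
    return {c: s.count(c) for c in "ACGT"}

def with_counter(s):
    # a single pass, but every character goes through the iterator/hash machinery
    return Counter(s)

assert with_count(s) == dict(with_counter(s))
print(timeit.timeit(lambda: with_count(s), number=10))
print(timeit.timeit(lambda: with_counter(s), number=10))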
Counter() can use a C helper to count elements, but it seems it doesn't special-case strings and uses a general algorithm applicable to any iterable, i.e., processing a single character involves multiple Python C API calls: advance the iterator, get the previous value (a lookup in the hash table), increment the counter, set the new value (another lookup in the hash table):
while (1) {
    key = PyIter_Next(it);
    if (key == NULL)
        break;
    oldval = PyObject_GetItem(mapping, key);
    if (oldval == NULL) {
        if (!PyErr_Occurred() || !PyErr_ExceptionMatches(PyExc_KeyError))
            break;
        PyErr_Clear();
        Py_INCREF(one);
        newval = one;
    } else {
        newval = PyNumber_Add(oldval, one);
        Py_DECREF(oldval);
        if (newval == NULL)
            break;
    }
    if (PyObject_SetItem(mapping, key, newval) == -1)
        break;
    Py_CLEAR(newval);
    Py_DECREF(key);
}
Compare it to FASTSEARCH() overhead for bytestrings:
for (i = 0; i < n; i++)
    if (s[i] == p[0]) {
        count++;
        if (count == maxcount)
            return maxcount;
    }
return count;
The Counter class inherits from dict, while string.count is the following C implementation (CPython 3.3):
/* stringlib: count implementation */
#ifndef STRINGLIB_FASTSEARCH_H
#error must include "stringlib/fastsearch.h" before including this module
#endif

Py_LOCAL_INLINE(Py_ssize_t)
STRINGLIB(count)(const STRINGLIB_CHAR* str, Py_ssize_t str_len,
                 const STRINGLIB_CHAR* sub, Py_ssize_t sub_len,
                 Py_ssize_t maxcount)
{
    Py_ssize_t count;

    if (str_len < 0)
        return 0; /* start > len(str) */
    if (sub_len == 0)
        return (str_len < maxcount) ? str_len + 1 : maxcount;

    count = FASTSEARCH(str, str_len, sub, sub_len, maxcount, FAST_COUNT);

    if (count < 0)
        return 0; /* no match */

    return count;
}
Guess which one is faster? :)

Returning C arrays into python scope from scipy's weave.inline

I am using scipy's weave.inline to perform computationally expensive tasks. I have problems returning a one-dimensional array back into the Python scope. weave.inline uses a special argument called "return_val" for the purpose of returning values back into the Python scope.
The following example returning an integer value works well:
>>> from scipy.weave import inline
>>> print inline(r'''int N = 10; return_val = N;''')
10
However, the following example, which indeed compiles without prompting an error, does not return the array I would expect:
>>> from scipy.weave import inline
>>> code =\
r'''
int* pairs;
int lenght = 0;
for (int i=0;i<N;i++){
    lenght += 1;
    pairs = (int *)malloc(sizeof(int)*lenght);
    pairs[i] = i;
    std::cout << pairs[i] << std::endl;
}
return_val = pairs;
'''
>>> N = 5
>>> R = inline(code,['N'])
>>> print "RETURN_VAL:",R
0
1
2
3
4
RETURN_VAL: 1
I need to reallocate the size of the array "pairs" dynamically, which is why I can't pass a numpy.array or a Python list per se.
All you need to do is use the raw Python C API calls or, if you're looking for something a bit more convenient, the built-in scipy.weave wrappers.
No guarantees about leaks or efficiency, but it should look something like this:
from scipy.weave import inline

code = r'''
py::list ret;
for(int i = 0; i < N; i++) {
    py::list item;
    for(int j = 0; j < i; j++) {
        item.append(j);
    }
    ret.append(item);
}
return_val = ret;
'''
N = 5
R = inline(code, ['N'])
print R
If you absolutely don't know the size of the output array in advance, you must create it in your inline code. I'm pretty sure that the array you allocate with malloc will leak memory, since you have no way of controlling when it is freed.
The solution is to create a numpy array, fill it with your function's results, and return it:
import scipy.weave
code = r"""
npy_intp dims[1] = {n};
PyObject* out_array = PyArray_SimpleNew(1, dims, NPY_DOUBLE);
double* data = (double*) ((PyArrayObject*) out_array)->data;
for (int i=0; i<n; ++i) data[i] = i;
return_val = out_array;
Py_XDECREF(out_array);
"""
n = 5
out_array = scipy.weave.inline(code, ["n"])
print "Array:", out_array

How does python extend work?

If I have a list:
a = [1,2,3,4]
and then add 5 elements using extend
a.extend(range(5,10))
I get
a = [1, 2, 3, 4, 5, 6, 7, 8, 9]
How does Python do this? Does it create a new list and copy the elements across, or does it make 'a' bigger? I'm just concerned that using extend will gobble up memory. I'm also asking because there is a comment in some code I'm revising that says extending by 10000 x 100 is quicker than doing it in one block of 1000000.
Python's documentation on it says:
Extend the list by appending all the items in the given list; equivalent to a[len(a):] = L.
As to "how" it does it behind the scenes, you really needn't concern yourself with it.
L.extend(M) is amortized O(n), where n = len(M), so excessive copying is not usually a problem. The times it can be a problem are when there is not enough space to extend into, so a copy is performed. This matters when the list is large and you have limits on how much time is acceptable for an individual extend call.
That is the point when you should look for a more efficient data structure for your problem. I find it is rarely a problem in practice.
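A quick way to test the comment mentioned in the question (an illustrative sketch; absolute timings vary by machine and Python version):
import timeit

def many_small():
    a = []
    for _ in range(100):
        a.extend(range(10000))
    return a

def one_big():
    a = []
    a.extend(range(1000000))
    return a

print("100 x 10000:", timeit.timeit(many_small, number=10))
print("1 x 1000000:", timeit.timeit(one_big, number=10))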
Here is the relevant code from CPython; you can see that extra space is allocated when the list is extended, to avoid excessive copying:
static PyObject *
listextend(PyListObject *self, PyObject *b)
{
    PyObject *it;      /* iter(v) */
    Py_ssize_t m;      /* size of self */
    Py_ssize_t n;      /* guess for size of b */
    Py_ssize_t mn;     /* m + n */
    Py_ssize_t i;
    PyObject *(*iternext)(PyObject *);

    /* Special cases:
       1) lists and tuples which can use PySequence_Fast ops
       2) extending self to self requires making a copy first
    */
    if (PyList_CheckExact(b) || PyTuple_CheckExact(b) || (PyObject *)self == b) {
        PyObject **src, **dest;
        b = PySequence_Fast(b, "argument must be iterable");
        if (!b)
            return NULL;
        n = PySequence_Fast_GET_SIZE(b);
        if (n == 0) {
            /* short circuit when b is empty */
            Py_DECREF(b);
            Py_RETURN_NONE;
        }
        m = Py_SIZE(self);
        if (list_resize(self, m + n) == -1) {
            Py_DECREF(b);
            return NULL;
        }
        /* note that we may still have self == b here for the
         * situation a.extend(a), but the following code works
         * in that case too. Just make sure to resize self
         * before calling PySequence_Fast_ITEMS.
         */
        /* populate the end of self with b's items */
        src = PySequence_Fast_ITEMS(b);
        dest = self->ob_item + m;
        for (i = 0; i < n; i++) {
            PyObject *o = src[i];
            Py_INCREF(o);
            dest[i] = o;
        }
        Py_DECREF(b);
        Py_RETURN_NONE;
    }

    it = PyObject_GetIter(b);
    if (it == NULL)
        return NULL;
    iternext = *it->ob_type->tp_iternext;

    /* Guess a result list size. */
    n = _PyObject_LengthHint(b, 8);
    if (n == -1) {
        Py_DECREF(it);
        return NULL;
    }
    m = Py_SIZE(self);
    mn = m + n;
    if (mn >= m) {
        /* Make room. */
        if (list_resize(self, mn) == -1)
            goto error;
        /* Make the list sane again. */
        Py_SIZE(self) = m;
    }
    /* Else m + n overflowed; on the chance that n lied, and there really
     * is enough room, ignore it. If n was telling the truth, we'll
     * eventually run out of memory during the loop.
     */

    /* Run iterator to exhaustion. */
    for (;;) {
        PyObject *item = iternext(it);
        if (item == NULL) {
            if (PyErr_Occurred()) {
                if (PyErr_ExceptionMatches(PyExc_StopIteration))
                    PyErr_Clear();
                else
                    goto error;
            }
            break;
        }
        if (Py_SIZE(self) < self->allocated) {
            /* steals ref */
            PyList_SET_ITEM(self, Py_SIZE(self), item);
            ++Py_SIZE(self);
        }
        else {
            int status = app1(self, item);
            Py_DECREF(item);  /* append creates a new ref */
            if (status < 0)
                goto error;
        }
    }

    /* Cut back result list if initial guess was too large. */
    if (Py_SIZE(self) < self->allocated)
        list_resize(self, Py_SIZE(self));  /* shrinking can't fail */
    Py_DECREF(it);
    Py_RETURN_NONE;

  error:
    Py_DECREF(it);
    return NULL;
}

PyObject *
_PyList_Extend(PyListObject *self, PyObject *b)
{
    return listextend(self, b);
}
It works as if it were defined like this:
def extend(lst, iterable):
    for x in iterable:
        lst.append(x)
This mutates the list; it does not create a copy of it.
Depending on the underlying implementation, append and extend may trigger the list to copy its own data structures, but this is normal and nothing to worry about. For example, array-based implementations typically grow the underlying array exponentially and need to copy the elements when they do so.
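You can watch this over-allocation from Python itself; a small sketch (the exact sizes are CPython implementation details and vary by version and platform):
import sys

a = []
last = 0
for i in range(64):
    a.append(i)
    size = sys.getsizeof(a)
    if size != last:
        # the capacity jumps in increasingly large steps
        print(f"len={len(a):3d}  bytes={size}")
        last = size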
How does python do this? does it create a new list and copy the elements across or does it make 'a' bigger?
>>> a = ['apples', 'bananas']
>>> b = a
>>> a is b
True
>>> c = ['apples', 'bananas']
>>> a is c
False
>>> a.extend(b)
>>> a
['apples', 'bananas', 'apples', 'bananas']
>>> b
['apples', 'bananas', 'apples', 'bananas']
>>> a is b
True
>>>
It does not create a new list object, it extends a. This is self-evident from the fact that you don't make an assignment. Python will not magically replace your objects with other objects. :-)
How the memory allocation happens inside the list object is implementation dependent.

Permutation with backtracking from C to Python

I have to write a program that gives all permutations of the n numbers {1, 2, 3, ..., n} using backtracking. I managed to do it in C, and it works very well; here is the code:
int st[25], n = 4;

int valid(int k)
{
    int i;
    for (i = 1; i <= k - 1; i++)
        if (st[k] == st[i])
            return 0;
    return 1;
}

void bktr(int k)
{
    int i;
    if (k == n + 1)
    {
        for (i = 1; i <= n; i++)
            printf("%d ", st[i]);
        printf("\n");
    }
    else
        for (i = 1; i <= n; i++)
        {
            st[k] = i;
            if (valid(k))
                bktr(k + 1);
        }
}

int main()
{
    bktr(1);
    return 0;
}
Now I have to write it in Python. Here is what I did:
st = []
n = 4

def bktr(k):
    if k == n + 1:
        for i in range(1, n):
            print(st[i])
    else:
        for i in range(1, n):
            st[k] = i
            if valid(k):
                bktr(k + 1)

def valid(k):
    for i in range(1, k - 1):
        if st[k] == st[i]:
            return 0
    return 1

bktr(1)
I get this error:
list assignment index out of range
at the line st[k] = i.
Python has a permutations function in the itertools module:
import itertools
itertools.permutations([1,2,3])
If you need to write the code yourself (for example if this is homework), here is the issue:
Python lists do not have a predetermined size, so you can't just set e.g. the 10th element to 3. You can only change existing elements or add to the end.
Python lists (and C arrays) also start at 0. This means you have to access the first element with st[0], not st[1].
When you start your program, st has a length of 0; this means you cannot assign to st[1], because that position does not exist yet.
If this is confusing, I recommend you use the st.append(element) method instead, which always adds to the end; a reworked version along these lines is sketched below.
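A minimal Python rework of the C version along those lines (an illustrative sketch using append/pop instead of fixed indices):
n = 4
st = []  # the current partial permutation

def bktr():
    if len(st) == n:
        print(*st)
        return
    for i in range(1, n + 1):
        if i not in st:  # the validity check from the C version
            st.append(i)
            bktr()
            st.pop()     # undo the choice (backtrack)

bktr()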
If the code is done and works, I recommend you head over to Code Review Stack Exchange, because there are a lot more things that could be improved.
