I need to create a numpy array of N elements, but I want to access the
array with an offset Noff, i.e. the first element should be at Noff and
not at 0. In C this is easy to do with some simple pointer arithmetic:
I malloc the array and then define a pointer and shift it appropriately.
Furthermore, I do not want to allocate N+Noff elements, but only N elements.
Now for numpy several methods come to mind:
(1) define a wrapper function to access the array
(2) override the [] operator
(3) etc
But what is the fastest method to realize this?
Thanks a lot!
Mark
I would be very cautious about overriding the [] operator through the __getitem__() method. Although it will be fine with your own code, I can easily imagine that when the array gets passed to an arbitrary library function, you could get problems.
For example, if the function explicitly tried to get all values in the array as A[0:-1], it would map to A[offset:offset-1], which will be an empty array for any positive or negative value of offset. This may be a little contrived, but it illustrates the general problem.
Therefore, I would suggest that you create a wrapper function for your own use (as a member function may be most convenient), but don't muck around with __getitem__().
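For example, here is a minimal sketch of such a wrapper as a small class; the name OffsetArray and its get/set methods are purely illustrative, not an established recipe:
import numpy as np

class OffsetArray(object):
    # Stores only N elements, but exposes them at logical indices
    # Noff .. Noff+N-1 through get/set methods.
    def __init__(self, n, offset, dtype=float):
        self.offset = offset
        self.data = np.zeros(n, dtype=dtype)   # only N elements allocated

    def get(self, i):
        return self.data[i - self.offset]

    def set(self, i, value):
        self.data[i - self.offset] = value

A = OffsetArray(10, 5)
A.set(5, 3.0)        # logical index 5 is the first physical element
print(A.get(5))      # 3.0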
Use A[n-offset]. This maps the index range offset to offset+len(A) onto 0 to len(A).
You've already given (1) and (2), which are both more or less sensible methods. To test the speed of this kind of thing, try the timeit magic function in IPython. Example usage (Python 2 print syntax):
from numpy import array
A = array(range(10))
Noff = 2
wrapper_access = lambda i: A[i - Noff]
print wrapper_access(2)   # 0
print wrapper_access(11)  # 9
print wrapper_access(1)   # 9, i.e. A[-1]
timeit wrapper_access(5)
On my machine, timeit reports: 10000000 loops, best of 3: 193 ns per loop
Python has a built in function sum, which is effectively equivalent to:
import operator
def sum2(iterable, start=0):
    return start + reduce(operator.add, iterable)
for all types of parameters except strings. It works for numbers and lists, for example:
sum([1,2,3], 0) = sum2([1,2,3],0) = 6 #Note: 0 is the default value for start, but I include it for clarity
sum({888:1}, 0) = sum2({888:1},0) = 888
Why were strings specially left out?
sum( ['foo','bar'], '') # TypeError: sum() can't sum strings [use ''.join(seq) instead]
sum2(['foo','bar'], '') = 'foobar'
I seem to remember discussions on the Python mailing list about the reason, so an explanation or a link to a thread explaining it would be fine.
Edit: I am aware that the standard way is to do "".join. My question is why the option of using sum for strings was banned, and no banning was there for, say, lists.
Edit 2: Although I believe this is not needed given all the good answers I got, the question is: Why does sum work on an iterable containing numbers or an iterable containing lists but not an iterable containing strings?
Python tries to discourage you from "summing" strings. You're supposed to join them:
"".join(list_of_strings)
It's a lot faster, and uses much less memory.
A quick benchmark:
$ python -m timeit -s 'import operator; strings = ["a"]*10000' 'r = reduce(operator.add, strings)'
100 loops, best of 3: 8.46 msec per loop
$ python -m timeit -s 'import operator; strings = ["a"]*10000' 'r = "".join(strings)'
1000 loops, best of 3: 296 usec per loop
Edit (to answer OP's edit): As to why strings were apparently "singled out", I believe it's simply a matter of optimizing for a common case, as well as of enforcing best practice: you can join strings much faster with ''.join, so explicitly forbidding strings on sum will point this out to newbies.
BTW, this restriction has been in place "forever", i.e., since sum was added as a built-in function (rev. 32347).
You can in fact use sum(..) to concatenate strings, if you use the appropriate starting object! Of course, if you go this far you have already understood enough to use "".join(..) anyway..
>>> class ZeroObject(object):
... def __add__(self, other):
... return other
...
>>> sum(["hi", "there"], ZeroObject())
'hithere'
Here's the source: http://svn.python.org/view/python/trunk/Python/bltinmodule.c?revision=81029&view=markup
In the builtin_sum function we have this bit of code:
/* reject string values for 'start' parameter */
if (PyObject_TypeCheck(result, &PyBaseString_Type)) {
    PyErr_SetString(PyExc_TypeError,
        "sum() can't sum strings [use ''.join(seq) instead]");
    Py_DECREF(iter);
    return NULL;
}
Py_INCREF(result);
}
So that's your answer: it's explicitly checked in the code and rejected.
From the docs:
The preferred, fast way to concatenate a sequence of strings is by calling ''.join(sequence).
By making sum refuse to operate on strings, Python has encouraged you to use the correct method.
Short answer: Efficiency.
Long answer: The sum function has to create an object for each partial sum.
Assume that the amount of time required to create an object is directly proportional to the size of its data. Let N denote the number of elements in the sequence to sum.
doubles are always the same size, which makes sum's running time O(1)×N = O(N).
int (formerly known as long) is arbitrary-length. Let M denote the absolute value of the largest sequence element. Then sum's worst-case running time is lg(M) + lg(2M) + lg(3M) + ... + lg(NM) = N×lg(M) + lg(N!) = O(N log N), treating M as a constant.
For str (where M = the length of the longest string), the worst-case running time is M + 2M + 3M + ... + NM = M×(1 + 2 + ... + N) = O(N²).
Thus, summing strings would be much slower than summing numbers.
str.join does not allocate any intermediate objects. It preallocates a buffer large enough to hold the joined strings, and copies the string data. It runs in O(N) time, much faster than sum.
The Reason Why
@dan04 has an excellent explanation of the costs of using sum on large lists of strings.
The missing piece as to why str is not allowed for sum is that many, many people were trying to use sum for strings, while relatively few use sum for lists and tuples, where repeated concatenation is just as O(n**2). The trap is that sum works just fine for short lists of strings, but then gets put into production where the lists can be huge, and the performance slows to a crawl. This was such a common trap that the decision was made to ignore duck typing in this instance and not allow strings to be used with sum.
Edit: Moved the parts about immutability to history.
Basically, it's a question of preallocation. When you use a statement such as
sum(["a", "b", "c", ..., ])
and expect it to work similarly to a reduce statement, the code generated looks something like
v1 = "" + "a"       # must allocate v1 and set its size to len("") + len("a")
v2 = v1 + "b"       # must allocate v2 and set its size to len(v1) + len("b")
...
res = v10000 + "$"  # must allocate res and set its size to len(v9999) + len("$")
In each of these steps a new string is created, which already gives some copying overhead as the strings get longer and longer. But that is perhaps not the main point. What is more important is that every new string on each line must be allocated at its specific size. (I don't know whether it must allocate on every iteration of the reduce statement; there might be some obvious heuristics, and Python might allocate a bit extra here and there for reuse. But at several points the new string will be large enough that this won't help anymore, and Python must allocate again, which is rather expensive.)
A dedicated method like join, however, has the job of figuring out the real size of the string before it starts, and would therefore in theory only allocate once, at the beginning, and then just fill that new string, which is much cheaper than the other solution.
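As a rough sketch of that idea in pure Python (the real str.join is implemented in C; the helper name join_sketch and the ASCII-only assumption are just for illustration): measure the total size first, then fill a single pre-sized buffer.
def join_sketch(parts):
    total = sum(len(p) for p in parts)         # summing ints is allowed
    buf = bytearray(total)                     # one allocation for the final size
    pos = 0
    for p in parts:
        chunk = p.encode('ascii')              # assumes ASCII-only strings
        buf[pos:pos + len(chunk)] = chunk
        pos += len(chunk)
    return buf.decode('ascii')

print(join_sketch(['foo', 'bar', 'baz']))      # foobarbaz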
I don't know why, but this works!
import operator
def sum_of_strings(list_of_strings):
    return reduce(operator.add, list_of_strings)
I have two np.matrix objects, one of which I'm trying to normalize. I know that, in general, list comprehensions are faster than for loops, so I'm trying to convert my double for loop into a list comprehension.
# normalize the rows and columns of A by B
for i in range(1,q+1):
    for j in range(1,q+1):
        A[i-1,j-1] = A[i-1,j-1] / (B[i-1] / B[j-1])
This is what I have gotten so far:
A = np.asarray([A/(B[i-1]/B[j-1]) for i, j in zip(range(1,q+1), range(1,q+1))])
but I think I'm taking the wrong approach because I'm not seeing any significant time difference.
Any help would be appreciated.
First, if you really do mean np.matrix, stop using np.matrix. It has all sorts of nasty incompatibilities, and its role is obsolete now that @ exists for matrix multiplication. Even if you're stuck on a Python version without @, using the dot method with normal ndarrays is still better than dealing with np.matrix.
You shouldn't use any sort of Python-level iteration construct with NumPy arrays, whether for loops or list comprehensions, unless you're sure you have no better options. Assuming A is 2D and B is 1D with shapes (q, q) and (q,) respectively, what you should instead do for this case is
A *= B
A /= B[:, np.newaxis]
broadcasting the operation over A. This will allow NumPy to perform the iteration at C level directly over the arrays' underlying data buffers, without having to create wrapper objects and perform dynamic dispatch on every operation.
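A quick sanity check of the equivalence, as a sketch (the array sizes and random test data are assumptions for illustration only):
import numpy as np

q = 5
A = np.random.rand(q, q)
B = np.random.rand(q)

# loop version from the question
A_loop = A.copy()
for i in range(1, q + 1):
    for j in range(1, q + 1):
        A_loop[i - 1, j - 1] = A_loop[i - 1, j - 1] / (B[i - 1] / B[j - 1])

# broadcast version
A_vec = A * B / B[:, np.newaxis]

print(np.allclose(A_loop, A_vec))   # True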
I am very new to Python, and I am trying to get used to performing Python's array operations rather than looping through arrays. Below is an example of the kind of looping operation I am doing, but am unable to work out a suitable pure array operation that does not rely on loops:
import numpy as np

def f(arg1, arg2):
    # an arbitrary function
    pass

def myFunction(a1DNumpyArray):
    A = a1DNumpyArray
    # Create a square array with each dimension the size of the argument array.
    B = np.zeros((A.size, A.size))
    # Function f is a function of two elements of the 1D array. For each
    # element, i, I want to perform the function on it and every element
    # before it, and store the result in the square array, multiplied by
    # the difference between the ith and (i-1)th element.
    for i in range(A.size):
        B[i,:i] = f(A[i], A[:i])*(A[i]-A[i-1])
    # Sum through j and return full sums as 1D array.
    return np.sum(B, axis=0)
In short, I am integrating a function which takes two elements of the same array as arguments, returning an array of results of the integral.
Is there a more compact way to do this, without using loops?
The use of an arbitrary f function, and this [i, :i] business, complicates bypassing the loop.
Most of the fast compiled numpy operations work on the whole array, or on whole rows and/or columns, and effectively do so in parallel. Loops that are inherently sequential (where the value in one iteration depends on the previous one) don't fit well. Differently sized lists or arrays in each iteration are also a good indicator that 'vectorizing' will be difficult.
for i in range(A.size):
    B[i,:i] = f(A[i], A[:i])*(A[i]-A[i-1])
With a sample A and known f (as simple as arg1*arg2), I'd generate a B array, and look for patterns that treat B as a whole. At first glance it looks like your B is a lower triangle. There are functions to help index those. But that final sum might change the picture.
Sometimes I tackle these problems with a bottom up approach, trying to remove inner loops first. But in this case, I think some sort of big-picture approach is needed.
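For instance, here is a sketch under the assumption f(arg1, arg2) = arg1*arg2 (purely illustrative; a different f may not vectorize this way):
import numpy as np

A = np.arange(1.0, 6.0)              # sample 1D array
n = A.size

diff = A - np.roll(A, 1)             # A[i] - A[i-1], with the same wrap-around
                                     # at i = 0 as the original loop
B = np.outer(A, A) * diff[:, None]   # B[i, j] = A[i]*A[j] * (A[i] - A[i-1])
B *= np.tri(n, k=-1)                 # keep only j < i, matching B[i, :i]

print(B.sum(axis=0))                 # same result as the loop + np.sum(B, axis=0)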
This will create an empty array of type signed int:
import array
a = array.array('i')
What is an efficient (performance-wise) way to specify the array length (as well as the array's rank, i.e. its number of dimensions)?
I understand that NumPy allows you to specify array size at creation, but can it be done in standard Python?
Initialising an array of fixed size in python
That question deals mostly with lists, and it gives no consideration to performance. The main reason to use an array instead of a list is performance.
The array constructor accepts an iterable as its second argument. So the following works to efficiently create and initialize the array to 0..N-1:
x = array.array('i', range(N))
This does not create a separate N-element vector or list.
(If using Python 2, use xrange instead.) Of course, if you need a different initialization you may use a generator object instead of range. For example, you can use a generator expression to fill the array with zeros:
a = array.array('i', (0 for i in range(N)))
The array module has no 2D (or higher-dimensional) array. You have to construct one from a list of 1D arrays.
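For example, a sketch of one way to emulate a small 2D grid of ints with the array module (the names rows, cols and grid are just illustrative):
import array

rows, cols = 3, 4
grid = [array.array('i', (0 for _ in range(cols))) for _ in range(rows)]
grid[1][2] = 7        # row 1, column 2
print(grid[1])        # array('i', [0, 0, 7, 0])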
The truth is, if you are looking for a high performance implementation, you should probably use Numpy.
It's simple and fast to just use:
array.array('i', [0]) * n
Timing of different ways to initialize an array on my machine:
from array import array
n = 10 ** 7
array('i', [0]) * n                 # 21.9 ms
array('i', [0]*n)                   # 395.2 ms
array('i', range(n))                # 810.6 ms
array('i', (0 for _ in range(n)))   # 1238.6 ms
You said
The main reason to use an array instead of a list is performance.
Surely arrays use less memory than lists.
But in my experiments, I found no evidence that an array is always faster than a normal list.
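A sketch of the kind of comparison meant here (the 10**6 size and the use of sum are just illustrative choices; exact numbers depend on the machine and Python version):
import timeit

setup_arr = "from array import array; a = array('i', range(10**6))"
setup_lst = "l = list(range(10**6))"

print(timeit.timeit("sum(a)", setup=setup_arr, number=10))   # often slower...
print(timeit.timeit("sum(l)", setup=setup_lst, number=10))   # ...than the plain list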
I have a master array of length n of id numbers that apply to other analogous arrays with the corresponding data for elements in my simulation that belong to those id numbers (e.g. data[id]). If I separately generate a list of id numbers of length m and need the information in the data array for those ids, what is the best method of getting a list of indices idx into the original array of ids in order to extract data[idx]? That is, given:
a=numpy.array([1,3,4,5,6]) # master array
b=numpy.array([3,4,3,6,4,1,5]) # secondary array
I would like to generate
idx=numpy.array([1,2,1,4,2,0,3])
The array a is typically in sequential order but it's not a requirement. Also, array b will most definitely have repeats and will not be in any order.
My current method of doing this is:
idx = numpy.array([numpy.where(a == bi)[0][0] for bi in b])
I timed it (wrapped in a function method1) using the following test:
a=(numpy.random.uniform(100,size=100)).astype('int')
b=numpy.repeat(a,100)
timeit method1(a,b)
10 loops, best of 3: 53.1 ms per loop
Is there a better way of doing this?
The way you are currently doing it, with where, searches through the whole array a each time. You can make this look-up O(1) instead of O(N) by using a dict. For instance, I used the following method:
def method2(a, b):
    tmpdict = dict(zip(a, range(len(a))))
    return numpy.array([tmpdict[bi] for bi in b])
and got a very large speed-up, which will be even better for larger arrays. For the sizes in your example code, I got a speed-up of 15x. The only problem with my code is that if there are repeated elements in a, the dict will point to the last instance of the element, while your method points to the first instance. However, that can be remedied if there turn out to be repeated elements in the actual usage of the code.
I'm not sure if there is a way to do this automatically in python, but you're probably best off sorting the two arrays and then generating your output in one pass through b. The complexity of that operation should be O(|a|*log|a|)+O(|b|*log|b|)+O(|b|) = O(|b|*log|b|) (assuming |b| > |a|). I believe your original try has complexity O(|a|*|b|), so this should provide a noticeable improvement for a sufficiently large b.
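A sketch of that sort-based idea using np.searchsorted (this assumes, as in the question, that every value in b actually occurs in a):
import numpy as np

a = np.array([1, 3, 4, 5, 6])
b = np.array([3, 4, 3, 6, 4, 1, 5])

order = np.argsort(a)                        # handles an unsorted master array
pos = np.searchsorted(a, b, sorter=order)    # positions in the sorted view of a
idx = order[pos]                             # indices back into the original a

print(idx)   # [1 2 1 4 2 0 3]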