Multiply all items in a list in Python [duplicate] - python

How do I multiply all the items in a list together?
For example:
num_list = [1,2,3,4,5]
def multiplyListItems(l):
    # some code here...
The expected calculation and return value is 1 x 2 x 3 x 4 x 5 = 120.

One way is to use reduce (a builtin in Python 2; in Python 3 it lives in functools):
>>> num_list = [1,2,3,4,5]
>>> reduce(lambda x, y: x*y, num_list)
120

Use functools.reduce, which is faster (see below) and forward-compatible with Python 3.
import operator
import functools
num_list = [1,2,3,4,5]
accum_value = functools.reduce(operator.mul, num_list)
print(accum_value)
# Output
120
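For what it's worth, on Python 3.8+ the standard library also provides math.prod, which does exactly this:
import math

num_list = [1, 2, 3, 4, 5]
print(math.prod(num_list))  # 120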
Measuring the execution time of the three approaches:
# Way 1: reduce
$ python -m timeit "reduce(lambda x, y: x*y, [1,2,3,4,5])"
1000000 loops, best of 3: 0.727 usec per loop
# Way 2: np.multiply.reduce
$ python -m timeit -s "import numpy as np" "np.multiply.reduce([1,2,3,4,5])"
100000 loops, best of 3: 6.71 usec per loop
# Way 3: functools.reduce
$ python -m timeit -s "import operator, functools" "functools.reduce(operator.mul, [1,2,3,4,5])"
1000000 loops, best of 3: 0.421 usec per loop
For a bigger list, it is better to use np.multiply.reduce, as mentioned by @MikeMüller.
$ python -m timeit "reduce(lambda x, y: x*y, range(1, int(1e5)))"
10 loops, best of 3: 3.01 sec per loop
$ python -m timeit -s "import numpy as np" "np.multiply.reduce(range(1, int(1e5)))"
100 loops, best of 3: 11.2 msec per loop
$ python -m timeit -s "import operator, functools" "functools.reduce(operator.mul, range(1, int(1e5)))"
10 loops, best of 3: 2.98 sec per loop
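One caveat with the big-list benchmark: NumPy works with fixed-width machine integers, so the product of range(1, int(1e5)) silently overflows, whereas functools.reduce on Python ints stays exact thanks to arbitrary precision. A minimal sketch, assuming a platform where NumPy's default integer type is 64-bit:
import numpy as np
from functools import reduce
from operator import mul

nums = list(range(1, 25))        # 24! > 2**63, too big for int64
print(reduce(mul, nums))         # 620448401733239439360000 (exact)
print(np.multiply.reduce(nums))  # wraps around: not the true product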

A NumPy solution:
>>> import numpy as np
>>> np.multiply.reduce(num_list)
120
Run times for a bit larger list:
In [303]:
from operator import mul
from functools import reduce
import numpy as np
a = list(range(1, int(1e5)))
In [304]:
%timeit np.multiply.reduce(a)
100 loops, best of 3: 8.25 ms per loop
In [305]:
%timeit reduce(lambda x, y: x*y, a)
1 loops, best of 3: 5.04 s per loop
In [306]:
%timeit reduce(mul, a)
1 loops, best of 3: 5.37 s per loop
NumPy is largely implemented in C, so it can often be one or two orders of magnitude faster than writing loops over Python lists. This holds for larger arrays. If an array is small and used frequently from Python, things can be slower than pure Python because of the overhead of converting between Python objects and C data types. In fact, it is an anti-pattern to write Python for loops to iterate over NumPy arrays.
Here, the five-element list causes considerable conversion overhead compared to the gain from the faster numerics.
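A minimal sketch of that trade-off, timing both approaches on the five-element list and on a much larger one (repeat counts are arbitrary choices, and the large pure-Python case takes a few seconds per run):
import timeit

for n, number in ((5, 100000), (100000, 3)):
    setup = ('from functools import reduce; from operator import mul; '
             'import numpy as np; a = list(range(1, %d + 1))' % n)
    py = timeit.timeit('reduce(mul, a)', setup=setup, number=number)
    np_t = timeit.timeit('np.multiply.reduce(a)', setup=setup, number=number)
    # NumPy loses on n=5 (conversion overhead) and wins easily on n=100000
    print('n=%d over %d runs: pure Python %.6f s, NumPy %.6f s'
          % (n, number, py, np_t))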

Related

deque.popleft() vs list.pop(0), performance analysis

According to this question, I checked the performance on my laptop.
Surprisingly, I found that pop(0) from a list is faster than popleft() from a deque structure:
python -m timeit 'l = range(10000)' 'l.pop(0)'
gives:
10000 loops, best of 3: 66 usec per loop
While:
python -m timeit 'import collections' 'l = collections.deque(range(10000))' 'l.popleft()'
gives:
10000 loops, best of 3: 123 usec per loop
Moreover, I checked the performance in Jupyter and found the same outcome:
%timeit l = range(10000); l.pop(0)
10000 loops, best of 3: 64.7 µs per loop
from collections import deque
%timeit l = deque(range(10000)); l.popleft()
10000 loops, best of 3: 122 µs per loop
What is the reason?
The problem is that your timeit call also times the deque/list creation, and creating a deque is obviously much slower because of the chaining.
In the command line, you can pass the setup to timeit using the -s option like this:
python -m timeit -s"import collections, time; l = collections.deque(range(10000000))" "l.popleft()"
Also, since the setup is only run once and I haven't changed the default number of iterations, you eventually get a pop error (empty deque); so I created a large deque to compensate, and got
10000000 loops, best of 3: 0.0758 usec per loop
On the other hand, with a list it's slower:
python -m timeit -s "l = list(range(10000000))" "l.pop(0)"
100 loops, best of 3: 9.72 msec per loop
I have also coded the benchmark in a script (more convenient), with a setup (so the setup itself isn't clocked) and 99999 iterations on a 100000-element list:
import timeit
print(timeit.timeit(stmt='l.pop(0)',setup='l = list(range(100000))',number=99999))
print(timeit.timeit(setup='import collections; l = collections.deque(range(100000))', stmt='l.popleft()', number=99999))
No surprise: deque wins:
2.442976927292288 for pop in list
0.007311641921253109 for pop in deque
Note that l.pop() for the list runs in 0.011536903686244897 seconds, which is very good: popping the last element is cheap, as expected.
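The underlying reason is asymptotic: list.pop(0) shifts every remaining element one slot to the left, so it is O(n), while deque.popleft() only detaches the leftmost element of its first block, so it is O(1). A small sketch (sizes and repeat counts chosen arbitrarily) makes the scaling visible:
import timeit

for n in (10**4, 10**5, 10**6):
    t_list = timeit.timeit('l.pop(0)', setup='l = list(range(%d))' % n, number=1000)
    t_deque = timeit.timeit('d.popleft()',
                            setup='import collections; d = collections.deque(range(%d))' % n,
                            number=1000)
    # The list time grows with n; the deque time stays flat
    print('n=%d: list.pop(0) %.5f s, deque.popleft() %.5f s' % (n, t_list, t_deque))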

What is the fastest way to copy a 2D array in Python?

I have to run a very large number of simulations on an R*C grid.
These simulations are altering the grid, so I need to copy my reference grid before each, and then apply my simulating function on the fresh new grid.
What is the fastest way to do this in Python?
Since I have not found a similar question on Stack Overflow, I did the tests myself and decided to post them here, thinking they could be useful to other people.
The answer will be a community response so that other people can add new measurements with possibly other techniques.
If you add another method, remember to re-run all the old tests and update their timings, because the results depend on the computer used; this avoids biasing the comparison.
I used a bash variable for setting up the timeit tests:
setup="""
R = 100
C = 100
from copy import deepcopy
import numpy as np
ref = [[i for i in range(C)] for _ in range(R)]
ref_np = np.array(ref)
cp = [[100 for i in range(C)] for _ in range(R)]
cp_np = np.array(cp)
"""
Just for convenience, I also set a temporary alias pybench:
alias pybench='python3.5 -m timeit -s "$setup"'
Python 3
Python 3.5.0+ (default, Oct 11 2015, 09:05:38)
Deepcopy:
>>> pybench "cp = deepcopy(ref)"
100 loops, best of 3: 8.29 msec per loop
Modifying pre-created array using index:
>>> pybench \
"for y in range(R):
    for x in range(C):
        cp[y][x] = ref[y][x]"
1000 loops, best of 3: 1.16 msec per loop
Nested list comprehension:
>>> pybench "cp = [[x for x in row] for row in ref]"
1000 loops, best of 3: 390 usec per loop
Slicing:
>>> pybench "cp = [row[:] for row in ref]"
10000 loops, best of 3: 45.8 usec per loop
NumPy copy:
>>> pybench "cp_np = np.copy(ref_np)"
100000 loops, best of 3: 6.03 usec per loop
Copying to pre-created NumPy array:
>>> pybench "np.copyto(cp_np, ref_np)"
100000 loops, best of 3: 4.52 usec per loop
There is nothing very surprising in these results: as you might have guessed, using NumPy is enormously faster, especially if one avoids creating a new table each time.
To add to the answer from Delgan, the numpy.copy documentation says that numpy.ndarray.copy is the preferred method. So for now, without doing a timing test, I will use numpy.ndarray.copy.
https://numpy.org/doc/stable/reference/generated/numpy.copy.html
https://numpy.org/doc/stable/reference/generated/numpy.ndarray.copy.html
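For reference, a minimal sketch of that variant, sized like the benchmarks above (R = C = 100):
import numpy as np

ref_np = np.arange(100 * 100).reshape(100, 100)
cp_np = ref_np.copy()  # the method form the NumPy docs recommend
assert cp_np is not ref_np
assert (cp_np == ref_np).all()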

Python3 vs Python2 list/generator range performance

I have this simple function that partitions a list and returns an index i in the list such that elements at indices less than i are smaller than list[i] and elements at indices greater than i are bigger.
def partition(arr):
    first_high = 0
    pivot = len(arr) - 1
    for i in range(len(arr)):
        if arr[i] < arr[pivot]:
            arr[first_high], arr[i] = arr[i], arr[first_high]
            first_high = first_high + 1
    arr[first_high], arr[pivot] = arr[pivot], arr[first_high]
    return first_high

if __name__ == "__main__":
    arr = [1, 5, 4, 6, 0, 3]
    pivot = partition(arr)
    print(pivot)
The runtime is substantially bigger with Python 3.4 than with Python 2.7.6
on OS X:
time python3 partition.py
real 0m0.040s
user 0m0.027s
sys 0m0.010s
time python partition.py
real 0m0.031s
user 0m0.018s
sys 0m0.011s
Same thing on Ubuntu 14.04 / VirtualBox:
python3:
real 0m0.049s
user 0m0.034s
sys 0m0.015s
python:
real 0m0.044s
user 0m0.022s
sys 0m0.018s
Is Python 3 inherently slower than Python 2.7, or are there specific optimizations to the code that would make it run as fast as on Python 2.7?
As mentioned in the comments, you should be benchmarking with timeit rather than with OS tools.
My guess is that the range function is performing a little slower in Python 3. In Python 2 it simply returns a list; in Python 3 it returns a range object, which behaves more or less like a generator. I did some benchmarking, and this was the result, which may be a hint at what you're experiencing:
python -mtimeit "range(10)"
1000000 loops, best of 3: 0.474 usec per loop
python3 -mtimeit "range(10)"
1000000 loops, best of 3: 0.59 usec per loop
python -mtimeit "range(100)"
1000000 loops, best of 3: 1.1 usec per loop
python3 -mtimeit "range(100)"
1000000 loops, best of 3: 0.578 usec per loop
python -mtimeit "range(1000)"
100000 loops, best of 3: 11.6 usec per loop
python3 -mtimeit "range(1000)"
1000000 loops, best of 3: 0.66 usec per loop
As you can see, when the input provided to range is small, it tends to be faster in Python 2. If the input grows, then Python 3's range behaves better.
My suggestion: test the code for larger arrays, with a hundred or a thousand elements.
Actually, I went further and tested a complete iteration through the elements. The results were totally in favor of Python 2:
python -mtimeit "for i in range(1000):pass"
10000 loops, best of 3: 31 usec per loop
python3 -mtimeit "for i in range(1000):pass"
10000 loops, best of 3: 45.3 usec per loop
python -mtimeit "for i in range(10000):pass"
1000 loops, best of 3: 330 usec per loop
python3 -mtimeit "for i in range(10000):pass"
1000 loops, best of 3: 480 usec per loop
My conclusion is that it is probably faster to iterate through a list than through a generator, although the latter is definitely more efficient in terms of memory consumption. This is a classic example of the trade-off between speed and memory. That said, the speed difference is not that big per se (fractions of a millisecond here), so weigh it against what's better for your program.
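In that spirit, here is a sketch of how one might time partition itself with timeit under both interpreters, keeping interpreter startup out of the measurement (list size and repeat count are arbitrary choices):
import timeit

setup = """
import random
def partition(arr):
    first_high = 0
    pivot = len(arr) - 1
    for i in range(len(arr)):
        if arr[i] < arr[pivot]:
            arr[first_high], arr[i] = arr[i], arr[first_high]
            first_high = first_high + 1
    arr[first_high], arr[pivot] = arr[pivot], arr[first_high]
    return first_high
random.seed(0)
arr = [random.random() for _ in range(100000)]
"""
# arr[:] copies the list on every run, so each call partitions the same input
print(timeit.timeit('partition(arr[:])', setup=setup, number=100))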

Check for multidimensional list in Python

I have some data which is either 1 or 2 dimensional. I want to iterate through every pattern in the data set and perform foo() on it. If the data is 1D then add this value to a list, if it's 2D then take the mean of the inner list and append this value.
I saw this question and decided to implement it by checking for an instance of a list. I can't use NumPy for this application.
outputs = []
for row in data:
    if isinstance(row, list):
        vals = [foo(window) for window in row]
        outputs.append(sum(vals)/float(len(vals)))
    else:
        outputs.append(foo(row))
Is there a neater way of doing this? On each run, every pattern will have the same dimensionality, so I could make a separate class for 1D/2D but that will add a lot of classes to my code. The datasets can get quite large so a quick solution is preferable.
Your code is already almost as neat and fast as it can be. The only slight improvement is replacing [foo(window) for window in row] with map(foo, row), as the benchmarks show:
> python -m timeit "foo = lambda x: x+1; list(map(foo, range(1000)))"
10000 loops, best of 3: 132 usec per loop
> python -m timeit "foo = lambda x: x+1; [foo(a) for a in range(1000)]"
10000 loops, best of 3: 140 usec per loop
isinstance() already seems faster than its counterparts hasattr() and type() ==:
> python -m timeit "[isinstance(i, int) for i in range(1000)]"
10000 loops, best of 3: 117 usec per loop
> python -m timeit "[hasattr(i, '__iter__') for i in range(1000)]"
1000 loops, best of 3: 470 usec per loop
> python -m timeit "[type(i) == int for i in range(1000)]"
10000 loops, best of 3: 130 usec per loop
However, if you count short as neat, you can also simplify your code (after replacing the comprehension with map) to:
mean = lambda x: sum(x)/float(len(x)) # or `from statistics import mean` in Python 3.4+
output = [foo(r) if isinstance(r, int) else mean(list(map(foo, r))) for r in data]
(The list() call matters on Python 3, where map returns an iterator that has no len().)
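A quick usage sketch with a trivial stand-in for foo, assuming (as the one-liner does) that the scalar entries are ints:
def foo(x):
    return x + 1  # trivial stand-in for the real foo()

mean = lambda x: sum(x) / float(len(x))
data = [1, [2, 3, 4], 5]  # mix of 1D scalars and 2D rows
output = [foo(r) if isinstance(r, int) else mean(list(map(foo, r))) for r in data]
print(output)  # [2, 4.0, 6]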

How to produce an arbitrary string with a specific length in Python?

I need a string 100000 characters long. What is the most efficient and shortest way of producing such a string in Python?
The content of the string does not matter.
Something like:
'x' * 100000 # or,
''.join('x' for x in xrange(100000)) # or,
from itertools import repeat
''.join(repeat('x', times=100000))
Or, for a mix of letters:
from string import ascii_letters
from random import choice
''.join(choice(ascii_letters) for _ in xrange(100000))
Or, for some random data (note that in Python 3, os.urandom returns bytes, not str):
import os
s = os.urandom(100000)
You can simply do
s = 'a' * 100000
Since efficiency is important, here's a quick benchmark for some of the approaches mentioned so far:
$ python -m timeit "" "'a'*100000"
100000 loops, best of 3: 4.99 usec per loop
$ python -m timeit "from itertools import repeat" "''.join(repeat('x', times=100000))"
1000 loops, best of 3: 2.24 msec per loop
$ python -m timeit "import array" "array.array('c',[' ']*100000).tostring()"
100 loops, best of 3: 3.92 msec per loop
$ python -m timeit "" "''.join('x' for x in xrange(100000))"
100 loops, best of 3: 5.69 msec per loop
$ python -m timeit "import os" "os.urandom(100000)"
100 loops, best of 3: 6.17 msec per loop
Not surprisingly, of the ones posted, using string multiplication is the fastest by far.
Also note that it is more efficient to multiply a single char than a multi-char string (to get the same final string length).
$ python -m timeit "" "'a'*100000"
100000 loops, best of 3: 4.99 usec per loop
$ python -m timeit "" "'ab'*50000"
100000 loops, best of 3: 6.02 usec per loop
$ python -m timeit "" "'abcd'*25000"
100000 loops, best of 3: 6 usec per loop
$ python -m timeit "" "'abcdefghij'*10000"
100000 loops, best of 3: 6.03 usec per loop
Tested on Python 2.7.3
Strings can use the multiplication operator:
"a" * 100000
Try making an array of blank characters. (This is Python 2: the 'c' typecode is gone in Python 3, and tostring() was renamed tobytes().)
import array
longCharArray = array.array('c', [' ']*100000)
This will allocate an array of ' ' characters of size 100000.
longCharArray.tostring()
will convert it to a string.
Just pick some character and repeat it 100000 times:
"a"*100000
Why you would want this is another question...
You can try something like this (string.lowercase is Python 2; on Python 3 use string.ascii_lowercase):
import random
import string
"".join(random.sample(string.lowercase * 385, 10000))
As a one-liner (given import random):
''.join([chr(random.randint(32, 126)) for x in range(30)])
Change the range() value to get a different length of string; change the bounds of randint() to get a different set of characters.
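On Python 3, where xrange is gone and string.lowercase has become string.ascii_lowercase, the random variants above can be bundled into a small helper; random_string is just a hypothetical name for illustration:
import random
import string

def random_string(n, alphabet=string.ascii_letters + string.digits):
    # Characters may repeat, unlike random.sample
    return ''.join(random.choice(alphabet) for _ in range(n))

s = random_string(100000)
assert len(s) == 100000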
