Assuming I have two iterables of numbers of the same length
weights = range(0, 10)
values = range(0, 100, 10)
I need to compute the weighted sum. I know it can be done with a generator expression:
weighted_sum = sum(weight * value for weight, value in zip(weights, values))
I wonder if it can be done using map and operator.mul like
import operator
weighted_sum = sum(map(operator.mul, zip(weights, values)))
but this gives an error
Traceback (most recent call last):
File "<input>", line 3, in <module>
TypeError: op_mul expected 2 arguments, got 1
so my question: is there any way of passing unpacked tuples to function using map?
map doesn't need the zip; just use
weighted_sum = sum(map(operator.mul, weights, values))
From map's documentation:
If additional iterable arguments are passed, function must take that many arguments and is applied to the items from all iterables in parallel.
map's documentation also mentions that you can use itertools.starmap instead of map for already-zipped input.
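For completeness, a quick sketch of the starmap variant (it unpacks each zipped tuple into the two arguments operator.mul expects):

import operator
from itertools import starmap

weights = range(0, 10)
values = range(0, 100, 10)

# starmap(f, iterable) calls f(*item) for each item
weighted_sum = sum(starmap(operator.mul, zip(weights, values)))
print(weighted_sum)  # 2850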
As Rahul hinted, using numpy is generally a good idea when dealing with numerics; something like
import numpy as np
np.asarray(weights) * values
already computes the element-wise products, and summing the result gives the weighted sum (though in contrast to map this requires the two arrays to be the same length, while map simply stops at the shorter one).
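To spell that out, a minimal sketch of the full computation; np.dot gives the weighted sum in one step:

>>> import numpy as np
>>> weights = range(0, 10)
>>> values = range(0, 100, 10)
>>> (np.asarray(weights) * values).sum()
2850
>>> np.dot(weights, values)
2850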
Try this:
>>> import operator
>>>
>>> weights = range(0, 10)
>>> values = range(0, 100, 10)
>>> sum(map(lambda i: operator.mul(*i), zip(weights, values)))
2850
Or
>>> sum(map(operator.mul, weights, values))
2850
You can also try it with numpy:
In [45]: import numpy as np
In [46]: sum(map(np.multiply, weights, values))
Out[46]: 2850
As per Tobias Kienzler's suggestion:
In [52]: np.sum(np.array(weights) * values)
Out[52]: 2850
Related
I have a list of functions of the type:
func_list = [lambda x: function1(x),
             lambda x: function2(x),
             lambda x: function3(x),
             lambda x: x]
and an array of shape [4, 200, 200, 1] (a batch of images).
I want to apply the list of functions, in order, along the 0th axis.
EDIT: Rephrasing the problem. This is equivalent to the above. Say, instead of the array, I have a tuple of 4 identical arrays, of shape (200, 200, 1), and I want to apply function1 on the first element, function2 on the second element, etc. Can this be done without a for loop?
You can iterate over your function list and apply each one using np.apply_along_axis:
import numpy as np

x = np.random.randn(100, 100)
for f in func_list:
    x = np.apply_along_axis(f, 0, x)
Based on OP's update
Assuming your function list and your batch have the same length:
batch = ...  # tuple of 4 images
batch_out = tuple(np.apply_along_axis(f, 0, x) for f, x in zip(func_list, batch))
I tried @Coldspeed's answer, and it does work (so I will accept it), but here is an alternative I found, without using for loops:
result = tuple(map(lambda f, x: f(x), functions, image_tuple))
Edit: forgot to add the tuple(), thanks @Coldspeed
list_ = [(1, 2), (3, 4)]
What is the Pythonic way of taking the position-wise sums of the inner tuples and multiplying those sums? For the above example:
(1 + 3) * (2 + 4) = 24
For example:
import operator as op
import functools
functools.reduce(op.mul, (sum(x) for x in zip(*list_)))
works for any length of the initial list as well as of the inner tuples.
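To see what the pieces do, here is the same expression unrolled in the REPL:

>>> list_ = [(1, 2), (3, 4)]
>>> list(zip(*list_))                # transpose: group positions together
[(1, 3), (2, 4)]
>>> [sum(x) for x in zip(*list_)]    # position-wise sums
[4, 6]
>>> functools.reduce(op.mul, [4, 6]) # product of the sums
24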
Another solution using numpy:
import numpy as np
np.array(list_).sum(axis=0).prod()
If the lists are small as is implied, I feel that using operator and functools for something like this is applying a sledgehammer to a nut. Likewise numpy. What is wrong with pure Python?
result = 1
for s in [sum(x) for x in zip(*list_)]:
    result *= s
(although it would be a lot nicer if pure Python had a product built-in as well as sum). Also, if you are specifically dealing only with pairs of 2-tuples, then any form of iteration is a sledgehammer. Just code:
result = (list_[0][0] + list_[1][0]) * (list_[0][1] + list_[1][1])
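As an aside, since Python 3.8 the standard library does have a product built-in, math.prod, so the pure-Python version can be a one-liner:

import math

result = math.prod(sum(x) for x in zip(*list_))  # 24 for the example above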
I have a list for which I want to calculate the average (mean?) of the values.
When I do this:
import numpy as np #in the beginning of the code
goodPix = ['96.7958', '97.4333', '96.7938', '96.2792', '97.2292']
PixAvg = np.mean(goodPix)
I'm getting this error:
ret = um.add.reduce(arr, axis=axis, dtype=dtype, out=out, keepdims=keepdims)
TypeError: cannot perform reduce with flexible type
I tried to find some help but didn't find anything useful. Thank you all.
Convert your list from strings to floats:
>>> gp = np.array(goodPix, dtype=float)
>>> np.mean(gp)
96.906260000000003
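If the strings are already in a numpy array, astype does the same conversion; a small sketch:

>>> np.array(goodPix).astype(float).mean()
96.906260000000003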
There is a statistics library if you are using Python >= 3.4:
https://docs.python.org/3/library/statistics.html
You may use its mean function like this. Say you have a list of numbers whose mean you want to find:
import statistics as s

data = [11, 13, 12, 15, 17]
s.mean(data)
It has other functions too, like stdev, variance, mode, etc.
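For instance, a quick sketch with the same data (expected values in comments):

import statistics as s

data = [11, 13, 12, 15, 17]
print(s.mean(data))      # 13.6
print(s.variance(data))  # 5.8 (sample variance)
print(s.stdev(data))     # ~2.408 (square root of the variance)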
The list elements are still strings instead of floats. Try the following:
import numpy

goodPix = ['96.7958', '97.4333', '96.7938', '96.2792', '97.2292']
gp2 = []
for i in goodPix:
    gp2.append(float(i))
numpy.mean(gp2)
Using a list comprehension:
>>> np.mean([float(n) for n in goodPix])
96.906260000000003
If you're not using numpy, the obvious way to calculate the arithmetic mean of a list of values is to divide the sum of all elements by the number of elements, which is easily achieved using the two built-ins sum() and len(), e.g.:
>>> l = [1,3]
>>> sum(l)/len(l)
2.0
In case the list elements are strings, one way to convert them is with a list comprehension:
>>> s = ['1','3']
>>> l = [float(e) for e in s]
>>> l
[1.0, 3.0]
For an integer result, use the // operator ("floored quotient of x and y") or convert with int().
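For example:

>>> l = [1, 3]
>>> sum(l) // len(l)
2
>>> int(sum(l) / len(l))
2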
For many other solutions, also see Calculating arithmetic mean (one type of average) in Python
I need to generate a lot of random numbers. I've tried using random.random, but this function is quite slow, so I switched to numpy.random.random, which is way faster. So far so good. The generated random numbers are actually used to calculate something (based on each number), so I enumerate over each number and replace its value. This seems to kill all of my previously gained speedup. Here are the stats generated with timeit():
test_random - no enumerate
0.133111953735
test_np_random - no enumerate
0.0177130699158
test_random - enumerate
0.269361019135
test_np_random - enumerate
1.22525310516
as you can see, generating the numbers is almost 10 times faster using numpy, but once I enumerate over them the speedup is gone and the numpy version ends up even slower.
Below is the code that I'm using:
import numpy as np
import timeit
import random
NBR_TIMES = 10
NBR_ELEMENTS = 100000
def test_random(do_enumerate=False):
    y = [random.random() for i in range(NBR_ELEMENTS)]
    if do_enumerate:
        for index, item in enumerate(y):
            # overwrite the y value; in reality this will be some function of 'item'
            y[index] = 1 + item

def test_np_random(do_enumerate=False):
    y = np.random.random(NBR_ELEMENTS)
    if do_enumerate:
        for index, item in enumerate(y):
            # overwrite the y value; in reality this will be some function of 'item'
            y[index] = 1 + item
if __name__ == '__main__':
    from timeit import Timer
    t = Timer("test_random()", "from __main__ import test_random")
    print "test_random - no enumerate"
    print t.timeit(NBR_TIMES)
    t = Timer("test_np_random()", "from __main__ import test_np_random")
    print "test_np_random - no enumerate"
    print t.timeit(NBR_TIMES)
    t = Timer("test_random(True)", "from __main__ import test_random")
    print "test_random - enumerate"
    print t.timeit(NBR_TIMES)
    t = Timer("test_np_random(True)", "from __main__ import test_np_random")
    print "test_np_random - enumerate"
    print t.timeit(NBR_TIMES)
What's the best way to speed this up and why does enumerate slow things down so dramatically?
EDIT: the reason I use enumerate is that I need both the index and the value of the current element.
To take full advantage of numpy's speed, you want to create ufuncs whenever possible. Applying vectorize to a function as mgibsonbr suggests is one way to do that, but a better way, if possible, is simply to construct a function that takes advantage of numpy's built-in ufuncs. So something like this:
>>> import numpy
>>> a = numpy.random.random(10)
>>> a + 1
array([ 1.29738145, 1.33004628, 1.45825441, 1.46171177, 1.56863326,
1.58502855, 1.06693054, 1.93304272, 1.66056379, 1.91418473])
>>> (a + 1) * 0.25 / 4
array([ 0.08108634, 0.08312789, 0.0911409 , 0.09135699, 0.09803958,
0.09906428, 0.06668316, 0.12081517, 0.10378524, 0.11963655])
What is the nature of the function you want to apply across the numpy array? If you tell us, perhaps we can help you come up with a version that uses only numpy ufuncs.
It's also possible to generate an array of indices without using enumerate. Numpy provides ndenumerate, which is an iterator, and probably slower, but it also provides indices, which is a very quick way to generate the indices corresponding to the values in an array. So...
>>> numpy.indices(a.shape)
array([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]])
So to be more explicit, you can use the above and combine them using numpy.rec.fromarrays:
>>> a = numpy.random.random(10)
>>> ind = numpy.indices(a.shape)
>>> numpy.rec.fromarrays([ind[0], a])
rec.array([(0, 0.092473494150913438), (1, 0.20853257641948986),
(2, 0.35141455604686067), (3, 0.12212258656960817),
(4, 0.50986868372639049), (5, 0.0011439325711705139),
(6, 0.50412473457942508), (7, 0.28973489788728601),
(8, 0.20078799423168536), (9, 0.34527678271856999)],
dtype=[('f0', '<i8'), ('f1', '<f8')])
It's starting to sound like your main concern is performing the operation in-place. That's harder to do using vectorize but it's easy with the ufunc approach:
>>> def somefunc(a):
...     a += 1
...     a /= 15
...
>>> a = numpy.random.random(10)
>>> b = a
>>> somefunc(a)
>>> a
array([ 0.07158446, 0.07052393, 0.07276768, 0.09813235, 0.09429439,
0.08561703, 0.11204622, 0.10773558, 0.11878885, 0.10969279])
>>> b
array([ 0.07158446, 0.07052393, 0.07276768, 0.09813235, 0.09429439,
0.08561703, 0.11204622, 0.10773558, 0.11878885, 0.10969279])
As you can see, numpy performs these operations in-place.
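For reference, the same in-place behavior can be requested explicitly through the out argument that numpy ufuncs accept; a minimal sketch:

import numpy

a = numpy.random.random(10)
numpy.add(a, 1, out=a)      # result is written back into a, no temporary array
numpy.divide(a, 15, out=a)  # likewise for the division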
Check numpy.vectorize; it should let you apply arbitrary functions to numpy arrays. For your simple example, you'd do something like this (np being numpy as imported in your code):
vecFunc = np.vectorize(lambda x: x + 1)
vecFunc(y)
However, that will create a new numpy array instead of modifying it in-place (which may or may not be a problem in your particular case).
In general, you'll always be better off manipulating numpy structures with numpy functions than iterating with Python functions, since the former are not only optimized but implemented in C, while the latter will always be interpreted.
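If the extra allocation is the concern, one workaround (a sketch, not specific to vectorize) is to copy the result back into the existing array with slice assignment:

y = np.random.random(100)
y[:] = vecFunc(y)  # y's buffer is reused; note a temporary array is still built

Strictly speaking this does not avoid the temporary; it only keeps the name y bound to the original memory, which matters if other code holds references to y.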
I know you can easily create nested lists in Python like this:
[[1,2],[3,4]]
But how to create a 3x3x3 matrix of zeroes?
[[[0] * 3 for i in range(3)] for j in range(3)]
or
[[[0]*3]*3]*3
Doesn't seem right. Is there no way to create it by just passing a list of dimensions to a function? E.g.:
CreateArray([3,3,3])
In case a matrix is actually what you are looking for, consider the numpy package.
http://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html#numpy.zeros
This will give you a 3x3x3 array of zeros:
numpy.zeros((3,3,3))
You also benefit from the convenience features of a module built for scientific computing.
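Note that numpy.zeros accepts any sequence of dimensions, so it already behaves like the hypothetical CreateArray from the question:

import numpy

a = numpy.zeros([3, 3, 3])  # a list of dimensions works, too
a[0, 1, 2] = 7              # n-dimensional indexing comes for free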
List comprehensions are just syntactic sugar for adding expressiveness to list initialization; in your case, I would not use them at all, and go for a simple nested loop.
On a completely different level: do you think the n-dimensional array of NumPy could be a better approach?
Although you can use lists to implement multi-dimensional matrices, I think they are not the best tool for that goal.
NumPy addresses this problem:
>>> from numpy import array
>>> a = array([2, 3, 4])
>>> a
array([2, 3, 4])
>>> type(a)
<type 'numpy.ndarray'>
But if you want to use native Python lists as a matrix, the following helper functions can come in handy:
import copy

def Create(dimensions, item):
    for dimension in dimensions:
        # deepcopy, so nested sublists are not shared between copies
        item = [copy.deepcopy(item) for _ in range(dimension)]
    return item

def Get(matrix, position):
    for index in position:
        matrix = matrix[index]
    return matrix

def Set(matrix, position, value):
    for index in position[:-1]:
        matrix = matrix[index]
    matrix[position[-1]] = value
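A quick usage sketch of these helpers (names as defined above):

m = Create([3, 3, 3], 0)    # 3x3x3 nested list of zeros
Set(m, (0, 1, 2), 5)
print(Get(m, (0, 1, 2)))    # 5
print(Get(m, (1, 1, 2)))    # still 0, since no sublists are shared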
Or use the nest function defined here, combined with repeat(0) from the itertools module:
nest(itertools.repeat(0), [3, 3, 3])
Just nest the multiplication syntax:
[[[0] * 3] * 3] * 3
It's therefore simple to express this operation using a fold:
from functools import reduce  # required in Python 3

def zeros(dimensions):
    return reduce(lambda x, d: [x] * d, [0] + dimensions)
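Be aware that this fold, like the plain nested multiplication above, replicates references rather than values, so all "rows" are the same list object:

z = zeros([3, 3])
z[0][0] = 1
print(z)  # [[1, 0, 0], [1, 0, 0], [1, 0, 0]] -- one write shows up three times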
Or, if you want to avoid reference replication, so that altering one item won't affect any other, you should instead use copies:
import copy

def zeros(dimensions):
    item = 0
    for dimension in dimensions:
        # deepcopy, so no sublists are shared between rows
        item = [copy.deepcopy(item) for _ in range(dimension)]
    return item