Let us assume that I have a 2D array named arr of shape (4, 3) as follows:
>>> arr
array([[ nan,   1., -18.],
       [ -1.,  -1.,  -1.],
       [  1.,   1.,   5.],
       [  1.,  -1.,   0.]])
Say I would like to assign the signed value of the element-wise absolute maximum of (1.0, 1.0, -15.0) and the rows arr[[0, 2], :] back to arr. That is, I am looking for the output:
>>> arr
array([[  1.,   1., -18.],
       [ -1.,  -1.,  -1.],
       [  1.,   1., -15.],
       [  1.,  -1.,   0.]])
The closest thing I found in the API reference for this is numpy.fmax but it doesn't do the absolute value. If I used:
arr[index_list, :] = np.fmax(arr[index_list, :], new_tuple)
my array would finally look like:
>>> arr
array([[  1.,   1., -15.],
       [ -1.,  -1.,  -1.],
       [  1.,   1.,   5.],
       [  1.,  -1.,   0.]])
Now, the API says that this function is
equivalent to np.where(x1 >= x2, x1, x2) when neither x1 nor x2 are NaNs, but it is faster and does proper broadcasting
I tried using the following:
arr[index_list, :] = np.where(np.absolute(arr[index_list, :]) >= np.absolute(new_tuple),
                              arr[index_list, :], new_tuple)
Although this produced the desired output, I got the warning:
/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevconsole.py:1: RuntimeWarning: invalid value encountered in greater_equal
I believe this warning is because of the NaN, which is not handled gracefully here, unlike with the np.fmax function. In addition, the API docs mention that np.fmax is faster and does proper broadcasting (I am not sure what part of broadcasting is missing in the np.where version).
In conclusion, what I am looking for is something similar to:
arr[index_list, :] = np.fmax(arr[index_list, :], new_tuple, key=abs)
There is no such key argument available to this function, unfortunately.
Just for context, I am interested in the fastest possible solution because the actual shape of my arr array is around (100000, 50) and I am looping through almost 1000 new_tuple tuples (each tuple has as many elements as arr has columns, of course). The index_list changes for each new_tuple.
Edit 1:
One possible solution is to begin by replacing all NaNs in arr with 0, i.e. arr[np.isnan(arr)] = 0. After this, I can use the np.where with np.absolute trick mentioned in my original text. However, this is probably a lot slower than np.fmax, as suggested by the API.
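A minimal sketch of this idea, reusing arr, index_list and new_tuple from above:

arr[np.isnan(arr)] = 0                     # replace NaN up front
cur = arr[index_list, :]
arr[index_list, :] = np.where(np.abs(cur) >= np.abs(new_tuple), cur, new_tuple)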
Edit 2:
The index_list may have repeated indexes in subsequent loops. Every new_tuple comes with a corresponding rule, and the index_list is selected based on that rule. There is nothing stopping different rules from matching overlapping indexes. @Divakar has an excellent answer for the case where index_list has no repeats. Other solutions covering both cases are, however, welcome.
Assuming that the list of all index_lists has no repeated indexes:
Approach #1
I would propose more of a vectorized solution, once we have all of the index_lists and new_tuples stored in one place, preferably as lists. As such, this could be the preferred approach if we are dealing with lots of such tuples and lists.
So, let's say we have them stored as follows:
new_tuples = [(1.0, 1.0, -15.0), (6.0, 3.0, -4.0)]   # list of all new_tuple
index_lists = [[0, 2], [4, 1, 6]]                    # list of all index_list
The idea thereafter would be to manually repeat the tuples (replacing broadcasting) and then use np.where as shown later in the question. As for the concern about the warning raised by np.where: it can be ignored as long as the new_tuples contain no NaN values. Thus, the solution would be -
idx = np.concatenate(index_lists)                  # all target row indexes, stacked
lens = list(map(len, index_lists))                 # how many rows each tuple covers
a = arr[idx]
b = np.repeat(new_tuples, lens, axis=0)            # repeat each tuple for its rows
arr[idx] = np.where(np.abs(a) > np.abs(b), a, b)   # keep whichever has the larger magnitude
Approach #2
Another approach would be to store the absolute values of arr beforehand: abs_arr = np.abs(arr), and use those within np.where. This should save a lot of time within the loop. Thus, the relevant computation would reduce to:
arr[index_list, :] = np.where(abs_arr[index_list, :] > np.abs(new_tuple), arr[index_list, :], new_tuple)
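Put together, the per-iteration work might look like this (a sketch, assuming the loop supplies the new_tuple/index_list pairs and, per the assumption above, the index lists do not overlap, so abs_arr never goes stale):

abs_arr = np.abs(arr)                                    # precompute once, before the loop
for new_tuple, index_list in zip(new_tuples, index_lists):
    cur = arr[index_list, :]
    keep = abs_arr[index_list, :] > np.abs(new_tuple)    # True where arr already has the larger magnitude
    arr[index_list, :] = np.where(keep, cur, new_tuple)

Since a NaN comparison evaluates to False, NaN entries in arr are simply overwritten by the corresponding new_tuple values, which matches the np.fmax behaviour from the question.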
Related
numpy.vectorize takes a function f:a->b and turns it into g:a[]->b[].
This works fine when a and b are scalars, but I can't think of a reason why it wouldn't work with b as an ndarray or list, i.e. f:a->b[] and g:a[]->b[][]
For example:
import numpy as np
def f(x):
    return x * np.array([1, 1, 1, 1, 1], dtype=np.float32)
g = np.vectorize(f, otypes=[np.ndarray])
a = np.arange(4)
print(g(a))
This yields:
array([[ 0.  0.  0.  0.  0.],
       [ 1.  1.  1.  1.  1.],
       [ 2.  2.  2.  2.  2.],
       [ 3.  3.  3.  3.  3.]], dtype=object)
Ok, so that gives the right values, but the wrong dtype. And even worse:
g(a).shape
yields:
(4,)
So this array is pretty much useless. I know I can convert it doing:
np.array(list(map(list, g(a))), dtype=np.float32)
to give me what I want:
array([[ 0.,  0.,  0.,  0.,  0.],
       [ 1.,  1.,  1.,  1.,  1.],
       [ 2.,  2.,  2.,  2.,  2.],
       [ 3.,  3.,  3.,  3.,  3.]], dtype=float32)
but that is neither efficient nor pythonic. Can any of you guys find a cleaner way to do this?
np.vectorize is just a convenience function. It doesn't actually make code run any faster. If it isn't convenient to use np.vectorize, simply write your own function that works as you wish.
The purpose of np.vectorize is to transform functions which are not numpy-aware (e.g. take floats as input and return floats as output) into functions that can operate on (and return) numpy arrays.
Your function f is already numpy-aware -- it uses a numpy array in its definition and returns a numpy array. So np.vectorize is not a good fit for your use case.
The solution therefore is just to roll your own function f that works the way you desire.
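A sketch of what that could look like using plain broadcasting (the exact reshaping is an assumption about the output layout you want):

import numpy as np

def f(x):
    # accept a scalar or an array; add a trailing axis so the constant
    # vector broadcasts across it
    x = np.asarray(x, dtype=np.float32)
    return x[..., np.newaxis] * np.ones(5, dtype=np.float32)

f(np.arange(4)).shape   # (4, 5)
f(2.0).shape            # (5,)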
A new parameter, signature, added in NumPy 1.12.0, does exactly what you want.
def f(x):
    return x * np.array([1, 1, 1, 1, 1], dtype=np.float32)

g = np.vectorize(f, signature='()->(n)')
Then g(np.arange(4)).shape will give (4L, 5L).
Here the signature of f is specified. The (n) is the shape of the return value, and the () is the shape of the parameter, which is scalar. The parameters can be arrays too. For more complex signatures, see the Generalized Universal Function API.
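For instance, a signature that maps a 1-D array to a 1-D array of the same length could look like this (normalize is just a hypothetical helper to illustrate the notation):

import numpy as np

def normalize(v):
    return v / v.sum()

g = np.vectorize(normalize, signature='(m)->(m)')
g(np.arange(1, 7).reshape(2, 3))   # applies normalize to each row; the result keeps shape (2, 3)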
import numpy as np

def f(x):
    return x * np.array([1, 1, 1, 1, 1], dtype=np.float32)

g = np.vectorize(f, otypes=[np.ndarray])
a = np.arange(4)
b = g(a)
b = np.array(b.tolist())
print(b)    # b.shape = (4, 5)

c = np.ones((2, 3, 4))
d = g(c)
d = np.array(d.tolist())
print(d)    # d.shape = (2, 3, 4, 5)
This should fix the problem, and it will work regardless of the size of your input. "map" only works for one-dimensional inputs. Using ".tolist()" and creating a new ndarray solves the problem more completely and nicely (I believe). Hope this helps.
You want to vectorize the function
import numpy as np
def f(x):
    return x * np.array([1, 1, 1, 1, 1], dtype=np.float32)
Assuming that you want to get single np.float32 arrays as the result, you have to specify this as the otype. In your question, however, you specified otypes=[np.ndarray], which means you want every element to be an np.ndarray. Thus, you correctly get a result of dtype=object.
The correct call would be
np.vectorize(f, signature='()->(n)', otypes=[np.float32])
For such a simple function, however, it is better to leverage numpy's ufuncs; np.vectorize just loops over the input. So in your case, just rewrite your function as
def f(x):
    return np.multiply.outer(x, np.array([1, 1, 1, 1, 1], dtype=np.float32))
This is faster and produces less obscure errors (note, however, that the result's dtype will depend on x: if you pass a complex or quad-precision number, the result will have that type as well).
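For example (a quick usage sketch):

f(np.arange(4)).shape                  # (4, 5)
f(np.arange(6).reshape(2, 3)).shape    # (2, 3, 5); outer appends the vector's axis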
I've written a function that seems to fit your need.
def amap(func, *args):
    '''array version of built-in map

    amap(function, sequence[, sequence, ...]) -> array

    Examples
    --------
    >>> amap(lambda x: x**2, 1)
    array(1)
    >>> amap(lambda x: x**2, [1, 2])
    array([1, 4])
    >>> amap(lambda x,y: y**2 + x**2, 1, [1, 2])
    array([2, 5])
    >>> amap(lambda x: (x, x), 1)
    array([1, 1])
    >>> amap(lambda x,y: [x**2, y**2], [1,2], [3,4])
    array([[1, 9], [4, 16]])
    '''
    args = np.broadcast(None, *args)
    res = np.array([func(*arg[1:]) for arg in args])
    shape = args.shape + res.shape[1:]
    return res.reshape(shape)
Let's try

def f(x):
    return x * np.array([1, 1, 1, 1, 1], dtype=np.float32)
amap(f, np.arange(4))
Outputs
array([[ 0.,  0.,  0.,  0.,  0.],
       [ 1.,  1.,  1.,  1.,  1.],
       [ 2.,  2.,  2.,  2.,  2.],
       [ 3.,  3.,  3.,  3.,  3.]], dtype=float32)
You may also wrap it with lambda or partial for convenience
g = lambda x:amap(f, x)
g(np.arange(4))
Note the docstring of vectorize says
The vectorize function is provided primarily for convenience, not for
performance. The implementation is essentially a for loop.
Thus we would expect amap here to have similar performance to vectorize. I didn't check it; any performance tests are welcome.
If performance is really important, you should consider something else, e.g. direct array calculation with reshape and broadcasting to avoid looping in pure Python (both vectorize and amap fall into the latter case).
The best way to solve this would be to use a 2-D NumPy array (in this case a column array) as an input to the original function, which will then generate a 2-D output with the results I believe you were expecting.
Here is what it might look like in code:
import numpy as np
def f(x):
    return x * np.array([1, 1, 1, 1, 1], dtype=np.float32)
a = np.arange(4).reshape((4, 1))
b = f(a)
# b is a 2-D array with shape (4, 5)
print(b)
This is a much simpler and less error-prone way to complete the operation. Rather than trying to transform the function with numpy.vectorize, this method relies on NumPy's natural ability to broadcast arrays. The trick is to make sure that at least one dimension has an equal length between the arrays.
I came up with the following approach for finding all common indices at which values are present across two vectors of equal length. I love the readability of this, but I need it to be faster...
missingA = np.argwhere(np.isnan(vectorA)==True);
missingA = [missingA[ma][0] for ma in range(len(missingA))];
missingB = np.argwhere(np.isnan(vectorB)==True);
missingB = [missingB[mb][0] for mb in range(len(missingB))];
allmissidxs = set(missingA).union(set(missingB));
idxs = [idx for idx in range(len(vectorA)) if idx not in allmissidxs];
It most definitely works, but the vectors I need to use it on are anywhere from 1 million to 3 million elements each... and it potentially needs to be run multiple times. I'm using "...if idx not in allmissidxs" as opposed to, say, "...if idx in allpresidxs", since the missing values are sure to be a far smaller subset to sweep through. Also, I'm sure it doesn't help that missingA and missingB have to be reconfigured given the structure that np.argwhere() naturally returns, but is that really the bottleneck here?
Any help would be greatly appreciated! Thanks
Assume that the source vectors are just the same as in the other solution:
vectorA = np.array([np.nan, 1., 2., 3., np.nan, 5., np.nan, 7., 8., np.nan])
vectorB = np.array([0., 1., 2., np.nan, 4., np.nan, 6., np.nan, 8., np.nan])
You can do your task using Pandasonic Index and its intersection method.
It is even possible to write it as the following one-liner:
result = pd.Index(vectorA).intersection(vectorB)
The result is:
Float64Index([1.0, 2.0, 8.0], dtype='float64')
If you want the result as a Numpy vector, append .values to the above code
and the result will be:
array([1., 2., 8.])
The advantage of this method is that you avoid any list comprehensions,
so this code should run substantially faster than yours.
Check it on your own, on a bigger data sample.
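A minimal way to build such a sample, constructed the same way as the vectors above (each value equal to its position, with NaNs scattered in; the sizes here are arbitrary):

import numpy as np
import pandas as pd

n = 1_000_000
rng = np.random.default_rng(0)
vectorA = np.arange(n, dtype=float)
vectorB = np.arange(n, dtype=float)
vectorA[rng.choice(n, 1000, replace=False)] = np.nan
vectorB[rng.choice(n, 1000, replace=False)] = np.nan

result = pd.Index(vectorA).intersection(vectorB).values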
I've been trying to come up with a way to add these two ndarrays, one of them having a different number of elements in each row:
a = np.array([np.array([0, 1]), np.array([4, 5, 6])])
z = np.zeros((3,3))
Expected output:
array([[0., 1., 0.],
       [4., 5., 6.]])
Can anyone think of a way to do this using numpy?
I don't think there is a 'numpy-fast' solution for this. I think you will need to loop over a with a for loop and add every row individually.
for i in range(len(a)):
    z[i, :len(a[i])] = z[i, :len(a[i])] + a[i]
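For reference, a self-contained version of the same loop (assuming z is allocated with one row per row of a; dtype=object makes the ragged construction explicit):

import numpy as np

a = np.array([np.array([0, 1]), np.array([4, 5, 6])], dtype=object)
z = np.zeros((len(a), 3))
for i in range(len(a)):
    z[i, :len(a[i])] += a[i]
# z is now:
# array([[0., 1., 0.],
#        [4., 5., 6.]])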
I'm trying to take the exp of the nonzero elements in a sparse theano variable. I currently have this code:
A = T.matrix("Some matrix with many zeros")
A_sparse = theano.sparse.csc_from_dense(A)
I'm trying to do something that's equivalent to the following numpy syntax:
mask = (A_sparse != 0)
A_sparse[mask] = np.exp(A_sparse[mask])
but Theano doesn't support != masks yet. (And (A_sparse > 0) | (A_sparse < 0) doesn't seem to work either.)
How can I achieve this?
The support for sparse matrices in Theano is incomplete, so some things are tricky to achieve. You can use theano.sparse.structured_exp(A_sparse) in that particular case, but I try to answer your question more generally below.
Comparison
In Theano one would normally use the comparison operators described here: http://deeplearning.net/software/theano/library/tensor/basic.html
For example, instead of A != 0, one would write T.neq(A, 0). With sparse matrices one has to use the comparison operators in theano.sparse. Both operands have to be sparse matrices, and the result is also a sparse matrix:
mask = theano.sparse.neq(A_sparse, theano.sparse.sp_zeros_like(A_sparse))
Modifying a Subtensor
In order to modify part of a matrix, one can use theano.tensor.set_subtensor. With dense matrices this would work:
indices = mask.nonzero()
A = T.set_subtensor(A[indices], T.exp(A[indices]))
Notice that Theano doesn't have a separate boolean type (the mask is zeros and ones), so nonzero() has to be called first to get the indices of the nonzero elements. Furthermore, this is not implemented for sparse matrices.
Operating on Nonzero Sparse Elements
Theano provides sparse operations that are said to be structured and operate only on the nonzero elements. See:
http://deeplearning.net/software/theano/tutorial/sparse.html#structured-operation
More precisely, they operate on the data attribute of a sparse matrix, independent of the indices of the elements. Such operations are straightforward to implement. Note that the structured operations will operate on all the values in the data array, also those that are explicitly set to zero.
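A minimal sketch of the structured route for the original question (assuming the usual Theano imports are available):

import theano
import theano.sparse
import theano.tensor as T

A = T.matrix("Some matrix with many zeros")
A_sparse = theano.sparse.csc_from_dense(A)
A_exp = theano.sparse.structured_exp(A_sparse)   # exp applied only to the stored values; zeros stay zero
f = theano.function([A], theano.sparse.dense_from_sparse(A_exp))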
Here's a way of doing this with the scipy.sparse module. I don't know how theano implements its sparse matrices, but it's likely based on similar ideas (since it uses names like csc).
In [224]: A=sparse.csc_matrix([[1.,0,0,2,0],[0,0,3,0,0],[0,1,1,2,0]])
In [225]: A.A
Out[225]:
array([[ 1.,  0.,  0.,  2.,  0.],
       [ 0.,  0.,  3.,  0.,  0.],
       [ 0.,  1.,  1.,  2.,  0.]])
In [226]: A.data
Out[226]: array([ 1., 1., 3., 1., 2., 2.])
In [227]: A.data[:]=np.exp(A.data)
In [228]: A.A
Out[228]:
array([[  2.71828183,   0.        ,   0.        ,   7.3890561 ,   0.        ],
       [  0.        ,   0.        ,  20.08553692,   0.        ,   0.        ],
       [  0.        ,   2.71828183,   2.71828183,   7.3890561 ,   0.        ]])
The main attributes of the csc format are data, indices and indptr. It's possible that data has some 0 values if you fiddle with them after creation, but a freshly created matrix shouldn't.
The matrix also has a nonzero method modeled on the numpy one. In practice it converts the matrix to coo format, filters out any zero values, and returns the row and col attributes:
In [229]: A.nonzero()
Out[229]: (array([0, 0, 1, 2, 2, 2]), array([0, 3, 2, 1, 2, 3]))
And the csc format allows indexing just like a dense numpy array:
In [230]: A[A.nonzero()]
Out[230]:
matrix([[  2.71828183,   7.3890561 ,  20.08553692,   2.71828183,
           2.71828183,   7.3890561 ]])
T.where works.
A_sparse = T.where(A_sparse == 0, 0, T.exp(A_sparse))
@Seppo Envari's answer seems faster though, so I'll accept his answer.
I have a numpy array (e.g., a = np.array([8., 2.])), and another array which stores the indices I would like to get from the former array (e.g., b = np.array([0., 1., 1., 0., 0.])).
What I would like to do is create another array from these two arrays; in this case, it should be: array([8., 2., 2., 8., 8.])
of course, I can always use a for loop to achieve this goal:
for i in range(5):
    c[i] = a[b[i]]
I wonder if there is a more elegant method to create this array. Something like c = a[b[0:5]] (well, this apparently doesn't work)
Only integer arrays can be used for indexing, and you've created b as a float64 array. You can get what you're looking for if you explicitly convert to integer:
bi = np.array(b, dtype=int)
c = a[bi[0:5]]
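Equivalently, using astype (just a stylistic alternative):

c = a[b.astype(int)]
# array([8., 2., 2., 8., 8.])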