Implementation of numpy in1d for 2D arrays?

I have a 2D numpy array S representing a state space, with 80000000 rows (states) and 5 columns (state variables).
I initialize K0 with S, and at each iteration I apply a state transition function f(x) to every state in Ki and delete the states whose f(x) is not in Ki; the result is Ki+1. This repeats until convergence, i.e. Ki+1 = Ki.
Going like this would take ages:
K = S
to_delete = [0]
while to_delete:
    to_delete = []
    for i in xrange(len(K)):
        if f(K[i]) not in K:
            to_delete.append(i)
    K = np.delete(K, to_delete, 0)
So I wanted to make a vectorized implementation: slice K into columns, apply f, and join them again, thus obtaining f(K).
The question now is how to get a boolean array Sel of length len(K), where each entry Sel[i] determines whether f(K[i]) is in K, exactly like the function in1d works.
Then it would be simple to do
K = K[Sel]

Your question is difficult to understand because it contains extraneous information and typos. If I understand correctly, you simply want an efficient way to perform a set operation on the rows of a 2D array (in this case, the intersection of the rows of K and f(K)).
You can do this with numpy.in1d if you create a structured array view.
Code:
If this is K:
In [50]: k
Out[50]:
array([[6, 6],
       [3, 7],
       [7, 5],
       [7, 3],
       [1, 3],
       [1, 5],
       [7, 6],
       [3, 8],
       [6, 1],
       [6, 0]])
and this is f(K) (for this example I subtract 1 from the first col and add 1 to the second):
In [51]: k2
Out[51]:
array([[5, 7],
       [2, 8],
       [6, 6],
       [6, 4],
       [0, 4],
       [0, 6],
       [6, 7],
       [2, 9],
       [5, 2],
       [5, 1]])
then you can find all rows of K that are also in f(K) by doing something like this:
In [55]: k[np.in1d(k.view(dtype='i,i').reshape(k.shape[0]), k2.view(dtype='i,i').reshape(k2.shape[0]))]
Out[55]: array([[6, 6]])
view and reshape create flat structured views so that each row appears as a single element to in1d. in1d then produces a boolean index into k marking the matched rows, which is used to fancy-index k and return the filtered array.
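As a self-contained sketch of the same idea (my own summary, not part of the original answer; the helper names are hypothetical, and the arrays must be C-contiguous for the view to work, hence the ascontiguousarray calls):
import numpy as np

def rows_in(a, b):
    # Boolean mask: which rows of a also appear as rows of b.
    # Assumes both arrays are 2D with the same dtype and number of columns.
    dt = [('', a.dtype)] * a.shape[1]
    av = np.ascontiguousarray(a).view(dt).ravel()
    bv = np.ascontiguousarray(b).view(dt).ravel()
    return np.in1d(av, bv)

def converge(S, f):
    # Hypothetical fixed-point loop from the question: keep only the states
    # whose image under f is still in the current set, until nothing changes.
    K = S
    while True:
        sel = rows_in(f(K), K)
        if sel.all():
            return K
        K = K[sel]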

Not sure if I understand your question entirely, but if Paul's interpretation is correct, it can be solved efficiently and fully vectorized using the numpy_indexed package, in a single readable line:
import numpy_indexed as npi
K = npi.intersection(K, f(K))
Also, this works for rows of any type or shape.
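For instance (a minimal usage sketch, assuming the numpy_indexed package is installed; the sample arrays are mine):
import numpy as np
import numpy_indexed as npi

k = np.array([[6, 6], [3, 7], [7, 5]])
k2 = np.array([[6, 6], [0, 0], [7, 5]])
print(npi.intersection(k, k2))  # rows present in both k and k2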

The above answer is great.
But if you don't want to deal with structured arrays and want a solution that doesn't care about the type of your array or the dimensions of your array elements, I came up with this:
k[np.in1d(list(map(np.ndarray.dumps, k)), list(map(np.ndarray.dumps, k2)))]
Basically, use list(map(np.ndarray.dumps, k)) instead of k.view(dtype='f8,f8').reshape(k.shape[0]).
Take into account that this solution is ~50 times slower:
k = np.array([[6.5, 6.5],
              [3.5, 7.5],
              [7.5, 5.5],
              [7.5, 3.5],
              [1.5, 3.5],
              [1.5, 5.5],
              [7.5, 6.5],
              [3.5, 8.5],
              [6.5, 1.5],
              [6.5, 0.5]])
k = np.tile(k, (1000, 1))
k2 = np.c_[k[:, 0] - 1, k[:, 1] + 1]
In [132]: k.shape, k2.shape
Out[132]: ((10000, 2), (10000, 2))
In [133]: timeit k[np.in1d(k.view(dtype='f8,f8').reshape(k.shape[0]),k2.view(dtype='f8,f8').reshape(k2.shape[0]))]
10 loops, best of 3: 22.2 ms per loop
In [134]: timeit k[np.in1d(list(map(np.ndarray.dumps, k)), list(map(np.ndarray.dumps, k2)))]
1 loop, best of 3: 892 ms per loop
It can be marginal for small inputs, but for the OP's input it would take 1h 20min instead of 2min.

Related

Numpy Matrix initialization with ascending numbers for rows

I am trying to make a matrix like:
M = [[1, 1, ..., 1],
     [2, 2, ..., 2],
     ...
     [40000, 40000, ..., 40000]]
This is what I tried:
data = np.mat((40000,8))
print(data.shape)
for i in range(data.shape[0]):
    data[i,:] = i
print(data[:5])
The above code prints:
(1, 2)
[[0 0]]
I know how to fill a matrix with constant values, but I couldn't find a similar question for this case.
Use a simple array and don't forget that Python starts indexing at 0:
data = np.zeros((40000,8))
for i in range(data.shape[0]):
    data[i,:] = i+1
Here's a way using numpy:
rows = 10
cols = 3
l = np.arange(1,rows)
np.tile(l,cols).reshape(cols,rows-1).T
array([[1, 1, 1],
       [2, 2, 2],
       [3, 3, 3],
       [4, 4, 4],
       [5, 5, 5],
       [6, 6, 6],
       [7, 7, 7],
       [8, 8, 8],
       [9, 9, 9]])
Matthieu Brucher's answer will do perfectly for your case. If you are looking at numbers much higher than 4000 and if time is an issue, you might want to get rid of the for loop and build a list of lists with a list comprehension before turning it into a numpy array:
a = [[i]*8 for i in range(1,4001)]
m = np.asarray(a)
In my case, this solution was ~7 times faster.
To use numpy broadcasting instead of explicit iteration, you can do:
import numpy as np
M = np.ones((40000, 8), dtype=int).T * np.arange(1, 40001)
M = M.T
print(M)
This should be faster than any of the loop-based approaches above, if broadcasting is what you are looking for.
Very simple:
data = np.arange(1, 40001).repeat(8).reshape(-1,8)
Though this is pure numpy as well, it is considerably slower than @yatu's solution.
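If a read-only result is acceptable, a zero-copy variant using broadcasting is also possible (my own sketch, not from the answers above; copy the result if you need to write to it):
import numpy as np

# broadcast_to returns a read-only view: no 40000x8 buffer is allocated.
M = np.broadcast_to(np.arange(1, 40001)[:, None], (40000, 8))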

Is there a more efficient way to find the average of a large number of matrices?

So I have around 300 matrices right now, which isn't too bad, but I want to make my code reusable in the future, so I was wondering if there is a more efficient way to find the average. The matrices are 88x88, and the way I want to average them is to get just one matrix at the end, where each [i][j] value is the average of all the [i][j] values in the 300 matrices.
mean = []
for j in range(88):
    for i in range(88):
        smaller = []
        for k in range(len(listof_matrices)):
            smaller.append(listof_matrices[k][i][j])
        mean.append(str(float(sum(smaller))/float(len(smaller))))
Basically the way the code works is 3 nested loops (I know...): it appends the values at a single [i][j] position across all k matrices, finds the mean, adds that to a mean list, and repeats for all i and all j. Surely there must be a faster way. Cheers
Definitely use numpy. I'll explain with a reproducible example.
Setup
m1 = np.array([[3, 6, 2], [5, 6, 3], [2, 7, 2]])
m2 = np.array([[1, 5, 7], [9, 9, 8], [1, 6, 6]])
m3 = np.array([[9, 8, 3], [3, 5, 4], [7, 3, 3]])
list_of_matrices = [m1, m2, m3]
Solution
Then just use np.mean
np.mean(list_of_matrices, axis=0)
Outputs
array([[4.33333333, 6.33333333, 4.        ],
       [5.66666667, 6.66666667, 5.        ],
       [3.33333333, 5.33333333, 3.66666667]])
So for your example, the only loop you might need is the one that creates list_of_matrices, which you already have to do anyway. Then you just call np.mean, which generates your matrix of means with a vectorized solution. This will be dramatically faster than your three-nested-for-loops approach.
You should really use numpy for these purposes.
Let's say you have the matrices in a single numpy array. Example:
import numpy as np
matrix = np.array([
    [
        [1, 2],
        [3, 4]
    ],
    [
        [5, 6],
        [7, 8]
    ],
    [
        [9, 10],
        [11, 12]
    ],
])
you could get the averages with just
np.sum(matrix, axis=0)/float(matrix.shape[0])
as np.sum(matrix, axis=0) sums the array along its outermost axis and matrix.shape[0] gives you the number of matrices contained in it.
Also, numpy's performance is much greater than raw Python's.
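For what it's worth, the sum-and-divide form agrees with the np.mean approach from the previous answer (a quick self-contained check of mine; the sample array is arbitrary):
import numpy as np

matrix = np.arange(12).reshape(3, 2, 2)
# mean(axis=0) is the idiomatic spelling of sum(axis=0) / count:
assert np.allclose(np.sum(matrix, axis=0) / float(matrix.shape[0]),
                   matrix.mean(axis=0))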

Selecting specific groups of rows from numpy array [duplicate]

Given:
test = numpy.array([[1, 2], [3, 4], [5, 6]])
test[i] gives the ith row (e.g. [1, 2]). How do I access the ith column? (e.g. [1, 3, 5]). Also, would this be an expensive operation?
To access column 0:
>>> test[:, 0]
array([1, 3, 5])
To access row 0:
>>> test[0, :]
array([1, 2])
This is covered in Section 1.4 (Indexing) of the NumPy reference. This is quick, at least in my experience. It's certainly much quicker than accessing each element in a loop.
>>> test[:,0]
array([1, 3, 5])
this command gives you a 1-D array (a row vector); if you just want to loop over it, that's fine, but if you want to hstack it with some other array of dimension 3xN, you will get
ValueError: all the input arrays must have same number of dimensions
while
>>> test[:,[0]]
array([[1],
       [3],
       [5]])
gives you a column vector, so that you can do a concatenate or hstack operation.
e.g.
>>> np.hstack((test, test[:,[0]]))
array([[1, 2, 1],
       [3, 4, 3],
       [5, 6, 5]])
And if you want to access more than one column at a time you could do:
>>> test = np.arange(9).reshape((3,3))
>>> test
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
>>> test[:,[0,2]]
array([[0, 2],
       [3, 5],
       [6, 8]])
You could also transpose and return a row:
In [4]: test.T[0]
Out[4]: array([1, 3, 5])
Although the question has been answered, let me mention some nuances.
Let's say you are interested in column 1 (the second column) of the array
arr = numpy.array([[1, 2],
                   [3, 4],
                   [5, 6]])
As you already know from other answers, to get it in the form of a "row vector" (an array of shape (3,)), you use slicing:
arr_col1_view = arr[:, 1] # creates a view of column 1 of arr
arr_col1_copy = arr[:, 1].copy() # creates a copy of column 1 of arr
To check if an array is a view or a copy of another array you can do the following:
arr_col1_view.base is arr # True
arr_col1_copy.base is arr # False
see ndarray.base.
Besides the obvious difference between the two (modifying arr_col1_view will affect arr), the number of byte-steps for traversing each of them is different (the figures below assume a 4-byte dtype such as int32):
arr_col1_view.strides[0] # 8 bytes
arr_col1_copy.strides[0] # 4 bytes
see strides and this answer.
Why is this important? Imagine that you have a very big array A instead of the arr:
A = np.random.randint(2, size=(10000, 10000), dtype='int32')
A_col1_view = A[:, 1]
A_col1_copy = A[:, 1].copy()
and you want to compute the sum of all the elements of the first column, i.e. A_col1_view.sum() or A_col1_copy.sum(). Using the copied version is much faster:
%timeit A_col1_view.sum() # ~248 µs
%timeit A_col1_copy.sum() # ~12.8 µs
This is due to the different number of strides mentioned before:
A_col1_view.strides[0] # 40000 bytes
A_col1_copy.strides[0] # 4 bytes
Although it might seem that using column copies is better, it is not always true, because making a copy takes time too and uses more memory (in this case it took me approx. 200 µs to create A_col1_copy). However, if we need the copy in the first place, or we need to do many different operations on a specific column of the array and we are OK with sacrificing memory for speed, then making a copy is the way to go.
If we are mostly interested in working with columns, it could be a good idea to create our array in column-major ('F') order instead of the row-major ('C') order (which is the default), and then do the slicing as before to get a column without copying it:
A = np.asfortranarray(A) # or np.array(A, order='F')
A_col1_view = A[:, 1]
A_col1_view.strides[0] # 4 bytes
%timeit A_col1_view.sum() # ~12.6 µs vs ~248 µs
Now, performing the sum operation (or any other) on a column-view is as fast as performing it on a column copy.
Finally let me note that transposing an array and using row-slicing is the same as using the column-slicing on the original array, because transposing is done by just swapping the shape and the strides of the original array.
A[:, 1].strides[0] # 40000 bytes
A.T[1, :].strides[0] # 40000 bytes
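A quick self-contained way to verify that claim (my own check, not part of the answer): the transposed row and the original column share memory and have identical strides:
import numpy as np

A = np.zeros((10000, 10000), dtype='int32')
# Transposing only swaps shape and strides, so both views walk memory identically:
assert A[:, 1].strides == A.T[1, :].strides
assert np.shares_memory(A[:, 1], A.T[1, :])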
To get several independent columns, just do:
> test[:,[0,2]]
and you will get columns 0 and 2.
>>> test
array([[0, 1, 2, 3, 4],
       [5, 6, 7, 8, 9]])
>>> ncol = test.shape[1]
>>> ncol
5L
Then you can select the 2nd - 4th column this way:
>>> test[0:, 1:(ncol - 1)]
array([[1, 2, 3],
       [6, 7, 8]])
This is not multidimensional; it is a 2-dimensional array, and you can slice out whichever columns you want:
test = numpy.array([[1, 2], [3, 4], [5, 6]])
test[:, a:b] # you can provide index in place of a and b
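For example (my own illustration of the a:b slice above):
import numpy
test = numpy.array([[1, 2], [3, 4], [5, 6]])
print(test[:, 0:1])  # columns 0 up to (but not including) 1, kept 2-D
# [[1]
#  [3]
#  [5]]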

Using numpy.vectorize() to rotate all elements of a NumPy array

I am in the beginning phases of learning NumPy. I have a Numpy array of 3x3 matrices. I would like to create a new array where each of those matrices is rotated 90 degrees. I've studied this answer but I still can't figure out what I am doing wrong.
import numpy as np
# 3x3
m = np.array([[1,2,3], [4,5,6], [7,8,9]])
# array of 3x3
a = np.array([m,m,m,m])
# rotate a single matrix counter-clockwise
def rotate90(x):
    return np.rot90(x)
# function that can be called on all elements of an np.array
# Note: I've tried different values for otypes= without success
f = np.vectorize(rotate90)
result = f(a)
# ValueError: Axes=(0, 1) out of range for array of ndim=0.
# The error occurs in NumPy's rot90() function.
Note: I realize I could do the following but I'd like to understand the vectorized option.
t = np.array([ np.rot90(x, k=-1) for x in a])
No need to do the rotations individually: numpy has a builtin numpy.rot90(m, k=1, axes=(0, 1)) function. By default the matrix is thus rotated over the first and second dimensions.
If you want to rotate one level deeper, you simply have to set the axes over which rotation happens, one level deeper (and optionally swap them if you want to rotate in a different direction). Or as the documentation specifies:
axes : (2,) array_like
The array is rotated in the plane defined by the axes. Axes must be different.
So we rotate over the y and z plane (if we label the dimensions x, y and z) and thus we either specify (2,1) or (1,2).
All you have to do is set the axes correctly, when you want to rotate to the right/left:
np.rot90(a,axes=(2,1)) # right
np.rot90(a,axes=(1,2)) # left
This will rotate all matrices, like:
>>> np.rot90(a,axes=(2,1))
array([[[7, 4, 1],
        [8, 5, 2],
        [9, 6, 3]],

       [[7, 4, 1],
        [8, 5, 2],
        [9, 6, 3]],

       [[7, 4, 1],
        [8, 5, 2],
        [9, 6, 3]],

       [[7, 4, 1],
        [8, 5, 2],
        [9, 6, 3]]])
Or if you want to rotate to the left:
>>> np.rot90(a,axes=(1,2))
array([[[3, 6, 9],
        [2, 5, 8],
        [1, 4, 7]],

       [[3, 6, 9],
        [2, 5, 8],
        [1, 4, 7]],

       [[3, 6, 9],
        [2, 5, 8],
        [1, 4, 7]],

       [[3, 6, 9],
        [2, 5, 8],
        [1, 4, 7]]])
Note that you can only specify the axes from numpy 1.12 onwards.
Normally np.vectorize is used to apply a scalar (Python, non-numpy) function to all elements of an array, or set of arrays. There's a note that's often overlooked:
The vectorize function is provided primarily for convenience, not for
performance. The implementation is essentially a for loop.
In [278]: m = np.array([[1,2,3],[4,5,6]])
In [279]: np.vectorize(lambda x:2*x)(m)
Out[279]:
array([[ 2,  4,  6],
       [ 8, 10, 12]])
This multiplies each element of m by 2, taking care of the looping paper-work for us.
Better yet, when given several arrays, it broadcasts (a generalization of 'outer product').
In [280]: np.vectorize(lambda x,y:2*x+y)(np.arange(3), np.arange(2)[:,None])
Out[280]:
array([[0, 2, 4],
       [1, 3, 5]])
This feeds (x,y) scalar tuples to the lambda for all combinations of a (3,) array broadcasted against a (2,1) array, resulting in a (2,3) array. It can be viewed as a broadcasted extension of map.
The problem with np.vectorize(np.rot90) is that rot90 takes a 2d array, but vectorize will feed it scalars.
However I see in the docs that for v1.12 they've added a signature parameter. This is the first time I used it.
Your problem - apply np.rot90 to 2d elements of a 3d array:
In [266]: m = np.array([[1,2,3],[4,5,6]])
In [267]: a = np.stack([m,m])
In [268]: a
Out[268]:
array([[[1, 2, 3],
        [4, 5, 6]],

       [[1, 2, 3],
        [4, 5, 6]]])
While you could describe this a as an array of 2d arrays, it's better to think of it as a 3d array of integers. That's how the np.vectorize(myfun)(a) sees it, giving myfun each number.
Applied to a 2d m:
In [269]: np.rot90(m)
Out[269]:
array([[3, 6],
       [2, 5],
       [1, 4]])
With the Python work horse, the list comprehension:
In [270]: [np.rot90(i) for i in a]
Out[270]:
[array([[3, 6],
        [2, 5],
        [1, 4]]),
 array([[3, 6],
        [2, 5],
        [1, 4]])]
The result is a list, but we could wrap that in np.array.
Python map does the same thing.
In [271]: list(map(np.rot90, a))
Out[271]:
[array([[3, 6],
        [2, 5],
        [1, 4]]),
 array([[3, 6],
        [2, 5],
        [1, 4]])]
The comprehension and map both iterate on the 1st dimension of a, acting on the resulting 2d elements.
vectorize with signature:
In [272]: f = np.vectorize(np.rot90, signature='(n,m)->(k,l)')
In [273]: f(a)
Out[273]:
array([[[3, 6],
        [2, 5],
        [1, 4]],

       [[3, 6],
        [2, 5],
        [1, 4]]])
The signature tells it to pass a 2d array and expect back a 2d array. (I should explore how signature plays with the otypes parameter.)
Some quick time comparisons:
In [287]: timeit np.array([np.rot90(i) for i in a])
10000 loops, best of 3: 40 µs per loop
In [288]: timeit np.array(list(map(np.rot90, a)))
10000 loops, best of 3: 41.1 µs per loop
In [289]: timeit np.vectorize(np.rot90, signature='(n,m)->(k,l)')(a)
1000 loops, best of 3: 234 µs per loop
In [290]: %%timeit f=np.vectorize(np.rot90, signature='(n,m)->(k,l)')
...: f(a)
...:
1000 loops, best of 3: 196 µs per loop
So for a small array, the Python list methods are faster, by quite a bit. Numpy approaches sometimes do better with larger arrays, though I doubt they would in this case.
rot90 with the axes parameter is even better, and will do well with larger arrays:
In [292]: timeit np.rot90(a,axes=(1,2))
100000 loops, best of 3: 15.7 µs per loop
Looking at the np.rot90 code, I see that it is just doing np.flip (reverse) and np.transpose, in various combinations depending on k. In effect, for this case it is doing:
In [295]: a.transpose(0,2,1)[:,::-1,:]
Out[295]:
array([[[3, 6],
        [2, 5],
        [1, 4]],

       [[3, 6],
        [2, 5],
        [1, 4]]])
(this is even faster than rot90.)
I suspect vectorize with the signature is doing something like:
In [301]: b = np.zeros(2,dtype=object)
In [302]: b[...] = [m,m]
In [303]: f = np.frompyfunc(np.rot90, 1,1)
In [304]: f(b)
Out[304]:
array([array([[3, 6],
              [2, 5],
              [1, 4]]),
       array([[3, 6],
              [2, 5],
              [1, 4]])], dtype=object)
np.stack(f(b)) will convert the object array into a 3d array like the other code.
frompyfunc is the underlying function for vectorize, and returns an array of objects. Here I create an array like your a except it is 1d, containing multiple m arrays. It is an array of arrays, as opposed to a 3d array.
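Restating that session's setup as a self-contained check (my own sketch): stacking the object-array output of frompyfunc matches the axes-based rot90 result.
import numpy as np

m = np.array([[1, 2, 3], [4, 5, 6]])
a = np.stack([m, m])

b = np.zeros(2, dtype=object)
b[...] = [m, m]
f = np.frompyfunc(np.rot90, 1, 1)

# Each object element is rotated individually; stacking recovers the 3d array:
assert np.array_equal(np.stack(f(b)), np.rot90(a, axes=(1, 2)))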

Python: finding indices

If I have a and b:
a = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
b = 8.1
and I want to find the index of the value b in a, I can do:
np.nonzero(abs(a - b) < 0.5)
to get (2,1) as the index, but what do I do if b is a 1d or 2d array? Say,
b = np.array([8.1, 3.1, 9.1])
and I want to get (2,1),(0,2),(2,2)
In general I expect only one match in a for every value of b. Can I avoid a for loop?
Use a list comprehension:
[np.nonzero(abs(x - a) < 0.5) for x in b]
Vectorized approach with NumPy's broadcasting -
np.argwhere((np.abs(a - b[:,None,None])<0.5))[:,1:]
Explanation:
- Extend b from 1D to 3D with None/np.newaxis, keeping its elements along the first axis.
- Perform an absolute subtraction with the 2D array a; broadcasting produces a 3D array of elementwise differences.
- Compare against the threshold of 0.5 and get the indices of the matches along the last two axes, sorted by the first axis, with np.argwhere(...)[:, 1:].
Sample run -
In [71]: a
Out[71]:
array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]])

In [72]: b
Out[72]: array([ 8.1,  3.1,  9.1,  0.7])

In [73]: np.argwhere((np.abs(a - b[:,None,None])<0.5))[:,1:]
Out[73]:
array([[2, 1],
       [0, 2],
       [2, 2],
       [0, 0]])
