Understanding references: why is this NumPy assignment not working? - python

I have a little test code like so:
import numpy as np
foo = np.zeros(1, dtype=int)
bar = np.zeros((10, 1), dtype=int)
foo_copy = np.copy(foo)
bar[-1] = foo_copy
foo_copy[-1] = 10
print(foo_copy)
print(bar)
I was expecting both foo_copy and the last element of bar to contain the value 10, but instead the last element of bar is still an array containing 0.
[10]
[[0]
 [0]
 [0]
 [0]
 [0]
 [0]
 [0]
 [0]
 [0]
 [0]]  # <<--- why not 10?
Isn't that last element pointing to foo_copy?
Or in all assignments np will copy the data over and I can't change it by using the original ndarray?
If so, is there a way to keep that last element as a pointer to foo_copy?

A numpy array holds numeric values, not references (at least for numeric dtypes):
Make a 1d array, and reshape it to 2d:
In [64]: bar = np.arange(12).reshape(4,3)
In [65]: bar
Out[65]:
array([[ 0,  1,  2],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 9, 10, 11]])
Another 1d array:
In [66]: foo = np.array([10])
In [67]: foo
Out[67]: array([10])
This assignment is by value:
In [68]: bar[1,1] = foo
In [69]: bar
Out[69]:
array([[ 0,  1,  2],
       [ 3, 10,  5],
       [ 6,  7,  8],
       [ 9, 10, 11]])
So is this, though the values are broadcast across the whole row:
In [70]: bar[2] = foo
In [71]: bar
Out[71]:
array([[ 0,  1,  2],
       [ 3, 10,  5],
       [10, 10, 10],
       [ 9, 10, 11]])
We can view the 2d array as 1d. This is a closer representation of how the values are actually stored (in a C byte array, 12*8 bytes long):
In [72]: bar1 = bar.ravel()
In [73]: bar1
Out[73]: array([ 0, 1, 2, 3, 10, 5, 10, 10, 10, 9, 10, 11])
Changing an element of the view changes the corresponding element of the 2d array:
In [74]: bar1[3] = 30
In [75]: bar
Out[75]:
array([[ 0,  1,  2],
       [30, 10,  5],
       [10, 10, 10],
       [ 9, 10, 11]])
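As a quick check (a sketch recreating the session's bar), np.shares_memory confirms the ravel result is a view into the same buffer:
import numpy as np

bar = np.arange(12).reshape(4, 3)
bar1 = bar.ravel()                    # contiguous input, so this is a view
print(np.shares_memory(bar, bar1))    # True -- same underlying data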
While we can make object dtype arrays, which store references as lists do, they do not have any performance benefits.
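For illustration, a minimal sketch of an object-dtype array holding a reference; mutating the referenced array is visible through the container, but every element access goes through a Python object:
import numpy as np

foo = np.array([10])
obj = np.empty(2, dtype=object)   # stores Python object references
obj[0] = foo                      # no copy: obj[0] is foo itself
foo[0] = 99
print(obj[0])                     # [99] -- the change shows through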
The bytestring containing the 'raw data' of bar:
In [76]: bar.tobytes()
Out[76]: b'\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x1e\x00\x00\x00\x00\x00\x00\x00\n\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\n\x00\x00\x00\x00\x00\x00\x00\n\x00\x00\x00\x00\x00\x00\x00\n\x00\x00\x00\x00\x00\x00\x00\t\x00\x00\x00\x00\x00\x00\x00\n\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00'
The fabled numpy speed comes from working with this raw data in compiled C code. Accessing individual elements from Python is relatively slow. It's the whole-array operations like bar*3 that are fast.
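A rough illustration (timings vary by machine; this is just a sketch comparing an element-by-element Python loop against one whole-array operation):
import numpy as np
from timeit import timeit

a = np.arange(100_000)
loop = timeit(lambda: [x * 3 for x in a], number=10)   # Python-level loop
vec = timeit(lambda: a * 3, number=10)                 # compiled whole-array op
print(f"loop: {loop:.4f}s  vectorized: {vec:.4f}s")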

Related

Get element from array based on list of indices

z = np.arange(15).reshape(3,5)
indexx = [0,2]
indexy = [1,2,3,4]
zz = []
for i in indexx:
    for j in indexy:
        zz.append(z[i][j])
Output:
zz >> [1, 2, 3, 4, 11, 12, 13, 14]
This essentially flattens the array, but keeps only the elements whose indices appear in the two index lists.
This works, but it is very slow for larger arrays/index lists. Is there a way to speed this up using numpy?
Thanks.
Edited to show desired output.
A list of integers can be used to access the entries of interest for numpy arrays.
z[indexx][:,indexy].flatten()
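A quick check that this reproduces the loop's output, using the question's own names:
import numpy as np

z = np.arange(15).reshape(3, 5)
indexx = [0, 2]
indexy = [1, 2, 3, 4]

# First select the rows, then the columns, then flatten:
print(z[indexx][:, indexy].flatten())
# [ 1  2  3  4 11 12 13 14]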
x = {"apple", "banana", "cherry"}
y = {"google", "microsoft", "apple"}
z = x.intersection(y)
print(z)
z => {'apple'}
If I understand you correctly, just use a Python set, then cast it to a list.
Indexing in several dimensions at once requires broadcasting the indices against each other. np.ix_ is a handy tool for doing this:
In [127]: z
Out[127]:
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14]])
In [128]: z[np.ix_(indexx, indexy)]
Out[128]:
array([[ 1,  2,  3,  4],
       [11, 12, 13, 14]])
Converting that to 1d is a trivial ravel() task.
Look at what ix_ produces; here it's a (2,1) and a (1,4) array. You can construct such arrays 'from scratch':
In [129]: np.ix_(indexx, indexy)
Out[129]:
(array([[0],
        [2]]),
 array([[1, 2, 3, 4]]))
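A sketch of building those broadcastable index arrays by hand, equivalent to what np.ix_ returns here:
import numpy as np

z = np.arange(15).reshape(3, 5)
rows = np.array([0, 2])[:, None]         # shape (2, 1)
cols = np.array([1, 2, 3, 4])[None, :]   # shape (1, 4)
print(z[rows, cols].ravel())
# [ 1  2  3  4 11 12 13 14]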

How do you get and set a 1-D array with column indexes of a 2-D matrix?

Suppose you have a matrix:
a = np.arange(9).reshape(3,3)
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
and I want to get or set the values 1, 5, and 6, how would I do that?
For example I thought doing
# getting
b = a[:, np.array([1,2,0])]
# want b = [1,5,6]
# setting
a[:, np.array([1,2,0])] = np.array([9, 10, 11])
# want:
# a = array([[ 0,  9,  2],
#            [ 3,  4, 10],
#            [11,  7,  8]])
would do it, but that is not the case. Any thoughts on this?
Only a small tweak makes this work:
import numpy as np
a = np.arange(9).reshape(3,3)
# getting
b = a[range(a.shape[0]), np.array([1,2,0])]
# setting
a[range(a.shape[0]), np.array([1,2,0])] = np.array([9, 10, 11])
Your code didn't work as expected because you indexed the row axis with a slice instead of indices. The slice means take all rows; giving a row index for each column index pairs them up element-wise, so you get exactly one element per index pair.
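A short sketch of that element-wise pairing:
import numpy as np

a = np.arange(9).reshape(3, 3)
rows = np.arange(a.shape[0])    # [0, 1, 2] -- one row index per column index
cols = np.array([1, 2, 0])
print(a[rows, cols])            # [1 5 6] -- pairs (0,1), (1,2), (2,0)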

Unexpected result from Numpy matrix insert: how does this work?

My goal was to insert a column to the right on a numpy matrix. However, I found that the code I was using is putting in two columns rather than just one.
# This one results in a 4x1 matrix, as expected
np.insert(np.matrix([[0],[0]]), 1, np.matrix([[0],[0]]), 0)
>>> matrix([[0],
            [0],
            [0],
            [0]])
# I would expect this line to return a 2x2 matrix, but it returns a 2x3 matrix instead.
np.insert(np.matrix([[0],[0]]), 1, np.matrix([[0],[0]]), 1)
>>> matrix([[0, 0, 0],
            [0, 0, 0]])
Why do I get the above, in the second example, instead of [[0,0], [0,0]]?
While new use of np.matrix is discouraged, we get the same result with np.array:
In [41]: np.insert(np.array([[1],[2]]),1, np.array([[10],[20]]), 0)
Out[41]:
array([[ 1],
       [10],
       [20],
       [ 2]])
In [42]: np.insert(np.array([[1],[2]]),1, np.array([[10],[20]]), 1)
Out[42]:
array([[ 1, 10, 20],
       [ 2, 10, 20]])
In [44]: np.insert(np.array([[1],[2]]),1, np.array([10,20]), 1)
Out[44]:
array([[ 1, 10],
       [ 2, 20]])
Insert with [1] instead of the scalar 1:
In [46]: np.insert(np.array([[1],[2]]),[1], np.array([[10],[20]]), 1)
Out[46]:
array([[ 1, 10],
       [ 2, 20]])
In [47]: np.insert(np.array([[1],[2]]),[1], np.array([10,20]), 1)
Out[47]:
array([[ 1, 10, 20],
       [ 2, 10, 20]])
np.insert is a complex function written in Python, so we need to look at its code to see how values are mapped onto the target space.
The docs elaborate on the difference between inserting at 1 and at [1], but offhand I don't see an explanation of how the shape of values matters.
Difference between sequence and scalars:
>>> np.insert(a, [1], [[1],[2],[3]], axis=1)
array([[1, 1, 1],
       [2, 2, 2],
       [3, 3, 3]])
>>> np.array_equal(np.insert(a, 1, [1, 2, 3], axis=1),
... np.insert(a, [1], [[1],[2],[3]], axis=1))
True
When adding an array at the end of another, I'd use concatenate (or one of its stack variants) rather than insert. None of these operate in-place.
In [48]: np.concatenate([np.array([[1],[2]]), np.array([[10],[20]])], axis=1)
Out[48]:
array([[ 1, 10],
[ 2, 20]])
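For reference, a sketch of how the 'stack variants' spell the same column-wise join:
import numpy as np

a = np.array([[1], [2]])
b = np.array([[10], [20]])
print(np.hstack([a, b]))         # same result as concatenate along axis=1
print(np.column_stack([a, b]))   # likewise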

Accessing chunks at once in a numpy array

Provided a numpy array:
arr = np.array([0,1,2,3,4,5,6,7,8,9,10,11,12])
I wonder how to access chunks of a chosen size with a chosen separation, both concatenated and in slices:
E.g.: obtain chunks of size 3 separated by two values:
arr_chunk_3_sep_2 = np.array([0,1,2,5,6,7,10,11,12])
arr_chunk_3_sep_2_in_slices = np.array([[0,1,2],[5,6,7],[10,11,12]])
What is the most efficient way to do it? If possible, I would like to avoid copying or creating new objects as much as possible. Maybe memoryviews could be of help here?
Approach #1
Here's one with masking -
def slice_grps(a, chunk, sep):
    N = chunk + sep
    return a[np.arange(len(a)) % N < chunk]
Sample run -
In [223]: arr
Out[223]: array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
In [224]: slice_grps(arr, chunk=3, sep=2)
Out[224]: array([ 0, 1, 2, 5, 6, 7, 10, 11, 12])
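To see the boolean mask behind this approach (a sketch with the sample values):
import numpy as np

arr = np.arange(13)
mask = np.arange(len(arr)) % 5 < 3   # chunk=3, sep=2, so N=5
print(mask)
# [ True  True  True False False  True  True  True False False  True  True  True]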
Approach #2
If the input array is such that the last chunk would have enough runway, we could leverage np.lib.stride_tricks.as_strided, inspired by this post, to select m elements off each block of n elements -
# https://stackoverflow.com/a/51640641/ #Divakar
def skipped_view(a, m, n):
    s = a.strides[0]
    strided = np.lib.stride_tricks.as_strided
    shp = ((a.size + n - 1) // n, n)
    return strided(a, shape=shp, strides=(n*s, s), writeable=False)[:, :m]
out = skipped_view(arr,chunk,chunk+sep)
Note that the output would be a view into the input array, and as such has no extra memory overhead and is virtually free!
Sample run to make things clear -
In [255]: arr
Out[255]: array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
In [256]: chunk = 3
In [257]: sep = 2
In [258]: skipped_view(arr,chunk,chunk+sep)
Out[258]:
array([[ 0,  1,  2],
       [ 5,  6,  7],
       [10, 11, 12]])
# Let's prove that the output is a view indeed
In [259]: np.shares_memory(arr, skipped_view(arr,chunk,chunk+sep))
Out[259]: True
How about a reshape and slice?
In [444]: arr = np.array([0,1,2,3,4,5,6,7,8,9,10,11,12])
In [445]: arr.reshape(-1,5)
...
ValueError: cannot reshape array of size 13 into shape (5)
Ah a problem - your array isn't big enough for this reshape - so we have to pad it:
In [446]: np.concatenate((arr,np.zeros(2,int))).reshape(-1,5)
Out[446]:
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12,  0,  0]])
In [447]: np.concatenate((arr,np.zeros(2,int))).reshape(-1,5)[:,:-2]
Out[447]:
array([[ 0,  1,  2],
       [ 5,  6,  7],
       [10, 11, 12]])
as_strided can get away with this by including bytes outside the data buffer. Usually that's seen as a bug, though here it can be an asset - provided you really do throw that garbage away.
Or throwing away the last incomplete line:
In [452]: arr[:-3].reshape(-1,5)[:,:3]
Out[452]:
array([[0, 1, 2],
       [5, 6, 7]])
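Wrapping the pad-then-reshape idea in a small helper (chunks_by_reshape is a hypothetical name, just for illustration):
import numpy as np

def chunks_by_reshape(a, chunk, sep):
    # Pad to a multiple of chunk+sep, reshape, then drop the separator columns.
    N = chunk + sep
    pad = (-a.size) % N
    padded = np.concatenate((a, np.zeros(pad, a.dtype)))
    return padded.reshape(-1, N)[:, :chunk]

print(chunks_by_reshape(np.arange(13), 3, 2))
# [[ 0  1  2]
#  [ 5  6  7]
#  [10 11 12]]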

Fastest way to check if two arrays have equivalent rows

I am trying to figure out a better way to check if two 2D arrays contain the same rows. Take the following case for a short example:
>>> a
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
>>> b
array([[6, 7, 8],
       [3, 4, 5],
       [0, 1, 2]])
In this case b = a[::-1]. To check whether the two arrays contain the same rows:
>>> a = a[np.lexsort((a[:,0], a[:,1], a[:,2]))]
>>> b = b[np.lexsort((b[:,0], b[:,1], b[:,2]))]
>>> np.all(a-b==0)
True
This is great and fairly fast. However, the issue comes about when two rows are "close":
array([[-1.57839867,  2.355354  , -1.4225235 ],
       [-0.94728367,  0.        , -1.4225235 ],
       [-1.57839867, -2.355354  , -1.4225215 ]])  <--- note: ends in 215, not 235
array([[-1.57839867, -2.355354  , -1.4225225 ],
       [-1.57839867,  2.355354  , -1.4225225 ],
       [-0.94728367,  0.        , -1.4225225 ]])
Within a tolerance of 1E-5 these two arrays are equal by row, but the lexsort will tell you otherwise. This can be solved by a different sorting order but I would like a more general case.
I was toying with the idea of:
>>> a = a.reshape(-1,1,3)
>>> a-b
array([[[-6, -6, -6],
        [-3, -3, -3],
        [ 0,  0,  0]],

       [[-3, -3, -3],
        [ 0,  0,  0],
        [ 3,  3,  3]],

       [[ 0,  0,  0],
        [ 3,  3,  3],
        [ 6,  6,  6]]])
>>> np.all(np.around(a-b,5)==0,axis=2)
array([[False, False,  True],
       [False,  True, False],
       [ True, False, False]], dtype=bool)
>>> np.all(np.any(np.all(np.around(a-b,5)==0, axis=2), axis=1))
True
This doesn't tell you if the arrays are equal by row, just whether all points in b are close to a value in a. The number of rows can be several hundred and I need to do it quite a bit. Any ideas?
Your last code doesn't do what you think it does. What it tells you is whether every row in b is close to some row in a. If you change the axis you use for the outer calls to np.any and np.all, you can instead check whether every row in a is close to some row in b. If both hold (every row in b is close to a row in a, and every row in a is close to a row in b), then the sets of rows are equal. Probably not very computationally efficient, but probably very fast in numpy for moderately sized arrays:
def same_rows(a, b, tol=5):
    rows_close = np.all(np.round(a - b[:, None], tol) == 0, axis=-1)
    return (np.all(np.any(rows_close, axis=-1), axis=-1) and
            np.all(np.any(rows_close, axis=0), axis=0))
>>> rows, cols = 5, 3
>>> a = np.arange(rows * cols).reshape(rows, cols)
>>> b = np.arange(rows)
>>> np.random.shuffle(b)
>>> b = a[b]
>>> a
array([[ 0,  1,  2],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 9, 10, 11],
       [12, 13, 14]])
>>> b
array([[ 9, 10, 11],
       [ 3,  4,  5],
       [ 0,  1,  2],
       [ 6,  7,  8],
       [12, 13, 14]])
>>> same_rows(a, b)
True
>>> b[0] = b[1]
>>> b
array([[ 3,  4,  5],
       [ 3,  4,  5],
       [ 0,  1,  2],
       [ 6,  7,  8],
       [12, 13, 14]])
>>> same_rows(a, b) # not all rows in a are close to a row in b
False
And for not-too-big arrays, performance is reasonable, even though it has to build an array of shape (rows, rows, cols):
In [2]: rows, cols = 1000, 10
In [3]: a = np.arange(rows * cols).reshape(rows, cols)
In [4]: b = np.arange(rows)
In [5]: np.random.shuffle(b)
In [6]: b = a[b]
In [7]: %timeit same_rows(a, b)
10 loops, best of 3: 103 ms per loop
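As a variation (a sketch, not from the original answer): np.isclose expresses the tolerance directly instead of via decimal rounding; note that its rtol/atol semantics differ slightly from np.round:
import numpy as np

def same_rows_isclose(a, b, atol=1e-5):
    # (n_b, 1, cols) vs (n_a, cols) broadcasts to an (n_b, n_a) closeness table
    rows_close = np.all(np.isclose(a, b[:, None], rtol=0, atol=atol), axis=-1)
    return bool(np.all(np.any(rows_close, axis=-1)) and   # every b row matches some a row
                np.all(np.any(rows_close, axis=0)))       # every a row matches some b row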
