Numpy 1D vs nD arrays (index by reference vs. index by value) - python

So here's an oddity I noticed recently. In the code below I am creating either a 1D or 2D numpy array, extracting one of the values from the array (position_3), and then assigning a different value to the position I extracted from.
In the 1D case position_3 keeps the originally assigned value (i.e. it looks like position_3 is a copy from the 1D array), while in the 2D case position_3 changes when the array changes (i.e. it looks like position_3 is a reference into the 2D array).
import numpy as np
print("Testing 1D array")
D1_array = np.array([0, 1, 2, 3, 4])
position_3 = D1_array[3]
D1_array[3] = 0
print("Value at position 3 in array: %i" % D1_array[3])   #: 0
print("Value at position_3 variable: %i" % position_3)    #: 3
print("Testing 2D array")
D2_array = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11], [12, 13, 14]])
position_3 = D2_array[3]
D2_array[3] = [0, 0, 0]
print("Value at position 3 in array: %s" % str(D2_array[3]))   #: [0 0 0]
print("Value at position_3 variable: %s" % str(position_3))    #: [0 0 0]
I understand that everything is a reference in Python etc., but what I don't understand is why this behavior is inconsistent between 1D and 2D arrays. It's worth noting that the same comparison with Python lists yields the copy-like behaviour in both cases (i.e. with the nested list, the position_3 variable remains [9,10,11]).
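For reference, here is a minimal sketch of the list comparison mentioned above (plain Python, mirroring the 2D array):
nested = [[0,1,2],[3,4,5],[6,7,8],[9,10,11],[12,13,14]]
position_3 = nested[3]
nested[3] = [0,0,0]    # rebinds the slot to a new list object
print(position_3)      #: [9, 10, 11] - the old row object is untouched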

Per the docs on Basic Slicing and Indexing:
The simplest case of indexing with N integers returns an array scalar
representing the corresponding item....
All arrays generated by basic slicing are always views of the original array. (my emphasis)
So indexing a 1D array with an integer returns an array scalar:
In [32]: D1_array = np.array([0,1,2,3,4])
In [33]: D2_array = np.array([[0,1,2],[3,4,5],[6,7,8],[9,10,11],[12,13,14]])
In [36]: D1_array[3]
Out[36]: 3
In [37]: type(D1_array[3]) # an array scalar
Out[37]: numpy.int64
If you assign position_3 = D1_array[3] then position_3 holds the value of this array scalar. Modifying the original array with D1_array[3] = 0 does not affect the value of position_3.
In contrast, indexing a 2D array with an integer returns a view:
In [38]: D2_array[3]
Out[38]: array([ 9, 10, 11])
In [40]: type(D2_array[3])
Out[40]: numpy.ndarray
In [42]: D2_array[3].base is D2_array
Out[42]: True
Modifying a view also alters the original array, and modifying the original array also affects the view.
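If you need a row that stays independent of later writes, take an explicit copy rather than the view. A minimal sketch continuing the session above:
In [43]: row = D2_array[3].copy()   # an explicit copy instead of a view
In [44]: row.base is None           # a copy owns its own data
Out[44]: True
In [45]: D2_array[3] = 0
In [46]: row                        # unaffected by the assignment into D2_array
Out[46]: array([ 9, 10, 11])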

How to check numpy array is empty?

How can I check whether a numpy array is empty or not?
I used the following code, but this fails if the array contains a zero.
if not self.Definition.all():
Is this the solution?
if self.Definition == array([]):
You can always take a look at the .size attribute. It is defined as an integer, and is zero (0) when there are no elements in the array:
import numpy as np
a = np.array([])
if a.size == 0:
    print("`a` is empty")  # replace with whatever the empty case requires
https://numpy.org/devdocs/user/quickstart.html (2020.04.08)
NumPy’s main object is the homogeneous multidimensional array. It is a table of elements (usually numbers), all of the same type, indexed by a tuple of non-negative integers. In NumPy dimensions are called axes.
(...) NumPy’s array class is called ndarray. (...) The more important attributes of an ndarray object are:
ndarray.ndim
the number of axes (dimensions) of the array.
ndarray.shape
the dimensions of the array. This is a tuple of integers indicating the size of the array in each dimension. For a matrix with n rows and m columns, shape will be (n,m). The length of the shape tuple is therefore the number of axes, ndim.
ndarray.size
the total number of elements of the array. This is equal to the product of the elements of shape.
One caveat, though.
Note that np.array(None).size returns 1!
This is because a.size is equivalent to np.prod(a.shape),
np.array(None).shape is (), and an empty product is 1.
>>> import numpy as np
>>> np.array(None).size
1
>>> np.array(None).shape
()
>>> np.prod(())
1.0
Therefore, I use the following to test if a numpy array has elements:
>>> def elements(array):
...     return array.ndim and array.size
>>> elements(np.array(None))
0
>>> elements(np.array([]))
0
>>> elements(np.zeros((2,3,4)))
24
Why would we want to check if an array is empty? Arrays don't grow or shrink in the same way that lists do. Starting with an 'empty' array and growing it with np.append is a frequent novice error.
Using a list in if alist: hinges on its boolean value:
In [102]: bool([])
Out[102]: False
In [103]: bool([1])
Out[103]: True
But trying to do the same with an array produces (in version 1.18):
In [104]: bool(np.array([]))
/usr/local/bin/ipython3:1: DeprecationWarning: The truth value
of an empty array is ambiguous. Returning False, but in
future this will result in an error. Use `array.size > 0` to
check that an array is not empty.
Out[104]: False
In [105]: bool(np.array([1]))
Out[105]: True
and bool(np.array([1,2])) raises the infamous ambiguity error (ValueError: The truth value of an array with more than one element is ambiguous).
edit
The accepted answer suggests size:
In [11]: x = np.array([])
In [12]: x.size
Out[12]: 0
But I (and most others) check the shape more than the size:
In [13]: x.shape
Out[13]: (0,)
Another thing in its favor is that it 'maps' on to an empty list:
In [14]: x.tolist()
Out[14]: []
But there are other arrays with 0 size that aren't 'empty' in that last sense:
In [15]: x = np.array([[]])
In [16]: x.size
Out[16]: 0
In [17]: x.shape
Out[17]: (1, 0)
In [18]: x.tolist()
Out[18]: [[]]
In [19]: bool(x.tolist())
Out[19]: True
np.array([[],[]]) is also size 0, but shape (2,0) and len 2.
While the concept of an empty list is well defined, an empty array is not well defined. One empty list is equal to another. The same can't be said for a size 0 array.
The answer really depends on:
what do you mean by 'empty'?
what are you really testing for?
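If by 'empty' you simply mean 'holds no elements', a minimal sketch of an explicit predicate (is_empty is a hypothetical name, not a NumPy API):
>>> def is_empty(a):
...     return a.size == 0   # element count is zero; note a 0d array holds one element
>>> is_empty(np.array([]))
True
>>> is_empty(np.array([[]]))   # shape (1, 0): still no elements
True
>>> is_empty(np.array(None))   # 0d object array: one element
False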

What does "[()]" mean when called upon a numpy array?

I just came across this piece of code:
x = np.load(lc_path, allow_pickle=True)[()]
And I've never seen this pattern before: [()]. What does it do and why is this syntactically correct?
a = np.load(lc_path, allow_pickle=True)
>>> array({'train_ppls': [1158.359413193576, 400.54333992093854, ...],
'val_ppls': [493.0056070137404, 326.53203520368623, ...],
'train_losses': [340.40905952453613, 675.6475067138672, ...],
'val_losses': [217.46258735656738, 438.86770486831665, ...],
'times': [19.488852977752686, 20.147733449935913, ...]}, dtype=object)
So I guess a is a dict wrapped in an array for some reason by the person who saved it
It's a way (the only way) of indexing a 0d array:
In [475]: x=np.array(21)
In [476]: x
Out[476]: array(21)
In [477]: x.shape
Out[477]: ()
In [478]: x[()]
Out[478]: 21
In effect it pulls the element out of the array. item() is another way:
In [479]: x.item()
Out[479]: 21
In [480]: x.ndim
Out[480]: 0
In
x = np.load(lc_path, allow_pickle=True)[()]
most likely np.save was given a non-array, which it wrapped in a 0d object-dtype array in order to save it. Indexing with [()] is a way of recovering that object.
In [481]: np.save('test.npy', {'a':1})
In [482]: x = np.load('test.npy', allow_pickle=True)
In [483]: x
Out[483]: array({'a': 1}, dtype=object)
In [484]: x.ndim
Out[484]: 0
In [485]: x[()]
Out[485]: {'a': 1}
In general, when we index an nd array, e.g. x[1,2], we are really doing x[(1,2)], that is, using a tuple whose length matches the number of dimensions. If x is 0d, the only tuple that works is an empty one, ().
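A quick check of that tuple equivalence (standard NumPy behaviour, no assumptions):
In [486]: y = np.arange(6).reshape(2, 3)
In [487]: y[1, 2] == y[(1, 2)]     # the two spellings are the same index
Out[487]: True
In [488]: y[()]                    # empty tuple on a non-0d array: a view of the whole array
Out[488]:
array([[0, 1, 2],
       [3, 4, 5]])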
That's indexing the array with a tuple of 0 indices. For most arrays, this just produces a view of the whole array, but for a 0-dimensional array, it extracts the array's single element as a scalar.
In this case, it looks like someone made the weird choice to dump a non-NumPy object to an array with numpy.save, resulting in NumPy saving a 0-dimensional array of object dtype wrapping the original object. The use of allow_pickle=True and the empty tuple index extracts the object from the 0-dimensional array.
They probably should have picked something other than numpy.save to save this object.
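For a plain Python object like this dict, the standard pickle module avoids the 0d-array detour entirely. A minimal sketch (the file name is illustrative):
import pickle

# save the object directly, with no ndarray wrapper involved
with open('lc.pkl', 'wb') as f:
    pickle.dump({'a': 1}, f)

with open('lc.pkl', 'rb') as f:
    obj = pickle.load(f)   # comes back as a plain dict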

non adjacent slicing of numpy multidimensional array in python

I have a multidimensional array a:
a = np.random.uniform(1,10,(2,4,2,3,10,10))
For dimensions 4-6, I have 3 lists which contain the indices for slicing those dimensions of array 'a':
dim4 = [0,2]
dim5 = [3,5,9]
dim6 = [1,2,7,8]
How do I slice out array 'a' such that i get:
b = a[0,:,0,dim4,dim5,dim6]
So b should be an array with shape (4,2,3,4), and containing elements from the corresponding dimensions of a. When I try the code above, I get an error saying that different shapes can't be broadcast together for axis 4-6, but if I were to do:
b = a[0,:,0:2,0:3,0:4]
then it does work, even though the slices all have different lengths. So how do you slice multidimensional arrays with non-adjacent indices?
You can use the numpy.ix_ function to construct complex indexing like this. It takes a sequence of array_like, and makes an "open mesh" from them. The example from the docstring is pretty clear:
Using ix_ one can quickly construct index arrays that will index
the cross product. a[np.ix_([1,3],[2,5])] returns the array
[[a[1,2] a[1,5]], [a[3,2] a[3,5]]].
So, for your data, you'd do:
>>> indices = np.ix_((0,), np.arange(a.shape[1]), (0,), dim4, dim5, dim6)
>>> a[indices].shape
(1, 4, 1, 2, 3, 4)
Get rid of the size-1 dimensions with np.squeeze:
>>> np.squeeze(a[indices]).shape
(4, 2, 3, 4)
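Alternatively, you can index the scalar axes first and build the mesh over the remaining ones, which avoids the size-1 dimensions and the squeeze (a sketch using the same a, dim4, dim5, dim6 as above):
>>> sub = a[0, :, 0]     # shape (4, 3, 10, 10)
>>> b = sub[np.ix_(np.arange(sub.shape[0]), dim4, dim5, dim6)]
>>> b.shape
(4, 2, 3, 4)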

Comparing NumPy object references

I want to understand this NumPy behavior.
When I get a reference to an inner array of a NumPy array and then compare it, using is, to the inner array itself, the returned value is False.
Here is the example:
In [198]: x = np.array([[1,2,3], [4,5,6]])
In [201]: x0 = x[0]
In [202]: x0 is x[0]
Out[202]: False
While on the other hand, with native Python objects, the returned value is True.
In [205]: c = [[1,2,3],[1]]
In [206]: c0 = c[0]
In [207]: c0 is c[0]
Out[207]: True
My question: is this the intended behavior of NumPy? If so, what should I do if I want to keep a reference to the inner objects of a NumPy array?
2d slicing
When I first wrote this I constructed and indexed a 1d array. But the OP is working with a 2d array, so x[0] is a 'row', a slice of the original.
In [81]: arr = np.array([[1,2,3], [4,5,6]])
In [82]: arr.__array_interface__['data']
Out[82]: (181595128, False)
In [83]: x0 = arr[0,:]
In [84]: x0.__array_interface__['data']
Out[84]: (181595128, False) # same databuffer pointer
In [85]: id(x0)
Out[85]: 2886887088
In [86]: x1 = arr[0,:] # another slice, different id
In [87]: x1.__array_interface__['data']
Out[87]: (181595128, False)
In [88]: id(x1)
Out[88]: 2886888888
What I wrote earlier about slices still applies. Indexing an individual element, as with arr[0,0], works the same as with a 1d array.
This 2d arr has the same databuffer as the 1d arr.ravel(); the shape and strides are different. And the distinction between view, copy and item still applies.
A common way of implementing 2d arrays in C is to have an array of pointers to other arrays. numpy takes a different, strided approach, with just one flat array of data, and uses the shape and strides parameters to implement the traversal. So a subarray requires its own shape and strides as well as a pointer to the shared databuffer.
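A minimal illustration of that strided layout (standard attributes only; int32 chosen to match the itemsize shown later):
In [89]: arr = np.array([[0, 1, 2], [3, 4, 5]], dtype=np.int32)
In [90]: arr.strides                  # bytes per step along each axis: a row is 3*4 bytes
Out[90]: (12, 4)
In [91]: row = arr[1, :]
In [92]: row.base is arr              # the row is a view into arr's buffer
Out[92]: True
In [93]: row.__array_interface__['data'][0] - arr.__array_interface__['data'][0]
Out[93]: 12                           # it starts one row (12 bytes) into the shared buffer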
1d array indexing
I'll try to illustrate what is going on when you index an array:
In [51]: arr = np.arange(4)
The array is an object with various attributes such as shape, and a data buffer. The buffer stores the data as bytes (in a C array), not as Python numeric objects. You can see information on the array with:
In [52]: np.info(arr)
class: ndarray
shape: (4,)
strides: (4,)
itemsize: 4
aligned: True
contiguous: True
fortran: True
data pointer: 0xa84f8d8
byteorder: little
byteswap: False
type: int32
or
In [53]: arr.__array_interface__
Out[53]:
{'data': (176486616, False),
'descr': [('', '<i4')],
'shape': (4,),
'strides': None,
'typestr': '<i4',
'version': 3}
One has the data pointer in hex, the other decimal. We usually don't reference it directly.
If I index an element, I get a new object:
In [54]: x1 = arr[1]
In [55]: type(x1)
Out[55]: numpy.int32
In [56]: x1.__array_interface__
Out[56]:
{'__ref': array(1),
'data': (181158400, False),
....}
In [57]: id(x1)
Out[57]: 2946170352
It has some properties of an array, but not all. For example you can't assign to it. Notice also that its `data` value is totally different.
Make another selection from the same place - different id and different data:
In [58]: x2 = arr[1]
In [59]: id(x2)
Out[59]: 2946170336
In [60]: x2.__array_interface__['data']
Out[60]: (181143288, False)
Also if I change the array at this point, it does not affect the earlier selections:
In [61]: arr[1] = 10
In [62]: arr
Out[62]: array([ 0, 10, 2, 3])
In [63]: x1
Out[63]: 1
x1 and x2 don't have the same id, and thus won't match with is, and they don't use the arr data buffer either. There's no record that either variable was derived from arr.
With slicing it is possible to get a view of the original array:
In [64]: y = arr[1:2]
In [65]: y.__array_interface__
Out[65]:
{'data': (176486620, False),
'descr': [('', '<i4')],
'shape': (1,),
....}
In [66]: y
Out[66]: array([10])
In [67]: y[0]=4
In [68]: arr
Out[68]: array([0, 4, 2, 3])
In [69]: x1
Out[69]: 1
Its data pointer is 4 bytes larger than arr's - that is, it points to the same buffer, just at a different spot. And changing y does change arr (but not the independent x1).
I could even make a 0d view of this item
In [71]: z = y.reshape(())
In [72]: z
Out[72]: array(4)
In [73]: z[...]=0
In [74]: arr
Out[74]: array([0, 0, 2, 3])
In Python code we normally don't work with objects like this. When we use the C API or Cython it is possible to access the data buffer directly. nditer is an iteration mechanism that works with 0d objects like this (either in Python or the C API). In Cython, typed memoryviews are particularly useful for low-level access.
http://cython.readthedocs.io/en/latest/src/userguide/memoryviews.html
https://docs.scipy.org/doc/numpy/reference/arrays.nditer.html
https://docs.scipy.org/doc/numpy/reference/c-api.iterator.html#c.NpyIter
elementwise ==
In response to a comment on Comparing NumPy object references:
np.array([1]) == np.array([2]) will return array([False], dtype=bool)
== is defined for arrays as an elementwise operation. It compares the values of the respective elements and returns a matching boolean array.
If such a comparison needs to be used in a scalar context (such as an if) it needs to be reduced to a single value, as with np.all or np.any.
The is test compares object ids (not just for numpy objects). It has limited value in practical coding. I use it most often in expressions like is None, where None is an object with a unique id, and which does not play nicely with equality tests.
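So for whole-array value comparison, reduce the elementwise result, or use np.array_equal. For example (plain NumPy, using the x from the question):
>>> x = np.array([[1, 2, 3], [4, 5, 6]])
>>> x0 = x[0]
>>> (x0 == x[0]).all()        # reduce the elementwise result to one boolean
True
>>> np.array_equal(x0, x[0])  # value equality, including shape
True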
I think you have a misunderstanding about NumPy arrays. You might think that subarrays of a multidimensional array in NumPy (as in Python lists) are separate objects; they are not.
A NumPy array, regardless of its dimension, is just one object. That's because NumPy creates the array at the C level, and when it is exposed as a Python object it can't be broken down into multiple objects. Python therefore creates a new object each time you access part of it through attributes like split(), __getitem__, take(), etc.; that is just the way Python abstracts list-like behavior for NumPy arrays.
You can check this in real time, like so:
In [7]: x
Out[7]:
array([[1, 2, 3],
[4, 5, 6]])
In [8]: x[0] is x[0]
Out[8]: False
So as soon as you have an array, or any mutable object that can hold other objects, and you pull its parts out as separate Python objects, you lose the performance and all the other cool features of NumPy arrays.
Also, as @Imanol mentioned in the comments, you may want to use NumPy view objects if you want memory-efficient and flexible operations that modify an array (or arrays) through a reference. View objects can be constructed in the following two ways:
a.view(some_dtype) or a.view(dtype=some_dtype) constructs a view of
the array’s memory with a different data-type. This can cause a
reinterpretation of the bytes of memory.
a.view(ndarray_subclass) or a.view(type=ndarray_subclass) just returns
an instance of ndarray_subclass that looks at the same array (same
shape, dtype, etc.) This does not cause a reinterpretation of the
memory.
For a.view(some_dtype), if some_dtype has a different number of bytes
per entry than the previous dtype (for example, converting a regular
array to a structured array), then the behavior of the view cannot be
predicted just from the superficial appearance of a (shown by
print(a)). It also depends on exactly how a is stored in memory.
Therefore if a is C-ordered versus fortran-ordered, versus defined as
a slice or transpose, etc., the view may give different results.
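A minimal sketch of the first kind of view, reinterpreting the same bytes under a different dtype (output shown for a little-endian machine):
>>> a = np.array([1, 2], dtype=np.int16)
>>> v = a.view(np.uint8)   # same 4 bytes, read as four unsigned bytes
>>> v
array([1, 0, 2, 0], dtype=uint8)
>>> v[0] = 9               # writing through the view changes the shared buffer
>>> a
array([9, 2], dtype=int16)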
Not sure if it's useful at this point, but numpy.ndarray.ctypes seems to have useful bits:
https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.ctypes.html
Used something like this (missing dtype, but meh):
def is_same_array(a, b):
    return (a.shape == b.shape) and (a == b).all() and a.ctypes.data == b.ctypes.data
here:
https://github.com/EricCousineau-TRI/repro/blob/a60daf899e9726daf2ca1259bb80ad2c7c9b3e3f/python/namedlist_alt.py#L111
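For example, with the function above:
>>> a = np.arange(4)
>>> is_same_array(a, a[:])        # a[:] is a view over the same buffer
True
>>> is_same_array(a, a.copy())    # equal values, but a different buffer
False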

how can I flatten a 2d numpy array, which has different lengths in the second axis?

I have a numpy array which looks like:
myArray = np.array([[1,2],[3]])
But I cannot flatten it:
In: myArray.flatten()
Out: array([[1, 2], [3]], dtype=object)
If I change the array to the same length in the second axis, then I can flatten it.
In: myArray2 = np.array([[1,2],[3,4]])
In: myArray2.flatten()
Out: array([1, 2, 3, 4])
My question is:
Can I use something like myArray.flatten(), regardless of the dimension of the array and the length of its elements, and get the output array([1,2,3])?
myArray is a 1-dimensional array of objects (note that recent NumPy versions require an explicit dtype=object to create such a ragged array at all). Your list objects will simply remain in the same order with flatten() or ravel(). You can use hstack to stack the arrays in sequence horizontally:
>>> np.hstack(myArray)
array([1, 2, 3])
Note that this is basically equivalent to using concatenate. One caveat: current NumPy versions reject axis=1 for these 1-D inputs, so use axis=0 (or axis=None):
>>> np.concatenate(myArray, axis=0)
array([1, 2, 3])
If you don't have this issue, however, and the items can be merged into a regular array, it is always preferable to use flatten() or ravel() for performance:
In [1]: u = timeit.Timer('np.hstack(np.array([[1,2],[3,4]]))'\
   ....: , setup = 'import numpy as np')
In [2]: print(u.timeit())
11.0124390125
In [3]: u = timeit.Timer('np.array([[1,2],[3,4]]).flatten()'\
   ....: , setup = 'import numpy as np')
In [4]: print(u.timeit())
3.05757689476
Iluengo's answer also has you covered for further information as to why you cannot use flatten() or ravel() given your array type.
Well, I agree with the other answers that hstack or concatenate do the job in this case. However, I would like to point out that even if this 'fixes' the problem, the problem is not addressed properly.
The problem is that even though it looks like the second axis has a different length, this is not true in practice. If you try:
>>> myArray.shape
(2,)
>>> myArray.dtype
dtype('O') # stands for Object
>>> myArray[0]
[1, 2]
It shows you that your array is not a 2D array with variable row size (as you might think); it is just a 1D array of objects. In your case, the elements are lists: the first element of your array is a 2-element list and the second is a 1-element list.
So flatten and ravel won't work, because transforming a 1D array into a 1D array results in exactly the same array. With an object array, NumPy doesn't care what you put inside; it treats the items as unknown objects and can't decide how to merge them.
What you should consider is whether this is the behaviour you want for your application. NumPy arrays are especially efficient with fixed-size numeric matrices. If you are working with arrays of objects, I don't see why you would want to use NumPy instead of regular Python lists.
np.hstack works in this case
In [69]: np.hstack(myArray)
Out[69]: array([1, 2, 3])
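If the elements are plain Python lists anyway, chaining them at the Python level is another option (a sketch; for large inputs hstack is likely the more idiomatic choice):
>>> from itertools import chain
>>> np.array(list(chain.from_iterable(myArray)))
array([1, 2, 3])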
